I have had a number of discussions recently about reviewing products. As a writer, most of these have concerned fake reviews, posted both to inflate and to deflate the standing of a work. There are plenty of articles on the rights and wrongs of reviewing with specific intent, and on ways to filter out stooges. However, a greater issue seems to be receiving less attention: the lack of a common baseline.
I review books on this blog and cross-post my reviews to Goodreads; where possible, I also post the review on the website from which I obtained the book. Posting a written review is usually simple enough. However, many sites also use a bespoke star-rating system: some use four stars, some five; some treat the mid-point as acceptable, some devote more stars to positive ratings, some merely have stars without labels. This raises the first issue: if a Smashwords review gives a book four stars and an Amazon review gives it three, did one reviewer consider it worse, or are the scales simply not comparable?
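The mismatch can be made concrete with a small sketch. The scales and numbers here are hypothetical, and the sketch assumes ratings are linear, which the sites' own labels often contradict:

```python
def normalize(stars, scale_max, scale_min=1):
    """Map a star rating onto a common 0-1 scale.

    Assumes a linear scale, which is itself a shaky assumption:
    a site whose mid-point means 'acceptable' and a site whose
    labels crowd the positive end will not agree even after this.
    """
    return (stars - scale_min) / (scale_max - scale_min)

# Four stars on a five-star site vs three stars on a four-star site:
print(normalize(4, 5))  # 0.75
print(normalize(3, 4))  # roughly 0.67 - a lower raw number, similar opinion
```

Even this crude normalisation shows that the raw star counts two sites display cannot be compared directly.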
Even if there is a common scale, there are usually no criteria for scoring. For example, I started reading Jack Kerouac’s Desolation Angels but put it aside; I could see the literary skill that gained it critical acclaim but did not enjoy the story. Do I give it a high rating because I found it technically sound, or a low rating because I did not enjoy it? By adding a written review I can explain my reasons; however, I still need to choose which criterion takes precedence if my ratings are to be consistent.
Some sites, such as Amazon, even strengthen this dichotomy. As well as letting readers know my approximate opinion, the star rating influences recommendations; so, if I use them, I need to rate each book according to the purpose for which I read it: technical skill for reference texts, pure enjoyment for relaxing fiction, a blend of the two for classics. This not only renders the scale useless for comparing my opinion with others’ but also makes it less than ideal for comparing between my own reviews.
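One way a reviewer might privately reconcile the two criteria before picking the single number a site allows can be sketched as a weighted blend. The weights here are made up for illustration, not a recommendation:

```python
def blended_stars(technical, enjoyment, weight_technical=0.5):
    """Combine two 1-5 sub-scores into the single 1-5 rating a site accepts.

    weight_technical is a private, per-genre choice: high for reference
    texts, low for relaxing fiction, somewhere in between for classics.
    """
    blend = weight_technical * technical + (1 - weight_technical) * enjoyment
    return round(blend)

# Desolation Angels-style case: high skill, low enjoyment,
# rated with a classics-leaning weight of 0.6 on technical merit.
print(blended_stars(5, 2, weight_technical=0.6))  # 4
```

The point the sketch makes is that the final number depends entirely on a weight no other reader can see, which is exactly why two honest reviewers can give the same book different stars.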
Of course, the alternative, replacing each rating system with two (or three, or more) ratings for different criteria, increases both user effort and system fragility for a benefit that might well be appreciated only by a few: a clear example of a Could Have feature. It also assumes that all users will complete the scales in the same way: if I really enjoy a book (five stars), surely I am reasonable in thinking it also technically skilled? Or do we require users of a site to demonstrate a level of specialist knowledge before they can apply these additional ratings?
I have yet to find a better answer than being aware of the possibility of incompatible assumptions about what a rating means. This is why I currently do not use star ratings for the reviews on this blog, and why, on sites that offer recommendations, I use star ratings as a measure of how well a book fitted my primary reason for obtaining it.
Do you ignore ratings by lay people? Do you have a method of making the comparison more accurate?