Mike,
I am not suggesting that Mr. Doak get out there and start rating courses. I don't want to go out on a limb, but he might achieve some success if he instead sticks to his current day job.
What I am suggesting is that the magazines consider the characteristics of useful books like Doak's.
David,
My point is that any course rating system is only going to be as good as the quality, knowledge, and experience of the raters involved.
I disagree, but in the opposite direction: I don't think that the magazine ratings are nearly as good as the quality, knowledge, and experience of the raters involved.
A problem, as I see it, is that the current systems lead to 'least common denominator' rating. The best raters may have a unique perspective and an ability to find greatness where others don't. But the rest don't. So you have swings of as many as five or six points, to use your description, and the low ratings more than cancel out the high ratings of the better raters. In contrast, courses which are decent, unobjectionable, and within the status quo will probably end up with a tighter spread of decent scores, none of which significantly pulls down the average.
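To put rough numbers on that, here is a minimal sketch in Python. The ballots are invented purely for illustration; nothing here comes from an actual magazine panel.

```python
from statistics import mean, stdev

# Hypothetical ballots on a 1-10 scale, invented for illustration.
# A bold course: a few raters see greatness, the rest knock it.
bold_course = [9.0, 9.0, 8.5, 4.0, 4.5]

# A decent, unobjectionable course: nobody thrilled, nobody offended.
safe_course = [7.0, 7.2, 6.9, 7.1, 6.8]

for name, scores in [("bold", bold_course), ("safe", safe_course)]:
    print(f"{name}: mean = {mean(scores):.2f}, spread = {stdev(scores):.2f}")

# Output:
#   bold: mean = 7.00, spread = 2.52
#   safe: mean = 7.00, spread = 0.16
```

The published averages are identical, so the list cannot tell the two courses apart. The only trace of what the better raters saw in the bold course is in the spread, which the magazines throw away, and one more low ballot drops the bold course below the safe one.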
Another way to look at it is by reference to the old computer adage: garbage in, garbage out.
Think about it. The raters have a whole spectrum of views on what makes a golf course good. Sure, they are given criteria, but at least in GW's system they can throw all that out when assigning the final score. Plus, even within the criteria, there is plenty of room for subjectivity.
For example, let's say that Rater A and Rater B both rate course X. Rater A knocks the course because there are very few spots on the course where the golfer has an even lie. Rater B might rate the course highly for the exact same reason. Two raters reach polar opposite conclusions based on the exact same data. The same thing could happen with wide fairways, penal bunkers, undulating greens, fast greens, trees, water, bunker style, etc. Rater A could give a low rating and Rater B a high rating, even though they might completely agree on the non-subjective characteristics of the raw data.
So then the magazines average these scores with the others . . . but what is their justification for doing so? For an average to mean anything, don't we need like data within the sample? Rater A and Rater B have entirely different valuation criteria, so aren't we trying to average two numbers which shouldn't even be on the same scale? Aren't the ratings necessarily comparing apples and oranges? Don't we need a tighter grouping of criteria before we can start averaging?
One may agree with Rater A or Rater B, loving it or hating it. But it makes no sense to assume the course is average and middle of the road, just because the strongly polarized views cancel each other out. Averaging renders both ratings meaningless.
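A toy version of that arithmetic, again with invented scores, just to show how hollow the published number is:

```python
from statistics import mean, stdev

# Both raters observe the same fact: almost no flat lies on course X.
# Rater A scores it as a defect; Rater B scores it as a virtue.
score_a = 3.0   # "awkward stances everywhere"
score_b = 9.0   # "wonderful ground movement"

print(f"published average: {mean([score_a, score_b]):.1f}")  # 6.0
print(f"spread: {stdev([score_a, score_b]):.1f}")            # 4.2
```

The 6.0 is a verdict neither rater actually holds. At minimum, publishing the spread alongside the mean would flag the disagreement instead of burying it.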
A real world example, only slightly modified: a local website invites readers to rate the playing conditions at various courses. Softness/firmness of the greens is one of their categories, but they don't specify whether soft or firm greens are good or bad. So two readers could submit opposite scores based on the exact same fact, and the ratings are rendered meaningless. What good are such ratings? (In reality, the website likes soft and dislikes firm. Still useless to me.)
That's why, when we were going back and forth in the interminable thread on ratings, I asked people to displace 20% of the Top 50 of the modern and classic listings.
I thought this was a fair challenge, but I doubt there are many non-raters who have played enough of these courses to do what you suggest. For example, I've played fewer than 10 of these courses (8, I think), so it would be difficult for me to displace 20.
But even with these 8 there are serious flaws.
A few examples: Pumpkin Ridge's Ghost Creek a top 100 course? How can this be? What separates this course from the pack? What stands out? Sure, it is generally inoffensive and unobjectionable, but what is there of true quality? I played it three times and hardly remember a hole. I don't even think it's as good a course as that public course in Portland. What is it, Blue Heron or something?
Bandon Dunes the third best modern course in the country?
Third best? Yikes.
And Friar's Head eight places behind it? Does anyone who has played both of these courses really think that Bandon Dunes is a better course than Friar's Head? Inconceivable. I've looked at GW's criteria and can't imagine giving Bandon a higher or equal score on a single one. (I'm not considering conditioning, because as written I think it is stupid and have no idea what to do with it.)
Manele Bay a top 100 course? Don't get me wrong, I've enjoyed many rounds at Manele Bay. It has a couple of spectacular tee shots (in different directions from the same tee) and great views throughout, but it is truly a course whose recognition depends solely on its ocean location. The course is all holes running on the same axis, terraced up a hillside. There is some width and potential strategy, but this is neutralized by the built-up greens, designed to catch balls from anywhere. It is generally not all that impressive once you get away from the ocean.
Even though this isn't 20% of the entire list, it isn't really a great performance by the rankings, considering how few of these courses I know.