The major problem that I see in how GD does its ratings is that they DON'T trust the raters or their own definitions. If they did, WHY do they have to make ANY ADJUSTMENTS whatsoever to the numbers sent in?
WHY do there have to be any adjustments based upon a statistical aberration? They SUPPOSEDLY choose the raters with great care; why not then believe, when Jo Jo Smithy gives Pinehurst #2 a rating of 3's & 4's across the board, that he actually is convinced that this is what it deserves? If you can't trust your judges, then the judgment they yield is irrelevant.
By throwing out the aberrant scores, two things happen. The first is that courses whose raters score, on AVERAGE, in a smaller range will end up with a higher adjusted score than courses with a wider margin from high to low. Yet a course that draws more high scores, offset by more low ones, would seem to be the more impressive one to raters.
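To see how this kind of adjustment distorts things, here is a toy sketch (the score sets and course labels are invented, not actual GD data) comparing a plain average against an average that throws out each course's highest and lowest score:

```python
# Toy illustration: two invented score sets with the SAME plain average
# end up with DIFFERENT averages once the high and low scores are discarded.

def plain_mean(scores):
    return sum(scores) / len(scores)

def trimmed_mean(scores):
    # Throw out one highest and one lowest score, then average the rest.
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

course_a = [5, 7, 7, 7, 9]   # scores clustered tightly around 7
course_b = [4, 7, 7, 8, 9]   # wider high-to-low spread, same total

print(plain_mean(course_a), plain_mean(course_b))      # 7.0 7.0
print(trimmed_mean(course_a), trimmed_mean(course_b))  # 7.0 vs ~7.33
```

The raters, taken at their word, rated the two courses identically; the trimming alone creates a gap between them.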
The second thing it does is completely undermine the attempt to rate courses, because it introduces the worst of all biases: a Golf Digest bias. Consider: many "experts" believe that "brown, firm & faster" is akin to a better course. Yet WHO decided that that is correct? WHY can't a rater DISAGREE with that premise and have his ratings reflect it? Knowing that disagreeing this way will get their ratings ignored and thrown out will only produce raters who no longer judge what they see, but instead send in the scores they believe GD wants.
We have this discussion board we all enjoy primarily because it gives us a place to discuss, and sometimes argue about, why each of us is incorrect in our opinions. If the requirement for GCA.com were that each participant must judge and believe certain things about golf courses, wouldn't it be a pretty dull place?
No, it is most important that those chosen to judge be trusted to do so, and that whatever their input and numbers are, they be accepted. Otherwise those judging have NO CREDIBILITY and everything that GD hopes to achieve is simply wasted effort.
Another example of how GD shows bias and lack of trust in their raters: if the criterion of "Shot Values" is so much more important than the others that the number presented should be doubled, why keep it scored on a 1-10 basis? Why not change it to a 1-20 basis?
Doing so would allow a much more accurate rating. How so? Suppose a rater looks at Bethpage Black and judges its Shot Values to be about 8.4. On a 1-10 scale he must give it an 8, which doubled makes 16. Yet if he could submit the true score of 8.4, doubling it gives 16.8, and he would by necessity round up to 17.
That extra point is much closer to what the rater believed the course deserved and is far more accurate. Just imagine how many overall rankings would change by an extra point being averaged in. After all, isn't the announced numerical rating carried out to the HUNDREDTHS decimal place? Look at the rankings and you'll see they are filled with differences between courses of only several, or even a single, hundredth of a point.
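The rounding argument above is easy to check with a quick sketch (the 8.4 figure is just the example from the text; nothing here is GD's actual formula):

```python
# Rounding before vs. after doubling: a 1-10 scale forces the rater to
# round first, discarding information a 1-20 scale would have kept.

raw = 8.4  # the rater's honest assessment of the doubled criterion

on_10_scale = round(raw) * 2   # forced to submit 8, then doubled -> 16
on_20_scale = round(raw * 2)   # 8.4 doubles to 16.8, rounds -> 17

print(on_10_scale, on_20_scale)  # 16 17
```

A full point of difference, per rater, per course, from nothing but the order of rounding and doubling.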
Finally, what is interesting to me is how many of the proposed solutions show this same lack of trust; so how can they be of any value? A class of national raters whose views are treated as more accurate is a fallacy on its face. Since raters must pay their own way, the first and primary qualification for being one can't be knowledge; it would have to be whether you can afford to do it. That is the glaring weakness of that idea.
The real cure to this, in my opinion, is to trust the raters and their ratings. NOT as being correct or something to be agreed with, but as honest and honorable opinions. After all, isn't that really what is at the heart of this game we all love, played over the very courses being ranked?
Time to step down from the soap box...