Ben,
Following the Discussion Group, I have learned that the GD ratings are held in low regard by many (most?) members of GCA. As a newbie, I am eager to understand why that is.
GD uses eight criteria:
--Shot values (do holes present a variety of risks and rewards and test accuracy, length, and finesse without overemphasizing one over the other two?)
--Playability (does the course challenge low-handicap players while providing options for high handicappers?)
--Design variety (how varied are the holes in lengths, direction, configuration, hazard placements, green shapes and contours?)
--Memorability (how distinctive are individual holes?)
--Conditioning (how firm, fast, and rolling are the fairways; are greens firm yet receptive, and do they putt true?)
--Aesthetics (does the course take advantage of scenery to add pleasure to a round?)
--Ambience (does the atmosphere reflect and enhance traditional values of the game?)
--Resistance to scoring (is the course difficult, but fair, for the scratch golfer?)
I put "resistance to scoring" last since it is clear to me that many GCAers find it objectionable as a criterion. I agree. I much prefer "playability" (challenge low handicappers while providing options for high handicappers).
"Ambience" also strikes me as squishy and having nothing to do with the quality of the design.
The other criteria seem legitimate to me -- and consistent with the values often expressed on this site. Am I correct about that?
A separate concern, for me, about the GD ratings is the number of panelists. With such a large panel, which has grown significantly in recent years, consistency in how the criteria are applied is inevitably an issue.
As for the question that initiated this thread, I would think that GD's decision to require courses to have more evaluations might have several distinct effects:
--More evaluations, all other things equal, add credibility since any given outlier evaluation (high or low) has less weight.
--But more evaluations require more evaluators, which raises the consistency problem mentioned above.
--And requiring more evaluations may mean that more courses drop out because hosting so many evaluators is too burdensome.
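The first point in the list above is really just arithmetic: a single outlier shifts a larger average less. A quick sketch (the scores and panel sizes are made up for illustration, not GD's actual data):

```python
# How much does one outlier score move the mean as the panel grows?
# base = what every other panelist scores; outlier = one rogue score.
base, outlier = 7.0, 10.0

for n in (5, 20, 50):
    scores = [base] * (n - 1) + [outlier]
    mean = sum(scores) / n
    # Shift caused by the single outlier shrinks as 1/n.
    print(n, round(mean - base, 3))  # prints 0.6, then 0.15, then 0.06
```

With five evaluations one rogue score moves the average by 0.6 points; with fifty, by 0.06 -- which is why more evaluations per course add credibility.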
A separate issue is how GD presents the evaluations, ranking courses from 1 to 200. The scoring differences are often minute. I might be more inclined to present the list differently -- for example, grouping courses that are within a certain scoring range as ties. I haven't thought through how to do that, but there might be a way to more accurately represent how courses compare.
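One simple way to form such tiers -- a hypothetical sketch only, since GD does not publish its raw scores in this form and the gap threshold here is invented -- is to sort courses by score and start a new tier whenever the gap to the previous course exceeds some threshold:

```python
def tier_courses(scores, gap=0.05):
    """Group {course: score} into tiers of effective ties.

    A new tier starts whenever the drop from the previous course's
    score exceeds `gap`. Scores and threshold are illustrative.
    """
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    tiers = []
    for course, score in ranked:
        # Compare against the last course placed, so tiers chain
        # together runs of nearly identical scores.
        if tiers and tiers[-1][-1][1] - score <= gap:
            tiers[-1].append((course, score))
        else:
            tiers.append([(course, score)])
    return tiers

# Hypothetical scores on a 10-point scale.
sample = {"Course A": 9.12, "Course B": 9.10, "Course C": 8.80}
for rank, tier in enumerate(tier_courses(sample), 1):
    print(rank, [name for name, _ in tier])
```

Here Courses A and B (0.02 apart) land in one tier while Course C starts the next, which reads more honestly than ranking A strictly above B on a minute difference.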
Even with these issues, I think GD's ratings are a good thing. I look forward to seeing them...even when I may not agree. Discussing why I don't agree is part of the fun, and it makes me think about what I value in golf course design.
That said, I have all of Tom Doak's Confidential Guides. Tom and company present a different take, one that I like. When deciding whether to play a course, I put more weight on the CG evaluation than on GD's rating.