Andy:
Hello -- yes, it's arbitrary, but so is any number system one uses. I don't see how a collegiate-football-style rating system works well for golf courses. It was self-created by Digest, and then all the other pubs followed the pied-piper approach, so that every year or two you can create "news" that such-and-such course fell one position or another moved into the top spot -- see the fanfare that the ascension of ANGC to the top spot caused. In regard to the system you mentioned -- you could very well have courses assigned a certain letter or number grade and go from there. But frankly the concept of "groupings" -- whether by an assigned number or a heading of, say, ten courses -- works better than the silly and preposterous notion that there is only one #1 course in the land.
I like the groupings because at some point there will be a cut-off -- the original Digest approach worked well in my mind.
Andy, let me state again: aggregate ratings are meaningless -- they simply push numbers together and then, ipso facto, like some sort of cheap magician's trick, we get the RESULT. There is no rhyme, reason or detailed analysis -- it's just throwing courses into the air and having people vote without any meaningful wherewithal to cross-compare from a similar pool of courses played. For example, if person A plays Oakmont and person B plays Merion and neither has played both, you have to assume that these respective people can apply the numbers in some sort of consistent fashion. That won't be an issue for the top, top courses -- but it becomes more of a problem the further down from the top you slide.
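A quick way to see the problem is to simulate it. The sketch below uses hypothetical numbers only (nothing to do with any magazine's actual panel): two disjoint pools of raters are given slightly different personal baselines, their scores are averaged, and the aggregate order ends up mixing the raters' habits in with the real differences between courses.

```python
import random

random.seed(1)

# Hypothetical illustration: ten courses with a notional "true quality" on a 10-point scale.
true_quality = {f"Course {i + 1}": 9.0 - i * 0.3 for i in range(10)}

# Two pools of raters who never see the same courses: pool A rates the odd-numbered
# courses and scores generously; pool B rates the even-numbered ones and scores
# harshly.  Neither bias has anything to do with the courses themselves.
def rate(quality, bias, n_raters=5, noise=0.4):
    return [quality + bias + random.uniform(-noise, noise) for _ in range(n_raters)]

aggregate = {}
for idx, (name, quality) in enumerate(true_quality.items()):
    bias = 0.6 if idx % 2 == 0 else -0.6   # pool-specific baseline, not course quality
    aggregate[name] = sum(rate(quality, bias)) / 5

# Rank by the aggregated average and show how far it drifts from the "true" order.
true_rank = {name: r for r, name in enumerate(
    sorted(true_quality, key=true_quality.get, reverse=True), start=1)}
print("aggregate rank (true rank)")
for r, name in enumerate(sorted(aggregate, key=aggregate.get, reverse=True), start=1):
    print(f"{r:2d}. {name}  (true: {true_rank[name]})")
```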
We do agree a well-researched listing will likely contain many fine courses, but there are few raters who have the wherewithal to see the totality -- they often can only approach the process from a limited side of things. That's what made Doak's CG book so fascinating -- a clear and consistent analysis, albeit from his perspective, but one that was well thought out and not polluted with the aggregate style that is nothing more than a hodgepodge of this and that.
Matt,
I think you are right that the "Tiered" approach is more meaningful and realistic, but I wonder if "Tens" is really just the same problem (as Tom Doak mentioned earlier). In reality, the tiers may need to be a little wider (such as Top 10, then 11-30, then 31-80, and getting wider from there). Like you said, is there really a difference between a #81 and a #115 course?
Really, isn't that what Doak's Guide did? The measure of quality is probably more of a "Bell Curve" than a linear, relative progression. As we moved down from 9 to 8 to 7, the number of courses in each class grew.
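To make the widening-tier idea concrete, here is a minimal sketch with made-up cut-offs (10 / 30 / 80) standing in for whatever bands a publication might actually choose: courses are bucketed into bands that broaden further down, so a #81 and a #115 can sit in the same group rather than pretending 34 places of precision exist.

```python
# Illustrative cut-offs only, not anyone's published tiers.
TIER_BOUNDS = [(1, 10, "Tier 1"), (11, 30, "Tier 2"), (31, 80, "Tier 3")]

def tier_for(rank: int) -> str:
    """Map a linear ranking position into a (widening) tier label."""
    for lo, hi, label in TIER_BOUNDS:
        if lo <= rank <= hi:
            return label
    return "Honourable mention"   # everything past the last cut-off

# Two courses that a linear list separates by 34 places land in the same band.
for rank in (81, 115, 5, 27):
    print(rank, "->", tier_for(rank))
```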
But what you (and I) also liked about Doak's Guide was that he was the constant factor, rather than relying on many individuals having the exact same relationship with a prescribed scale.
If we are going to have rating systems that involve multiple raters, I like the "head-to-head" methodology that Anthony Fowler was using for his "Re-Rating the GCA Top 100." To some extent, it shifts the "constant" factor back to the Individual Rater rather than a set "numerical scale" (but still provides some "guidance" as to what things to consider).
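For illustration only -- this is a toy sketch of the general head-to-head idea, not a reconstruction of Anthony Fowler's actual method -- each rater's scores are only ever compared within that rater's own pool of courses played, and courses are then ordered by their pairwise win rate. The raters, course names and scores below are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical data: each rater scores only the courses they have actually played.
ratings_by_rater = {
    "rater1": {"Oakmont": 9, "Pine Valley": 10, "Merion": 9},
    "rater2": {"Merion": 9, "Shinnecock": 8},
    "rater3": {"Oakmont": 8, "Shinnecock": 9, "Pine Valley": 9},
}

wins = defaultdict(float)
matchups = defaultdict(int)
for scores in ratings_by_rater.values():
    for a, b in combinations(scores, 2):       # only pairs this rater has played
        matchups[a] += 1
        matchups[b] += 1
        if scores[a] > scores[b]:
            wins[a] += 1
        elif scores[b] > scores[a]:
            wins[b] += 1
        else:                                  # a tie counts half to each course
            wins[a] += 0.5
            wins[b] += 0.5

# Rank by pairwise win rate rather than by averaging raw scores across raters.
for course in sorted(matchups, key=lambda c: wins[c] / matchups[c], reverse=True):
    rate = wins[course] / matchups[course]
    print(f"{course}: {rate:.2f} win rate over {matchups[course]} matchups")
```

Because no rater is ever asked to compare across courses they haven't seen, the scale-consistency problem from the Oakmont/Merion example above mostly drops out; the trade-off is that thinly played courses end up with very few matchups.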
Ultimately, there is no perfect answer or solution in a purely quantitative exercise. Using wider "Tiers" as I mentioned earlier could eliminate some of the obsession over "linear" rankings ("Woohoo -- we moved from #52 to #41"), and the "head-to-head" feature smooths out "inflationary grading."
At the end of the day, I'll take the rankings with a grain of salt, because I've walked off Highly Ranked courses going "really?" and played Unranked Courses that I would play 10 times out of 10 given the option. There's no replacement for the qualitative discussion and comments that explain a rater's feelings, which is why I'd probably just turn to people here for suggestions / thoughts.