As some here know, when I was writing for GOLF Magazine in 1983, I helped put together the first ranking of golf courses that actually listed all of the courses in order from 1 to 100, instead of just putting them in groups of ten.
It wasn't really a deliberate decision on my part. We were just the first to send out a ballot and have all the panelists grade all of the courses they had seen, which provided numbers to back up the rankings. [I did not even realize, until right after the ranking came out, that GOLF DIGEST had never done this to that point.] When I had about half the ballots back, I sent the editor, George Peper, an update on how things stood, and he got excited: "You mean we can have a #1 course instead of just a top ten?"
[Oddly, that first time, with a small data sample of only 50 or 60 raters, two courses finished in a flat tie for the #1 position, and George broke the tie in favor of Muirfield over Pebble Beach by excluding his own vote, which was higher for Pebble. I wonder how many current editors of rankings would break the tie by going with the course they liked less?]
Jon Cummings and others with a background in math and science frequently argue that you shouldn't have rankings so precise, because raw scores of a 7 or 8 cannot be parsed fairly at four decimal places [the "significant digits" rule in math]. I suppose that's true, even though all of this stuff is just opinion, not scientific "data". But if you followed that rule, you would be listing 19 courses tied for 46th place, 7 courses tied for 65th, etc., and that would look pretty stupid. Likewise, if you argue the data is only good enough to distinguish a top ten from a second ten, you would still be letting the third or fourth decimal place separate 10th from 11th, and then making an even more significant judgment that the #11 course is "second ten". That last point is why I recommended to George that we might as well do the whole list in order. The data is not precise enough to tell us a course should be 11th, but it does strongly suggest it's closer to 10th than to 20th.
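For anyone who wants to see Jon's point in actual numbers, here is a quick, purely hypothetical sketch; the grades, panel size, and courses are all made up for illustration, not real ballot data. Sixty imaginary raters grade a hundred courses with nothing but 6s through 9s: the full-precision averages will happily spit out what looks like a precise 1-100 order, but round them to the single decimal place the raw grades can honestly support and the list collapses into a handful of big tie groups, which is exactly the "significant digits" objection.

```python
import random
from collections import Counter

random.seed(1)
NUM_RATERS = 60      # roughly the size of that first panel (hypothetical)
NUM_COURSES = 100

# Each imaginary course gets integer grades clustered around 7 and 8.
averages = {}
for course in range(1, NUM_COURSES + 1):
    grades = [random.choice([6, 7, 7, 8, 8, 9]) for _ in range(NUM_RATERS)]
    averages[course] = sum(grades) / len(grades)

# At full precision, the averages put all 100 courses in a tidy-looking order...
full_order = sorted(averages, key=averages.get, reverse=True)
print("Top five at full precision:", full_order[:5])

# ...but round to one decimal place, about all a 1-10 grade can support,
# and the field collapses into a few large tie groups.
ties = Counter(round(avg, 1) for avg in averages.values())
print("Distinct one-decimal averages:", len(ties))
print("Largest tie group (average, number of courses):", ties.most_common(1)[0])
```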
But, from the beginning, I understood as well as anyone that there was often no more than an eyelash separating 25th and 31st places, and anyone who put too much stock in a course falling six places on a ranking did not understand the nature of rankings. Unfortunately, the list of people who do not understand this now seems to include everyone from editors to readers to green committee chairmen. And there are lots and lots of clubs out there spending $$$ doing work to try to improve their ranking, without any understanding of just how capricious the rankings are.
One of the ironies of such rankings is that the longer and more deeply you are involved with them, the less seriously you will take them, because you start to understand you're just splitting hairs. But kudos to the new editors of all of these rankings, including Ran and Derek Duncan and Chris Bertram, for at least trying to make them better while they still believe it's possible.