News:

Welcome to the Golf Club Atlas Discussion Group!

Each user is approved by the Golf Club Atlas editorial staff. For any new inquiries, please contact us.


Matt_Ward

I always frown upon consensus formulas because invariably you get the same "name chaser" people providing propped-up numbers to the same tired die-hards. Digest is not alone in this -- you can see it with GW.

Hordes of so-called "expert raters" don't really do much for me. What's becoming painfully apparent is that beyond all the mathematical wizardry of various rating categories and rating numbers, the end result fails to provide something of quality.

Frankly, I would much rather see the mindset of one person or a really pared-down grouping of people. No doubt that would place a tremendous need for these folks to travel to see what is happening.

Years ago I used to really look forward to seeing what pubs like Digest would say. Not any longer. It's time for a major overhaul because so much of what is truly present here in the States is not being considered.

My father was fond of saying the following ... "You can work with ignorance -- you just can't help stupidity." I leave it to others to decide where this latest version of America's best falls.

Rob Rigg

  • Total Karma: 0
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #1 on: March 31, 2009, 01:54:41 AM »
Are there too many courses in America to rate?

At least too many decent to excellent courses?

Maybe the magazines should just give up - or they should be sliced into more categories.

At least with a "Classic" and "Modern" there can be movement in the modern category. Fewer raters may be able to better focus reviewing on just one subset as well.

I can never understand how so many older parkland courses remain on these lists. Nothing against Sahalee or Eugene CC but please, how can you think that either should be higher ranked than Ballyneal or Friars Head or even BGC?

There is no meritocracy in most of the rankings. The magazines either need to bring in some fresh but educated blood to complement the "established" raters or rating system or whatever they use -- or they should just stop with the ranking BS and figure out another way to sell magazines.

Matt_Ward

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #2 on: March 31, 2009, 10:17:17 AM »
Rob:

Let me address each question you posed.

It doesn't take much research -- any decent sleuth could do it -- to get a good beat on the buzz that's generated and discern where the "new" hot courses are. The Internet has opened up a wider range of communication points than in years past.

I know I have various key contacts scattered around the country -- some are panelists still and quite a few others are not.

Candidly, I find that too many of the "raters" are there just for the access dimension. They really don't know what it is they are rating, and despite the best of intentions at "orientating" them, such "teachings" are often meant to influence them to favor a particular style over others. You can see this with the Golfweek rating, in which certain designers are feted for a style while others of a different sort get less mention. Digest did the same thing with its latest top 100 parade of courses.

I don't think "giving up" is the right avenue but a clear desire to reassess what's being provided is in order. There will never be the 100% foolproof right listing. But the capacity of raters must go beyond just simply elevating "X" number of courses from the same design style that you see over and over again.

Rob, I agree with you on the tree-infested layouts that get mentioned. I never understood the fanfare tied to places like Sahalee and Eugene. Anyone from those areas can see how some of the newer layouts that have recently come onto the scene are superior in a variety of ways.

I have tuned out what the magazines say because a consensus driven formula invariably means compromises of a number of types. I'd much rather get a listing from one person who is knowledgeable because the thought process is likely to be much more penetrating and insightful.

Sean Leary

  • Total Karma: 0
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #3 on: March 31, 2009, 10:31:38 AM »
Quote from: Matt_Ward
I always frown upon consensus formulas because invariably you get the same "name chaser" people providing propped-up numbers to the same tired die-hards. Digest is not alone in this -- you can see it with GW.

Hordes of so-called "expert raters" don't really do much for me. What's becoming painfully apparent is that beyond all the mathematical wizardry of various rating categories and rating numbers, the end result fails to provide something of quality.

Frankly, I would much rather see the mindset of one person or a really pared-down grouping of people. No doubt that would place a tremendous need for these folks to travel to see what is happening.

Years ago I used to really look forward to seeing what pubs like Digest would say. Not any longer. It's time for a major overhaul because so much of what is truly present here in the States is not being considered.

My father was fond of saying the following ... "You can work with ignorance -- you just can't help stupidity." I leave it to others to decide where this latest version of America's best falls.

Matt,

Wouldn't Golf Magazine's model work this way? The panelists there have each seen the great majority of the great courses, so that is the one that I always have felt works best. Not perfect, but it seems to be the best way to me.

Matt_Ward

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #4 on: March 31, 2009, 10:43:19 AM »
Sean:

I see the Golf Magazine model as being more of the "well connected" types who tend to play the traditional layouts but won't venture off the well worn path to play some of the lesser known or hyped layouts.

Sean, I am a believer that quality design is now taking root in a range of places. I have been an advocate of the mountain time zone, and there are plenty of quality layouts there equal to anything you see elsewhere.

The problem with panels is that they often fall under the "club" mentality. You get someone who simply invites into the "club" those who generally follow the dictum of the person organizing it. Hence -- the outcome of such "findings" can be rather predictable.

Sean, at the end of the day there are no "perfect" systems -- I'd much rather get the thoughts of an individual free of consensus contamination and see how much elasticity they have within their own tastes.

Jeff_Brauer

  • Total Karma: 4
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #5 on: March 31, 2009, 10:53:43 AM »
Matt,

You already know my opinion, but I don't think a small group of gca "intelligentsia" telling us all what is good is any better than a larger group with a wide cross section telling us what is good.  Each has its inherent biases and I don't see any way to get around them, unless we all just agree that those small rating groups are simply smarter than we are.

I don't really know what bias exists on existing panels and I do appreciate the fact that you want to get out and see the new stuff rather than rely on the same old lists.  But the biggest debates seem to come when new courses replace old favorites, so again, your list would generate the same amount of debate.

I know you will answer point by point but there is no need to do so on my account. We simply disagree.  I like to think the majority rule this country is founded on isn't all that bad an idea, and believe wide consensus is the way to go philosophically, even if we don't like the results, much like in our elections.  And yes, the two party system often produces middle of the road laws that satisfy no one, much like these rankings.

But, in the end, rankings just aren't all that important in the big scheme of things.  If things like terrorism and the current economy don't convince you, what will?
Jeff Brauer, ASGCA Director of Outreach

Matt_Ward

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #6 on: March 31, 2009, 11:12:53 AM »
Jeff:

You're absolutely right Jeff -- let's simply rely on the "man on the street" thinking for determining the top designs here in the States. That seems to be working really well. ::)

Jeff, it's so predictable to label the few that have a real good sense of what design is about with the condescending tag of "intelligentsia." Maybe you might consider the fact that certain raters view McDonald's as a cuisine product when it's nothing more than fast food. That's what you get with huge consensus-driven outcomes. The Zagat guides are wonderful for that all-democratic approach even if the people providing the numbers are truly clueless on the subject at hand.

The sad reality, Jeff, is that while the desire to engage the "masses" in some sort of info poll sounds very democratic, the net result is that way too many people think they have an understanding of the design merits when really few do. That comes off to you as being elitist. Baloney. It's called being informed.

You see, Jeff, I do believe there are smarter people than me in a range of categories. I don't run NASA for that reason. Ditto in other areas. However, you do get people who, because they hit a golf ball, are then thrust into the evaluation realm, and ipso facto you get an opinion. Whether it's an informed one or not matters far less, if at all.

Jeff, raters need to be more akin to Lewis & Clark types -- those wanting to expand their universe and not simply mailing in the same old tired results. I see a place like Baltusrol, and while I salute its long-standing contributions to the game, I don't see the Lower Course as being one of the premier golf designs in 2009. Far too much rater time is spent on history and tradition, which doesn't have squat to do with the qualities of the design and whether such designs have been able to sustain themselves given the levels of competition.

Let me point out that ANGC's elevation to #1 by Digest is a joke. The place threw its core elements under the bus, and unfortunately when raters go there they are brainwashed with all the history and fanfare and therefore are seduced.

Jeff, I never said you are going to get a "perfect" list. No doubt when new courses take root over old -- those championing the old timers will cry foul.

However, few people really travel across the nation to get a real handle on what is happening. You get people who tout their own neck of the woods and often times you can see the rigid results that ensue. My neck of the woods in the northeast is a good example of what I am writing about.

Quality design has certainly taken root in plenty of places throughout the USA and some of it has come from lesser names in the design world.

Jeff, you fall back on the erroneous idea that "majority" rule counts more than having people who are well versed in just what it is they are reviewing. Give people a bag of clubs and all you can say is that they play golf -- they are not necessarily golfers. So it is with raters -- they may place numbers on a piece of paper, but what they see and what they really UNDERSTAND are two entirely different matters.

So be it ...

Lou_Duran

  • Total Karma: -2
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #7 on: March 31, 2009, 11:18:26 AM »
Matt,

I enjoy the lists regardless of the aforementioned problems (some which are probably offsetting).  Jon Cummings makes some good points on the other GD thread regarding statistical methods, but I still always look forward to the rating editions.  If nothing else, it gives everyone the opportunity to rehash the issues and entertain a few new ones.  With changing preferences as well as frequent renovations, there should be some movement in the lists.

Though I am not sure it would be an improvement over the verdict of the general panels, I wonder if tallying the ballots of a select subset of the more experienced, well-traveled panelists (say those who have played 50% or more of the courses on the list) would yield a more "accurate" ranking.  I've been told that one or more of the magazines has/have a small group of "super raters" who already "advise" on occasion, so such a summary shouldn't be difficult to compile.  If nothing else, for internal purposes and quality control, something like this might be revealing.

As to Sahalee and Eugene, I liked the former better than the latter, but don't have an issue as to their placement on the lists.  After the top 40 or 50, the next 150 or so are probably not separated by much.  I went through a very time consuming iterative process on the bottom half of the list to assign a numerical value, and was generally left unsatisfied.  I assumed that large numbers took care of the problem.   
« Last Edit: March 31, 2009, 11:20:55 AM by Lou_Duran »

Sean_A

  • Total Karma: 2
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #8 on: March 31, 2009, 11:23:43 AM »
Matt

I think the current major publications' systems work just fine for what they are intended to accomplish.  It doesn't matter much one way or the other.  I know you are against all of these systems and want a Czarist regime.  Why don't you just list your top 100 (like your Metro 50) and be done with it?  That thread seemed to work well and many people enjoyed it.  Crack on my man.

Ciao
New plays planned for 2025: Machrihanish Dunes, Dunaverty and Carradale

Matt_Ward

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #9 on: March 31, 2009, 11:40:41 AM »
Sean:

I'm not touting some sort of Czarist regime, but mass uninformed opinions really serve no point. Yes, I did a metro top 50 and it did generate plenty of thoughts. I like to hear what others think when they post such stuff. Unfortunately, few do -- likely because they are worried about future access issues and the like.

Lou:

With all due respect -- the depth of quality design runs a good bit deeper than just the top 50, and I don't believe the next 100 or 150 courses are really that close to one another.

Lou, people can convince themselves of anything if given the opportunity. From the travels I have made over the last 20 years I have seen clearly how the bar for quality design has been raised. Frankly, plenty of the so-called "classic" courses have been propped up because of tradition when they should have been tossed aside as creaky, dusty relics from years ago.

That's why a number of smart classic courses have seen fit to polish what they have. Places like Plainfield come quickly to mind -- there are a number of others. Unfortunately, the thinking (shall I call it that) refuses to push up those who are deserving and continues to salute those who have bastardized their gems (see ANGC as case study #1).

Lou, the "some movement" you mentioned happens in reverse.

That's the ironic thing involved here.

I will say this again -- there is no perfect system. It is subjective. But I learn a good deal from those who have some insights to provide. I don't see the Digest list as being insightful at all.

Regarding Sahalee and Eugene -- I see them benefiting from the hometown elevation points they receive. They are wonderful for their respective areas -- they are not nationally outstanding courses, at least based on the ones I have played. Unfortunately, too many people erroneously believe that the best of their area can then be transplanted to the national stage of the top-tier greats. That's not the case and you know this.

On the flip side -- there are those who forever "tag" a location of the country as being woeful for golf design -- Texas had the tag and may still have it with many others.

Being a rater means having the capacity to think beyond the predictable maintenance of the same old characters. Like I said before -- I learn more when individuals expose their own listings than from getting some sort of hodgepodge of courses thrown together.

TEPaul

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #10 on: March 31, 2009, 11:47:34 AM »
"Frankly, I would much rather see the mindset of one person..."



Matt:

So would I. However, if done that way, the problem becomes finding someone with real credibility who could educate readers on golf course architecture in a general way that would suit a fairly broad spectrum of legitimate preference and opinion.

I guess one could say the likes of Hutchinson, Darwin and even Wind once filled that role pretty effectively.
« Last Edit: March 31, 2009, 11:49:08 AM by TEPaul »

Jeff_Brauer

  • Total Karma: 4
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #11 on: March 31, 2009, 11:52:15 AM »
Matt, is that you? Are you saying something to me?

Actually, I think Lou might have it right - collect the largest samples with the existing system, and have a second level of review - either by taking out high and low numbers, or re-running the numbers using only the longest serving panelists (or ones who have seen the most courses in the last two years) 

I can see continuing tweaking of the magazine rankings, and in fact they do that.  I can also see continuing debate about the results.  What 101st-ranked course couldn't find fault with the top 100?
Jeff Brauer, ASGCA Director of Outreach

Matt_Ward

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #12 on: March 31, 2009, 12:03:12 PM »
Jeff:

Yes, are your ears open?

The range of quality golf courses has expanded, and the myth that we only have 40-50 real superstars while the next 150-200 are t-h-a-t close is not really accurate -- at least from the visits I have made. I see a second bubble developing in which the second 50 or so courses have moved away from the outer rim and are now closer -- not in all cases, mind you -- to the elite tier of layouts.

More tweaking can be done, but frankly the horde mentality of adding raters isn't going to do that. It will simply reinforce old trends and keep the confusion in place, as a number of people have pointed out. Digest is stuck on the belief that only private layouts occupy the highest reaches -- much has changed, and with each "new" poll they announce it becomes even clearer to me and others what they are really missing.





TEPaul:

I don't expect there to be "one guru" or "the source."

No doubt when people read certain newspapers like the NY Times and WSJ you will get divergent thoughts. That's fine with me. Instead of polls - one needs to have some real education coming from a variety of key sources.

There won't be a "suit(ing) a fairly broad spectrum of legitimate preference and opinion" because I, and likely others, don't want 100% agreement. Just provide some sort of carefully reasoned opinions. I can respect differences of opinion when I know the person has done their homework.

That is one chief reason why I have always enjoyed "Confidential Guide." I know Doak's comments -- don't have to agree with them all -- but I can come to appreciate his take and from that hopefully improve my own.

Lou_Duran

  • Total Karma: -2
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #13 on: March 31, 2009, 12:28:50 PM »
For me it is axiomatic that one can't rank or prioritize things or courses he has not seen or played.   Panelists do not rate the top 100 courses they've played, but somehow assign a value to those they have seen relative to some or many they have not.  I suspect that more than a few panelists use the existing list as a "guide," or even the opinions of their trusted colleagues, and in some respects this tends to reinforce the status quo.  I understand that the large database and the nature of statistics are such that many of the errors are self-correcting, and that perhaps much movement is unlikely.

There are various sorts of smoothing techniques.  With today's computing power, I'd think it might be possible to write highly sophisticated algorithms that put more weight on ratings from experienced raters, in addition to weighting by the number of ballots an individual course received.  For example, a 10 for Sahalee from a rater who has seen only a couple of courses in the Top 10 might be worth somewhat less than a 6 from raters who've seen eight of them.  Maybe the stat and computer experts here can speak to this.  On the other hand, as some have noted, ratings hardly rise to the seriousness of rocket science or world peace.
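The experience weighting described above takes only a few lines of code. This is a purely hypothetical sketch -- the +10%-per-Top-10-course rule, the function name, and the sample ballots are all invented, not anything any magazine actually uses:

```python
# Hypothetical sketch only -- the weighting rule (+10% per Top 10 course
# seen) and the sample ballots are invented for illustration.

def weighted_score(ballots):
    """Average a course's ballots, weighting experienced raters more.

    Each ballot is (score, top10_seen), where top10_seen is how many
    current Top 10 courses that rater has played.
    """
    total = weight_sum = 0.0
    for score, top10_seen in ballots:
        w = 1.0 + 0.1 * top10_seen  # arbitrary experience bonus
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# A 10 from a rater who has seen only 2 of the Top 10 is outvoted by
# 6s from raters who have each seen 8 of them:
print(round(weighted_score([(10, 2), (6, 8), (6, 8)]), 2))  # prints 7.0
```

With equal weights those three ballots would average 7.33; the experience bonus pulls the result toward the better-traveled raters.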

Matt,

Your "myths" might be someone else's reality.  I happen to agree with you that there has been a lot of great work done in the last 10 to 15 years.  However, rankings are relative by definition, and I stand by the rough order of magnitude of my numbers.  Based on what I've seen, admittedly a sample probably less than half the size of yours, I have little trouble coming up with my top 20 or so.  It gets progressively more difficult for the next 30 +/-, and if given a test on the next 50, I may not be able to replicate my results (though I would get close).  Perhaps this is a reason why I am not a rater any longer.   :(

Mike_Cirba

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #14 on: March 31, 2009, 12:41:25 PM »
If the magazines want to improve their listings, one thought would be a surprise phone quiz asking each panelist to name three architecture-related books they've read.

Perhaps they are all lurkers, but I've always been disappointed in how few GD raters actually participate here, because the fellows who are here add a great deal.  Where's everyone else?

Tom Huckaby

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #15 on: March 31, 2009, 12:45:13 PM »
Quote from: Mike_Cirba
If the magazines want to improve their listings, one thought would be a surprise phone quiz asking each panelist to name three architecture-related books they've read.

Perhaps they are all lurkers, but I've always been disappointed in how few GD raters actually participate here, because the fellows who are here add a great deal.  Where's everyone else?

Perhaps living real lives?

Perhaps they don't care for the bashing?

Seems to me the 795 (GD raters who do NOT participate here) are the sensible ones.... we who are in here are like Christians in Mecca.... or Muslims in Vatican City....

 :D

TEPaul

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #16 on: March 31, 2009, 12:55:28 PM »
"There won't be a "suit(ing) a fairly broad spectrum of legitimate preference and opinion" because I, and likely others, don't want 100% agreement. Just provide some sort of carefully reasoned opinions. I can respect differences of opinion when I know the person has done their homework."


Matt:

I couldn't agree more. I didn't mean to say there should be a 100% agreement. That should never be the point or the goal; quite the opposite, in fact, and that is precisely why there are so many different types and styles of golf architecture out there and should always be. That very diversity and diversity of opinion is probably the fundamental strength and interest of golf architecture and even golf itself.

One single truly intelligent golf architectural analyst and critic, I would think, could be capable of explaining these distinctions in architecture and why they should be.

The only real problem arises (and certainly hangs on) when anyone tries to combine it all into one single golf course to attempt to please everyone and every opinion. That of course is a virtual impossibility, and no competent golf architect should ever seriously consider such a Pollyannaish notion!

Mike_Cirba

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #17 on: March 31, 2009, 01:55:30 PM »
Quote from: Tom Huckaby
If the magazines want to improve their listings, one thought would be a surprise phone quiz asking each panelist to name three architecture-related books they've read.

Perhaps they are all lurkers, but I've always been disappointed in how few GD raters actually participate here, because the fellows who are here add a great deal.  Where's everyone else?

Perhaps living real lives?

Perhaps they don't care for the bashing?

Seems to me the 795 (GD raters who do NOT participate here) are the sensible ones.... we who are in here are like Christians in Mecca.... or Muslims in Vatican City....

 :D

Huck,

No comment on the Pop Quiz??   ;)

I think if GD requires a playing handicap of 5 or below, it shouldn't be much of a stretch to at least ask these student-athletes to read, as well!  ;D

As far as living real lives, that's no way to be a serious golf course nutjob!   What kind of all-consuming passion is that?!   Seriously...you know we're all mentally disturbed, including me and you.    Why are the sane and boring given a free rating pass?  ;D


Tom Huckaby

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #18 on: March 31, 2009, 01:59:38 PM »
Mike:

I've said too much already.  Aren't I supposed to be on an internet-free island?

But my comments on this place v. the real world stand.

And I'll take the real world any day.

TH

Mike_Cirba

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #19 on: March 31, 2009, 02:02:48 PM »
Quote from: Tom Huckaby
Mike:

I've said too much already.  Aren't I supposed to be on an internet-free island?

TH

Tom,

Understood...have one on me, please.

Matt_Ward

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #20 on: March 31, 2009, 02:39:27 PM »
Lou:

A weighted formula like the one you outlined can work, but you would need people to really be truthful about the full range of courses they have played.

I said this some time ago but the vast preponderance of people are more likely "regional" panelists -- not national ones.

Digest, just like Golfweek, can easily categorize people but they'd rather just put them into one big bowl and let the numbers speak for themselves.

As you & others can see -- the numbers have certainly spoken for themselves.

Jeff_Brauer

  • Total Karma: 4
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #21 on: March 31, 2009, 03:17:30 PM »
Matt,

Call Obama. He has appointed czars of nearly everything else and maybe he would appoint you czar of golf course rankings. That's what you really want, right?

If the US Govt won't comply and give you the recognition you so truly deserve, then just call some web outlet and create your own.  You can be the Mel Kiper of golf rankings.  Or, call the magazines.  While they have staff in place, with the current mag business climate, if you offered to work for free I am sure they would drop their existing raters in a heartbeat.
Jeff Brauer, ASGCA Director of Outreach

Jim Franklin

  • Total Karma: 0
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #22 on: March 31, 2009, 03:28:08 PM »
Quote from: Tom Huckaby
If the magazines want to improve their listings, one thought would be a surprise phone quiz asking each panelist to name three architecture-related books they've read.

Perhaps they are all lurkers, but I've always been disappointed in how few GD raters actually participate here, because the fellows who are here add a great deal.  Where's everyone else?

Perhaps living real lives?

Perhaps they don't care for the bashing?

Seems to me the 795 (GD raters who do NOT participate here) are the sensible ones.... we who are in here are like Christians in Mecca.... or Muslims in Vatican City....

 :D

I got your back.
Mr Hurricane

Mike Hendren

  • Total Karma: -1
Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #23 on: March 31, 2009, 03:41:22 PM »
A minimum of 250 raters on a panel.  Matt Ward - and 249 others to tell Matt how smart he is.

Bogey
Two Corinthians walk into a bar ....

Mark Bourgeois

Re: Can Consensus Ratings Work & How Large a Panel is Really Needed ?
« Reply #24 on: March 31, 2009, 04:43:47 PM »
Quote from: Lou_Duran
For me it is axiomatic that one can't rank or prioritize things or courses he has not seen or played.   Panelists do not rate the top 100 courses they've played, but somehow assign a value to those they have seen relative to some or many they have not.  I suspect that more than a few panelists use the existing list as a "guide," or even the opinions of their trusted colleagues, and in some respects this tends to reinforce the status quo.  I understand that the large database and the nature of statistics are such that many of the errors are self-correcting, and that perhaps much movement is unlikely.

There are various sorts of smoothing techniques.  With today's computing power, I'd think it might be possible to write highly sophisticated algorithms that put more weight on ratings from experienced raters, in addition to weighting by the number of ballots an individual course received.  For example, a 10 for Sahalee from a rater who has seen only a couple of courses in the Top 10 might be worth somewhat less than a 6 from raters who've seen eight of them.  Maybe the stat and computer experts here can speak to this.  On the other hand, as some have noted, ratings hardly rise to the seriousness of rocket science or world peace.

Matt,

Your "myths" might be someone else's reality.  I happen to agree with you that there has been a lot of great work done in the last 10 to 15 years.  However, rankings are relative by definition, and I stand by the rough order of magnitude of my numbers.  Based on what I've seen, admittedly a sample probably less than half the size of yours, I have little trouble coming up with my top 20 or so.  It gets progressively more difficult for the next 30 +/-, and if given a test on the next 50, I may not be able to replicate my results (though I would get close).  Perhaps this is a reason why I am not a rater any longer.   :(

Thanks for the bone, Lou!

http://golfclubatlas.com/forum/index.php/topic,37764.msg778910.html#msg778910
Quote
For starters, one's opinion could be discounted according to how many times they'd seen the course within a time period.  For example, count only third-look reviews.  Or third views count full, second just 67 percent, and first just 33 percent. Or a simple bonus system could be put in place where the first look was at standard or par, then each additional review (provided within a certain time frame plus the course had not been substantially modified) counted an additional 1-2 percent, up to a max of, say, 10-15 percent.

So the first time Bobby Jones saw TOC it would not count nearly as much as the 10th time, meaning that his early views, which by his admission were wrong, would not have damaged the credibility of the rankings.

Similarly, Tom Doak's opinion of TOC would count more than many others', given the time and effort he put in there studying the course and serving as a caddie.

As far as anonymity goes, just have fewer panelists and / or review fewer courses. So what?  It would eliminate all the "random walk" B.S. you get in these lists, anyway, like Pebble rising up and down, which appear to serve no purpose other than to manufacture "news".
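The look-count discounting described in the quote above is easy to sketch in code. A hypothetical illustration only -- the 33/67/100 percentages are taken from the post, while the function names and sample scores are invented:

```python
# Sketch of the look-count discounting described above. The 33/67/100
# percentages come straight from the post; everything else is invented.

def look_weight(n_looks):
    """First look counts 33%, second 67%, third and later 100%."""
    return {1: 0.33, 2: 0.67}.get(n_looks, 1.0)

def discounted_average(reviews):
    """reviews: list of (score, n_looks) pairs, one per rater."""
    num = sum(look_weight(n) * score for score, n in reviews)
    den = sum(look_weight(n) for _, n in reviews)
    return num / den

# A gushing first impression carries a third the weight of a
# considered third-look review:
print(round(discounted_average([(9, 1), (7, 3)]), 2))  # prints 7.5
```

Under this scheme Bobby Jones's admittedly wrong first look at TOC would barely dent the tally that his tenth look dominates.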


Quote
Well, every course does need to be seen -- but everybody doesn't need to see every course.

There are statistical methods that could be used, relatively simple methods, to determine whether one reviewer's first look merits management sending them back for another and / or sending others to look at it, too.

Specifically:
1. The historical accuracy of the reviewer (adjusted for a few things, like the fact that it's a first-time look)
2. Bayesian analysis (basically, a way to guess how likely the course is to make some sort of list somewhere -- a first-year stats course is all anyone would need to apply the concept)
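Point 2 really is first-year stats: one application of Bayes' rule. A toy sketch -- every probability below is invented purely for illustration:

```python
# Toy Bayes'-rule version of point 2. Every probability here is made
# up for illustration; nothing below reflects any magazine's data.

def posterior(prior, p_rave_if_worthy, p_rave_if_not):
    """P(course is list-worthy | one rave first-look review)."""
    num = p_rave_if_worthy * prior
    return num / (num + p_rave_if_not * (1 - prior))

# Suppose 5% of new courses are list-worthy, worthy courses draw raves
# 80% of the time, and the rest only 10%. One rave is a strong signal
# to send more panelists for a second look:
print(round(posterior(0.05, 0.8, 0.1), 3))  # prints 0.296
```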

Anyway, word of mouth / post is really strong, which for intellectual honesty is a problem (Keynes's "beauty pageant" problem) but does ensure courses of potential significance get seen.  Nobody's missing Chambers Bay.

More importantly, if ratings really are important, then a multiple-look philosophy would do more than just make for a better list.  (Really, who cares about that?)  It would reward designers for getting the details right, for going beyond the dumb / accessible in design and giving us subtlety and sublimity, etc.

Mark