Sweet Lou
Being an ad man, you must realize that perception can be more important than reality. The issue with free access and free whatever while at a club leaves the rater open to the perception that he can be influenced by matters not related to the task at hand. Every rater believes, or at least will publicly state, that he is not influenced by access and perks. If they are telling the truth, why then does the perception differ? If they aren't telling the truth, why then does perception matter?
For any person objectively reviewing the rater process, there is only one conclusion to draw: the system is badly flawed.
Ciao
Sean,
In my working life I was a marketing type only insofar as marketing was an integral part of my real estate sales efforts, and I do not consider myself an advertising expert. I did have financial planning and control responsibilities for several years, long ago, over a marketing function with large ad budgets, so I did take some interest in how it worked and how effective it was.
The issue of perception vs. reality as it relates to conflicts of interest has some bearing on the subject, but not all that much outside of this forum and some corners of the industry. For the perceived conflict as you and others describe it to be material, the lists would have to be extremely important, and I just don't see that they are.
The golf industry has much bigger fish to fry than whether GD or GW gets it wrong because some courses waive a guest fee to attract raters. As in all endeavors undertaken by less-than-perfect humans, there are any number of things that can account for the compilations being less than optimal. Terry Lavin points to one possible source. Another is one that Matt Ward has alluded to numerous times: the depth and breadth of experience in the rater corps. I understand that large sample sizes can, statistically, overcome relative differences in exposure to the top courses, but I still have not worked out in my mind how a person who has only seen 10-20 of the top 100 candidates can provide an accurate ranking (I get that one vote can't affect 50 others much, and that the consistently top 20+ courses have many more votes than that).
Anyway, I can't speak on behalf of the many raters I've known over the last 10+ years, but I can tell you with a straight face that I've never been persuaded that a course is better or worse based on the amount I've paid (or not paid), how I've been treated by the course staff, or the state of my game when I played the course.
If the system is "badly flawed," as you claim, then I am sure most of its users discount it accordingly. I am not a national rater, but I defend the process, however imperfect it might be, because I do find it useful and entertaining. I am somewhat curious about just how much importance other non-raters put on it. After all, it is not rocket science, nor terribly consequential in terms of compelling a golfer to do something.
BTW, I've been far more disappointed with theatre, movie, and restaurant reviews offered by "professional" critics who were not comped (and so, presumably, unconflicted) than I've been by using the ratings to select courses in unfamiliar places. I can only remember playing one dud suggested by a top 100 list, and even that one has many strong supporters. Might it be that when one's preferences aren't reflected in the rankings, we look for sinister reasons why that is so? I mean, it surely can't be our own take or preference!