JC Jones

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #50 on: August 16, 2011, 10:19:20 AM »
Dan, the ballots are secret. There is no peer pressure or groupthink.

If anything, Brad goes out of his way NOT to influence anyone's opinion. When he talks, it's not about how someone should think, but rather, to get them to think. If he says anything negative about a course's architecture, it's more a practical matter about the nuts and bolts of construction, not an opinion of what he likes or dislikes.

There is too much disinformation spreading. Ben Sims' thread reflects that.

That is a healthy dodge ;) ;D ;D
I get it, you are mad at the world because you are an adult caddie and few people take you seriously.

Excellent spellers usually lack any vision or common sense.

I know plenty of courses that are in the red, and they are killing it.

Michael Wharton-Palmer

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #51 on: August 16, 2011, 10:24:15 AM »
I cannot speak for others...but I rate what I see...even if that means being barred from the number 14 course in the "nation."
So to say that raters are under pressure...perhaps some, but I have never been told what numbers to submit...or...been encouraged one way or the other.

Now what happens to my scores after they are submitted...I don't know and really don't care.
I believe that I am there to cast my very, very humble opinion, and the powers that be do with that what they may.
« Last Edit: August 16, 2011, 10:26:17 AM by Michael Wharton-Palmer »

JC Jones

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #52 on: August 16, 2011, 10:29:38 AM »
I cannot speak for others...but I rate what I see...even if that means being barred from the number 14 course in the "nation."
So to say that raters are under pressure...perhaps some, but I have never been told what numbers to submit...or...been encouraged one way or the other.

Now what happens to my scores after they are submitted...I don't know and really don't care.
I believe that I am there to cast my very, very humble opinion, and the powers that be do with that what they may.

Have you ever had any "clear outliers," and if so, were they discarded?
I get it, you are mad at the world because you are an adult caddie and few people take you seriously.

Excellent spellers usually lack any vision or common sense.

I know plenty of courses that are in the red, and they are killing it.

Michael Wharton-Palmer

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #53 on: August 16, 2011, 10:33:02 AM »
My clear outlier would be Whistling Straits...which I really do not rate very highly at all...but whether or not it was discarded, I simply don't know.

Lou_Duran

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #54 on: August 16, 2011, 10:38:45 AM »
Raters are like abortion doctors.  Some do it for the money and some feel like they are providing a noble service.  Many of my best friends in the golfing world are raters.  One of the reasons I never ask a man what he does for a living or if he is a rater is .....

.... that if he is a rater, he'll tell you.  And if he is not, why embarrass him?  ;)

JohnK,

You lament with great regularity that while you pay dues and green fees for your golf, raters, in many cases, get theirs for free.  You believe that dues- and green-fee-paying golfers are unfairly subsidizing the raters/freeloaders.

As a businessman, you know about capacity utilization and the difference between fixed and variable costs.  You also know that the vast majority of comp rounds consume tee times that would otherwise go unused, and that these are of next to no significance to the revenues and costs of the clubs that choose to host raters.  (Yes, you can argue that these clubs are forgoing the revenue the raters would pay if they were charged, but then you can't simultaneously charge, as you have on several occasions, that the primary reason raters exist is to get free golf.)

BTW, since we are among friends here, is your situation as a prosperous road builder that much cleaner?  Talk about "shovel-ready" subsidies!  How many clubs are you now a member of?  Might the "members"/taxpayers providing the largess not have similar qualms?  One big distinction is that members and daily-fee golfers have a choice as to whether they join a club or play golf.  Those of us in Texas cannot make an independent choice about our federal taxes going to pay for asphalt roads in southern IL.  Like with my putting lately, maybe I'm seeing something that's just not there.  Peace, brother!

Brent Hutto,

Perhaps the purity of your research at South Carolina was of a higher level than mine at Ohio State as an undergrad (maybe in line with JC Jones's appraisal of Big Ten institutions), but if we didn't deal with the outliers statistically, the work would have been of even lesser value (other than in terms of securing grants, the lifeblood of the relatively attractive lifestyle of graduate students and PhDs).  Winsorization, as far as I know, is accepted practice, widely used in research.  It helps to remove the bias of the vested or inexperienced rater who gives Torrey Pines or Valhalla a 10 (maybe he works there or it is the best course he has ever played) and Sandpines a 0 (because of the polarity of his views on golf design and the architect).
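(A minimal sketch of the winsorization idea, in Python, with entirely made-up numbers; the 10 and the 0 stand in for the biased outliers described above.)

import numpy as np
from scipy.stats.mstats import winsorize

# Hypothetical ratings for one course on a 0-10 scale; the 10 and the 0
# represent the vested and the axe-grinding outliers.
ratings = np.array([6.5, 7.0, 6.0, 7.5, 6.5, 10.0, 7.0, 6.0, 0.0, 6.5])

# Winsorizing pulls the lowest and highest 10% of scores in to the nearest
# remaining value rather than discarding them outright.
clipped = winsorize(ratings, limits=[0.10, 0.10])

print("raw mean:       ", round(float(ratings.mean()), 2))
print("winsorized mean:", round(float(clipped.mean()), 2))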

Adam Clayman,

Two of the best pieces of advice I received from a few "seasoned" raters were to stay within the statistical range and to never, ever stand out.  Unfortunately, both came posthumously.  There are a number of raters who use the prior year's list as the starting point, then give heavy consideration to the views of "those who must be pleased" in massaging their ballot.  Of course, I have no doubt that you take a different, independent approach.

    

Brent Hutto

Re: Course Raters- status quo or upset the apple cart
« Reply #55 on: August 16, 2011, 10:52:24 AM »
Lou,

The point I wanted to make is that how you treat the data is mostly determined by what you want to accomplish. If the mean value of the ratings is used, that makes sense under the assumption that there exists one true value for the course and the reason for many ratings is so that rater characteristics or errors will "average out". Eliminating disparate values can avoid a mean that is incorrect under that sort of assumption.

But if you were to assume that a given course will appeal differently to various types of individuals, the ratings that would be "outliers" under the first set of assumptions take on a particular benefit specifically because they are very different. Under this type of assumption you would consider a large number of ratings falling all around the same value to be redundant (or maybe you'd consider that clustering a separate parameter that's informative about the distribution of our posited "types" or species of individual preferences). This type of analysis most certainly does not eliminate disparate values.

From what little I know about the magazine panels, it seems clear that the former set of assumptions is in play and not the latter. Which is certainly fine by me. I just always take care to point out the Cheshire Cat principle that one's proper direction "...depends a good deal on where you want to get to...", not to score any points in favor of one destination or another. Or, put less generously, I'm being pedantic.
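(A toy illustration of the difference, assuming a polarizing course rated on a 1-10 scale; every number here is invented.)

import numpy as np

# Invented ratings for a polarizing course: half the panel loves it,
# half dislikes it.
ratings = np.array([9.0, 9.0, 8.5, 9.0, 3.0, 3.5, 4.0, 3.0])

# Under the "one true value" assumption, the mean is the answer...
print("overall mean:", ratings.mean())  # about 6.1, a view held by almost nobody

# ...under the "different courses appeal to different people" assumption,
# the split itself is the interesting finding.
print("fans' mean:      ", ratings[ratings >= 7].mean())
print("detractors' mean:", ratings[ratings < 7].mean())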

Tom_Doak

  • Karma: +2/-1
Re: Course Raters- status quo or upset the apple cart
« Reply #56 on: August 16, 2011, 10:58:23 AM »

But if you were to assume that a given course will appeal differently to various types of individuals, the ratings that would be "outliers" under the first set of assumptions take on a particular benefit specifically because they are very different. Under this type of assumption you would consider a large number of ratings falling all around the same value to be redundant (or maybe you'd consider that clustering a separate parameter that's informative about the distribution of our posited "types" or species of individual preferences). This type of analysis most certainly does not eliminate disparate values.

From what little I know about the magazine panels, it seems clear that the former set of assumptions is in play and not the latter.


Brent:

This is very well stated and a great observation.  I may send it to Joe Passov at GOLF Magazine, since I think they are the ones who would assume that it's okay for different courses to appeal to different individuals, while GOLF DIGEST's whole system is more geared to finding courses which fit its own criteria.

Brent Hutto

Re: Course Raters- status quo or upset the apple cart
« Reply #57 on: August 16, 2011, 11:16:08 AM »
Tom,

Sounds cool!

I must caution that, methodology-wise, assuming that courses vary and that your raters comprise a variety of characteristics that also matter is a tough nut to crack absent any secondary source of information about your raters. In other words, it only works if you have some way of identifying the "types" of raters in your pool and which rater is which type.

There are statistical approaches that would let you get at course-variation and rater-grouping simultaneously without prior knowledge of the way in which your raters group, but in my own experience they tend not to work reliably. But if you can come up with an a priori set of rater "types," it can be a very interesting way of letting your ratings tell a more nuanced story.

TRULY GEEKY POSTSCRIPT

There's even a class of statistical models called Latent Group Modeling (or something like that, it's been a decade since I last saw it) that can actually posit a set of types or species of raters with totally different perspectives but then assign fractional memberships to each rater in one or more of those latent groups. So just to take a top-of-head example you might assume four types of raters:

Adventure/Novelty Seekers
Card and Pencil Types
Eye-Candy Lovers
Match Play Bandits

You round up a couple hundred raters and have them rate a bunch of courses. The analysis provides specific ratings (or rankings) for each course on behalf of each group of raters. But it does not assign each rater into just one group. For a given rater you get a proportion (percentage) representing the extent to which he or she gives ratings reflecting the perspective of each group. So my ratings might produce estimates of:

35% Adventure/Novelty Seeker
4% Card and Pencil Type
48% Eye-Candy Lover
13% Match Play Bandit

while Lou Duran's ratings reflect:

22% Adventure/Novelty Seeker
54% Card and Pencil Type
21% Eye-Candy Lover
3% Match Play Bandit

And then, as I said, you'd get four different perspectives on each course in the ratings. Each rating summary would represent one of those four underlying "types" of person experiencing the courses. These techniques are not really widely used, mostly because they tend to need data that's well behaved in some fairly squint-inducing, fine-print kinds of ways. But it's a cool concept, and on data it works on, the possibilities are great. Of course you do need a pretty good idea of whether your assumed set of categories is actually in play for your particular dataset (that's often the real rub).
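(A rough sketch of the fractional-membership idea. It uses an ordinary Gaussian mixture model rather than whatever the half-remembered latent-group method actually was, and the rater "types," scores, and group count are all invented.)

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Invented data: each row is one rater, each column that rater's score for
# one of five courses.  Two made-up "types" of rater are baked in so the
# model has structure to recover.
eye_candy = rng.normal(loc=[9, 5, 8, 4, 7], scale=0.5, size=(30, 5))
card_pencil = rng.normal(loc=[5, 8, 4, 9, 6], scale=0.5, size=(30, 5))
scores = np.vstack([eye_candy, card_pencil])

# A two-component mixture; predict_proba gives each rater a fractional
# membership in each component, much like the percentages above.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
memberships = gmm.predict_proba(scores)

print("per-component mean rating for each course:")
print(np.round(gmm.means_, 2))
print("one rater's fractional memberships:", np.round(memberships[0], 2))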

John Kavanaugh

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #58 on: August 16, 2011, 11:48:52 AM »
Lou,

All I know is that several raters quit paying dues because they became raters and preferred to access golf through that method. I have zero problem with anyone, rater or not, getting free golf as long as they support the fixed golf expenses at another course through dues.  This way they are paying for these empty tee times you so cherish. It is a metaphysical quid pro quo.  The simple fact is that golf can survive without Golfweek or its rater corps; the reverse is not true.

As soon as I am done dropping this load at the Michigan City Culver's, I am going dark on this subject for a minimum of seven days out of respect for my hosts.  Who are these people starting all these rater threads?  It disgusts me.


Lou_Duran

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #59 on: August 16, 2011, 12:23:35 PM »
Lou,

The point I wanted to make is that how you treat the data is mostly determined by what you want to accomplish. If the mean value of the ratings is used, that makes sense under the assumption that there exists one true value for the course and the reason for many ratings is so that rater characteristics or errors will "average out". Eliminating disparate values can avoid a mean that is incorrect under that sort of assumption.

But if you were to assume that a given course will appeal differently to various types of individuals, the ratings that would be "outliers" under the first set of assumptions take on a particular benefit specifically because they are very different. Under this type of assumption you would consider a large number of ratings falling all around the same value to be redundant (or maybe you'd consider that clustering a separate parameter that's informative about the distribution of our posited "types" or species of individual preferences). This type of analysis most certainly does not eliminate disparate values.

From what little I know about the magazine panels, it seems clear that the former set of assumptions is in play and not the latter. Which is certainly fine by me. I just always take care to point out the Cheshire Cat principle that one's proper direction "...depends a good deal on where you want to get to...", not to score any points in favor of one destination or another. Or, put less generously, I'm being pedantic.

I should know better than to engage a professional on a subject in which I am, at best, a mildly interested layman (I'm sorry to confess that I spent much of my time in quant and stats imagining instead that I was in the superior learning environment of a Doak 5 course a few miles away).

Pedantic?  Nah.  Making a distinction without a difference?

Of course, you are 100% right about polls, data, and manipulation of information to achieve the desired results.  We are frequently provided links on this site as wise revelation of gospel supporting our most cherished beliefs.  We could engage in endless duels of expert analysis citing fairly common, mechanically correct statistical methodologies without reaching agreement.  Among the biggest fallacies in research, particularly outside the physical sciences, is the underlying caveat: ceteris paribus.

The thesis of this thread is rather mundane, surely falling under central tendency.  I was unaware that "Golf" attempted to come up with a composite based on disparate preferences or criteria.  Does that mean that it provides no guidance to its raters and somehow aggregates the data?  Seems like this would take some serious manipulation of the information.  How many outliers would it take for any semblance of statistical significance?  What type of subjective analysis of the outliers (and what definition of an outlier) would be necessary to draw useful inferences?

Ratings are not rocket science.  I don't have the statistical horsepower to "prove" that offsetting errors and variances are sufficient to make them useful, but for my purposes, that explanation is sufficient.  Is Cypress Point or Pine Valley the #1 course in the U.S.?  Who knows?  I like to think that they are in the relevant range of, say, 1-10, and that's close enough for me.

JK,

Just because I recognize that tee times, like electricity, are gone if unused and produce no revenues, it doesn't mean that I relish the reality.  Actually, beyond the agronomic benefits of light traffic, perhaps it also adds considerable intrinsic value for the members in terms of convenience and exclusivity.  My only observation is that clubs which choose to host raters do so voluntarily with little negative effect on the bottom line, or perhaps you know some members as well who dropped out of their clubs because of too many comp rounds.

Many people drop memberships for various reasons: the economy, family responsibilities, declining skills, changes at their club, etc.  I suspect that the number of raters who have dropped out of private clubs in favor of free golf is as infinitesimally small as the percentage of my tax dollars ending up in your pocket for working in Michigan City.  Of course, I could be wrong.

BTW, your disgust is as genuine as the Washington elites running against Washington.  Precious!
« Last Edit: August 16, 2011, 12:28:08 PM by Lou_Duran »

Anthony Gray

Re: Course Raters- status quo or upset the apple cart
« Reply #60 on: August 16, 2011, 02:13:26 PM »


  Is it a negative for raters to have preferences?

  Anthony


Brent Hutto

Re: Course Raters- status quo or upset the apple cart
« Reply #61 on: August 16, 2011, 02:25:46 PM »
Is it a negative for raters to have preferences?

Any reasonable system will allow for the fact that individual raters have different preferences. Presumably the magazine panel systems deal with this by a combination of training the raters to put aside their personal preferences and averaging across a sufficiently wide range of individuals, so that the residual per-rater influence (whatever is left after the training) can wash out, and, if necessary, by cleaning up any "outliers".

Of course it is an empirical question as to whether a particular system accomplishes this goal or not. Again we can presume that the folks setting up and running the system are evaluating that question from time to time and adjusting as needed.

In any case, such a system assumes that individual-rater preferences are a nuisance effect to be systematically minimized so that the True Rating for each course can emerge...which assumes that a True Rating exists and that someone understands what it is. That is not an empirical question at all but rather a philosophical or epistemological one.

Anthony Gray

Re: Course Raters- status quo or upset the apple cart
« Reply #62 on: August 16, 2011, 02:33:18 PM »
Is it a negative for raters to have preferences?

Any reasonable system will allow for the fact that individual raters have different preferences. Presumably the magazine panel systems deal with this by a combination of training the raters to put aside their personal preferences and averaging across a sufficiently wide range of individuals, so that the residual per-rater influence (whatever is left after the training) can wash out, and, if necessary, by cleaning up any "outliers".

Of course it is an empirical question as to whether a particular system accomplishes this goal or not. Again we can presume that the folks setting up and running the system are evaluating that question from time to time and adjusting as needed.

In any case, such a system assumes that individual-rater preferences are a nuisance effect to be systematically minimized so that the True Rating for each course can emerge...which assumes that a True Rating exists and that someone understands what it is. That is not an empirical question at all but rather a philosophical or epistemological one.

  So how can the rating system be improved? Have more raters?

  Anthony


Brent Hutto

Re: Course Raters- status quo or upset the apple cart
« Reply #63 on: August 16, 2011, 02:39:35 PM »
As a statistician I'd say that the large magazine panels are pretty damned big already (how's that for technical jargon). To the extent that they might include a very wide range of golfers, maybe they could stand to be even larger. But my sense is they are thinking of raters as pretty much of a muchness, a fairly large group of somewhat similar individuals. If that's the case then they're probably past the point of diminishing returns for the courses which get lots of rater play and have done so over some substantial period of time.

The problem I'd imagine they do have is getting enough ratings of courses that either do not encourage rater play or that raters for whatever reason do not care to visit. In that case it's less a matter of adding more raters than of finding a way to get the existing rater pool to play those courses more.
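(A back-of-the-envelope illustration of the diminishing returns, assuming, purely for the sake of argument, that individual ratings of a course scatter with a standard deviation of about 1.5 points: the standard error of the panel's mean shrinks only as one over the square root of the panel size.)

import math

sigma = 1.5  # assumed spread of individual ratings for one course
for n in (5, 10, 25, 50, 100, 400):
    se = sigma / math.sqrt(n)
    print(f"{n:4d} ratings -> standard error of the mean ~ {se:.2f}")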

Greg Tallman

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #64 on: August 16, 2011, 02:41:40 PM »
Correction....Golfweek raters do NOT pay an annual fee...

Michael,

Assuming you sit down and plan an annual budget, can you say that you do not (and would not) include any monies earmarked for Golfweek, given your position as a panelist?

By the way don't forget to fix Brad's little agenda regarding a certain course in Mexico!  ;)

David Cronheim

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #65 on: August 16, 2011, 02:45:29 PM »
Mike,

Did you go to Cornell?

It was a coin flip between Cornell and MSU. I picked RTJ over Art Hills!

http://www.golfmsu.msu.edu/about

Glad to see another Cornellian - I went there both for undergrad and law school.
Check out my golf law blog - Tee, Esq.

Anthony Gray

Re: Course Raters- status quo or upset the apple cart
« Reply #66 on: August 16, 2011, 02:50:34 PM »
As a statistician I'd say that the large magazine panels are pretty damned big already (how's that for technical jargon). To the extent that they might include a very wide range of golfers, maybe they could stand to be even larger. But my sense is they are thinking of raters as pretty much of a muchness, a fairly large group of somewhat similar individuals. If that's the case then they're probably past the point of diminishing returns for the courses which get lots of rater play and have done so over some substantial period of time.

The problem I'd imagine they do have is getting enough ratings of courses that either do not encourage rater play or that raters for whatever reason do not care to visit. In that case it's less a matter of adding more raters than of finding a way to get the existing rater pool to play those courses more.

  I wonder how many different courses they play a year. And for some, do they actually even play the course rather than just tour it?

  Anthony


Brent Hutto

Re: Course Raters- status quo or upset the apple cart
« Reply #67 on: August 16, 2011, 02:55:39 PM »
Anthony,

The only ratings I've seen the innards of are on a much smaller scope than the big, national magazine panels. But in that case, it varies widely from course to course and rater to rater. Some raters play a whole lot of courses per year, others much fewer. Just as some courses get only a few rater plays and others get many.

Greg Tallman

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #68 on: August 16, 2011, 03:03:07 PM »
As a statistician I'd say that the large magazine panels are pretty damned big already (how's that for technical jargon). To the extent that they might include a very wide range of golfers, maybe they could stand to be even larger. But my sense is they are thinking of raters as pretty much of a muchness, a fairly large group of somewhat similar individuals. If that's the case then they're probably past the point of diminishing returns for the courses which get lots of rater play and have done so over some substantial period of time.

The problem I'd imagine they do have is getting enough ratings of courses that either do not encourage rater play or that raters for whatever reason do not care to visit. In that case it's less a matter of adding more raters than of finding a way to get the existing rater pool to play those courses more.

  I wonder how many different courses they play a year. And for some, do they actually even play the course rather than just tour it?

  Anthony



I think you'd be surprised. A quote from one of the newer panelists (actually two): "We have 23 more courses to play to have played the entire list... should have that wrapped up by next spring."

Others... played the entire top 100 by age 35 or so.

Played 3,750+ courses.

And one crazy panelist sits down and individually ranks his courses from 1 to roughly 1,200... he is crazy... and I will be leaving GCA now  ;)

Lou_Duran

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #69 on: August 16, 2011, 03:07:48 PM »
Talk about far out, a human being without preferences!  The quintessential "independent".

Brent,

I've been thinking for a while about a rating system, with or without formalized criteria, which would ask the participants to provide their ordinal ranking (1 to whatever) of the courses they have played from some finite list of candidates.  The facilitator would have a computer analyze the data based on a set of Solomonic heuristics (the trick): millions of iterations comparing how often, by whom, and in what order the courses appeared, with some mechanical form of weighting, to arrive at the final ranking.  I am assuming that the larger the number of "informed/discerning" participants, the better the results.

Questions: can such a system be designed (say 1000+ individual rankings of 10 to 200+ courses)?  Would the programming be subject to the same perception and social flaws as the current methodologies?  There are probably problems with internal correlation, but I think it might be a superior approach to having each individual rater submit a ballot on a widely varying slate of courses, all having three 10s, four 9.5s, 5-6 9s, and an increasing number of lower values as one proceeds toward the center of the candidate courses' bell curve.  Does any of this make sense?
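(One hedged sketch of the pairwise-comparison idea, in Python, over invented ballots: each rater lists only the courses he has played, best first, and the composite orders courses by their head-to-head win rate.)

from collections import defaultdict
from itertools import combinations

# Hypothetical ballots: each rater ranks only the courses he has played,
# best first.  Raters need not have played the same courses.
ballots = [
    ["Cypress Point", "Pine Valley", "Sand Hills", "Whistling Straits"],
    ["Pine Valley", "Cypress Point", "Whistling Straits"],
    ["Sand Hills", "Cypress Point", "Whistling Straits"],
    ["Pine Valley", "Sand Hills", "Whistling Straits"],
]

wins = defaultdict(int)
comparisons = defaultdict(int)

# For every pair a rater ranked, note which course came out ahead.
for ballot in ballots:
    for higher, lower in combinations(ballot, 2):
        wins[higher] += 1
        comparisons[higher] += 1
        comparisons[lower] += 1

# Composite ranking: courses ordered by pairwise win rate.
ranking = sorted(comparisons, key=lambda c: wins[c] / comparisons[c], reverse=True)
for place, course in enumerate(ranking, 1):
    print(place, course, round(wins[course] / comparisons[course], 2))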

Michael Wharton-Palmer

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #70 on: August 16, 2011, 03:25:54 PM »
Correction....Golfweek raters do NOT pay an annual fee...

Michael,

Assuming you sit down and plan an annual budget, can you say that you do not (and would not) include any monies earmarked for Golfweek, given your position as a panelist?

By the way don't forget to fix Brad's little agenda regarding a certain course in Mexico!  ;)


Greg..
Absolutely I can say that...
I subscribe to Golfweek, Golf World, Golf Digest, Golf Magazine and Golf International... by the way, the best of the bunch... and do so as a non-rater for all those other magazines.
As far as I am concerned, I get no kickbacks that influence my ratings whatsoever... I rate what I see, and in all honesty I like what I like and dislike what I dislike.

But I will try to change the apparent mistake in the Cabo area ;)

Brent Hutto

Re: Course Raters- status quo or upset the apple cart
« Reply #71 on: August 16, 2011, 03:29:00 PM »
Yes, that approach has some advantages over numeric ratings. There's a whole branch of statistics dealing with ranks rather than continuous quantities. Unfortunately it's been quite a long time since I took my obligatory one-semester course and I almost never encounter that type of data in my current field. But it's all quite well worked out and generally speaking there is a "ranks" equivalent of all the means, standard deviations, T-tests, correlations and so forth that we're more familiar with.

It has two advantages that come to mind. One is there's no assumption made about the distribution of scores (ratings) either in general or by any one rater. On a mechanical basis this eliminates certain problems caused when various raters distribute their scores differently (let's say one guy really lards his list with 9.5's and 10's while another spreads them out with more 9's and 8's and relatively few top scores). In parametric statistics, combining scores from different distributions doesn't always result in a "fair" combination being computed.

And on a philosophical basis it can kind of force each rater to pin down the tough choices. You can't just lump another 9.5 in there with the half dozen you've already assigned. You have to say whether it's better or worse than the others. That said, in most real systems the scores are allowed to have ties, alas. Which avoids imposing my rather bracing sense of "nail it down" on the raters. Probably to the good, all told.

But anyway, such systems are pretty much plug-n-chug. Built into all the usual stats software and such.
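(A small illustration of the rank machinery, with invented scores for six courses: converting to ranks irons out the two raters' different scoring habits, and Spearman's coefficient is just a correlation computed on those ranks.)

from scipy.stats import rankdata, spearmanr

# Invented scores two raters gave the same six courses.  One rater lards
# the top of his list with 9.5s and 10s; the other spreads his scores out.
rater_a = [10, 9.5, 9.5, 7, 6, 4]
rater_b = [9, 8.5, 8, 7.5, 6, 5]

# Ranks remove the difference in how the two distribute their scores;
# tied values share the average of the ranks they span.
print(rankdata(rater_a))  # [6.  4.5 4.5 3.  2.  1. ]
print(rankdata(rater_b))  # [6.  5.  4.  3.  2.  1. ]

# Spearman's rho is a Pearson correlation computed on the ranks.
rho, p_value = spearmanr(rater_a, rater_b)
print(round(rho, 3))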

RSLivingston_III

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #72 on: August 16, 2011, 03:41:57 PM »

I am not sure what the multiple rater threads are accomplishing other than boosting post counts (rankings(?)).
"You need to start with the hickories as I truly believe it is hard to get inside the mind of the great architects from days gone by if one doesn't have any sense of how the equipment played way back when!"  
       Our Fearless Leader

Brent Hutto

Re: Course Raters- status quo or upset the apple cart
« Reply #73 on: August 16, 2011, 03:46:52 PM »
Ralph,

Don't know about the other threads, but in this one I was rather enjoying a discussion of how you can conceptualize a ratings process and how the details of compiling the ratings can reflect or even influence that conceptualization. Admittedly it's off topic from the charter of this forum, which after all is about golf courses and not the ratings process. But it beats speculating on the state of Tiger's left knee.

Lou_Duran

  • Karma: +0/-0
Re: Course Raters- status quo or upset the apple cart
« Reply #74 on: August 16, 2011, 04:07:02 PM »
Unless one is concerned about using Ran's bandwidth, why are folks bothered about where the discussion goes?  Self-discipline and selective perception are not being unduly challenged here.  Don't like the subject or the poster?  Don't click on the thread or page down quickly.  Simple.  No?

But anyway, such systems are pretty much plug-n-chug. Built into all the usual stats software and such.

Thanks Brent.  Are there easily modifiable canned models that would allow me to do this?  Say I had 10 raters who would give me their individual rankings.  Is there an existing program that would give me a composite "best fit" list of these?  I'd like to play with it, preferably in Excel.
« Last Edit: August 16, 2011, 04:10:48 PM by Lou_Duran »
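(Not Excel, but as a sketch of how little code the "plug-n-chug" composite takes: a few lines of pandas averaging normalised ranks across ballots. Three invented raters stand in for the ten, and a missing entry marks a course a rater has not played.)

import pandas as pd

# Hypothetical ballots: each rater's ordinal ranking (1 = best) of the
# courses he has played; missing entries mean the rater hasn't played it.
ballots = pd.DataFrame(
    {
        "Rater 1": {"Course A": 1, "Course B": 2, "Course C": 3},
        "Rater 2": {"Course B": 1, "Course C": 2, "Course D": 3},
        "Rater 3": {"Course D": 1, "Course A": 2, "Course C": 3},
    }
)

# Normalise each ballot to a 0-1 scale so "3rd of 3" and "3rd of 10" are not
# treated identically, then average across raters, skipping unplayed courses.
normalised = ballots.div(ballots.max())
composite = normalised.mean(axis=1).sort_values()

print(composite)  # lowest value = best composite position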
