
Mark Bourgeois

Re: The Unfortunate Side of Course Rankings
« Reply #25 on: June 29, 2013, 09:58:10 AM »
Mark,
In the 2011 list they used two decimals and had quite a few ties, including for #100. I'd be surprised if the rationale for going to four decimals was anything other than breaking those ties. Golfweek only publishes two decimals, but since they don't show ties even for courses with identical scores, I would guess they use the next decimal to break them.

Even on my personal list, I refuse to have ties. They bug me, for some odd reason. So I can understand why the magazines break them, even if the differences are not statistically significant. I think anyone who has studied the methodologies realizes that the difference between #76 and #77 is negligible.

Andy, I think we're in agreement on this. There's pretty much no difference between #76 and #77; they are essentially tied but Golf Digest decided they needed to come up with a way to avoid ties. So they just rolled the decimal places out even though no human being is capable of making distinctions at that level of precision.

I just checked the difference between the scores of #50 and #100. It's 3 percent -- excuse me, 2.8803 percent. This seems to me a trifling difference.
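For anyone who wants to check the arithmetic, it is nothing more than a relative difference; the scores in this sketch are invented stand-ins, not the published numbers:

```python
# Relative gap between two course scores, expressed as a percentage.
# These scores are invented for illustration; they are not Golf Digest's published numbers.
score_50 = 62.45   # hypothetical score of the #50 course
score_100 = 60.70  # hypothetical score of the #100 course

gap_pct = (score_50 - score_100) / score_100 * 100
print(f"{gap_pct:.4f}%")  # prints 2.8830% -- four decimals, in the spirit of the list
```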

What makes all this intellectually dishonest is that readers do not understand these quantitative distinctions (or the lack of them). They attach to the ordinal number, not the cardinal number. That is what Golf Digest wants them to do; otherwise Golf Digest wouldn't do it the way they do it.

It would all be in good fun if the golfing populace were better educated about the methodology, but it isn't, and because it serves Golf Digest to perpetuate the intellectual folderol, situations like the one at Hudson National arise.

Brad, okay, instead of "contrived" how about "false precision": http://en.wikipedia.org/wiki/False_precision
Charlotte. Daniel. Olivia. Josephine. Ana. Dylan. Madeleine. Catherine. Chase. Jesse. James. Grace. Emilie. Jack. Noah. Caroline. Jessica. Benjamin. Avielle. Allison.

Mark Saltzman

Re: The Unfortunate Side of Course Rankings
« Reply #26 on: June 29, 2013, 10:23:18 AM »
Mark, suppose I said all scores were justifiable within 0.5 points higher or lower, for all categories and all raters, and that 100 scores are entered for every course. Is a 2.83% difference of any significance? If not, why not?
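To put rough numbers on that scenario, here is a back-of-the-envelope simulation. Everything in it is hypothetical: I'm assuming the ±0.5 "justifiable" slack behaves like uniform noise, that the 100 ratings are independent, and the "true" scores themselves are invented.

```python
# Back-of-the-envelope check (all numbers hypothetical): each of 100 ratings per course
# is allowed to drift uniformly within +/-0.5 points of some "true" value. How much of
# a gap between two course averages could that slack alone produce?
import random
import statistics

random.seed(1)

def simulated_average(true_score, n_ratings=100, slack=0.5):
    """Average of n_ratings scores, each perturbed uniformly within +/-slack."""
    return statistics.mean(true_score + random.uniform(-slack, slack)
                           for _ in range(n_ratings))

true_a, true_b = 62.0, 60.3   # invented "true" scores, roughly 2.8% apart
gaps = [simulated_average(true_a) - simulated_average(true_b) for _ in range(1000)]

print(f"average observed gap: {statistics.mean(gaps):.2f} points")
print(f"spread (stdev) of the gap from rater slack alone: {statistics.stdev(gaps):.3f} points")
# With 100 ratings the +/-0.5 slack mostly averages out (a spread of roughly 0.04 points),
# so under these assumptions a 1.7-point gap is far larger than that noise source alone.
```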

Peter Pallotta

Re: The Unfortunate Side of Course Rankings
« Reply #27 on: June 29, 2013, 10:23:33 AM »
It's like the stats that say the average family has 1.6 children, which of course cannot possibly be -- not one such family exists, nor one such child. And yet I'd bet there is no family in the land that fills out the census form incorrectly or dishonestly by listing either less or more of a child than they actually have.

I'll grant for the sake of argument the absolute integrity and experience and knowledge-base of every single individual GD ranker, and the veracity and value of every number they assign to every ranking category for every course they play -- and this leaves me no choice but to lay the issue of 'false precision' squarely at GD's feet.

(And, since I am no smarter than anyone involved in or running the rankings for the various magazines, I must assume that they too understand the dynamics involved and the false precision that results, but simply don't care about it -- and they don't care for precisely the reason I mentioned earlier, i.e. they know and bank on our obsession with lists and rankings, our seeming need to 'prove' one thing better than another so as to take comfort in the supposed security of the collective/consensus opinion.)

Peter  
« Last Edit: June 29, 2013, 10:34:10 AM by PPallotta »

Anthony_Nysse

Re: The Unfortunate Side of Course Rankings
« Reply #28 on: June 29, 2013, 10:42:13 AM »
Of the 7 criteria that Golf Digest uses to rate golf courses, “conditioning” is really the only one a Superintendent can be held responsible for. Aesthetics, maybe a little? But a Superintendent cannot be held responsible for Shot Values, Resistance to Scoring, Design Variety, Memorability and Ambiance. There are too many variables and outside factors there that can keep a course from being at its peak.
There are just so many variables with conditioning, too. Playing in May is not the same as playing in September. What if a rater played in your Men’s Member-Guest? Of course the course will be at its peak, with greens at their firmest and fastest. What if a rater played two weeks after aerification?
There are so many clubs maintained to flawless condition that will NEVER make the Top 100. What would you rather have, if you can't have both? Take a club like Sage Valley: they fell out of the Top 100, but not because of conditioning. Should heads roll? Of course not. Rankings are so much the flavor of the week, especially among the #50-#100 courses. They all jump around and fluctuate several positions nearly every two years.
Anthony J. Nysse
Director of Golf Courses & Grounds
Apogee Club
Hobe Sound, FL

Tim Martin

Re: The Unfortunate Side of Course Rankings
« Reply #29 on: June 29, 2013, 12:29:01 PM »
Tony, I was waiting for a Super to chime in, and I believe you nailed it. As you stated, conditioning is one of 7 factors with Digest and one of 10 factors with Golfweek. You guys are not responsible for what is already in the ground, and even tree-clearing projects depend on the will of greens committees/members and budgets. A Super who has the course well conditioned and is following cues from the greens chairman should hardly be responsible for a move up or down in the rankings, and a club that acts on that alone to remove someone certainly has its priorities misplaced.

Andy Troeger

Re: The Unfortunate Side of Course Rankings
« Reply #30 on: June 29, 2013, 12:39:32 PM »
Mark B.,
I do think we agree, but at the same time the whole point of the exercise is to create a list. Sometimes I think Digest could save themselves a lot of grief by not publishing the numbers or the methodology at all and leaving it to readers to take the list or leave it. No methodology is going to create a measurable difference between courses bunched this closely. Digest does gather enough ballots to keep outliers from making a big difference, but the gaps between these courses are always going to be tiny.

Take Hudson National at #90 and Mayacama at #86 for an example. I've played both in the past 15 months and I gave them the exact same score for GD, down to the tenth of a point (the categories were all different but they added up the same). On my personal list with different weightings Mayacama came out ahead by 0.49 points.

I could skip all the number stuff and tell you that I like Mayacama just a little better, and most people wouldn't think anything of it; but if I say I like it 0.49 points better, people have a problem because it's not really a scientific calculation. And if you told me I could choose to play one course or the other tomorrow, I'd probably just flip a coin and be done with it  :D
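To make that concrete, here is a toy example of how two courses can add up to the exact same total under one weighting and come apart under another. The categories, weights, and scores are all invented for illustration; this is not Golf Digest's formula or my actual ballot.

```python
# Toy example: identical weighted totals under one weighting, different totals under another.
# Categories, weights, and scores are invented; this is not GD's formula or a real ballot.
categories = ["shot values", "design variety", "memorability",
              "aesthetics", "conditioning", "ambiance"]

course_a = [8.0, 7.5, 7.0, 8.0, 7.5, 7.0]   # hypothetical category scores
course_b = [7.5, 7.0, 7.5, 7.5, 8.0, 7.5]

def total(scores, weights):
    """Weighted sum of category scores."""
    return sum(s * w for s, w in zip(scores, weights))

flat_weights = [1.0] * 6                           # every category counts equally
personal_weights = [2.0, 1.5, 1.0, 0.5, 0.5, 0.5]  # a made-up personal weighting

print(total(course_a, flat_weights), total(course_b, flat_weights))          # 45.0 45.0 -- tied
print(total(course_a, personal_weights), total(course_b, personal_weights))  # 45.5 44.5 -- now apart
```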

Bart Bradley

Re: The Unfortunate Side of Course Rankings
« Reply #31 on: June 29, 2013, 02:27:50 PM »
It has always bothered me when the word "contrived" is used in the same discussion as statistical significance. It eventually gets back to the same kind of discussion as pornography: "you know it when you see it."
As to rankings affecting people's jobs, human nature is not always kind. I don't think this is necessarily a solvable issue, nor should it be. What would be more interesting to me is: what are the factors that cause a drift in course rankings over time? Is it fashion drifting from Fazio to Doak to now perhaps Hanse, also known as flavor of the month? Is it an effect of location? What is the impact of economic conditions on the ratings and/or raters?

Brad:

I think the point might be that NOTHING of any significance causes the courses to drift in the rankings... especially those in the bottom half of the Top 100, where the scores are basically the same. My club, Grandfather, has been as high as #65 and has been out of the Top 100 with basically the exact same overall score -- almost no change over time. That is entirely the point Mark is making: the system does not actually allow one to discriminate small differences.

Similar to the initial post, our course fell out of the Top 100, and then we did do some drainage work that improved our firmness. We re-appeared in the Top 100 and everyone congratulated each other because they felt the project was the CAUSE of the improved ranking. In truth, our score didn't really change... just a slight shift in our relative position that probably was not caused by our project (just the nearly random vagaries of the flawed system).
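Here is a small simulation of what I mean by the nearly random vagaries: when the underlying scores are packed this tightly, a little rating noise by itself shuffles the ordinal positions from one cycle to the next. All of the numbers are invented for illustration.

```python
# Simulation: 50 hypothetical courses whose "true" scores sit within ~0.3 points of
# each other. Add a little zero-mean rating noise for each ranking cycle and see how
# far the ordinal positions move even though nothing about the courses has changed.
import random

random.seed(7)

true_scores = {f"Course {i:02d}": 60.0 + i * 0.006 for i in range(50)}  # tightly bunched

def rank_order(noise=0.05):
    """Rank the courses after adding small uniform noise to each true score."""
    noisy = {name: score + random.uniform(-noise, noise) for name, score in true_scores.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

cycle_1, cycle_2 = rank_order(), rank_order()
moves = [abs(cycle_1.index(name) - cycle_2.index(name)) for name in true_scores]

print(f"average positions moved between cycles: {sum(moves) / len(moves):.1f}")
print(f"largest single move: {max(moves)} places")
```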

Bart

Sean_A

Re: The Unfortunate Side of Course Rankings
« Reply #32 on: June 29, 2013, 07:48:03 PM »
Mark

I was trying to get some sort of definitive answer from Tom Dunne about the LINKS100 ranking/points.  Below is a cut-and-paste of the conversation.

Sean
Tom Dunne - Out of curiosity, of what significance is the score?  For instance, is an 8 meant to be twice as good as a 4?  If there is no significance, why is the info given?

Tom D
The significance of the score is to show relationships. #1 Cypress Point and #2 Pine Valley are within .1 of a point of each other, but the data indicates that there's some daylight between them and #3. Of course, it also shows that after the "super-courses", many courses are tightly bunched. A rank without a score would eliminate that context.

Sean
Tom - Okayyyy, so how much better is .1, .5, 1.0?  To me, this is an alien way to see a course, so I genuinely don't understand the relationship between score, quality, and ordinal ranking.

Tom D
Sean,

This is the work of a collective in which each voter uses his or her own criteria in assigning rankings to courses, so whatever scores the statistical model spits out should only be interpreted in the most general sense. 

From the original feature:

"At the heart of the methodology is a tool known as logistic regression, or logit for short. In the LINKS100, every course on your ballot competes in head-to-head combat against every other course in the system, generating a Carl Saganesque number of data points. The logit takes that data—wins and losses—and spits out a number, or coefficient, for each course. That coefficient itself changes every time a course appears on a ballot—based on whether you have Pebble Beach, hypothetically speaking, ranked 1st, 10th, or 100th on your list. The bottom line: The bigger the difference between two coefficients, the higher probability that one course is truly better than another." 

Coefficients are converted into scores to create the rankings and to show these relationships. If CPC is a 9.2 and PV is a 9.1, that means there is a slightly better-than-even chance that the former is "better" (broadly speaking) than the latter.

Sean
Tom

Okay, we are getting closer.  Using your example, how much of a chance of being better is .1, .5, or 1.0?  For instance, does .1 represent a 50.1-to-49.9 ratio?  Also, how large a gap is actually meaningful?  Is it .1, .5, 1.0...?   I think you can see what I am driving at.  A 3.4 is meaningless unless it is assigned a value.  Either a value for each point must be set, or it is pointless to offer scores, because as they stand they are meaningless.  I mean, I can't tell the likelihood of CPC being better than PV, or by how much, if I don't know the value of 9.2 and 9.1.  It could well be, and I suspect it is the case here, that the .1 difference is not statistically large enough to support any reasonable conclusions.  At what point can we do this?

Tom D
Sean,

No, it's more than that. Based on the difference between the two raw coefficient scores, CPC has a 52.5% chance of actually being "better*" than PV. That difference is probably within the margin of error, but our opinion is that readers don't want to see 1A and 1B. They want "gun to your head, which do your voters think is better?" I had a conversation about this stuff with one statistically-minded former GCAer who strongly believed we should include margin of error data throughout--I personally believe that would be TMI for the vast majority of users. We could readily produce the tiered system that some advocate, as well. Just a matter of making choices about how and how much information to provide.

*"Better" according to 135 panelists and +2600 users. Different groups can and do generate different results.

Ciao
New plays planned for 2024: Nothing