Regarding NGLA's Redan: I was reading over articles from 1907, and one of them provides straight-line elevation changes for the planned holes at NGLA. The Redan (then the thirteenth) is listed at 43 feet to 31 feet, so a 12-foot drop. Assuming the tee is in the same place, it is easy to admit that Tom Doak (5-10 ft.) had it about right, but harder to admit that Patrick (10 ft.) had it about right, too.

When we debated this many years ago, Patrick was WRONG, arguing, as only he can, that the hole was uphill rather than downhill. "Morons" come in many colors, and often very unexpectedly...
Sean: Pat, so instead, they use someone else's subjective criteria subjectively.

Pat: The criteria aren't subjective. It's the evaluations that are subjective. You can't ask a panel of a hundred or a thousand raters to evaluate a course with a hundred or a thousand different criteria; there'd be chaos rather than consensus. There has to be uniformity and consistency in the evaluative process, especially when your panelists have varying ideas and abilities.

Sean: Not good my friend.

Pat: Yes, it is. If you didn't have defined parameters/categories, you'd never be able to establish a pragmatic system to evaluate hundreds upon hundreds of courses based upon the collective response of hundreds of raters.

Sean: Ciao
Quote from: Sean_A on May 08, 2014, 07:47:15 AM
Sean: Suffice it to say, I disagree with you.

Pat: You're entitled to be wrong.

Sean: I would much rather be more discerning in picking the panel rather than trying to train the Toms, Dicks, and Harrys.

Pat: So, you would be "more discerning in picking" the individual panelists? How? In what way? What do you know about the quality of the existing individual panelists? America is a big place and you need a lot of raters to see all of the courses, so tell us how you would replace the existing raters with "more discerning" raters.

Sean: Be that as it may, where lists really diverge is in the 150-300 range. I am afraid there will never be a good enough panel to figure that stuff out. This of course begs the question as to why have lists at all? If any group gets together and comes up with quite similar top 100-150 lists (and they are far more similar than not), what's the point unless a list is only produced every decade?

Pat: Why do you think that the "group gets together"??? There is no "getting together". Each panelist submits their evaluation after they play/evaluate the course. It sounds like you don't have a clue as to how the process works.

Sean: I am a broken record, but this is why I much prefer to hear about favourite courses.

Pat: Then tell us how you would detect and discount regional bias.