Tom_Doak

GOLF DIGEST by the numbers
« on: April 02, 2009, 09:06:05 PM »
In order to try and make some little sense of the GOLF DIGEST rankings, I thought I would look at the two courses of mine that made the top 100 and see how the panelists scored them.  And here's the one that made me smile:

RESISTANCE TO SCORING

Pacific Dunes 8.05
Sebonack 7.93

I submit that there is not a single person alive who believes that Sebonack is easier to score on than Pacific Dunes.  I KNOW it isn't.

So how does DIGEST come up with numbers like that?  Here's how:

1)  Very few of the same people rated both courses.  The people who scored Sebonack below an 8 must have come straight from Oakmont or Pine Valley; the people who scored Pacific Dunes above an 8 must have played it in a good wind.

2)  I believe that most DIGEST raters decide in advance what their general impression of a course is [i.e. it's a 7 or an 8 or a 9], and then they make up their scores for the various categories off that baseline.  So for Pacific Dunes, they think it's between an 8 and a 9, so they give it 8's across the board with a couple of 9's (for aesthetics and ambience).  For Sebonack, they think it's not as good as Pacific Dunes, so they give it pretty much straight 8's.  [Check out the scores ... 7.91, 7.93, 7.95, 7.98, 7.88, 7.97 for six of the seven categories, with an 8.31 for Aesthetics, which was obviously Jack's input.]
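If you want to see how that baseline theory plays out on paper, here is a toy simulation in Python -- every rater, category, and score in it is invented, none of it is DIGEST data; it just shows how anchoring on one overall impression produces category averages bunched within hundredths of each other:

import random

random.seed(1)
categories = ["Shot Values", "Resistance to Scoring", "Design Variety",
              "Memorability", "Aesthetics", "Conditioning", "Ambience"]

def rate_course(baseline, n_raters=100):
    # Each invented rater anchors on an overall impression, then nudges it per category.
    totals = {c: 0.0 for c in categories}
    for _ in range(n_raters):
        anchor = baseline + random.uniform(-0.3, 0.3)
        for c in categories:
            totals[c] += anchor + random.uniform(-0.2, 0.2)
    return {c: round(totals[c] / n_raters, 2) for c in categories}

print(rate_course(8.0))   # category averages all bunch within a few hundredths of one another, near 8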

Bottom line, I think all the numbers are a farce designed to present the illusion of serious study.  I would love to hear a couple of DIGEST panelists tell me it's not so, and provide examples of their voting to back it up.

Ronald Montesano

Re: GOLF DIGEST by the numbers
« Reply #1 on: April 02, 2009, 09:14:02 PM »
You made my night.

Mike_Cirba

Re: GOLF DIGEST by the numbers
« Reply #2 on: April 02, 2009, 09:19:53 PM »
Tom,

The methodology reminds me of the scene in "Dead Poets Society" where Robin Williams points his class to the chapter in their books where some noted poetry expert has devised a strict mathematical system to "rate poetry".

After explaining the theory briefly, Williams as the professor then instructs his class to tear the chapter from their books, lest their young minds be forever contaminated.

BCrosby

Re: GOLF DIGEST by the numbers
« Reply #3 on: April 02, 2009, 09:35:54 PM »
The only rankings that make less sense than golf course rankings are dog shows.

That's because in dog shows you are ranking collies against poodles against beagles.

Hey, wait a minute. Why is that any nuttier than ranking NGLA against TPC Sawgrass against Firestone?

Ok, I was wrong. I've changed my mind. Ranking golf courses and ranking dogs are equally nutty. Neither makes any sense.

Bob  

Jim Colton

Re: GOLF DIGEST by the numbers
« Reply #4 on: April 02, 2009, 09:36:10 PM »
Tom,

  I think the categories are the flaw in the GD system.  It assumes that Whitten knows the right components and the right formula that make for a quality golf course.  Why not let the panelists come up with an overall rating for each course that reflects each rater's own beliefs about what's important and what isn't?  GD trusts them enough to accurately quantify 'Shot Values'; why not trust that they can discern whether they like one course more than another?  With 900+ raters, you'll get some who value Shot Values more heavily and some who value Aesthetics, but in aggregate you'd get a weighted average of what's truly important instead of the simplified 2+1+1+1+... approach.
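  To make the contrast concrete, here is a quick Python sketch -- the weights and scores are made up for illustration, and the 2x Shot Values weight is just the "2+1+1+1+..." scheme described above, not GD's actual formula code:

CATEGORY_WEIGHTS = {   # assumed weights: Shot Values counts double, everything else once
    "Shot Values": 2, "Resistance to Scoring": 1, "Design Variety": 1,
    "Memorability": 1, "Aesthetics": 1, "Conditioning": 1, "Ambience": 1,
}

def fixed_weight_total(category_scores):
    # Editor-chosen weights applied to every panelist alike.
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())

def panelist_driven_average(overall_scores):
    # Each panelist folds his own priorities into one overall number.
    return sum(overall_scores) / len(overall_scores)

one_panelist = {"Shot Values": 8.0, "Resistance to Scoring": 8.1, "Design Variety": 7.9,
                "Memorability": 8.0, "Aesthetics": 8.3, "Conditioning": 7.8, "Ambience": 8.0}
print(fixed_weight_total(one_panelist))           # the formula-driven total for one panelist
print(panelist_driven_average([8.5, 7.9, 8.2]))   # three panelists' own overall votes, averaged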

  You hit on the other flaw I see in the rankings...the value in all of these panelists' ratings is not in their categorical inputs, but in each panelist's relative ranking of the courses they've played.  There's a lot of good information that they are collecting from these panelists; I just don't think they are mining it correctly.  Unfortunately, it throws off the overall ratings and it makes the GD panelists look bad...the gut reaction is that the GD panelists must be idiots.  I doubt that is the case.  The GD panelists I know take the responsibility seriously.

  Tom, I know you know Ron...I'd be more than happy to volunteer to help them try to make better sense of the information they are getting, to spit out a ranking that truly reflects the collective view of their panelists.
« Last Edit: April 02, 2009, 09:37:55 PM by Jim Colton »

Tom_Doak

Re: GOLF DIGEST by the numbers
« Reply #5 on: April 02, 2009, 09:50:53 PM »
Jim:

I've been telling Ron Whitten different versions of the same thing for 20 years.  [Incidentally, he doesn't deserve much of the blame for this -- he divorced himself from the ratings process a few years back; Topsy Siderowf edits the rankings now, and Ron just writes the accompanying article.]

I get the feeling that it's the other way around from what you said.  They are hosting "Panelist Summits" and the like to try and educate their panelists about their definitions, because they trust THE DEFINITIONS and not the panelists.  And really, with 900 geographically-diverse panelists needed to fuel the numbers machine, there is no way they can just go with people whom they trust.

But they are not going to switch over to the GOLF MAGAZINE way of doing things, because that would be admitting error, and they are the leaders in rankings!

Mark Bourgeois

Re: GOLF DIGEST by the numbers
« Reply #6 on: April 02, 2009, 10:14:01 PM »
Tom

But doesn't Golf Magazine rig the deck, too?  My reading of how they do it is that the position of the course in an individual's list receives a weighting.  Top 3 get a weighting of 100, 4-10 of 85, etc.  Who decides that a top 3 listing is 15 "units" better than 4-10?  And how did they come up with that?  What's the rationale that explains why #3 on a list is, what, 17.6 percent more important than #4?
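Here's the arithmetic behind that 17.6 percent, with the tier weights taken from my reading above rather than from anything Golf has published:

top3_weight, next_tier_weight = 100, 85
ratio = top3_weight / next_tier_weight
print(f"A top-3 vote counts {ratio:.3f}x a 4-10 vote, i.e. {100 * (ratio - 1):.1f}% more")
# -> roughly 1.176x, the 17.6 percent in question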

And anyway we judge a tree by the fruit it bears:

Top 5, GD
Augusta
Pine Valley
Shinnecock
Cypress
Oakmont

Top 5, GM
Pine Valley
Cypress
Augusta
Pebble
Shinnecock

Looks to me like both trees produce apples.  Or something like apples...

Mark

EDIT: An improvement on Golf's methodology IMHO would be if the panelists were allowed to assign their own weights, ideally in a bounded, "hundred pennies" type of exercise.
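A rough Python sketch of that hundred-pennies idea, with invented categories and numbers:

def hundred_pennies_score(weights, scores):
    # weights are the panelist's own pennies and must add up to exactly 100
    assert sum(weights.values()) == 100, "pennies must total 100"
    return sum(weights[c] * scores[c] for c in weights) / 100

my_weights = {"Shot Values": 40, "Design Variety": 30, "Aesthetics": 20, "Conditioning": 10}
my_scores  = {"Shot Values": 8.0, "Design Variety": 8.5, "Aesthetics": 9.0, "Conditioning": 7.0}
print(hundred_pennies_score(my_weights, my_scores))   # 8.25 for this panelist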
« Last Edit: April 02, 2009, 10:20:04 PM by Mark Bourgeois »

Rob Rigg

Re: GOLF DIGEST by the numbers
« Reply #7 on: April 02, 2009, 10:18:52 PM »
The GD rankings are clearly broken, and they need to fix them.

In theory, 900 people ranking may smooth out errors but if they are regionally based then you have people comparing courses completely out of context.

Ideally, a magazine, or whoever, would have 10 or 20 raters who have basically played all the top courses in the country. As new courses or renovations come online, each of them gets out to see 3 or 4 per year.

Get rid of the people who want to be raters but don't have the experience and let a group of super raters get the job done.

Jim Colton

Re: GOLF DIGEST by the numbers
« Reply #8 on: April 02, 2009, 10:56:58 PM »

I get the feeling that it's the other way around from what you said.  They are hosting "Panelist Summits" and the like to try and educate their panelists about their definitions, because they trust THE DEFINITIONS and not the panelists.  And really, with 900 geographically-diverse panelists needed to fuel the numbers machine, there is no way they can just go with people whom they trust.


Well, if this is the case then there really is no hope.  I guess we'll keep having these same threads every two years.

John Mayhugh

Re: GOLF DIGEST by the numbers
« Reply #9 on: April 02, 2009, 11:02:50 PM »
The only rankings that make less sense than golf course rankings are dog shows.

That's because in dog shows you are ranking collies against poodles against beagles.

Hey, wait a minute. Why is that any nuttier than ranking NGLA against TPC Sawgrass against Firestone?

Ok, I was wrong. I've changed my mind. Ranking golf courses and ranking dogs are equally nutty. Neither makes any sense.

Bob  

The problem with dog shows is that they exclude dogs that combine multiple breeds.  OK that's just one of the problems.

John Kirk

Re: GOLF DIGEST by the numbers
« Reply #10 on: April 02, 2009, 11:18:38 PM »
Sebonack is really tough, probably worthy of a difficulty rating of 9.

Pacific Dunes is easier, but not an easy course in the grand scheme of things.  For the sake of these ratings, it's a 7 1/2.  7 is too low and 8 is too high.  Do they allow 7 1/2 as a rating?

The original Golf Digest list was a list of the hardest courses in America, which may be the reason for the categorized approach.

Good topic.

Andy Troeger

Re: GOLF DIGEST by the numbers
« Reply #11 on: April 02, 2009, 11:50:15 PM »

Bottom line, I think all the numbers are a farce designed to present the illusion of serious study.  I would love to hear a couple of DIGEST panelists tell me it's not so, and provide examples of their voting to back it up.


Tom,
Certainly there are panelists who do what you describe--I'm not going to pretend otherwise. At the same time, most of the raters I know personally take things very seriously and do put significant thought into their ratings. Obviously, among 900 panelists it's hard to say just how much variation occurs with all that. I can give you examples of my own ratings offline if you like--I have a course where I gave scores ranging from an 8.7 in one category to a 2.2 in another.

Mike Nuzzo

Re: GOLF DIGEST by the numbers
« Reply #12 on: April 02, 2009, 11:58:23 PM »
Tom,
At least when someone says Pacific is too easy you can say it's harder than Sebonack.

Andy,
Tom is saying that there aren't 900 individuals qualified to discern the differences among the top 100, especially across all those categories - even if they do try very hard and take it very seriously.



jkinney

Re: GOLF DIGEST by the numbers
« Reply #13 on: April 03, 2009, 12:10:57 AM »
Sebonack is really tough, probably worthy of a difficulty rating of 9.

Pacific Dunes is easier, but not an easy course in the grand scheme of things.  For the sake of these ratings, it's a 7 1/2.  7 is too low and 8 is too high.  Do they allow 7 1/2 as a rating?

The original Golf Digest list was a list of the hardest courses in America, which may be the reason for the categorized approach.

Good topic.

Captain Kirk, it's good to see you beaming yourself up! GD has a tough time melding hard & fair together, IMO. It's only when the two are in balance that a course becomes great - Shinny being the classic example. Since we're talking Pacific Dunes and Sebonack, I will say that I rank PD and BALLYNEAL equally, with Sebonack a WAY distant third. "Bally what?", ask some. I've already risked my life once today by mentioning that certain course in Palm Desert (where I suspect you will win this weekend's club championship), and was grateful to receive support from Matt Ward for doing so. But press my luck I won't.

Richard Choi

Re: GOLF DIGEST by the numbers
« Reply #14 on: April 03, 2009, 12:18:54 AM »
This really is not the fault of the raters; it is the fault of the rating system. People are doing what they are asked to do in the best way they know how. I think it is a bit unfair to keep denigrating the raters.

As I have said before, if you ask poor questions, you are going to get poor answers.

The example that Tom brings up is a pretty classic illustration of the flaws in subjective poll questions. However, there are ways to address those issues so that things are evened out. For example, you can create a map of all the courses that overlap between raters and use those ratings to adjust each person's scores up or down compared to others. It is fairly computationally heavy, but it is not that complicated. You can also include a question that serves as a calibration (usually a better way to go).
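A minimal sketch of that overlap adjustment in Python -- the raters, courses, and scores are all invented, and a real version would iterate rather than make a single pass:

from collections import defaultdict

ratings = {   # rater -> {course: score}
    "rater_a": {"Course X": 8.0, "Course Y": 7.0, "Course Z": 9.0},
    "rater_b": {"Course X": 6.5, "Course Y": 5.5},                  # a tough grader
    "rater_c": {"Course Y": 7.2, "Course Z": 9.1},
}

# 1) Raw average per course.
raw = defaultdict(list)
for scores in ratings.values():
    for course, s in scores.items():
        raw[course].append(s)
course_avg = {c: sum(v) / len(v) for c, v in raw.items()}

# 2) Each rater's mean offset from those averages on the courses they overlap.
bias = {r: sum(s - course_avg[c] for c, s in scores.items()) / len(scores)
        for r, scores in ratings.items()}

# 3) Recompute course averages from bias-corrected scores.
adjusted = defaultdict(list)
for r, scores in ratings.items():
    for c, s in scores.items():
        adjusted[c].append(s - bias[r])
print({c: round(sum(v) / len(v), 2) for c, v in adjusted.items()})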

To me, for any magazine to take on a task like this, you cannot just rely on magazine editors who have no background in statistics. They should hire some polling experts, not just to throw out some outlying data points, but to design the questions from scratch so that you have enough data points to overcome biases and inconsistency between raters/regions.

I would think if you approached some professors and universities, you will find someone who would be willing to help.
« Last Edit: April 03, 2009, 12:24:53 AM by Richard Choi »


Jim Colton

Re: GOLF DIGEST by the numbers
« Reply #16 on: April 03, 2009, 12:57:18 AM »
This really is not the fault of the raters; it is the fault of the rating system. People are doing what they are asked to do in the best way they know how. I think it is a bit unfair to keep denigrating the raters.

As I have said before, if you ask poor questions, you are going to get poor answers.

The example that Tom brings up is a pretty classic illustration of the flaws in subjective poll questions. However, there are ways to address those issues so that things are evened out. For example, you can create a map of all the courses that overlap between raters and use those ratings to adjust each person's scores up or down compared to others. It is fairly computationally heavy, but it is not that complicated. You can also include a question that serves as a calibration (usually a better way to go).

To me, for any magazine to take on a task like this, you cannot just rely on magazine editors who have no background in statistics. They should hire some polling experts, not just to throw out some outlying data points, but to design the questions from scratch so that you have enough data points to overcome biases and inconsistency between raters/regions.

I would think if you approached some professors and universities, you will find someone who would be willing to help.

Richard,

  You don't need profs...I already volunteered to help.

  This reminds me of an ongoing problem with college basketball...if you follow college hoops you know the NCAA Selection Committee uses the RPI to guide the selection process, but the RPI is an extremely blunt formula and it can cause some misleading results.  About 5 years ago, they changed the formula to reward winning on the road and penalize losing at home...makes intuitive sense, but they did it in such a way that made the revised formula even worse than the old one, which was really saying something because the old one was horrible to begin with.  I tried to point this out to a few Chairmen of the Selection Committee, but they lived in ignorant bliss that their formula was new and improved.  The Chairman who implemented the change even told me they consulted professors and statisticians from the nation's leading institutions.

  The problem with the golf course ratings is actually pretty similar to the issue with college hoops...how do you rank when each team plays a different schedule?  Or how do you rank golf courses when each panelist has played a different subset of the eligible golf courses?   Ignoring this fact would be like simply selecting the 64 teams with the best win-loss record to make the tournament.
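  For anyone curious what that formula change looked like, here is a rough Python sketch -- the 0.25/0.50/0.25 mix and the 1.4/0.6 home-road weights are the commonly cited figures, so treat them as my assumption rather than the NCAA's published method, and the games are invented:

def weighted_win_pct(results):
    # results: list of (won, at_home) tuples for one team
    wins = losses = 0.0
    for won, at_home in results:
        if won:
            wins += 0.6 if at_home else 1.4     # road wins rewarded
        else:
            losses += 1.4 if at_home else 0.6   # home losses punished
    return wins / (wins + losses)

def rpi(own_wp, opponents_wp, opponents_opponents_wp):
    return 0.25 * own_wp + 0.50 * opponents_wp + 0.25 * opponents_opponents_wp

games = [(True, True), (True, False), (False, True), (True, False)]   # a 3-1 team
print(round(weighted_win_pct(games), 3), round(rpi(weighted_win_pct(games), 0.55, 0.52), 3))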

Richard Choi

Re: GOLF DIGEST by the numbers
« Reply #17 on: April 03, 2009, 01:10:02 AM »
Jim, while I do agree that RPI is not perfect, the fact that the selection committee has relied more and more on RPI rankings has resulted in fewer upsets in the NCAA tournament (this year is a great example where all 1, 2, and 3 seeds made the Sweet Sixteen).

And while we cannot force NCAA teams to play certain opponents to make the RPI more accurate, we CAN force raters to play certain courses.

It is pretty simple to address at least the problem Tom stated at the top. All you have to do is require that the raters MUST play 5 to 10 calibration courses before their ratings are accepted. Either you can do this nationally, or at least do it regionally and then have "super" raters who must play the national calibration courses.

The data you collect from the calibration courses are used to build an individual calibration for each rater so that everyone is on an equal footing.

It would be relatively simple to do, and it should up the quality of the rankings.
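Here is a small Python sketch of what that per-rater calibration could look like -- the benchmark names, the panel mean/spread, and the scores are all invented:

import statistics

CALIBRATION = ["Benchmark A", "Benchmark B", "Benchmark C", "Benchmark D", "Benchmark E"]

def calibrate(rater_scores, panel_mean=7.5, panel_sd=0.8):
    # Rescale one rater's scores so their calibration courses sit on the panel's scale.
    calib = [rater_scores[c] for c in CALIBRATION]
    mu, sd = statistics.mean(calib), statistics.pstdev(calib) or 1.0
    return {c: panel_mean + panel_sd * (s - mu) / sd for c, s in rater_scores.items()}

harsh_rater = {"Benchmark A": 6.0, "Benchmark B": 5.5, "Benchmark C": 7.0,
               "Benchmark D": 6.5, "Benchmark E": 6.0, "New Course": 7.0}
print({c: round(s, 2) for c, s in calibrate(harsh_rater).items()})
# The 7.0 this tough grader gave "New Course" maps to well above the panel mean.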
« Last Edit: April 03, 2009, 01:12:12 AM by Richard Choi »

Jim Colton

Re: GOLF DIGEST by the numbers
« Reply #18 on: April 03, 2009, 01:17:07 AM »
Jim, while I do agree that RPI is not perfect, the fact that the selection committee has relied more and more on RPI rankings has resulted in fewer upsets in the NCAA tournament (this year is a great example where all 1, 2, and 3 seeds made the Sweet Sixteen).


Richard,

  Let's table this discussion until June (have you made your travel arrangements yet?)  I could talk about the college hoops and the RPI for hours.  It could also be why I have very few friends. 

  I don't agree with anything you said above (except for the 'not perfect' part, which is true but still way off)

  Jim

Matt_Ward

Re: GOLF DIGEST by the numbers
« Reply #19 on: April 03, 2009, 01:26:51 AM »
Tom D:

I completely concur on the R to S assessment / re: PD v Sebonack.

The NY-layout doesn't need a hint of wind to play tough. Candidly, Sebonack is right there with Shinnecock Hills in terms of overall demands in my mind.

For Pac Dunes to receive the grades it did, the raters must have played the course in a typhoon.

Tom, you are also right -- Digest likely knows its system has flaws. But admitting the deficiencies is harder because it would mean owning up to those clear shortcomings. Yes, RW is not the main person here. The sheer size of the panel makes the consensus formula nothing more than number crunching from people who do little beyond spitting out near-identical numbers across the board. There are good GD raters - but the situation has simply hit a new all-time low with what's masquerading as America's best.

Richard:

Glad to see you now accept my belief in having "super" raters, or as I called them, national raters of standing. Having people operate at two different levels would help on both fronts. For the woeful state ratings you see now, I would place a higher emphasis on those who live in the state, because they can play the courses more frequently. Courses eligible for national consideration would need to be played by more than just the hometown group of people -- to minimize the homer factor. A pared-down national panel of "super" raters could do what the NCAA selection panel does now for basketball -- it could meet and go through the process of placing them.

One final item for reform -- bag the numerical placements of one after another. Group them in tens as Digest previously did. I don't see how someone can say Cypress is better than Oakmont or Shinnecock is ahead of PV or vice versa. They are all all-world.

Andy:

Digest erred with the thought that having a Yellow Pages listing of raters would do the trick. Adding more and more people doesn't provide more and more quality info. The whole process can be done with a fraction of the number of people involved now. You would improve the info -- thanks to the Internet -- and you would get the kind of details missing now. No doubt there are good Digest raters -- there are just way too few of them, and they are lumped together with those who are simply clueless about what constitutes superior golf design.

Mark B:

Quick question - do you see ANGC as a top five course in the USA given what it is today? If so why? If not -- then how far down the listing would you place it?

Richard Choi

Re: GOLF DIGEST by the numbers
« Reply #20 on: April 03, 2009, 01:49:31 AM »
Matt, let's get this clear. My "super" raters are not valued any more than regional raters. They just serve a cross-calibration purpose. They do not have to be more astute or more experienced, they just have to be willing to travel more.

Jim, I am counting the days!!!

But if you want to talk about travesties of computer ranking, the RPI is nirvana compared to the BCS. Did you read the rant against it by Bill James? I can certainly be very happy discussing the finer points of the RPI with you for the 3 1/2 hour trip as long as our fellow companions don't mind.

Doug Siebert

Re: GOLF DIGEST by the numbers
« Reply #21 on: April 03, 2009, 03:35:04 AM »
What they ought to do is rate the raters.

Take Jim Colton's idea of just having the raters come up with one number, then use a little statistics on them and boot the guys who are at odds with the majority on many courses.  They can fudge things if they believe that average ratings for certain courses are "wrong" of course, but they can pretty much fudge the system now just by picking who they pick.
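A back-of-envelope version of rating the raters, in Python with invented names and numbers -- measure how far each rater sits from the consensus on the courses they share, and flag the biggest outliers:

from collections import defaultdict

ratings = {
    "steady_sam":    {"Course 1": 8.0, "Course 2": 7.0, "Course 3": 6.5},
    "agreeable_al":  {"Course 1": 7.8, "Course 2": 7.2, "Course 3": 6.6},
    "contrarian_cy": {"Course 1": 4.0, "Course 2": 9.5, "Course 3": 3.0},
}

by_course = defaultdict(list)
for scores in ratings.values():
    for course, s in scores.items():
        by_course[course].append(s)
consensus = {c: sum(v) / len(v) for c, v in by_course.items()}

disagreement = {r: sum(abs(s - consensus[c]) for c, s in scores.items()) / len(scores)
                for r, scores in ratings.items()}
print(sorted(disagreement.items(), key=lambda kv: kv[1], reverse=True))
# contrarian_cy tops the list; whether to boot him is exactly the judgment call above.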

The raters will ALWAYS be wrong from our point of view, since what GCA values isn't what the average golfer values.  Fine, they should just let the guys produce a single number rating without the crazy outliers that might move numbers on thinly played courses.  And we may not agree with that number, but it'll be mostly "right" for average golfers.  And that's a magazine for average golfers.

Something like Links Magazine could perhaps try to introduce some ratings that put more weight on architectural merit and less weight on green grass and friendly well endowed beer cart girls.  But expecting this from Golf or Golf Digest is just silly, and I don't know why GCA even bothers to debate this every year.

Phil_the_Author

Re: GOLF DIGEST by the numbers
« Reply #22 on: April 03, 2009, 05:44:52 AM »
The major problem that I see in how GD does its ratings is that they DON'T trust the raters or their own definitions. If they did, WHY do they have to make ANY ADJUSTMENTS whatsoever to the numbers sent in?

WHY do there have to be any adjustments based upon a statistical aberration? They SUPPOSEDLY choose the raters with great care; why not then believe, when Jo Jo Smithy gives Pinehurst #2 a rating of 3's & 4's across the board, that he actually is convinced that this is what it deserves? If you can't trust your judges, then the judgment they yield is irrelevant.

By throwing out the aberrant scores, two things happen. The first is that courses scored, on AVERAGE, within a smaller range than others with a wider margin from high to low will end up with a higher adjusted score. Yet the course that earns more high scores, offset by more low scores, would seem to have impressed the raters more.
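To show what throwing out scores actually does, here is a neutral Python sketch with invented numbers; notice that the adjusted number depends entirely on the trimming rule the magazine picks:

import statistics

def trimmed_mean(scores, max_dev=1.5):
    # Drop any score more than max_dev from the raw mean, then re-average.
    mu = statistics.mean(scores)
    kept = [s for s in scores if abs(s - mu) <= max_dev]
    return statistics.mean(kept)

consensus_cc = [7.8, 7.9, 8.0, 8.0, 8.1, 8.2]    # tight agreement
polarizer_gc = [9.0, 9.0, 9.0, 9.0, 9.0, 3.0]    # loved by most, hated by one

for name, scores in [("Consensus CC", consensus_cc), ("Polarizer GC", polarizer_gc)]:
    print(name, round(statistics.mean(scores), 2), round(trimmed_mean(scores), 2))
# Same raw mean (8.0), but only the polarizing course moves once scores are trimmed.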

The second thing it does is completely undermine the attempt to rate courses, as it introduces the worst of all biases: a Golf Digest bias. Consider: many "experts" believe that "brown, firm & fast" makes for a better course. Yet WHO decided that is correct? WHY can't a rater DISAGREE with that premise and have his ratings reflect that? Knowing that disagreeing this way will lead to their ratings being ignored and thrown out will only produce raters who no longer judge what they see but send in the scores they believe GD wants.

We have this discussion board we all enjoy primarily because it gives us a place to discuss and sometimes argue about why the other fellow's opinions are incorrect. If the requirement for GCA.com were that each participant must judge and believe certain things about golf courses, wouldn't it be a pretty dull place?

No, it is most important that those chosen to judge be trusted to do so and that whatever their input and numbers are, they be accepted. Otherwise those judging have NO CREDIBILITY and everything that GD hopes to achieve is simply wasted effort.

Another example of how GD shows bias and a lack of trust in their raters: if the "Shot Values" criterion is so much more important than the others that the number presented should be doubled, why keep it scored on a 1-10 basis? Why not change it to a 1-20 basis?

Doing so allows a much more accurate rating. How so? Suppose a rater looks at Bethpage Black and thinks its Shot Values are worth about an 8.4. On a 1-10 basis he must give it an 8, which doubles to 16. Yet if he were able to give the true score of 8.4, doubling makes it 16.8, and on a 1-20 basis he would by necessity give it a 17.

That extra point is much closer to what the rater believed the course deserved and is far more accurate. Just imagine how many overall rankings would change by an extra point being averaged in. After all, isn't the announced numerical rating carried out to the HUNDREDTHS place? Look at the rankings and you'll see they are filled with differences between courses of several hundredths, or even a single hundredth, of a point.
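The rounding arithmetic from that example, spelled out in a few lines of Python (the 8.4 is my hypothetical score, and the doubling is the extra weight on Shot Values):

true_score = 8.4

on_ten_scale = round(true_score)            # forced to a whole number: 8
doubled_from_ten = 2 * on_ten_scale         # 16 once the category is doubled

on_twenty_scale = round(2 * true_score)     # scoring the doubled category directly: 17

print(doubled_from_ten, on_twenty_scale)    # 16 vs 17 -- a full point of difference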

Finally, what is interesting to me is how many of the proposed solutions show this same lack of trust; so how can they be valued? A class of national raters whose views are treated as more accurate is a fallacy on its face. As raters must pay their own way, the first and primary qualification for being one can't be knowledge; it would have to be whether you can afford to do it. That is the glaring weakness of that idea.

The real cure to this, in my opinion, is to trust the raters and their ratings. NOT as being correct or to be agreed with, but as having given an honest and honorable opinion. After all, isn't that really what is at the heart of this game we all love and played over the courses being ranked?

Time to step down from the soap box...


Jim Nugent

Re: GOLF DIGEST by the numbers
« Reply #23 on: April 03, 2009, 06:44:25 AM »
Tom

But doesn't Golf Magazine rig the deck, too?  My reading of how they do it is that the position of the course in an individual's list receives a weighting.  Top 3 get a weighting of 100, 4-10 of 85, etc.  Who decides that a top 3 listing is 15 "units" better than 4-10?  And how did they come up with that?  What's the rationale that explains why #3 on a list is, what, 17.6 percent more important than #4?

And anyway we judge a tree by the fruit it bears:

Top 5, GD
Augusta
Pine Valley
Shinnecock
Cypress
Oakmont

Top 5, GM
Pine Valley
Cypress
Augusta
Pebble
Shinnecock

Looks to me like both trees produce apples.  Or something like apples...



I see lots of posts focusing on the differences between GM and GD.  But the lists are mostly the same, at least on top.  The top 7 on GD are also top 7 on GM, just in slightly different order.  15 of GD's top 20 are in GM's top 20...22 of 30...31 of 40.  i.e. the two lists share roughly 75% of the same courses in their top 10, 20, 30 and 40.  Like Mark says, apples and apples.
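Counting that overlap is a few lines of Python if anyone wants to check the math -- here with placeholder names standing in for the actual GD and GM lists:

gd_top = ["Course %d" % i for i in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)]
gm_top = ["Course %d" % i for i in (2, 1, 4, 3, 7, 6, 5, 11, 12, 9)]

overlap = len(set(gd_top) & set(gm_top))
print(f"{overlap} of 10 shared -> {100 * overlap // 10}% overlap in this made-up top 10")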

Golf Magazine's system is fatally flawed.  It requires people to rate courses they never played.  Only someone who has played all the courses that are legit candidates can say which are the top 3 courses in the country...or the top 15 or 25 or whatever.  How many people have done that?  None or virtually none of the actual raters.  

Mark Smolens

Re: GOLF DIGEST by the numbers
« Reply #24 on: April 03, 2009, 09:45:28 AM »
How can you "boot the guys who are at odds with the majority on many courses?"  If, for example, I enjoyed Spanish Bay much more than Spyglass -- but for the opening holes -- is my ability to "rate" a course fatally flawed since that opinion is apparently held by so few others?  If I think that Lost Dunes is by far the best course in SW Michigan, must I be cast aside by the great majority of my friends and acquaintances who don't like the "extreme" greens and find Point O Woods more to their liking?

Lists of ratings are a compilation of the subjective beliefs of a large collection of individuals, all of whom share one thing in common -- the ability and/or desire to play golf courses.  Why can't we look at these collections of "data" as a way to foment discussion (which clearly they do in places such as gca), and not get so excited about them?  Courses -- whether it's members or owners, or both -- that trumpet positions on these lists are foolish, but so what?  Does anyone here really think that Rupert Neal's losing any sleep over the fact that Golf Digest's raters apparently don't think as much of his place as I do, or as most of the people who participate here do?  Based upon my one evening over dinner with Mr. Neal, I very much doubt it.

As for you Mr. Colton, the solution to your RPI vs. the Committee dilemma is clear and simple.  Add one more weekend to the tournament, and let every team in -- just as most states do for high schools.  The regular season results are used for seeding and home courts on the first weekend.  EVERYBODY gets a cut of the CBS/ESPN $$, so there's less incentive for Calhoun and his ilk to cheat in recruiting and such.  And no more beefing because Northwestern got left out (oh wait, nobody was beefing about that -- wait till next year).  GO Cats!