Wayne,
Let's think about the case of a guy who rates Aronomink 50th (I know they don't list courses from 1 to 100, but we'll assume the rating he assigned would rank it there if everyone agreed with him). He sees it is 51st when the ratings come out and that's fine with him, as he mostly agrees the courses rated higher are better and the courses rated lower are not as good. Next time he sees it is out of the rankings entirely. He thinks that's wrong, so the time after that he adjusts his ratings to effectively rank it 30th, to compensate for the other raters being wrong in his mind.
Likewise, there could have been some guy who rates it 80th. He sees it's 51st when the ratings come out, so possibly he leaves it off his ratings next time, thinking that if everyone else is overrating it, he'll adjust things to make up for that.
Unless the population of raters is changing a lot every couple of years, it's either this or there's simply no consistency in the numbers raters come up with. If you played a course once 10 years ago, and you don't keep any notes to refer to or even look at the values you assigned it last time, then your own ratings will fluctuate a lot. Maybe you review the ratings from last time to refresh your memory, and the absence of Aronomink stands out to you. You don't even consciously try to rate it higher, but just having it in your mind recalls memories of your experiences playing it, which unconsciously gives it a higher rating from you this time. These guys are human; there's no objective way to do this, and those types of things will happen.
I wouldn't be surprised if there was some lobbying that occurred from some of the members who didn't like it when their club dropped off the list, so they called all their friends who they know are raters and asked them to remember Aronomink this time. I'm not saying there is necessarily anything untoward going on; if all they did was remind them of a round they played together a few years back, so the course is fresh in the rater's mind, that's all it would take to pump the numbers back up again.
I think Matt Ward's idea of "super raters" might have some merit. Given the choice between trusting the opinion of 1000 guys who have played some of the courses on the top 100 but on average have probably played well under half of them, versus the opinion of 20 guys who have played almost all of them, I'd go with the latter. I know a lot of guys in GCA are raters and would hate to be kicked out of the party, so there are a lot of people against this viewpoint, but I think he's right. I realize he's exactly the kind of guy who might merit selection as a super rater, so he may have his biases as well, but that's no reason to reject his idea out of hand.
You could extend the super rater idea to state rankings as well. The set of people who have played the top 30 in Iowa or Maine isn't necessarily going to have a lot of overlap with those who have played all the courses of note nationwide. But that doesn't matter, because no one really cares how the 10th best in Iowa compares to the 10th best in Maine or the 100th best in California. It would, however, be useful for the national super raters: if, for instance, a new course in Iowa were rated higher than The Harvester, they might think it merits a visit, along with revisiting The Harvester, to see if the new course deserves mention nationally or if The Harvester has lost something and deserves a demotion.