Tom Doak,
No, an update took place yesterday, so the current site now reflects data from both panelist and public ballots. (What we don't quite have yet is the index of arrows showing how many spots a course rose or fell since the last update, but I'm expecting that will be added in the next week.) Your statement in the PPS is accurate (panelist ballots receive more weight than public ones), though I'm guessing that when George sent that email to friends and acquaintances he was looking for well-traveled people to add to the panelist pool. LINKS raters don't receive access or travel perks under the magazine's banner, so being a panelist really just means you're reasonably well-traveled, willing to put effort into your ballot, and (I hope) willing to participate in the overall conversation from time to time.
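(For the curious, here's a minimal sketch of what a two-tier weighting scheme like that might look like. The weights, function, and data layout are all hypothetical--the actual LINKS formula isn't public, and I'm not claiming this is it.)

# Hypothetical sketch of two-tier ballot weighting; the weights are
# illustrative only, not the magazine's actual values.
PANELIST_WEIGHT = 3.0  # assumed: panelist ballots count more
PUBLIC_WEIGHT = 1.0    # assumed: baseline weight for public ballots

def weighted_scores(ballots):
    """Compute each course's weighted-average points.

    `ballots` is a list of (tier, points) pairs, where `points` maps
    a course name to the points that voter's ballot awards it.
    """
    totals, weight_sums = {}, {}
    for tier, points in ballots:
        w = PANELIST_WEIGHT if tier == "panelist" else PUBLIC_WEIGHT
        for course, pts in points.items():
            totals[course] = totals.get(course, 0.0) + w * pts
            weight_sums[course] = weight_sums.get(course, 0.0) + w
    return {course: totals[course] / weight_sums[course] for course in totals}

The point is simply that a panelist's opinion pulls a course's average harder than a public voter's does, without either being discarded.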
As for what Sven describes as "weeding out the anomalies", one of the few "rules" in place is that architects recuse themselves from voting on their own courses, and that supers/Directors of Golf do the same for their current place of employment. Some guys can't help themselves and do this stuff anyway, but it's easy to spot and neutralize without discarding the rest of an otherwise usable ballot.
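(Mechanically, that kind of neutralization is just a filter--something along these lines, where the ballot and affiliation data are invented purely for illustration:)

def neutralize_conflicts(ballot, affiliations):
    """Drop a voter's entries for courses they designed or currently
    work for, keeping the rest of the ballot intact."""
    return {course: rank for course, rank in ballot.items()
            if course not in affiliations}

# An architect's ballot loses only the self-vote:
ballot = {"Own Design GC": 1, "Course X": 2, "Course Y": 3}
print(neutralize_conflicts(ballot, {"Own Design GC"}))
# {'Course X': 2, 'Course Y': 3}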
The "magic formula" is in place to prevent it from becoming what Joe Tucholski mentioned--a victim of ballot-box stuffing--but we haven't seen any organized efforts in that direction and it has largely been a non-issue so far. There are several ways to determine whether a user has put a modicum of effort into their ballot. I won't reveal all of them, but one common blunder is when a club employee or PR rep ranks their course #1 and then wanders away for good. We actually don't even have to throw these ballots out because they're useless--they don't provide any information as to the courses that their #1 is greater than.
Anyway, the idea is to allow for plenty of natural movement and to register diversity of opinion without compromising the basic credibility of the list. It's true that tension can exist between these two goals, but it can also be lessened by the good-faith efforts of the people involved. I think our original group did a good job, but we're also looking for the next ~100 people both to add useful data to the system and, especially, to keep the conversation going--whether that happens here or elsewhere isn't all that relevant to me. Sean Arble was not a member of the original panel, but his ballot has since been scored with panelist weight, in part because he had the courage to share his list and his ideas on how he created it. Is that kind of cherry-picking unscientific? Sure. But Sean's thread was exactly what this is all about, and I'm hoping to see more like it in the future.
We're not screening for conformity, just misinformation. 99% of incoming ballots enter the statistical model with one weight or the other.