Stupid idea: do any rating systems employ a "100 rounds" approach? You're given a list of, say, 200 courses and have to allocate 100 rounds of play across the list, however you want. After you play a course, you decide how many of your 100 rounds you'd give it. Zero is an option.
By forcing the choice, you don't have to come up with a definition of greatness, or even of "this is a top 10." If you play a course you feel you'd like to play 10 rounds out of every 100, you have to take those rounds from the other courses (assuming you've already doled out all 100).
Also, forced choice would do a better job of showing just how much better the best courses are than everything else; some people might put 30, 40, even 50 rounds on just one course.
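Just to make the zero-sum mechanic concrete, here's a toy sketch in Python (the course names and numbers are made up, and taking rounds proportionally from everything else is only one way a rater might free them up):

```python
def give_rounds(allocation, course, rounds):
    """Assign `rounds` to `course`, taking them proportionally from the
    other courses so the total stays at exactly 100."""
    others = {c: r for c, r in allocation.items() if c != course}
    remaining = 100 - rounds
    scale = remaining / sum(others.values())
    new_alloc = {c: r * scale for c, r in others.items()}
    new_alloc[course] = rounds
    return new_alloc

# Hypothetical: a rater's existing 100-round spread, then a new course shows up.
current = {"Course A": 50, "Course B": 30, "Course C": 20}
print(give_rounds(current, "Course D", 10))
# {'Course A': 45.0, 'Course B': 27.0, 'Course C': 18.0, 'Course D': 10}
```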
Aggregating "scores" across a team of raters, you could normalize the rounds to a denominator of 100.
Graphically, you could ditch the ordinal rankings everyone seems wrongly attached to and use a cardinal graphic, like a bar chart that shows each course's score in comparison with the others.
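And a quick sketch of the bar-chart idea using matplotlib, with made-up normalized scores:

```python
import matplotlib.pyplot as plt

# Hypothetical team scores, normalized to rounds per 100.
scores = {"Course A": 45.0, "Course B": 27.5, "Course D": 15.0, "Course C": 12.5}

courses = list(scores)
plt.barh(courses, [scores[c] for c in courses])
plt.xlabel("Rounds allocated per 100")
plt.title("Team allocation by course")
plt.gca().invert_yaxis()  # highest-rated course on top
plt.tight_layout()
plt.show()
```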