I really appreciate your response, but my original points still stand.
The design goal of the current algorithm does not include giving you individual, direct control over whether your score is adjusted or not, just like you can't directly control how many people decide to rate your game. You can only influence how visible your game is by how much you participate in ratings & comments.
The goal is to rank each project relative to every other project. As an individual participating in a ranked jam, you should be prepared to accept that some entries may organically be better received than yours, with a high average and a high number of ratings.
I fully understand your point about the aliasing that happens around the median. That aliasing still happens with your suggestion too; it just moves to a different spot and, as I mentioned, increases the odds of the 'fluke' scenario: a game with 6 votes ranking above a game with 200 votes (since medians are generally pretty low; the sketch below walks through this case). By lowering the bar you diminish the reward that proven submissions get (those with a high number of ratings & score).
I also want to note, in case you're not familiar with the algorithm, that the score adjustment is avg_score * sqrt(min(median, votes_received) / median). Note the sqrt, which effectively reduces the slope of the penalty for those near the median, mitigating some of the aliasing issue. (One experiment worth exploring might be changing the denominator of the exponent, i.e. using a root other than the square root.)
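To make that concrete, here's a minimal sketch of the adjustment in Python. The median of 20 ratings and the scores are made-up numbers, purely for illustration:

```python
from math import sqrt

def adjusted_score(avg_score: float, votes_received: int, median: int) -> float:
    """Score adjustment as described above: entries at or above the
    median number of ratings keep their raw average; entries below it
    are scaled down, with sqrt softening the penalty near the median."""
    return avg_score * sqrt(min(median, votes_received) / median)

# With a realistic median (say 20 ratings), a 6-vote entry is penalized:
print(adjusted_score(4.8, 6, 20))    # ~2.63
print(adjusted_score(4.2, 200, 20))  # 4.2 (no penalty at or above the median)

# Lowering the bar (e.g. a threshold of 6) removes that penalty entirely,
# letting the 6-vote fluke outrank the 200-vote entry:
print(adjusted_score(4.8, 6, 6))     # 4.8
print(adjusted_score(4.2, 200, 6))   # 4.2
```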
Another thing to note is the nature of game jams, especially apparent in larger ones. There are quite a few broken, incomplete, partially made, or just low-quality entries. Many people will skip voting on these depending on the circumstances. The median rating count naturally maps over this curve of completion across entries. There's also the group of people who don't participate in voting at all, so their games get limited exposure.
I can generally say with confidence that anyone who puts genuine effort into their game (including its presentation) and into rating other entries is unlikely to get caught in the adjusted group, since the median amount of work required is lower than they might think.
(It’s possible your intuition is mixing up the average with the median for number of ratings. Because the distribution of rating counts is heavily right-skewed, the average is consistently larger than the median. Your concerns assume the scenario where median == average, which doesn’t really happen in practice.)
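A quick illustration of that skew, with hypothetical rating counts (the numbers below are invented, but the shape is typical of ranked jams):

```python
from statistics import mean, median

# Hypothetical ratings-per-entry counts: many entries with only a
# handful of ratings, a few popular entries with lots. The long right
# tail pulls the average well above the median.
ratings_per_entry = [3, 4, 5, 5, 6, 7, 8, 10, 12, 15, 40, 90, 200]

print(median(ratings_per_entry))  # 8
print(mean(ratings_per_entry))    # ~31.2
```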
Also, one more thing: if a jam host ever reaches out with specific goals in mind for how voting and ranking should work, I’m happy to work with them.
Lastly, if you’re curious to experiment, here are some ranked jams you can look through:
https://itch.io/jam/brackeys-4/results
https://itch.io/jam/cgj/results
https://itch.io/jam/brackeys-3/results
https://itch.io/jam/lowrezjam-2020/results
https://itch.io/jam/igmc2018/results
https://itch.io/jam/united-game-jam-2020/results
https://itch.io/jam/nokiajam3/results