The results for the latest Ebitengine game jam just came out, and this exact situation decided first place. Average ratings per game: 11.8, median: 13.0. First place got 13 ratings and an overall score of 4.323, while second place got a normalized score of 4.179 from 12 ratings (above the average!). Their raw score was 4.350, which would have put them in first place. Both are awesome games, but this really feels bad. 17 entries, a very high average and median number of ratings, and very well balanced rating counts across the board, yet the normalization still can't be avoided. And that entry had two participants, so if both of them voted, then... did they effectively force their own ratings to be normalized by inflating the otherwise extremely tight spread of rating counts for the other games?
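For what it's worth, the adjusted score above is consistent with scaling the raw score by the square root of (rating count / median rating count). That is inferred purely from the posted numbers, not from itch.io's actual code, but it reproduces both scores. A quick sketch in Go:

```go
package main

import (
	"fmt"
	"math"
)

// adjusted scales a raw score down when an entry has fewer ratings than the
// median rating count. The formula is only inferred from the numbers quoted
// above; it is not taken from itch.io's implementation.
func adjusted(raw float64, ratings, median int) float64 {
	if ratings >= median {
		return raw // assumption: entries at or above the median are left untouched
	}
	return raw * math.Sqrt(float64(ratings)/float64(median))
}

func main() {
	// Second place: 12 ratings, 4.350 raw, against a median of 13 ratings.
	fmt.Printf("%.3f\n", adjusted(4.350, 12, 13)) // prints 4.179
	// First place: 13 ratings, exactly at the median, so no adjustment.
	fmt.Printf("%.3f\n", adjusted(4.323, 13, 13)) // prints 4.323
}
```

That would also explain why first place was untouched: with 13 ratings it sits exactly at the median, so nothing gets scaled.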
It doesn't look like itch.io offers any mechanism to make things fairer in these situations. I'd personally love to see a combination of judge and participant ratings, so that whenever an entry has fewer ratings, the judge ratings can be used to compensate. But the current behavior makes jams less fun for small communities.
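A rough sketch of what that suggestion could look like, purely hypothetical (blendedScore is a made-up helper here, not anything itch.io offers): top an entry's participant ratings up with judge ratings until it reaches the median count, then average the whole pool instead of scaling the score down.

```go
package main

import "fmt"

// blendedScore is a hypothetical helper: when an entry has fewer participant
// ratings than the median rating count, judge ratings are added to the pool
// before averaging. A sketch of the idea above, not an existing itch.io feature.
func blendedScore(participant, judge []float64, median int) float64 {
	pool := append([]float64{}, participant...)
	for _, j := range judge {
		if len(pool) >= median {
			break
		}
		pool = append(pool, j)
	}
	if len(pool) == 0 {
		return 0
	}
	sum := 0.0
	for _, r := range pool {
		sum += r
	}
	return sum / float64(len(pool))
}

func main() {
	// 12 participant ratings (illustrative values) plus one judge rating,
	// topped up toward a median count of 13.
	participant := []float64{5, 5, 5, 4, 4, 5, 4, 4, 4, 5, 4, 4}
	judge := []float64{4}
	fmt.Printf("%.3f\n", blendedScore(participant, judge, 13))
}
```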
From the thread "How can I turn off score weighting for my jam?":
If you have judges available, then you can ask them to rate entries to balance things out. It sounds like you’re specifically asking that judges have higher weighting than regular voters? Might be interesting, but:
Most of the requests in this thread are about making a specific tweak to the order of results. You can use a manually ranked criterion to set the order of the results to whatever you want.
The algorithm we use is the most “fair” from an objective standpoint; I’ve described why I strongly believe this in previous threads.
A few more thoughts here: https://itch.io/post/8205919
more fair
Fair by what standard? With such low numbers, small increments will switch things around. You can't rate a game with fractions of a star.
If that game with 12 ratings had a raw score of 4.350, how does that work out? That would be 52.2 stars in total, which is not possible. Also, 4.323 * 13 is 56.2.
Anyway, that is around 50 stars, so a one-star difference is about 0.02, and the difference between those scores is 0.027.
I do not think I understand those numbers. But I can tell you that a missing rating should have an effect; not accounting for that effect would be unfair. How to factor it in is the big question. Not rating a game, and even more so not playing it at all, is a kind of rating in itself. So having more ratings is a form of success that should be weighted.
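Purely as an illustration of that point (my own sketch, not anything itch.io does): one simple way to give missing ratings an effect is to count every rating an entry is short of the median as a neutral 3-star vote, which pulls entries with fewer ratings toward the middle of the scale.

```go
package main

import "fmt"

// withMissingAsNeutral is only an illustration of "a missing rating should
// have an effect": each rating an entry falls short of the median is counted
// as a neutral 3-star vote. Not itch.io's actual method.
func withMissingAsNeutral(raw float64, ratings, median int) float64 {
	if ratings >= median {
		return raw
	}
	missing := median - ratings
	return (raw*float64(ratings) + 3.0*float64(missing)) / float64(median)
}

func main() {
	// 12 real ratings at a 4.350 average plus one assumed neutral vote.
	fmt.Printf("%.3f\n", withMissingAsNeutral(4.350, 12, 13)) // prints 4.246
	// At the median, nothing changes.
	fmt.Printf("%.3f\n", withMissingAsNeutral(4.323, 13, 13)) // prints 4.323
}
```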