Admin

Please read my response here as it gives some clarification: https://itch.io/post/8993127

The formula for jam ratings is adjustment = sqrt(min(median, votes_received) / median)

The adjustment for a game that got 19 votes in a jam with a median of 20 is sqrt(19/20) ≈ 0.97467, about a 2.5% reduction in average score; e.g. a 4.5 rating would drop to 4.386 for the purpose of comparison during ranking.
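As a quick sanity check, the formula above can be sketched in a few lines of Python (the function name here is just for illustration, not an official itch.io API):

```python
import math

def adjusted_score(raw_score, votes_received, median_votes):
    """Scale a raw average rating by sqrt(min(median, votes) / median)."""
    adjustment = math.sqrt(min(median_votes, votes_received) / median_votes)
    return raw_score * adjustment

# 19 votes in a jam whose median is 20: roughly a 2.5% reduction
print(round(adjusted_score(4.5, 19, 20), 3))  # → 4.386

# At or above the median the adjustment is 1, so the score is unchanged
print(adjusted_score(4.5, 25, 20))  # → 4.5
```

Note that the adjustment is capped at 1, so games at or above the median vote count are never boosted, only games below it are pulled down.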


I don’t understand why the cutoff is at the median. No matter the game jam, 50% of submissions will always receive a score penalty. That cutoff might be reasonable for jams with thousands of entries, where score manipulation may be harder to deal with, but I don’t see why such a large share of submissions should always be pushed down just because they weren’t rated enough times. Maybe it could be lowered to the 25th percentile for smaller jams, or better yet, chosen by the jam host beforehand.

Found this thread while trying to look up the adjusted score formula. This system makes a ton of sense to me from a statistics angle. Having been on both sides of the median in jams, though, I understand the “penalty” feeling people are describing.

This is kind of a silly solution, but it could be effective to simply multiply all of the final adjusted scores by 20 to change the scale to 100 points. The adjusted score effectively becomes a “percentile” rank of sorts. It’s more of a psychological trick than anything, but the difference in scale might mitigate the direct comparison with the raw score that makes some people feel like they got a penalty.