
Oh wow, I posted this years ago, but the problem still stands. I entered a jam recently where the most popular entries topped out at fewer than 30 votes (niche interest area running a friendly jam). I'm assuming that, so the lower half of the field wouldn't be heavily penalised for having a few fewer votes than the top half, the people running it did their own manual calculations for the results release. It seemed unfair for an entry to slide a few places down in the rankings (and in one case it affected who won a category) because of a difference of a few votes.

Being a niche area means it is quite hard to get "lots of votes". More interaction can mean more votes, which is a good thing, but when the penalty line can come down to single digits, it's pretty disheartening for games that slide under that barrier. (And let's face it, about half the entries in a comp are going to go that way no matter how interactive everyone is or is not. My game was not one of those affected this time, so that's not why I'm complaining; I just feel that for some comp types it isn't very fair.)

In theory it's a good idea to stop an unpopular entry with a handful of perfect votes from the dev's friends from winning over a more popular, better-made one with 200 votes across a range of scores, but it just doesn't work for smaller jams, where if you get 20 votes your score might be unaffected, but if you get 19 you're penalised. An option like the one Igor suggested, or simply working off raw scores rather than adjusted ones, would be good for those. I'm assuming that's not an option, though, considering it's been this way for at least the 3 years since I posted this thread.

I think the most important argument here is that not voting counts as a vote of sorts. People can see game descriptions and thumbnails without ever playing the game, so voters choosing not to try a game out should also be taken into account.

Exactly how it should be weighted can be argued, but I have no doubt that it should be done if one wants to call the results "fair". Even for small numbers.

Making people vote on a minimum number of games is a good idea for various reasons, among them dampening the effect of having your mum vote on your game and your game alone.

And I agree with what leafo said about small jams and vote numbers. Whatever raw votes you had would be inaccurate to begin with. It is like measuring the thickness of a sheet of paper with a ruler when you only have 5 sheets to measure: you need tricks like folding the sheets to get any semblance of an accurate result.

Admin

Please read my response here as it gives some clarification: https://itch.io/post/8993127

The formula for jam ratings is adjustment = sqrt(min(median, votes_received) / median)

The adjustment for a game that got 19 votes in a jam with a median of 20 is sqrt(19/20) ≈ 0.97468, about a 2.5% reduction in average score; e.g. a 4.5 rating would go to about 4.386 for the purpose of comparison during ranking.
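For anyone who wants to check the numbers, here is a minimal Python sketch of the published formula (the function and variable names are my own, not itch.io's):

```python
import math

def adjusted_score(avg_score, votes_received, median_votes):
    """Apply the jam adjustment: sqrt(min(median, votes_received) / median)."""
    adjustment = math.sqrt(min(median_votes, votes_received) / median_votes)
    return avg_score * adjustment

# 19 votes in a jam with a median of 20:
print(round(math.sqrt(min(20, 19) / 20), 5))   # 0.97468
print(round(adjusted_score(4.5, 19, 20), 3))   # 4.386
# At or above the median, min() clamps to the median and there is no penalty:
print(adjusted_score(4.5, 25, 20))             # 4.5
```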


I don’t understand why the cutoff is at the median. No matter the game jam, roughly half of the submissions will always receive a score penalty. That might be reasonable for something with thousands of entries, where score manipulation may be harder to deal with, but I don’t see why such a large range of submissions should always be pushed down just because they weren’t rated enough times. Maybe it could be lowered to the 25th percentile for smaller jams, or better yet, selected by the jam host beforehand.
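A host-selectable cutoff would only be a small change to the same formula. Purely as an illustration (this is not how itch.io works today, and the nearest-rank percentile here is just one assumed definition):

```python
import math

def percentile_cutoff(vote_counts, pct):
    # Nearest-rank percentile (an assumed definition for this sketch).
    ranked = sorted(vote_counts)
    k = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    return ranked[k]

def adjustment(votes_received, cutoff):
    # Same shape as the current formula, but with a configurable cutoff.
    return math.sqrt(min(cutoff, votes_received) / cutoff)

votes = [5, 8, 12, 19, 20, 25, 40, 60]
# A median cutoff (pct=50) penalises a 12-vote entry...
print(round(adjustment(12, percentile_cutoff(votes, 50)), 3))   # 0.795
# ...while a 25th-percentile cutoff leaves it untouched.
print(adjustment(12, percentile_cutoff(votes, 25)))             # 1.0
```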

Found this thread while trying to look up the adjusted score formula. This system makes a ton of sense to me, from a statistics angle. Having been on both sides of the median in jams, I understand the “penalty” feeling people are describing though.

This is kind of a silly solution, but it could be effective to simply multiply all of the final adjusted scores by 20 to change the scale to 100 points. The adjusted score effectively becomes a “percentile” rank of sorts. It’s more of a psychological trick than anything, but the difference in scale might mitigate the direct comparison with the raw score that makes some people feel like they got a penalty.
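Taking leafo's figures from earlier in the thread, the rescaling is just a multiplication by 20:

```python
raw, adjusted = 4.5, 4.386                 # figures from the admin's example above
print(raw * 20, round(adjusted * 20, 1))   # 90.0 87.7
```

On a 100-point scale the gap reads as a couple of points rather than a shaved-down star rating.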