Jam ratings calculation issue

A topic by makscee created Jun 24, 2021 Views: 3,438 Replies: 15
(+1)

I recently participated in GMTK 2021. I just saw my ratings, and the actual score of each criterion was lowered by 1-1.5 points.

https://itch.io/jam/gmtk-2021/rate/1084211

Apparently, your score is lowered if the number of ratings is lower than the median amount. I get why that's a rule: you don't want a game with two 5-star ratings to just win. But it creates another issue. Now, after that correction, the rating of my game might be lowered for reasons which have nothing to do with how good the game is! It makes the rating lose objectivity. All I really want from a jam is to get feedback on how good my game was compared to other participants, and I can't do that if I didn't get enough ratings. Also, I can't even really know how many ratings I need; there's no warning that my number of ratings is below the median.

Possible solutions that I see:

  • Show rankings of the raw score. It would almost erase the issue for me personally, as all I want is the objective information on my rankings.
  • Put a hard limit on the needed amount of ratings. This can be calculated from the number of submissions for example or can be set as an option by jam organizer.
  • Show a warning that currently your amount of ratings is lower than the median.

Also, I really don't get why you would choose to take the median and not the 10th percentile, for example. It literally makes the lower half of participants by popularity lose their scores.

Admin (7 edits) (+1)

Now, after that correction, the rating of my game might be lowered for reasons which have nothing to do with how good the game is! It makes the rating lose objectivity.

It looks like your project got 8 ratings, where the average number of ratings across the jam was 25.9. That low a number of ratings would not be conclusive enough to calculate a fair score.

Show rankings of the raw score. It would almost erase the issue for me personally, as all I want is the objective information on my rankings.

The average rating for projects that receive a low number of ratings will have very high variance and poorly represent how participants felt about your game, so we do not show a ranking for that score. It would be unfair.

Put a hard limit on the needed amount of ratings. This can be calculated from the number of submissions for example or can be set as an option by jam organizer.

That might be worth adding, but in this case you would still have too low a number of ratings for it to make a difference, based on how we would recommend hosts configure the jam.

Also, I really don’t get why you would choose to take the median and not the 10th percentile, for example. It literally makes the lower half of participants by popularity lose their scores.

The 10th percentile would be way too lenient, and would allow games with high variance in their rating to overtake projects that have a more accurate rating due to a larger number of ratings.

The system is built this way to encourage you to participate in the rating process. If you want to receive ratings and be eligible for a more accurate, non-penalized final score, then you will need to rate other people’s work and leave constructive comments on their submission pages. That will allow your project to show up on the “most karma” sort, and people who see your comments will be linked to your submission, getting a chance to play and rate your project back.

Hope that helps!

(+1)
The average rating for projects that receive a low number of ratings will have very high variance and poorly represent how participants felt about your game, so we do not show a ranking for that score. It would be unfair.

How is that unfair to just show the information? I don't think it's fairer to compensate for dispersion only in the negative direction; dispersion works both ways, and in many cases you would be lowering a score that was already too low because of not enough ratings.

That might be worth adding, but in this case you would still have too low a number of ratings for it to make a difference, based on how we would recommend hosts configure the jam.

I never said that my 8 ratings should be enough, but I should at least get some warning that I should try to get more ratings and how many more. Will you add this feature?

The 10th percentile would be way too lenient, and would allow games with high variance in their rating to overtake projects that have a more accurate rating due to a larger number of ratings.
The system is built this way to encourage you to participate in the rating process. If you want to receive ratings and be eligible for a more accurate, non-penalized final score, then you will need to rate other people’s work and leave constructive comments on their submission pages. That will allow your project to show up on the “most karma” sort, and people who see your comments will be linked to your submission, getting a chance to play and rate your project back.

I see the intention, but in this particular solution, wouldn't you punish half the people even if everyone got 100 or more ratings? The median will always shift as more people get more ratings, and there seems to be no way to get zero people penalized. Also, even if I do get enough ratings to pass the median, I still know that half the participants got their scores lowered, so my game isn't being compared to them objectively.

I still think that Ludum Dare has the best solution for this problem. Everyone can get 20 ratings and they are forced to play and rate games of other people. Also, everyone gets a very clear warning that if they have lower than 20 ratings, they will not be ranked. In their case, every participant can get a fair score and be ranked among others, not just half the participants. I see no reason to leave this median solution, it creates more problems than it solves. You literally make really interesting rating data lose its objectivity :(

Admin (3 edits) (+1)

How is that unfair to just show the information?

Without any other changes to how we handle ratings, it’s unfair because the people who would get the top ranking would effectively be random (most likely an entry with 1 vote at 5 stars that was otherwise ignored by others). Additionally, a popular and highly rated game will never be able to hold that slot, because maintaining a perfect 5 star average as the number of ratings goes up is effectively impossible.

I see the intention, but in this particular solution, wouldn’t you punish half the people even if everyone got 100 or more ratings

You have to keep in mind we host all kinds of different jams, including jams that often don’t even have enough participants to hit 20 ratings on a single entry. The median system scales across jams of all sizes, small or large, automatically. It will work regardless of whether participants are motivated to rate projects or not. Ludum Dare has the advantage that they generally know how big the jam will be and how many people will be voting on entries, so they can choose a value up front that reflects how people will be voting. (You also mentioned Ludum Dare’s score being “fair,” but even that is subjective.)

I know at face value penalizing the score of half the entries seems strange, but think about what the jam system is supposed to do: rank every entry relative to every other entry using a limited and fragmented collection of votes. If an entry can obtain more than the median number of votes while maintaining a high average, that is evidence that the participants of the jam do like it, so it should be promoted.

Consider this example: how would you order a game that got 20 votes at a 5 star average and a game that got 200 votes at 4.99? (Now consider other combinations where the numbers are slightly adjusted; it’s not exactly a simple problem after all.) The number of ratings is a confidence value used against the average score. It might be helpful to think of your final score as a score relative to everyone else’s entry. We still provide you with the raw score for your own information.
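
To make that concrete, here’s a rough sketch of that comparison using the adjustment described further down in this thread, with hypothetical medians picked purely for illustration:

```python
import math

def adjusted_score(avg_score, votes_received, median_votes):
    # Entries at or above the median keep their raw average;
    # entries below it are scaled down toward zero.
    return avg_score * math.sqrt(min(median_votes, votes_received) / median_votes)

# Hypothetical jam where the median number of ratings is 30:
print(adjusted_score(5.00, 20, 30))   # ~4.08 -> ranks below the 4.99 entry
print(adjusted_score(4.99, 200, 30))  # 4.99  -> at/above the median, unchanged

# With a hypothetical median of 15, both entries clear the median,
# so the 20-vote entry keeps its perfect 5.0 and ranks first:
print(adjusted_score(5.00, 20, 15))   # 5.0
print(adjusted_score(4.99, 200, 15))  # 4.99
```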

Ranking games is always subjective and scores are arbitrary, though, which is why we also let people host jams without any kind of rating enabled at all.

I never said that my 8 ratings should be enough, but I should at least get some warning that I should try to get more ratings and how many more. Will you add this feature?

Possibly, but as the median is a moving target it may not be a good approach. In this case, from what I can see, you rated 5 entries. My (rhetorical) question to you is: why did you rate only a small number of entries despite being concerned about getting a full score? I’m guessing the issue here is that you were surprised about this system after the fact, and you weren’t aware of it going in. Perhaps if this was communicated better up front you would have changed how you approached the jam.

I still don't see any arguments against the solution where organizers could choose the vote threshold themselves. I would be happy with this solution: organizers would get an option to move away from this median system, and you could implement a warning for participants. As you said, this problem is not easy to solve, but it seems to me that you've chosen an easy-to-implement, scalable solution, which is fine; just give people the option to not use it and avoid some of the problems it creates. Put the responsibility on organizers, because a generalized solution will never fit everyone perfectly. We can talk all day about all the edge cases of each approach, but I think I've explained all my arguments, and so have you. So, I hope you will decide what's best. I just wanted to explain my concerns, because this thing sort of ruined the jam for me in a way. Of course, I will try to get as many ratings as possible in the future. I just really think there's room for some easy improvement for everyone.

Thank you!

(+2)

How about keeping the median as a point of reference, but making the no-penalty threshold slightly lower?
For example, 80% of the median (rounded up) as opposed to 100% of the median. A game getting 8 votes shouldn't have much higher variance compared to a game with 10 votes, likewise a game with 80 votes compared to one with 100 votes (compare that to e.g. 2 votes vs 10 votes). Getting a number of votes below that threshold decreases the rating like now, but with 80% of the median as the point of reference.

With that approach, it's perfectly feasible to achieve 100% of entries without the penalty, as long as the lower half of most-voted entries stays in the 80%-100% of median range. As it is now, the entry must either be in the top half of most voted entries or get the exact same number of votes as the median in order not to get the penalty.
Also, showing the median and no-penalty threshold during the voting - and which entries have how many votes - would allow participants and voters to make an informed decision about how many votes need to be gathered/cast to keep oneself/others out of the penalty zone without overcommitting to the jam.

By still using the median as a point of reference the system keeps its scalability. By lowering the threshold to 80% the system doesn't penalise nearly all entries in the lower half of most-voted entries, and isn't so sensitive to the median changing by just 1 vote. Finally, by keeping the threshold around 80% rather than 50% or 20% of median we can still keep the ratings variance comparable between the least-voted non-penalised entry and the median-voted entry.
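
A minimal sketch of what I mean, assuming - since I don't know the exact formula itch.io uses - that the adjustment scales the raw average by the square root of the votes-to-cutoff ratio, i.e. the same shape as now but with the cutoff at 80% of the median:

```python
import math

def adjusted_score_80pct(avg_score, votes_received, median_votes):
    # Hypothetical variant: the no-penalty cutoff is 80% of the median
    # (rounded up) instead of the median itself.
    threshold = math.ceil(0.8 * median_votes)
    return avg_score * math.sqrt(min(threshold, votes_received) / threshold)

# With a median of 10 ratings the cutoff becomes 8, so an 8-vote entry
# keeps its raw average while a 2-vote entry is still scaled down heavily:
print(adjusted_score_80pct(4.5, 8, 10))  # 4.5
print(adjusted_score_80pct(4.5, 2, 10))  # 2.25
```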

Thoughts?

(+1)

Really like your idea! My main concern was that there's no way to get all 100% of people to not be penalized, which is just absurd. And your solution only requires changing the formula, which is very simple to implement too.

@leafo what do you think?

Admin (8 edits) (+2)

My main concern was that there’s no way to get all 100% of people to not be penalized, which is just absurd.

I already explained why it is not absurd. The goal here isn’t to avoid penalizing entries. The goal is to rate everything relative to everything else. If you’re concerned about the penalized number, we also display the un-adjusted scores. (The formula could be written inverted, where the upper half is boosted instead of the bottom half being penalized; the end effect is the same.)

Although this change is an interesting idea, I would have to think a lot more about the edge cases. Keep in mind, for someone’s entry to go up in position, another person’s must go down (and specifically in this case, your desire for your entry to rank higher would result in the decrease of others). The variance in the “low number of ratings” entries is quite high, so it’s hard to conclude by intuition whether this is more fair. I would need to do an analysis of how this impacts ratings on different jams to get a feeling and work from there. (Keep in mind, we would never recalculate the rankings on an existing jam.) So although a formula may be simple to write or adjust, the impact is substantial and requires a lot of thought.

My honest suggestion about your particular situation is: participate in rating games.

Your rank got penalized essentially because people were unable to discover your game to rate it: you barely participated in rating entries yourself, so it received only a small number of ratings. A highly ranked game is one that has both many ratings and a high average score. Even if you got a high average, if you got a low number of ratings then it makes sense that you would rank below those who got more ratings with a similar average. I think most people would agree with this way of ordering.

Getting thousands of entries rated for a jam in a limited time frame is a massive endeavor where everyone who can needs to participate. The reality is that rating games is work, and if you want others to put in the free work of rating your game, then you have to contribute back as well. The system is intentionally designed to encourage people to rate entries so the jam is the best experience possible for as many people as possible.

I hope that makes sense

(1 edit) (+1)

My fix isn't meant so much to ensure that no-one will get their entry punished, but rather that everyone has a reasonable chance of avoiding the punishment with proper effort.

The current system is somewhat volatile in that increasing the median by 1 means that previously median-ranked games become punished. So people are sort of encouraged to get their game's vote count significantly above the median, which itself eventually pushes the median up. Also, voting on other entries isn't 100% efficient - the score depends on the game's own ratings, not the author's votes, and not everyone returns the voting favour. So it further compels people to increase the median. It might help the number of votes, but it makes things more stressful/frustrating for participants (and perhaps makes them seek shortcuts by leaving lower quality votes).

My fix is meant to stabilise the system. People might aim for the 80% threshold (or maybe 90%), but then they still end up on that shaky ground. However, those who get their votes at the median level are in a comfortable position - their entries can still tolerate the median growing by an extra 25% (e.g. the threshold increasing from 16 to 20) and they don't need to rev up the median (potentially punishing other entries in the process) to make sure they're on the safe side.

If we add to that:

  • clear information on the game's page about the current median, the threshold and what it means for the entry
  • a search mode with entries voted below median (not threshold, because median is safer) sorted by author's coolness (to add extra cause-and-effect between voting for other entries and getting own entry voted)

then everyone should be able to safely avoid punishment by putting in roughly a median-sized effort. And given how active some voters can get - note that a voter can single-handedly increase the median by 1 by voting for all entries - a median-sized effort is by no means insignificant.

Totally agree with you, nothing more to add. It's really dumb that after learning about how this system works, I was kind of forced to consider just putting random ratings on games since my ratings actually push the median further up and kind of decrease my chances to get past the median if some of the games I rated don't return the favor. And some people will actually do it for sure, especially since you can't even be sure you got past the median!

And leafo, I get what the current solution was here to solve, I really do. What we're trying to tell you is that by some minor changes we can solve the initial issue as well as other problems that we've described.

Admin (10 edits) (+1)

Cheating is always going to be an issue. Please don’t create fake votes; there’s a good chance you will get caught, and your entry will get disqualified. Especially in larger jams, we’re looking more closely at the activity of the voters.

If you have the time to cheat then you also probably have the time to rate games correctly. You don’t need to rate hundreds of games to score well, put in a genuine effort and others will discover your project organically through the entries page & comments.

Or don’t, there’s nothing that says you have to vote on entries. Skip voting if you don’t have time and feel accomplished that you were able to submit. I just feel it’s hard to reconcile wanting your game to get high visibility in the rankings with not wanting to contribute back, especially when you have so many others investing a lot of energy in doing the work of rating and leaving comments on entries.

I’m sorry you had a bad experience, but that’s not justification to redo the entire ranking and affect the thousands of other participants. A lot of thought was put into how this system works, as I’ve tried to explain here. I understand that it can be sad to see your project not rank where you hoped, but this is a user-contributed voting system, so the participants will need to participate.

Admin (8 edits)

I sound like a broken record here, but I want to emphasize that avoiding the score adjustment is not a design goal of this system. The point of the adjustment is to allow entries to be relatively ranked in the bottom half while minimizing the randomness factor, by scaling down scores that have lower levels of confidence.

Also, keep in mind we’re using the median, not the average, so if a few people end up going all in rating a lot of entries, the median will not be affected. The median is a good representation of how participants of the jam are voting.

Lastly, for higher medians, also understand that increasing the median by 1 will have less of a score adjustment on entries that are around the median. The ratio of ratings received versus the median number of ratings is used, and the shortfall at 99/100 is much smaller than at 9/10.
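
As a quick worked example, using the square-root adjustment given later in this thread: an entry one rating short of a median of 100 keeps sqrt(99/100) ≈ 99.5% of its raw average, while an entry one rating short of a median of 10 keeps only sqrt(9/10) ≈ 94.9%.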

By adjusting the algorithm with a focus on reducing the number of people getting a score adjustment you essentially introduce entropy into the rankings (so I don’t agree that your suggestion would stabilise the system). The main goal of the ranking algorithm is to avoid “fluke” type situations where a submission that wasn’t seen by many happened to get a higher rating, because the people who did see it didn’t care, were biased, or something else. Especially in a jam like GMTK, where public ratings are allowed, this is definitely a concern. The secondary goal is to let entries that were both highly scored and got a large number of ratings rise above in ranking. You may argue this may hide “hidden gem” type submissions (seen by few, but actually very good), but since those types of projects have less confidence in their overall score, I think it’s a necessary sacrifice to accomplish the goal of ranking every entry relative to every other entry. (For example, on itch.io’s browse pages, “hidden gems” are a good thing, so we use a different algorithm to allow that type of content to surface.)

All that said though, I wasn’t immediately dismissing your idea. I think it’s an interesting suggestion and I would need time to run results from existing jams and observe the kind of impact it has. Just thinking about it off hand I believe it would most likely introduce a higher “fluke” factor for jams that have lower medians. It’s hard to intuitively reason about how Median * 80% compares to something like 40th percentile for the adjustment cutoff without actually running the numbers to see how it performs.
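
As a sketch of the kind of comparison I mean, on a made-up vote-count distribution (the numbers are purely illustrative and not from a real jam):

```python
import math

def nearest_rank_percentile(sorted_values, p):
    # Nearest-rank percentile: the smallest value with at least p% of the data at or below it.
    k = max(0, math.ceil(p / 100 * len(sorted_values)) - 1)
    return sorted_values[k]

# Made-up vote counts for a small hypothetical jam.
votes = sorted([2, 3, 4, 5, 6, 6, 7, 8, 9, 10, 12, 15, 20, 30, 60])

median = nearest_rank_percentile(votes, 50)                   # 8
cutoff_80pct_median = math.ceil(0.8 * median)                 # 7
cutoff_40th_percentile = nearest_rank_percentile(votes, 40)   # 6

print(median, cutoff_80pct_median, cutoff_40th_percentile)
# On other vote distributions the two candidate cutoffs can diverge much more.
```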

Regarding your point about communication, I definitely agree that we can add more information for participants that they should participate in rating games to boost their visibility. In the case of GMTK I don’t remember off hand how the host communicated that to the participants, but I believe most people understood that they should be rating games.

Thanks!

(+2)
I want to emphasize that avoiding the score adjustment is not a design goal of this system. The point of the adjustment is to allow entries to be relatively ranked in the bottom half while minimizing the randomness factor, by scaling down scores that have lower levels of confidence.

It is not, but I believe enforcing the score adjustment isn't a design goal of this system, either.

The problem with the current system is that even if everyone puts in at least a close-to-median-sized effort, they still might get their score adjusted semi-randomly, with some entries getting one or two ratings below a median of, say, 20 (just like rolling a 6-sided die 60 times doesn't mean all numbers will appear exactly 10 times). It can lead to a somewhat ironic situation, where the system designed to minimise the randomness factor introduces another randomness factor (i.e. which entry ends up with an adjusted score and which won't). After all, using the median means that - excluding entries with the exact median number of votes - the lower-voted half of entries will get their scores lowered no matter what.
Also, while the median increasing by 1 might not be significant with a median of 100 votes, the score adjustment might be more significant with a median of 20. And considering the median depends on how many games people can play within the voting time (as opposed to the number of entries), I'd wager a median of something like 10-20 across 200 entries wouldn't be all that unusual. With medians this low, the randomness factor of the score adjustment becomes particularly prominent - possibly even more so than the few-votes variance it's designed to minimise.

Another randomness factor comes from the indirect relationship of giving-receiving - some people might get lucky and get 100% of reciprocal votes, while others might often give their feedback to people who aren't interested in voting at all. Not sure if itch.io more prominently displays entries with higher "coolness rating" (i.e. how much feedback the author gave vs how many votes their entry received); it would definitely add a stronger cause-effect in the giving-receiving relationship.
On the other hand, I imagine public voting would add some extra randomness to giving-receiving, because there's no way to vote on a public voter's entry in hopes of receiving a reciprocal vote. I suppose public voting shifts relevance away from feedback-giving to self-promotion (the higher the proportion of public voters, the more relevant self-promotion becomes compared to feedback-giving). I'm not really calling to remove public voting altogether, rather pointing out another reason why voting for other entries might not always be the most effective or reliable method of getting past the median threshold.

I do not advocate for 100% of entries avoiding score adjustment most of the time. I do, however, believe that if I take my time to cast a median amount of votes, I should reliably be able to avoid score adjustment (say, 95%+ of the time). Thus, as part of the numbers-checking on previous Jams, it might be worth finding out how votes given correlate with votes received. In particular, how much of the median I'd be guaranteed to receive 95%+ of the time if I voted on a median number of entries. This could give a more fitting median multiplier than my feeling-in-the-gut 80% I initially proposed (assuming my proposal would be implemented in the first place).
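
Something along these lines is what I have in mind - on made-up per-entrant data, since I obviously don't have the real numbers (the values and the crude percentile cut are purely illustrative):

```python
import statistics

# Hypothetical per-entrant data: (ratings_given, ratings_received).
entrants = [
    (25, 22), (24, 20), (22, 19), (21, 23), (20, 18),
    (20, 16), (18, 19), (15, 14), (10, 12), (5, 9),
    (3, 8), (0, 6), (0, 5), (0, 4),
]

received = [r for _, r in entrants]
median_received = statistics.median(received)

# Among entrants who gave at least a median-sized number of ratings
# (using the median of ratings received as a proxy for that effort),
# how many ratings did the unluckiest ~5% receive, as a share of the median?
diligent = sorted(r for g, r in entrants if g >= median_received)
fifth_percentile = diligent[int(0.05 * len(diligent))]  # crude nearest-rank cut

print(fifth_percentile / median_received)  # ~0.93 on this made-up data
```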

Admin (3 edits)

I really appreciate your response, but my original points still stand.

The design goal of the current algorithm does not include giving you individual, direct control over whether your score gets adjusted or not, just like you can’t directly control how many people decide to rate your game. You can only influence how visible your game is by how much you participate in rating & comments.

The goal is to rank each project relative to every other project. As an individual participating in a ranked jam you should be prepared to accept that there may be entries that organically are more well received than yours, with a high average and number of ratings.

I fully understand your point about the aliasing that happens around the median. This aliasing still happens even with your suggestion, but it will happen at a different spot and, as I mentioned, increase the odds of the ‘fluke’ scenario: a game with 6 votes ranking above a game with 200 votes (since medians are generally pretty low). By lowering the bar you diminish the reward that the proven submissions get (those with a high number of ratings & score).

I also want to note, in case you aren’t familiar with the algorithm, that the score adjustment is avg_score * sqrt(min(median, votes_received) / median). Note the sqrt, which effectively reduces the slope of the penalty for those near the median, mitigating some of the aliasing issue. (One experiment worth exploring might be changing the denominator of the exponent.)
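
Written out as code, with the exponent pulled out as the knob for that experiment (0.5 is the current square root; the parameter name and the example numbers are just for illustration):

```python
def jam_score(avg_score, votes_received, median_votes, exponent=0.5):
    # itch.io jam adjustment: avg_score * sqrt(min(median, votes) / median).
    # Raising `exponent` above 0.5 steepens the penalty below the median;
    # lowering it (e.g. to 1/3) flattens it.
    ratio = min(median_votes, votes_received) / median_votes
    return avg_score * ratio ** exponent

# A hypothetical entry with 8 ratings against a median of 20:
print(jam_score(4.2, 8, 20))   # ~2.66
print(jam_score(4.2, 20, 20))  # 4.2 (at the median, no adjustment)
```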

Another thing to note is the nature of game jams, especially apparent in larger ones. There are quite a few broken, incomplete, partially made, or just low quality entries. Many people will skip voting on these depending on the circumstances. The median can map over this curve of completion for entries. There’s also the group of people who don’t participate in voting, so their games get restricted exposure.

I can generally confidently suggest to anyone that if they put in a genuine effort into their game (including its presentation) and put in a genuine effort into rating entries they’re unlikely to get caught in the adjusted group since the median work required is lower than they might think.

(It’s possible your intuition is mixing up average effort with median effort for number of ratings. The average number of ratings is always larger than the median. Your concerns are based around the scenario when median == average, which doesn’t really happen)

Also, one last thing, if a jam host ever reaches out and they have specific goals in mind with how voting and ranking works, I’m happy to work with them.

Lastly, if you’re curious to experiment, here are some ranked jams you can look through:

https://itch.io/jam/brackeys-4/results
https://itch.io/jam/cgj/results
https://itch.io/jam/brackeys-3/results
https://itch.io/jam/lowrezjam-2020/results
https://itch.io/jam/igmc2018/results
https://itch.io/jam/united-game-jam-2020/results
https://itch.io/jam/nokiajam3/results

(+2)

Thank you for your response (and, uh, for other responses so far too ^^).

I understand the point about the design goal of the system, and it being primarily to compare entries against one another. My point about agency likely stems from the emphasized suggestion "participate in rating games" and the general encouragement to participate. It's a very valid message, but it also made it sound like it's primarily the participant's responsibility to get their game above the median, rather than (closer to reality) a combination of the participant's involvement and out-of-their-control factors.

I'd like to point out there are two key aspects to the Jam experience:

  • "global" aspect - what kind of games were created and whether top-ranked entries deserve their spots (since it's the top places people are mostly excited about)
  • "individual" aspect - how one's entry performed, both in terms of feedback and ranking

The median measure seems focused on improving the global aspect - making sure that ratings are fairer.
Except it's a finicky measure, because:

  1. You mention a 6-vote entry ranking above a 200-vote entry, which I presume is about my 80% (rounded-up) measure; but this implies the median is around 7, and a 7-vote entry ranking above a 200-vote entry doesn't seem like a massive improvement.
  2. In a recent (non-itch) Jam I participated in, there were 25 entries with votes from 19 entrants + 4 more people. Most of them ranked nearly all entries (people couldn't rank their own entry). The median-and-above entries got 20-22 votes, the below-median entries got mostly 18-19 votes (two entries got 14 and 16 votes). Also, one of the 19-voted entries was 5th out of 25, making it a relevant contender.
    With the strict median measure, an entry getting 19 votes would have its score adjusted while the 22-vote (most-voted) entry would not. It means that, depending on the situation, 19 is deemed too unreliable vs 22, while in another Jam 7 seems reliable enough vs 200. Now, even with my proposal it would be a 16-22 vs 6-200 spread, but it goes to show that the median system adds extra noise - potentially near top-ranking entries, too - when all entries are voted for almost evenly. The difference is that the raw median semi-randomly punishes 11 out of 25 entries, while with my adjustment only 1 of 25 entries qualifies for score adjustment - that's 10 fewer (see the quick check below)!
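
A quick check, with per-entry vote counts made up to match what I described above (the exact numbers are my guess; only the shape matters):

```python
import math, statistics

# 25 entries: the most-voted half at 20-22 votes, the rest mostly at 18-19,
# plus two stragglers at 16 and 14 votes.
votes = [22, 22, 21, 21, 21, 20, 20, 20, 20, 20, 20, 20, 20, 20,
         19, 19, 19, 19, 19, 18, 18, 18, 18, 16, 14]

median = statistics.median(votes)        # 20
threshold_80 = math.ceil(0.8 * median)   # 16

print(sum(v < median for v in votes))        # 11 entries adjusted under the strict median
print(sum(v < threshold_80 for v in votes))  # 1 entry adjusted under the 80% threshold
```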

I guess the problem of extreme-voted entries can be tackled in two ways (maybe even both measures at once):

  • Promote the high-ranking (e.g. top 5%) low-voted entries, so that more people will see them and either prove their worth or get them off their high horse. People don't even need to specifically be aware these are near-top entries (especially since temporary score isn't revealed), what matters is that they'll play, rate and verify.
    It's sort of "unfair" for poorer-quality entries, but chances are already stacked against them and it can improve the quality of top rankings by whittling down the number of undeserved all-5-star outliers. And let's face it - who really minds that 6-voted entry with all 2s ranks above a 200-voted entry with mostly 2s and some 1s?
  • This one is more work, but it has great potential to improve the jam experience - streamline the voting process.
    In that Jam I mentioned, we have a tool called "Jam Player". It's packaged with the ZIP of all games, and from there you can browse the entries, run their executables, write comments, sort entries etc. As the creator of the Jam Player I might be blowing my own horn, but before it existed, lots of voters played only a fraction of the games. Ever since introducing the Jam Player, the vast majority of voters play all or nearly all entries, even when the number of entries reaches 50 or so (at 80 entries the split between complete-played and partially-played votes was more even, but still in favour of complete-played).
    I imagine a similar tool with an integrated voting process could work for itch.io - obviously there are lots of technical challenges between a ZIP-embedded app for a local jam and a tool handling potentially very large Jams, but with itch.io hosting all the Jam games it might be feasible (compare that with Ludum Dare and its free links). With such a player app, the same people would play more entries, making the vote distributions more even and thus reliable (say, something like 16/20 vs 220 instead of 6/7 vs 200).
    Perhaps I should write up a thread on the itch.io Jam Player proposal...

The 80% median seeks to improve the individual aspect - making sure it's easier to avoid the disappointment of getting one's own entry score-adjusted despite one's efforts.

If someone cares about not getting their score adjusted and isn't a self-entitled buffoon, they'll do their best to participate and make their entry known. If someone doesn't care, then they won't really mind whether their entry gets score adjusted or not. The question is, how many people care and how many don't.
If less than 50% people care, they'll likely end up in the higher-voted half of entries. Thus, no score adjustment for them, the lower-voted half doesn't care, everything is good.
However, if more than 50% of people care, there'll inevitably be some that end up in the score-adjusted lower half. E.g. if 70% of people cared about score adjustment, then roughly 20% would get score-adjusted despite their efforts not to. The score adjustment might not even be that much numerically, but it can still have a psychological impact like "I failed to participate enough" or "I was wronged by bad luck". I'm pretty sure it would sour the Jam experience, which goes against the notion of "the jam is the best experience possible for as many people as possible". Given that 60-70% of Ludum Dare entries end up above the 20-ratings threshold, and that 19 of 25 entrants voted in the Jam I mentioned, I'd expect that in a typical jam at least half of the participants would care.

Do note that in the example Jam from earlier, 9 of the 19 voting entrants would get score-adjusted with the 100% median system despite playing and ranking all or nearly all entries. Most of them with quality feedback, too - you can hardly participate more than that. Now, I don't know about you, but if I lost a rank or several to the score adjustment despite playing, ranking and reviewing all entries - just because someone didn't have time to play my game and its vote count fell below the median - I'd be quite salty indeed.
With the 80% median system, all voting entrants would pass at the cost of a 16 vs 22 variance, which isn't all that much worse than the 20 vs 22 variance (the least voted entrant didn't vote).

To sum it up:

  • if the votes count variance is outrageous in the first place (like 6/7 vs 200), then sticking to strict median won't help much
  • if the votes count variance is relatively tame (like 18 vs 22), then using strict median adds more noise than it reduces
  • provided that someone cares about score adjustment and actively participates to avoid it, the very fact of the score adjustment can be souring/discouraging, even if the adjustment amount isn't all that much
  • rather than adhering to strict median, the votes variance problem may be better solved by promoting high-ranked low-voted entries (so that they won't be so low-voted anymore) and increasing number-of-votes-per-person by making the voting process smoother (like the Jam Player app; this one is ambitious, though)
  • with more votes-per-person and thus more even distribution of votes, we should be able to afford a leeway in the form of 80% median system

Also, thanks for the links to the historical Jams. Is there some JSON-like API that could fetch the past Jam results (entry, score, adjusted score, number of times entry was voted on) for easier computer processing? Scraping all this information from webpages might be quite time-consuming and transfer-inefficient.
