Official clarification on final LocJAM6 results needed

A topic by STAR WORDS created Apr 29, 2024 Views: 319 Replies: 5

Hello.
Are the final results of LocJAM6 the ones that can be seen by clicking on "results"?
Or are those just the usual rankings given by other participants?
If they are the final results, could we have an official word about the system you used to evaluate the various submissions?
For example, grammar and typo checks, forbidden and allowed methods/tools, and so on.
Since this is our first LocJAM6 entry, we do not know exactly how things work, and we would really appreciate an answer.
Thanks. 

(1 edit) (-2)

Hi there

As you rightly say, the results are simply dictated by the votes given by the other participants.

I had a protracted discussion about it here, but long story short: "results", "ranking", and "score" are all a bit of a misnomer.

The whole "voting" phase is just a bit of extra fun where participants get to hang out and recommend entries they like. In turn, those entries appear higher in the list, as a way to suggest them to casual visitors (which in turn tweaks the main page ranking, which is sorted by popularity/views).

So to answer your question, there was really no method, evaluation, or parameters; it's just a casual recommendation system, not much more intellectual than a Facebook thumbs-up.

As discussed in the other thread, most people seem to enjoy it for the little bit of fun it is (after all, we have 20+ entries tied for first place ex aequo with two or three votes each).

But both you and one Russian team seem very concerned about it, so we're re-evaluating for the future.

I believe the contest needs a way to reward originality and creativity. This seemed a fun way to achieve it, but I realize the whole idea of a ranking (no matter how loose) is uncomfortable for some.

I will probably replace it with recommendations made directly by myself as a jury of one, picking the 10-odd entries that look the most striking or collected the most votes (regardless of their language).

I don't like the lack of transparency that entails, but it seems worth exploring instead of giving no recommendations at all.

Thanks again for joining the contest and have a lovely day 😄

(1 edit) (+1)

> it's just a casual recommendation system, not much more intellectual than a Facebook thumbs-up.

Voting anything other than 5 stars on any entry tanked the entire language bracket, and the ratings overall suffered as a result.

Instead of "5-star votes", I honestly believe it should've just been "likes"/"endorsements"/etc., so there is no rating system, period; leave the nuanced deconstruction to the comments system.

So no, a Facebook thumbs-up would've been way better and caused way less of a horrendous feeling. As it is, you can get 7 people to rate your entry and still end up stewing over "someone maybe voted us 1 star, that's why we only have about 3 stars", while two entire pages of 5-star-rated projects with 1-2 votes each top the rankings, lol.

If you scored based on the NUMBER of votes and not the 5-STAR SYSTEM, it would be a much better fit for nicher communities: entries would be endorsed rather than harshly judged by an anonymous 5-star average, especially when there may not even be that many contestants in a language bracket to begin with.
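
For illustration, here's a minimal Python sketch (with made-up vote data, not real jam numbers) of how the two schemes diverge:

```python
# Hypothetical vote data: entry -> list of 1-5 star votes received.
entries = {
    "Entry A": [5, 5],                 # two votes, both fives
    "Entry B": [5, 5, 5, 5, 4, 4, 1],  # seven votes, one possible 1-star
}

def mean_stars(votes):
    # Rank key under the current 5-star system: the average rating.
    return sum(votes) / len(votes)

def endorsements(votes):
    # Rank key under a likes-only system: every vote counts as one endorsement.
    return len(votes)

for name, votes in entries.items():
    print(f"{name}: mean={mean_stars(votes):.2f}, endorsements={endorsements(votes)}")

# Mean ranking: Entry A (5.00) beats Entry B (4.14) despite B's broader support.
# Endorsement ranking: Entry B (7) beats Entry A (2), and a lone anonymous
# 1-star can no longer sink anyone.
```

With so few voters per language bracket, a single low vote moves the average far more than it would in a big jam, which is exactly the effect described above.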

I apologize if I come across as blunt and harsh, but I do believe this ranking system was a huge misfire, and I hold you responsible; it leaves a bad taste in my mouth for an otherwise amazing idea and opportunity! If you run another one, please just be transparent about how the entries will be judged. Almost any other system would've been better than this one, as it completely ran against your ideal of a collaborative atmosphere! It shouldn't feel outright taboo and horrible to rate any entry less than "5 stars".

(-1)

For the last time, I like the voting system, in all its chaotic randomness.
I have fun with it, most people have fun with it, and it has been present since we started again here on Itch.
Is it wonky? Of course, but nobody ever raised concerns about it before.
That said, it doesn't make your concerns any less valid.
As the number of participants grew, we found out that a harmless bit of fun can affect a minority especially hard.
Duly noted: I already detailed how I plan to tweak things moving forward.
I'm searching for the balance that makes the event fun for everyone: from me, to you, to everybody else.
Have a lovely day 😄

Deleted 237 days ago
(1 edit) (-3)

There is no right or wrong here, just what people found interesting.

Maybe people found the idea of bad machine translation intriguing enough to vote for it. Maybe those submissions you mention had some charm that made them popular despite their limitations. Your guess is as good as mine.

I understand you worked hard on your entry and you want what feels "right".

But picking one translation over another is fundamentally dictated by taste, which is subjective, and thus rarely "right" in the first place.

And framing the contest as a competition would create a toxic tension between the different teams, damaging collaboration, which is one of the three goals of the jam.

And finally, the other two goals of the jam are creativity and discovery: giving visibility to odd, unexpected, original entries is at the core of the whole event.

So to loop back to what I said before, voting went exactly as I expected, surfacing entries that aren't necessarily the Best of the Best but, more humbly, something intriguing, worth discussing, or simply funny.

That said, I see that the "voting" framework created tensions and false expectations in some. Most participants do not seem to mind, but I'll make sure to work out something better for the future.

After all, we're all here to have fun.

(+3)

I'm not sure which Russian team is concerned about the results, but that's what I wanted to post about, so I guess it fits. The situation is worth the attention of all the Russian teams (who may feel sad about their ratings) and of the organizers.

It looks like someone rated every Russian entry 1 star, likely without playing them. Maybe every one but one; I didn't check everything. I hope the organizers can see the ratings in detail and check.

  • Look at the top-rated Russian entry. It has 6 ratings and a 4.33 average score. You can get this result with five 5-star ratings and one 1-star rating (26/6 ≈ 4.33).
    • Yes, I understand you can also get that result with two fives and four fours. But I genuinely believe that version is one of the best, if not the best, I played. It deserves five 5's out of six more than four 4's out of six.
    • It also generally tracks for other entries, even though we can only see the average and the number of votes. Even in dubious cases: I made the mistake of doing everything by myself and missed the meaning of a couple of phrases, so I think a rating of 4 is exactly what I deserve. But I have a 3 with 3 ratings. It could be 3+3+3 or 2+3+4, but given the other evidence, I find 4+4+1 more likely (see the enumeration sketch after this list).
  • I believe that on the first or second day of voting (I don't remember for sure, but I had only played a couple of entries by then) all the Russian versions stopped being listed on the "Entries requiring ratings" page. At the time I thought someone had rated everything 5 for the effort :/
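
For the curious, here's a minimal Python sketch (assuming the page shows a plain mean of 1-5 star votes, which is my reading of how the itch.io ratings work) that enumerates every vote combination consistent with the averages above:

```python
from itertools import combinations_with_replacement

def star_combos(n_votes, total):
    # All multisets of n 1-5 star votes that add up to the given total.
    return [c for c in combinations_with_replacement(range(1, 6), n_votes)
            if sum(c) == total]

# 6 votes averaging 4.33 -> 26 stars in total.
print(star_combos(6, 26))
# includes (1, 5, 5, 5, 5, 5), (2, 4, 5, 5, 5, 5), (4, 4, 4, 4, 5, 5), ...

# 3 votes averaging 3 -> 9 stars in total.
print(star_combos(3, 9))
# -> [(1, 3, 5), (1, 4, 4), (2, 2, 5), (2, 3, 4), (3, 3, 3)]
```

Both the five-fives-plus-one-one reading and the two-fives-plus-four-fours reading appear in that first list, so the averages alone can't prove a 1-star spree; only the organizers' detailed view can.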