Javascript is a headache, and so are extensions. With my zero knowledge I came up with this. It will print all the links to the games on a page into your console log, but it already fails if you have infinite scroll active. So in theory you would grab the links from your library pages, create a filter CSS out of them, save that CSS, and inject it onto the browse pages. You could do the filtering with JS, but I believe it is better to let the browser handle it with the CSS mechanism.

// Log the link of every game cell currently on the page
var cells = document.getElementsByClassName("game_cell");
for (var i = 0; i < cells.length; i += 1) {
    var links = cells[i].getElementsByClassName("title game_link");
    console.log(i, links[0].href);
}
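
For the "inject it onto the browse pages" part, a minimal sketch (assuming the generated CSS text is already at hand; the variable name filterCss and the example rule and URL below are made up) would just append a style element from the console or a userscript:

// Minimal sketch: inject an already generated filter stylesheet into the current page.
// filterCss is a placeholder; the rule and the URL in it are hypothetical examples.
var filterCss = 'div.game_cell:has(a[href="https://example.itch.io/some-game"]) { display: none !important }';
var styleEl = document.createElement('style');
styleEl.textContent = filterCss;
document.head.appendChild(styleEl);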

You do not need "game_cell" when parsing for links in a collection. To reduce the impact of infinite scroll we could actually ask users to turn on infinite scroll, have the extension simply block all media for a while, ask them to scroll to the bottom, and then parse "game_grid" for the href content. Without the media content it will be fast, since no huge files (preview images) need to be downloaded, and then we just gather the links in a database.
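
A rough sketch of that parsing step (assuming the grid container really uses the class "game_grid", as described above; in practice you might still want to narrow this to the "title game_link" anchors so author links do not slip in):

// Rough sketch: after the user has scrolled to the bottom with infinite scroll on,
// walk the grid once and collect every unique game link.
var grid = document.getElementsByClassName("game_grid")[0];
var collected = new Set();
if (grid) {
    var anchors = grid.getElementsByTagName("a");
    for (var i = 0; i < anchors.length; i += 1) {
        if (anchors[i].href) {
            collected.add(anchors[i].href);
        }
    }
}
console.log(Array.from(collected));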

Now that you mention it, yeah, there is a more direct way. And now I feel stupid for not knowing you can execute JS in the console directly. I was fixated on the game cell because that is what you need to hide for the filtering. I did not understand the method with the game grid at first; the links are under "title game_link", ready for grabbing.

For doing it manually, that would already work: grab the collections, merge the files, format them together into a CSS file and import that CSS into something like Stylus.

But there is bad news as well. It will not work like this if you have dynamic loading on (infinite scroll): the CSS will be ignored for the dynamically loaded content of the second page. {display:none !important} or {opacity:0.07 !important} deals with infinite scroll, though.

@-moz-document url-prefix("https://itch.io/games") {
div.game_cell:has(a[href="grabbed link"]) { display: none !important }
}

// Collect one hiding rule per unique game link on the page
var uniqueHrefContents = new Set();
var games = document.getElementsByClassName("title game_link");
for (var i = 0; i < games.length; i += 1) {
    // the Set already ignores duplicate rules for the same href
    uniqueHrefContents.add('div.game_cell:has(a[href="' + games[i].href + '"]) { display: none !important }');
}
// Join the rules and download them as a plain text file
var combinedHrefContents = Array.from(uniqueHrefContents).join('\n');
var blob = new Blob([combinedHrefContents], { type: 'text/plain' });
var a = document.createElement('a');
a.href = window.URL.createObjectURL(blob);
a.download = 'unique_href_contents.txt';
a.click();

This already does the stringing for the CSS, but obviously the saved data for appending new items later should not include the CSS wrapping, only the links themselves.
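
One way to keep those two concerns apart could be to store only the raw links and regenerate the CSS from that list whenever it changes. A sketch (the savedLinks array and the URLs in it are made-up examples):

// Sketch: keep the raw links as the saved data, regenerate the CSS rules from them.
var savedLinks = ["https://example.itch.io/game-one", "https://example.itch.io/game-two"]; // hypothetical saved list

function buildFilterCss(links) {
    return links
        .map(function (href) {
            return 'div.game_cell:has(a[href="' + href + '"]) { display: none !important }';
        })
        .join('\n');
}

console.log(buildFilterCss(savedLinks));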

And maybe the "data-game_id" could be used instead of the link to the game. It looks like a counter, with lower numbers for older games. But I guess you can not retrieve games by it, only recognise them again for filtering. And since that number is much shorter than a link, CSS filtering might be slightly faster. For human inspection, the game title or the complete link should still be saved, though.
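
If the id route works, a sketch of grabbing the id plus the title and link per cell (assuming every cell really carries a "data-game_id" attribute, as described above) could look like this:

// Sketch: collect the game id for the CSS selector, plus title/link for human inspection.
var cells = document.getElementsByClassName("game_cell");
var entries = [];
for (var i = 0; i < cells.length; i += 1) {
    var id = cells[i].getAttribute("data-game_id");
    var link = cells[i].getElementsByClassName("title game_link")[0];
    if (id && link) {
        entries.push({ id: id, title: link.textContent, href: link.href });
    }
}
console.log(entries);

The matching rule would then be something like div.game_cell[data-game_id="12345"] { display: none !important } (the number is just a made-up example), which is shorter than the :has() selector with a full link.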