My question is basically where is "the line" when it comes to AI use?
Is it against the rules to use an LLM for debugging or optimizing code? It seems like it should be, given that LLMs are trained on countless lines of stolen code. But even then, how would you verify that without critically analyzing the source code of every entry? A lot of the same logic used to condemn image generators also applies to LLMs; it's just not as convenient to ban those. An artist making a game doesn't have to "git gud" at programming if they can use an LLM to do a huge chunk of the critical thinking and creative problem solving for them. But as a programmer, I still have to learn all about perspective, vanishing points, proportions, color theory, brushes, layers, blend modes, on and on, just to meet a bare minimum standard of quality, and that's the rub.
I hate drawing. I hate it. Every second I spend drawing is like pulling teeth. Visual art is the least enjoyable part of the game dev process for me. I have spent more time trying to draw something I don't hate looking at than anyone could reasonably be expected to. And at this point, I don't even WANT to get good at drawing. I want to spend that time becoming exceptional at something I enjoy, like coding and music, rather than becoming mediocre at something I hate, like drawing. That's why I'm building a typing game for my entry: it requires very little art and animation, so I can minimize the time I spend drawing and focus on adding accessibility features and polishing the experience.
So while I get the logic behind banning AI images, I don't think the rule is consistent unless it's applied across the board to all uses of generative AI, including debugging and optimization. Because from what I've read, this jam is all about what YOU can do, not what ChatGPT can do, right?