Ha ha, I also had the same idea! But I feared an online Gen AI server would cost me a lot. I think it would be better to make a set of prebuilt meshes. Not letting the user type on the keyboard, but instead pick what to generate, could be faster and cheaper.


The hope is that someone who has PCVR or an Apple Silicon system with enough VRAM (or at least RAM) might be able to host the model entirely themselves, and not have to pay for a service. (Apple Silicon shares CPU RAM with the GPU, so while it won't run nearly as fast as NVIDIA hardware, it is better than relying on only the CPU.)

The current `llama-mesh` model isn't ideal in size (~16 GB) and may not have enough variety in its training data for the long term, but it does work now, and it should be possible to swap in other models as they come along without much extra work. In particular, the Ollama quantizations don't currently produce results as reliable as the original safetensors weights, but switching to a new model later would be a single Ollama command, with no changes to Genchanted's code since it already talks to the Ollama API.
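To illustrate what "no code changes" means in practice, here's a minimal Python sketch against Ollama's standard `/api/generate` endpoint. It isn't Genchanted's actual code, and the model tag and prompt are placeholders; the point is that the model name is the only thing that would change when swapping models:

```python
import requests

# The model tag is the only thing that has to change to swap models later;
# the request/response shape stays the same for anything served by Ollama.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL_TAG = "llama-mesh"  # placeholder tag; replace with a newer mesh model when one exists

def generate_mesh_text(prompt: str) -> str:
    """Ask the locally hosted model for a text-based mesh description."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL_TAG, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate_mesh_text("Create a 3D model of a small wooden chair."))
```

Swapping to a different model would then just be `ollama pull <new-model>` and updating that one tag, assuming the new model also outputs meshes as text.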