Formamorph

Every choice transforms your body and shapes your adventure · By FieryLion

Connection Issues Sticky

A topic by FieryLion created 8 days ago Views: 1,132 Replies: 34
Developer

Issues with networking, such as failed AI requests, go here. Make sure to share your current settings: AI URL endpoint, model name, and provider name (such as OpenRouter).

I was trying to set up a custom AI input so it could actually run, but I keep getting a response like "Failed to parse AI response" and an error in the top right saying it failed to complete the action. I have tried this both in the browser and with the downloaded version of the game and get the same response. I thought I followed the instructions, but I might have messed something up.

Developer

did you try mistral 7b instruct? I haven’t tested on the small 24b one yet, not sure if it knows how to respond correctly (JSON format and such)

Just tried it, and got the same response. Is there something on the OpenRouter site I'm missing? I simply made a key by going to the chosen model and creating an API key there.

Developer

hmm I think you’re supposed to make the OpenRouter API key first

(1 edit)

Deleted the previous keys and made a new one, this time straight from the keys page without going into a specific model. I reset the AI settings in the game and re-pasted everything, but got the same response.

Developer

do a quick test and try a different world

Hmm, it did work with a different world. Tried the slime one and it started; I had to try starting a few times, but it worked.

Developer

I recommend starting a new save; it could be that the new model isn't used to the old save format

(1 edit)

I'm not sure exactly what to adjust; I keep getting the "failed to complete action" error over and over, on both the web version and the downloaded version. I've tried a few different models and changed the API key twice now, as well as tweaking the max memory and output tokens :/

Developer

don’t use DeepSeek for this; the free API has severe rate limits, and it’s also heavily censored, so it probably won’t let you role-play. Use a different model; I recommend the Mistral models

roger, I'll fiddle around with one from that series 👍

(1 edit)

Using Kobold on the local port, I get 'Failed to complete action. Please try again.' immediately. Am I using the wrong model, or is there something I need to do in the UI?

Developer

your endpoint needs to be chat completion

http://localhost:5001/api/v1/chat/completions
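If it helps, here's a small sketch (a hypothetical helper, not the game's actual code) of what "the endpoint needs to be chat completion" means for a KoboldCpp-style base URL:

```python
def normalize_endpoint(url: str) -> str:
    """Make sure a local KoboldCpp/OpenAI-style base URL ends at the
    chat-completions route instead of the bare API root."""
    url = url.rstrip("/")
    if not url.endswith("/chat/completions"):
        url += "/chat/completions"
    return url

# normalize_endpoint("http://localhost:5001/api/v1")
# -> "http://localhost:5001/api/v1/chat/completions"
```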

I'm using the mistralai/mistral-7b-instruct:free model, by the way

Developer

7B is small so it may fail at writing valid JSON, you may need a few tries.
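The retry situation described here can also be softened client-side. A minimal sketch (not the game's actual code) that salvages the first balanced JSON object from a chatty model reply, since small models often wrap their JSON in prose or code fences:

```python
import json

def extract_json(text: str):
    """Salvage the first balanced {...} block that parses as JSON.
    A plain json.loads() on the whole reply fails when the model
    adds chatter before or after the braces."""
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break  # malformed; try the next opening brace
        start = text.find("{", start + 1)
    return None  # nothing parseable: caller should retry the request
```

If this returns `None`, the client re-sends the request instead of surfacing a parse error to the player.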

I've tried running it like twenty times and it hasn't worked once

Developer

Veilwood is a pretty large world, so the small model might have just given up :c

you can try the second model I suggested in the guide

Keep getting "failed to complete action. Please try again."

Developer (3 edits)

can you refresh the page (or, if downloaded, re-download the game)? The new version’s error message is more informative. Also try mistral instead of mixtral

(2 edits)

Tried both online and the download; it still shows this


Developer

I see the issue: chub.ai responded in a different format. I need to update the game next patch to take it into account. For now you can either use a different AI provider or just use the default AI settings.
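For anyone curious, handling that kind of provider drift usually means probing more than one response shape. A sketch, assuming the standard OpenAI/OpenRouter layout plus a hypothetical flat fallback (the exact chub.ai shape isn't shown in the thread, so the fallback field names are guesses):

```python
def get_reply_text(resp: dict):
    """Pull the assistant's text out of a chat-completion response,
    tolerating provider variations. OpenAI/OpenRouter nest it under
    choices[0].message.content; the fallback keys below are guesses
    for flatter shapes like the one chub.ai apparently used."""
    try:
        return resp["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        pass
    for key in ("text", "content", "response"):  # hypothetical fields
        if isinstance(resp.get(key), str):
            return resp[key]
    return None
```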

No biggie, take your time. I had a hunch it was going to give me issues when I saw the model name was in a different format; guess the differences didn't stop there

for some reason the game can't complete actions in my game at the moment. Yesterday it had the same issues at a similar time. Does it maybe have to do with too many people using the same endpoint at once?

Developer

everything looks normal on my end. What error code are you getting?

(1 edit)

this is all it says

Developer

did you use your own AI endpoint?

no, I used https://openrouter.ai/api/v1/chat/completions because I read your guide and thought the API URL is the same as the Endpoint URL
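For reference, that OpenRouter endpoint expects an OpenAI-style chat-completions POST body. A minimal sketch (the model name is the one mentioned later in the thread; the Authorization header is a placeholder):

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, user_text: str, max_tokens: int = 512) -> dict:
    """Minimal OpenAI-style chat-completions body; OpenRouter and
    most compatible providers accept this shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": max_tokens,
    }

# Sent as: POST OPENROUTER_URL with headers
#   Authorization: Bearer <your OpenRouter API key>   (placeholder)
#   Content-Type: application/json
payload = build_chat_request("meta-llama/llama-3.3-70b-instruct:free", "Hello")
body = json.dumps(payload)
```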

Developer

what model? most models can’t run the game sadly, and the mistral 7b model I suggested in the guide may not work with larger worlds

I used 'meta-llama/llama-3.3-70b-instruct:free' 

the strange thing is that it ran completely fine until 2 hours ago

Developer (1 edit)

try a different world and see if it works. Also, I forgot to mention: the base Llama instruct model is censored, so it won’t do explicit stuff

these are my settings

I tried your 4 example worlds; none of them work. Also, the Llama instruct model didn't censor things for me, for some reason.


Developer (1 edit)

I tried just now and it worked fine; maybe you hit the daily free-model limit, though I’m not sure if they have such a limit

trying to use ollama to host locally, and keep getting "Failed to complete action. Please try again."

(things of note: i have no idea what im doing, and im cheap XD )

Developer

I don’t think you need to put .Q4_K_M; just use the original model name. If that doesn’t work, I recommend switching to LM Studio (latest beta) or maybe koboldcpp. I’ve heard many complaints that the ollama API doesn’t work.
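A small sketch of what "drop the .Q4_K_M" means in practice; the regex is illustrative, not Ollama's actual matching rule:

```python
import re

# Illustrative pattern for GGUF quantization tags like .Q4_K_M or -fp16
QUANT_SUFFIX = re.compile(r"[.\-](q\d+(_[km0-9]+)*|fp16|f16)$", re.IGNORECASE)

def strip_quant_suffix(name: str) -> str:
    """Drop a quantization tag from a model name; Ollama registers
    models under the bare name (optionally with a ':tag'), not the
    quantized GGUF filename."""
    return QUANT_SUFFIX.sub("", name)
```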