for the openai stuff, why not just make the base URL customizable so we can use LM Studio's server?
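A minimal sketch of what that could look like. The `openai` Python client (v1.x) already honors an `OPENAI_BASE_URL` environment variable, so the game only needs a small resolver with a local fallback; the helper name `resolve_base_url` and the LM Studio default address are assumptions here, not part of any existing code.

```python
# Sketch: resolve the API base URL from the environment so the same code
# works against OpenAI's API or LM Studio's local server.
# NOTE: resolve_base_url is a hypothetical helper; "http://localhost:1234/v1"
# is LM Studio's default local server address.
DEFAULT_LOCAL_URL = "http://localhost:1234/v1"

def resolve_base_url(env: dict) -> str:
    # OPENAI_BASE_URL is also read natively by the official openai-python client
    return env.get("OPENAI_BASE_URL", DEFAULT_LOCAL_URL)

# Usage: pass os.environ in real code
print(resolve_base_url({}))  # falls back to the local LM Studio address
print(resolve_base_url({"OPENAI_BASE_URL": "https://api.openai.com/v1"}))
```

With this, pointing the game at LM Studio is just a matter of leaving `OPENAI_BASE_URL` unset (or setting it explicitly to the local address).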

# Chat with an intelligent assistant in your terminal
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

history = [
    {"role": "system", "content": "You are an intelligent assistant. You always provide well-reasoned answers that are both correct and helpful."},
    {"role": "user", "content": "Hello, introduce yourself to someone opening this program for the first time. Be concise."},
]

while True:
    completion = client.chat.completions.create(
        model="model-identifier",
        messages=history,
        temperature=0.7,
        stream=True,
    )

    new_message = {"role": "assistant", "content": ""}

    for chunk in completion:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
            new_message["content"] += chunk.choices[0].delta.content

    history.append(new_message)

    # Uncomment to see chat history
    # import json
    # gray_color = "\033[90m"
    # reset_color = "\033[0m"
    # print(f"{gray_color}\n{'-'*20} History dump {'-'*20}\n")
    # print(json.dumps(history, indent=2))
    # print(f"\n{'-'*55}\n{reset_color}")

    print()
    history.append({"role": "user", "content": input("> ")})
also add support for coqui tts, or add support for using xtts-api-server: https://github.com/daswer123/xtts-api-server
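A rough sketch of how talking to xtts-api-server could look, using only the standard library. The endpoint path, default port, and JSON field names (`text`, `speaker_wav`, `language`) are assumptions based on that repo's README, so verify them against the actual server before relying on this; `build_tts_payload` and `speak` are hypothetical helper names.

```python
import json
import urllib.request

# Assumption: xtts-api-server listens on port 8020 by default and exposes
# a POST /tts_to_audio/ endpoint that returns WAV bytes.
XTTS_URL = "http://localhost:8020/tts_to_audio/"

def build_tts_payload(text: str, speaker_wav: str = "female", language: str = "en") -> bytes:
    # Field names here mirror the repo's documented request body (assumed)
    return json.dumps({"text": text, "speaker_wav": speaker_wav, "language": language}).encode()

def speak(text: str) -> bytes:
    req = urllib.request.Request(
        XTTS_URL,
        data=build_tts_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # raw audio bytes to write to a .wav or play back
```

Keeping the TTS call behind one small function like this would make it easy to swap between coqui-tts run in-process and a remote xtts-api-server instance.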