Sweet.
XenoCow
Recent community posts
That does sound pretty good. Is there another layer you could add on top to detect the visemes? Conventional techniques for that have been around since at least 2015; there is a plugin for Adobe Animator CS6 that can detect the various sounds in audio. You might want to look in the VTuber space for something realtime and lightweight.
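For what it's worth, the "realtime and lightweight" approach a lot of VTuber tools take skips visemes entirely and just drives mouth openness from audio loudness. A toy sketch of that idea in plain Python (the window size and the 10000 normalization constant are arbitrary guesses, not values from any particular tool):

```python
import math
import struct

def mouth_openness(frames, width=2, window=1024):
    """Map raw 16-bit PCM audio to per-window mouth-openness values (0.0-1.0).

    Amplitude-only lip flap: no phoneme/viseme model, just RMS loudness
    per window deciding how open the mouth should be that frame.
    """
    samples = struct.unpack("<%dh" % (len(frames) // width), frames)
    openness = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        rms = math.sqrt(sum(s * s for s in chunk) / window)
        openness.append(min(1.0, rms / 10000.0))  # 10000: hand-tuned scale
    return openness
```

Real viseme detection would need a phoneme classifier on top, but for a talking head this cheap trick already reads surprisingly well.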
Thank you.
Sorry, I don't quite understand. If you put the cursor in the text box the control goes to the UI. If you click on the screen the control goes to the game viewport and you can use the mouse to rotate the camera.
I mean that in the original game, I think I remember having to hold down either the right or left mouse button in order to rotate the camera. It took some getting used to compared to most FPS games, which, if they also need the cursor, have a button to enable it and take control away from the rotation. Your setup was the reverse: the cursor was the default and rotation was secondary. I hope that makes more sense.
Nice progress. Does Whisper run locally, or is it an API call to OpenAI's servers? If I find a speech-to-text program for C++, I'll let you know. I am going to be starting a project with some features in common with yours soon(tm).
I know it's not exactly related to speech-to-text, but in this new version, will there be mouse controls for the camera without having to hold down a button? I remember with the current version it took me a while to get used to holding down a button to look around. Maybe holding down a keyboard key like left Alt could enable the cursor and disable looking, so you can navigate the UI.
Looks good. Any chance you could add face (camera) tracking too so that she looks at the player? Bonus points if you could do so with a model that incorporates glances away instead of just staring the whole time.
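The "glances away" part can be done as a tiny state machine rather than anything learned: stare at the player for a few seconds, pick a random idle point to glance at briefly, then return. A hedged sketch of that behavior (all the timing constants are guesses, and the engine hook that consumes the returned target is assumed):

```python
import random

class GazeController:
    """Toy look-at-player behavior with occasional glances away.

    Sketch only: a game-engine update loop is assumed to call update()
    each frame and aim the character's eyes at the returned point.
    """

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.state = "player"
        self.target = None
        self.timer = self._hold_time()

    def _hold_time(self):
        # Stare at the player for 2-6 s, glance away for 0.5-1.5 s.
        if self.state == "player":
            return self.rng.uniform(2.0, 6.0)
        return self.rng.uniform(0.5, 1.5)

    def update(self, dt, player_pos, idle_points):
        """Advance by dt seconds; return the point the eyes should track."""
        self.timer -= dt
        if self.timer <= 0.0:
            self.state = "away" if self.state == "player" else "player"
            self.timer = self._hold_time()
            if self.state == "away":
                self.target = self.rng.choice(idle_points)
        return player_pos if self.state == "player" else self.target
```

Camera-based head tracking would feed `player_pos` from face detection; the glance logic stays the same either way.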
On the TTS specifically, I'm sure it's for computational load and ease of programming that you're using the built-in Microsoft system, but have you looked into local TTS models? This game uses one that is pretty convincingly human-sounding for what it is: https://jetro30087.itch.io/ai-companion-miku If you can get in contact with the dev, maybe he'll tell you what system he used.
I died in a crack, tied up in a knot.
Man, is that hard. I wish I could turn all the way around, since there were often times I wanted to go sideways but couldn't quite point in that direction. I'm sure that translating 2D mouse movements into 3D worm movements is not trivial, though. Well done; it's just a pain.
I tried out the demo but it seems that the game requires an internet connection for all the AI stuff. I have a powerful GPU so would like to run the game completely locally if possible.
Does the paid version also require an internet connection? Thanks. I'd very much like to try out all the features of the game but online AI kind of creeps me out. Plus it costs someone to run.
Just finished playing, and I think that might be the most immersive VR game I've played to date. The machines are all responsive, and all the buttons are clicky and actually do something.
Since much of the game is controlled via in-game screens, the immersion runs deeper; you can only suspend so many layers of disbelief at once. The things on the screens feel like a game, but then you look up and around and you're still in the submarine.
Awesome. I can't wait for the full game!
Phew... That took me a solid hour to beat. There was rhythm involved, but it felt more like I had to become the game to actually progress. The little beat counter at the bottom seemed more distracting than helpful; maybe I don't understand its purpose, but it seemed to change each round.
I liked the art style, even though I have no idea what the story is.