
knifefuwu
6 posts · 1 topic · 2 following · A member registered Mar 17, 2022

Recent community posts


I have this issue as well. One method of resolving it is to synchronise your mic gain in HONK with the mic gain in OBS/Streamlabs. I found that OBS specifically can desync audio and distort the output while the microphone sounds fine outside the application.

When I test my mic in HONK's application it looks perfectly fine, with no phantom-talking effect even while typing, but when streaming with my desktop audio it sometimes desyncs or phantom talks in OBS.

Your mic could also be picking up your keyboard strokes, ambient sound, or desktop game sound, which could be why your model's mouth is being triggered and creating phantom talking.

The way I have improved (but not eliminated) the issue is to reset my audio settings in OBS while raising and lowering the gain in HONK.
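Phantom talking is essentially a threshold problem: if the gain the lip-sync app sees differs from the gain OBS applies, quiet noise (keyboard clicks, ambient hum) can clear the mouth-trigger level. A minimal sketch of the idea in plain Python — the gate value and function names are invented for illustration, not HONK's actual code:

```python
import math

def rms(samples):
    """Root-mean-square level of a chunk of audio samples (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mouth_open(samples, gate=0.05):
    """Trigger the mouth only when the level clears the gate threshold.
    Keyboard clicks and room noise usually sit below a well-chosen gate;
    mismatched gain between apps effectively moves this threshold."""
    return rms(samples) > gate

# Loud chunk (voice) opens the mouth; quiet chunk (key click) does not.
mouth_open([0.3, -0.4, 0.35, -0.3])      # voice-level chunk
mouth_open([0.02, -0.03, 0.02, -0.01])   # keyboard-click-level chunk
```

Raising the gain in one app but not the other shifts real sample levels relative to the gate, which is why matching the two gains (or resetting OBS filters) reduces false triggers.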

Phantom talking live example

https://clips.twitch.tv/PlacidExuberantSrirachaTriHard-nZxJw54qsFCYzcgR

I've even had my model lip sync the game audio dialogue and my own mic at the same time, so this might be an issue of microphone and audio devices inside OBS defaulting to a device that is not the one selected in HONK.

Hello! 

I'm a streamer who uses HONK, and I would love to see how other people set up their characters in HONK in terms of mouth animation. I have tried the user-manual method for HONK, but I can't get it to work for myself.

The process I use is to have the mouths merged with the body (no eyes) on layer 1, then layer 2 has the eyes separated for blink only. I have tried having the mouths as individual PNGs added to the layer which contains the visemes (layer 1), but this seems to cause issues with the model glitching between layer 1 and layer 2. I'm on the latest version of HONK!

For Spork's model (the yellow cat): I have 10 viseme mouth sprites with the body as a base; the tail, eyes, and spork are separated, as those parts have slight rotation animations added to them. This process works well for me, and I have done it for two HONK sprite PNG models.


HONK character sheet

Mouth animation chart for Spork

Perhaps redesign the Layer Manager UI into something similar to what Photoshop and Clip Studio Paint provide.

Most people use these programs to create their PNG avatars and are more familiar with that organisation system, which includes grouping assets into folders; this would streamline more complex PNG models. HONK's Layer Manager system at the moment requires toggling assets on and off in order to create an emote. For a simple model the Layer Manager works well.

I have found, when trying to create a second emote for my model, that toggling between emote 1 and emote 2 is more of a guess, despite reading the directories and guides, because I am cycling between layers, turning them on and off, and having assets from emote 1 get mixed in with assets from emote 2. The ability to lock layers and lock groups would help immensely in creating emotes.
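The lock idea can be sketched as a small data model — all names here are hypothetical, just to show how a `locked` flag on layers and groups would stop one emote's assets from being flipped while editing another:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    visible: bool = True
    locked: bool = False

@dataclass
class Group:
    name: str
    locked: bool = False
    layers: list = field(default_factory=list)

    def toggle(self, layer_name):
        """Ignore toggles on locked groups/layers, so switching assets for
        emote 2 can't accidentally flip assets that belong to emote 1."""
        if self.locked:
            return
        for layer in self.layers:
            if layer.name == layer_name and not layer.locked:
                layer.visible = not layer.visible

emote1 = Group("emote 1", locked=True, layers=[Layer("head")])
emote1.toggle("head")   # locked group: nothing changes
```

With locking, the guess-and-check toggling described above becomes safe: locked groups simply refuse the toggle.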

This is an example from Photoshop; Clip Studio Paint and Procreate have similar layer manager UIs.

You can see the layer order (the head above the body) and the grouping these assets fall under for the specific emote.


Photoshop's layer manager

I've tried this solution! It does work; I agree it's not perfect, but it does help considerably.

It would be excellent to have a mouth calibration setting where the user can record a sentence and the calibration process would remember the mouth sounds, rather than searching for them live on stream or in a recording; this would cut down lag and desynchronization. The user could then edit or keyframe the mouth shapes to get an exact match for the sound.

Hey Peanut, 

I'm experiencing the same delay issue; recording a video for a PNG animation lip sync seems to sync up better than streaming live.

A temporary fix is adding fewer mouth shapes to your PNG, so the software has fewer mouths to search through; however, you lose most of the complex sounds and the mouth mostly just opens and shuts.

I have quite a lot of expressive and exaggerated mouth shapes when I stream, and the delay is quite jarring because of the way the sound is matched with the correct mouth. When there is no lag, the mouths flow quite organically with the speech, but the live matching searches for the sound and picks the mouth shape on the fly, which breaks immersion most of the time. This also seems to depend on internet connectivity.
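A rough sketch of why fewer mouth shapes reduce delay, assuming the software picks the nearest-matching viseme each frame (the feature values and viseme set here are invented, not HONK's real matcher): the search cost grows with the number of templates, so trimming the set shortens each per-frame lookup.

```python
# Hypothetical viseme templates: one made-up "openness" feature per mouth.
VISEMES = {
    "closed": 0.0,
    "M": 0.1,
    "E": 0.6,
    "O": 0.7,
    "A": 0.8,
}

def pick_viseme(feature):
    """Return the mouth shape whose template is nearest the live feature.
    Every added mouth is one more comparison per audio frame."""
    return min(VISEMES, key=lambda name: abs(VISEMES[name] - feature))

pick_viseme(0.78)  # near the "A" template
pick_viseme(0.04)  # near silence -> "closed"
```

With only two shapes (open/closed) the pick is trivial and fast, which matches the observation above: fewer mouths, less lag, but also less expressive output.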

If there were a way to set up a calibration process where the user could record a sentence in HONK! and the mouth shapes would be calibrated and saved before going live, reducing lag in a streaming program or recording, that would be fantastic. It would be awesome to see an editing panel where you could load the recording, watch the mouth animation in an editing-mode interface, and make slight adjustments there. For instance, my PNG will sometimes pick up the sounds of my keyboard and mouse clicks; if I could pre-record those clicks and set an attribute of 'silent' on the matching mouth layers, that would reduce unwanted lip sync.
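The proposed flow can be sketched in a few lines — everything here (function names, the single-number "level" feature, the 'silent' labelling) is a hypothetical illustration of the feature request, not an existing HONK API. Record labelled clips once, store their average levels, and live classification becomes a cheap lookup instead of a search; clips labelled as noise map to a closed mouth:

```python
def calibrate(labelled_clips):
    """labelled_clips: {"A": [levels...], "keyboard": [levels...], ...}
    One-time offline step: store the mean level per label."""
    return {label: sum(vals) / len(vals) for label, vals in labelled_clips.items()}

def classify(level, profile, silent_labels=("keyboard",)):
    """Live step: nearest stored label; labels the user marked 'silent'
    (e.g. pre-recorded keyboard clicks) map to a closed mouth."""
    label = min(profile, key=lambda l: abs(profile[l] - level))
    return "closed" if label in silent_labels else label

profile = calibrate({"A": [0.8, 0.9], "keyboard": [0.1, 0.2]})
classify(0.8, profile)   # matches the calibrated "A" viseme
classify(0.12, profile)  # matches "keyboard", which is marked silent
```

The point of the sketch is that the expensive part (working out what each sound looks like) happens before going live, which is exactly the lag reduction requested above.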

The E-sound viseme is calibrated to more of an "EH" sound; depending on your accent and native tongue, you may need to re-use some mouths to adjust for your accent.

Here is an example of my PNG rig on Twitch with the same issue:

https://clips.twitch.tv/BashfulHardIguanaMcaT-4-SlWuVsmiVLH_u5?tt_content=url&tt...

Here is also an example where the mouth sprites flow quite nicely:

https://clips.twitch.tv/SavoryBombasticFloofFunRun-o03d9H3zxHkZCAYf?tt_content=u...

Hope this helps improve HONK!, as I am quite fond of it.