Thanks for playing! We'll get around to posting the repo eventually (gotta double-check a possible licensing issue on some of the audio assets), but the short answer is we used Google MediaPipe, which provides a facial landmark recognition model that runs locally in the browser. We then communicated blink events to Godot via the Godot-JS API. In ink, I just made choices with the text "BLINK"/"UNBLINK" which get selected automatically when you close/open your eyes.
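Since the repo isn't up yet, here's a rough sketch of the glue layer, not our actual code. It assumes MediaPipe's FaceLandmarker (from `@mediapipe/tasks-vision`) is running with blendshape output enabled, which gives you per-frame `eyeBlinkLeft`/`eyeBlinkRight` scores in [0, 1]. The class names, thresholds, and callbacks below are all hypothetical; the only MediaPipe-specific part is that you feed it those two scores each frame. Hysteresis (separate close/open thresholds) keeps a score hovering near the cutoff from spamming events:

```javascript
// Hedged sketch: converts raw eye-blink blendshape scores into clean
// blink/unblink events. Thresholds and names are illustrative, not from
// the actual game code.
class BlinkDetector {
  constructor({ closeThreshold = 0.5, openThreshold = 0.35, onBlink, onUnblink } = {}) {
    this.closeThreshold = closeThreshold; // eyes count as closed above this score
    this.openThreshold = openThreshold;   // and open again once below this one
    this.closed = false;
    this.onBlink = onBlink || (() => {});
    this.onUnblink = onUnblink || (() => {});
  }

  // Call once per video frame with the left/right eyeBlink blendshape scores.
  update(leftScore, rightScore) {
    const score = Math.min(leftScore, rightScore); // require both eyes to close
    if (!this.closed && score > this.closeThreshold) {
      this.closed = true;
      this.onBlink();
    } else if (this.closed && score < this.openThreshold) {
      this.closed = false;
      this.onUnblink();
    }
  }
}

// Hypothetical wiring: in the real setup the callbacks would call into Godot
// (e.g. a function registered with JavaScriptBridge.create_callback on the
// GDScript side), which then picks the "BLINK"/"UNBLINK" ink choice.
const events = [];
const detector = new BlinkDetector({
  onBlink: () => events.push("BLINK"),
  onUnblink: () => events.push("UNBLINK"),
});
[0.1, 0.6, 0.7, 0.4, 0.2].forEach((s) => detector.update(s, s));
console.log(events); // → ["BLINK", "UNBLINK"]
```

The hysteresis gap (0.5 to close, 0.35 to reopen) matters more than the exact numbers: with a single threshold, a half-closed eye sitting right at the cutoff would fire blink/unblink pairs every frame.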