Thank you for your great work! It is a novel and impressive idea.

I am unfamiliar with the SD webui as I do not have a local PC to run it. But technically, I am wondering how this method can use two kinds of image conditions (the ControlNet input and the inpainting mask). Does a basic inpainting model (like the official sd-v1.5-inpainting) work well with the official ControlNet, even though the ControlNet model was not pretrained on that inpainting model?

An inpainting-mask layer isn't implemented in the script yet, but it's planned for an upcoming update so you can do things like modifying only certain areas of the video. As for how ControlNet works on various models, it seems to work on dreamboothed models just fine, so all the ones people download should work. I ran my tests on RealisticVision1.4 just because I like that model in general, but I haven't tested whether inpainting-specific models do better or worse.

Thank you for your reply! I guess a branch for the inpainting mask is already used in the current version, since the frame-wise mask is passed in as the p.image_mask input. In addition, I just found that the webui guesses the model type by counting the inputs' dimensions, so the program seems to automatically choose the inpainting version as the base model. I am wondering whether you downloaded all the checkpoints of RealisticVision1.4, including the inpainting version. Thanks!
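
For reference, that model-type guess boils down to counting the UNet's input channels: a standard SD 1.x checkpoint takes 4 latent channels, while an inpainting checkpoint takes 9 (noisy latent + masked-image latent + mask). Below is a minimal sketch of such a check, assuming an A1111-style environment where shared.sd_model is the loaded checkpoint; the attribute path follows the usual SD 1.x (ldm) layout and is an assumption, not a quote from the webui source.

```python
# Minimal sketch: tell an inpainting checkpoint apart from a standard one by the
# number of input channels on the loaded UNet. Assumes an A1111-style environment
# where shared.sd_model is the active SD 1.x checkpoint (ldm layout); the attribute
# path is an assumption, not taken from the webui source.
from modules import shared

def is_inpainting_checkpoint() -> bool:
    unet = shared.sd_model.model.diffusion_model
    # 4 input channels -> standard model (noisy latent only)
    # 9 input channels -> inpainting model (noisy latent + masked-image latent + mask)
    return getattr(unet, "in_channels", 4) == 9
```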

I do have the inpainting model for RV1.4 and I used that version a couple of times, but I generally just used the normal version for the tests, and I haven't compared the inpainting vs. non-inpainting models yet. As for the inpainting mask, I think I misunderstood what you meant. It does use an image mask to do the processing, but there isn't currently a way to upload custom masks for the frames themselves, and I thought you were asking about that.
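
To tie the two questions together, here is a rough sketch of how the two conditions can travel through separate paths in one A1111-style img2img call: the frame mask goes in as p.image_mask, while the ControlNet condition is attached through the ControlNet extension's external_code interface. This assumes the sd-webui-controlnet extension is installed; the file names and the preprocessor/model strings are placeholders, not values taken from the script discussed above.

```python
# Rough sketch of processing one frame with both an inpainting mask and a ControlNet
# condition. Assumes an A1111-style environment with sd-webui-controlnet installed;
# file names and the module/model strings below are placeholders.
from PIL import Image
from modules.processing import process_images
from scripts import external_code  # provided by the sd-webui-controlnet extension

def run_frame(p, frame_path, mask_path):
    # Inpainting condition: only the white region of the mask gets regenerated,
    # just like in the regular img2img inpainting UI.
    p.init_images = [Image.open(frame_path).convert("RGB")]
    p.image_mask = Image.open(mask_path).convert("L")

    # ControlNet condition: chosen independently of the base checkpoint.
    unit = external_code.ControlNetUnit(
        module="hed",              # placeholder preprocessor
        model="control_sd15_hed",  # placeholder ControlNet weights
        image=None,                # no explicit control image; in img2img the
                                   # extension typically reads the input frame
    )
    external_code.update_cn_script_in_processing(p, [unit])
    return process_images(p)
```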