I am curious how this works. Do you have any links to papers or something naming and explaining the techniques?
Looking through the code, I guessed that the shaders just render the images with lighting using the generated normal, spec, and POM maps. Are the calculations actually all done in `fshader.glsl`, or is there some CPU image processing somewhere else? I'll keep digging.
I wanted to try applying it to image masking or see if there's a way to do something like it in real time with a shader on the GPU.

Hi!

`fshader.glsl` only does the rendering. The maps are calculated on the CPU using the CImg image processing library; all of that code is in `image_processor.cpp`. For the normals, it basically obtains a heightmap from a distance transform and the grayscale version of the image, and then uses derivative operators to get the normals.
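
To give an idea of that last step, here is a minimal sketch of getting normals from a heightmap with central differences. This is only an illustration of the general technique, not the actual code in `image_processor.cpp`; the function and parameter names (like `strength`) are made up for the example.

```cpp
// Minimal sketch: normals from a heightmap via central-difference derivative operators.
// "strength" scales how pronounced the relief looks (illustrative parameter name).
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> heightmap_to_normals(const std::vector<float>& hm,
                                       int w, int h, float strength) {
    auto at = [&](int x, int y) {
        // Clamp to the image border so edge pixels stay valid.
        x = std::max(0, std::min(w - 1, x));
        y = std::max(0, std::min(h - 1, y));
        return hm[y * w + x];
    };
    std::vector<Vec3> normals(hm.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Central differences (a 3x3 Sobel kernel is a common alternative).
            float dx = (at(x + 1, y) - at(x - 1, y)) * strength;
            float dy = (at(x, y + 1) - at(x, y - 1)) * strength;
            // The normal of the surface z = h(x, y) is (-dh/dx, -dh/dy, 1), normalized.
            float len = std::sqrt(dx * dx + dy * dy + 1.0f);
            normals[y * w + x] = { -dx / len, -dy / len, 1.0f / len };
        }
    }
    return normals;
}

// To store the result as an RGB normal map, remap each component from [-1, 1] to [0, 255],
// e.g. r = (n.x * 0.5f + 0.5f) * 255.0f, and likewise for g and b.
```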

Everything could be done in shaders (the most difficult part would be the distance transform, which I guess you could do with an SDF?). I tried in the past to migrate everything to shaders, but the changes were too big and I didn't have the time. I also never figured out how to do CLI exports without a window if they depend on OpenGL, so the CPU was better for that.
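
For reference, on the CPU a classic way to do that distance transform is a two-pass chamfer scan, roughly like the sketch below (again, just an illustration of the general technique, not the tool's actual code). On the GPU you would typically build an SDF with something like jump flooding instead.

```cpp
// Minimal sketch of a two-pass chamfer distance transform: distance from each
// foreground pixel to the nearest background pixel, which can then be normalized
// into a heightmap. Weights 3 (orthogonal) and 4 (diagonal) approximate Euclidean distance.
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

std::vector<float> chamfer_distance(const std::vector<bool>& foreground, int w, int h) {
    const float INF = std::numeric_limits<float>::max() / 4.0f;
    std::vector<float> d(foreground.size());
    for (std::size_t i = 0; i < d.size(); ++i)
        d[i] = foreground[i] ? INF : 0.0f;   // background pixels start at distance 0

    auto relax = [&](int x, int y, int nx, int ny, float cost) {
        if (nx < 0 || ny < 0 || nx >= w || ny >= h) return;
        d[y * w + x] = std::min(d[y * w + x], d[ny * w + nx] + cost);
    };

    // Forward pass: propagate distances from top-left neighbours.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            relax(x, y, x - 1, y,     3.0f);
            relax(x, y, x,     y - 1, 3.0f);
            relax(x, y, x - 1, y - 1, 4.0f);
            relax(x, y, x + 1, y - 1, 4.0f);
        }
    // Backward pass: propagate distances from bottom-right neighbours.
    for (int y = h - 1; y >= 0; --y)
        for (int x = w - 1; x >= 0; --x) {
            relax(x, y, x + 1, y,     3.0f);
            relax(x, y, x,     y + 1, 3.0f);
            relax(x, y, x + 1, y + 1, 4.0f);
            relax(x, y, x - 1, y + 1, 4.0f);
        }
    return d; // divide by 3 for approximate pixel units, then normalize to use as a heightmap
}
```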

I suggest you keep it on the CPU, because it is not intended for real time but for exporting. If you do get a GPU version working, my personal preference would be to keep the option to switch between GPU and CPU, since they can give different results and I'd rather have the ability to choose.

Yeah, thanks for the suggestion. I don't have any plans to move the generation to the GPU for the time being, so it will still be done on the CPU in the future.