
Hopefully you won't run into the problem I did with getting the raymarched voxel rendering to line up with raster rendering (triangles, lines, points).  The raymarched voxel rendering also has to write to and/or read from the z-buffer.  I eventually figured it all out.

In short:  The raymarched voxel rendering shader has to receive the same view-projection matrix, and its inverse, via a uniform buffer object, just like the raster rendering does.  The inverse view-projection matrix is used to compute each pixel's ray origin and direction, and the view-projection matrix is used to compute the z-buffer value to write (or test against) at the ray-to-voxel hit point.
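For example, the setup looks roughly like this in GLSL (just a sketch; the uniform block and function names are made up, and the exact NDC conventions depend on your graphics API):

    // Same uniform block the raster passes bind (names are illustrative).
    layout(std140) uniform Camera {
        mat4 viewProj;     // view-projection matrix shared with raster rendering
        mat4 invViewProj;  // its inverse, used to reconstruct per-pixel rays
    };

    // Build a world-space ray for a pixel from its [0,1] screen position.
    void pixelRay(vec2 uv, out vec3 origin, out vec3 dir) {
        vec2 ndc = uv * 2.0 - 1.0;
        vec4 nearP = invViewProj * vec4(ndc, -1.0, 1.0); // point on the near plane
        vec4 farP  = invViewProj * vec4(ndc,  1.0, 1.0); // point on the far plane
        nearP /= nearP.w;
        farP  /= farP.w;
        origin = nearP.xyz;
        dir = normalize(farP.xyz - nearP.xyz);
    }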

If you run into any problems in this area, let me know because I may be able to help you resolve this issue.

Tiny Glade is cool looking and definitely shows what Bevy is capable of!


P.S.  Technically, a fragment shader cannot read the depth buffer it is currently rendering to (gl_FragDepth is write-only in GLSL), but it can write to it.  Given that detail, the raymarched voxel rendering pass should be done as the first rendering pass, so that the triangles, lines, points, etc. depth-test against the rendered voxels.
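In GLSL terms the depth write is roughly this (assuming an OpenGL-style [-1,1] NDC z remapped to a [0,1] depth range; hitPos and viewProj are the hit point and matrix from the sketch above):

    // In the raymarching fragment shader, after finding the ray-voxel hit point:
    vec4 clip = viewProj * vec4(hitPos, 1.0);
    float ndcZ = clip.z / clip.w;
    gl_FragDepth = ndcZ * 0.5 + 0.5; // remap [-1,1] NDC z to [0,1] depth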

At some point I'm going to have to implement multiple raymarched voxel rendering passes for rendering moving voxel objects (falling trees, rotating things), so I will have to introduce a programmer-defined z-buffer into my code to allow the voxel rendering passes to clip against each other.
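One option for such a programmer-defined z-buffer (just a sketch of one approach; the voxelDepth name is made up) is an r32ui image updated with an atomic min, which works because the bit patterns of non-negative floats sort in the same order as the floats themselves:

    // Shared software depth buffer for the voxel passes (name is illustrative).
    layout(r32ui, binding = 0) uniform uimage2D voxelDepth;

    // Keep the nearest hit per pixel across multiple voxel passes.
    void writeVoxelDepth(ivec2 pixel, float depth01) {
        imageAtomicMin(voxelDepth, pixel, floatBitsToUint(depth01));
    }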


Indeed, if I run into any trouble I will ask :D I think I'll probably do the voxel raytracing first and add the meshing later. I'll have the voxel raytracer write to the depth buffer and then have the rasterizer compare against the depth-buffer distance to check whether it should render that particular pixel or not. But I'll be doing all the raytracing in a compute shader that fills a texture, and have that texture be read by the fragment shader later.
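Roughly, the hand-off I'm picturing looks like this (just a sketch; the image names and bindings are made up, and the actual voxel tracing would go where the placeholder is):

    // Compute shader: one invocation per pixel, writes the traced color and depth.
    layout(local_size_x = 8, local_size_y = 8) in;
    layout(rgba8, binding = 0) uniform writeonly image2D traceColor;
    layout(r32f,  binding = 1) uniform writeonly image2D traceDepth;

    void main() {
        ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);
        // ... trace the voxel ray for this pixel ...
        vec4 color = vec4(1.0);   // placeholder traced color
        float depth01 = 1.0;      // placeholder depth in [0,1]
        imageStore(traceColor, pixel, color);
        imageStore(traceDepth, pixel, vec4(depth01));
    }

    // Fullscreen fragment shader: copies the traced color and depth,
    // so later raster passes depth-test against the voxels.
    uniform sampler2D traceColorTex;
    uniform sampler2D traceDepthTex;
    in vec2 uv;
    out vec4 fragColor;

    void main() {
        fragColor = texture(traceColorTex, uv);
        gl_FragDepth = texture(traceDepthTex, uv).r;
    }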


PS: That's how I plan to get GI. I'll have all the rays check if they hit a voxel and, if so, add it to a buffer of hit voxels, then have yet another compute shader shade all the hit voxels, the result being per-voxel GI. (Not complete GI, just area-wide rather than full-world GI.)
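The hit-voxel buffer would be something like this (just a sketch with made-up names; the actual shading isn't shown):

    // Pass 1 (ray compute shader): append each hit voxel to a list.
    layout(std430, binding = 2) buffer HitVoxels {
        uint hitCount;
        uvec4 hitVoxels[];   // packed voxel coordinates, one entry per hit
    };

    void recordHit(uvec3 voxelCoord) {
        uint slot = atomicAdd(hitCount, 1u);
        hitVoxels[slot] = uvec4(voxelCoord, 0u);
    }

    // Pass 2 (shading compute shader): one invocation per recorded hit.
    layout(local_size_x = 64) in;
    void main() {
        uint i = gl_GlobalInvocationID.x;
        if (i >= hitCount) return;
        uvec3 voxel = hitVoxels[i].xyz;
        // ... light this voxel and accumulate the result into per-voxel GI storage ...
    }

One thing to watch for is that many rays can hit the same voxel, so the list may need de-duplication (or the shading pass has to tolerate duplicates).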

That sounds like a good optimization technique!