RedBeard's Dev Blog

Static Ambient Occlusion in CubeWorld

Posted by redbeard on May 26, 2011

I was pleased with the visual impact of screen-space ambient occlusion (SSAO) in my deferred shading system, but two artifacts were too big to ignore and made SSAO inappropriate for this project: 1) screen-space noise was quite noticeable despite the pseudo-Gaussian blurring step, and 2) the ambient occlusion disappears at the edges of the screen when the occluding surface moves off-screen. I wanted ambient occlusion that was more stable and perhaps a bit less expensive to render; after all, I'm just rendering a bunch of cubes!

For CubeWorld, static ambient occlusion appears to fix the flaws of SSAO without too many drawbacks. For non-cubular geometry the ambient occlusion calculation can become exceedingly expensive, which is why SSAO was invented, but the mostly-static cubes allow for a relatively cheap discrete approximation. My implementation samples the ambient occlusion term at each vertex and lets interpolation on the GPU smooth things out across each face. A visible face vertex can have 0-3 adjacent cube volumes, so I assign an AO term that ranges from 0 to 1 in increments of 1/3. Results look good (although this particular screenshot is a bit dark because I toned down the ambient light and the camera-position point light).
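The per-vertex term boils down to counting solid neighbors. Here's a minimal sketch (Python standing in for the actual C#/XNA code; the function name and the tuple-of-flags interface are my own invention, not the real implementation):

```python
def vertex_ao(neighbors_solid: tuple[bool, bool, bool]) -> float:
    """AO term in {0, 1/3, 2/3, 1} for one visible face vertex.

    neighbors_solid flags whether each of the up-to-3 cube volumes
    adjacent to this vertex (besides the cube the face belongs to)
    is occupied; each occupied neighbor darkens the vertex by 1/3.
    """
    occluders = sum(neighbors_solid)  # 0..3 adjacent cube volumes
    return 1.0 - occluders / 3.0
```

The vertex shader then just multiplies this into the lighting, and the rasterizer's interpolation across each quad produces the soft gradients for free.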


Sticking point 1: I compute all my world geometry procedurally on worker threads, in 32^3 chunks of unit cubes. At the boundary between chunks there was no guarantee that any data was present to compute the ambient occlusion neighborhood properly, so seams appeared on continuous surfaces. To solve this, I separated my world generation into two phases – cube data and vertex buffers – and reduced the visible area without adjusting the data area, so that a margin of one generated chunk exists around the boundary of the visible chunk grid. This has the detrimental impact of either reducing my visual range or increasing my computational cost to keep the same visual range, because I now need an extra margin around everything. It would perhaps have been cleaner to generate a 1-cube margin around each chunk, but most of my chunk generation code is discontinuous and relies on a random number generator seeded from the chunk ID rather than the IDs of the unit cubes. As a side effect of teaching the cube neighborhood to look across chunk boundaries, my hidden face removal is now more aggressive, and perf is slightly better since the game has less geometry to render.
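The cross-boundary lookup is straightforward once the margin guarantee holds. A sketch of the idea (illustrative Python; the `World` class, dict-of-sets storage, and method names are hypothetical, not my actual data structures):

```python
CHUNK = 32  # cubes per chunk edge, matching the 32^3 chunks above

class World:
    def __init__(self):
        # (cx, cy, cz) chunk coordinate -> set of solid local (x, y, z)
        self.chunks = {}

    def is_solid(self, x: int, y: int, z: int) -> bool:
        """Solidity query in world coordinates, usable across chunk seams.

        Phase 1 generates a 1-chunk margin of cube data around the visible
        grid, so for any cube on a visible chunk's boundary the neighboring
        chunk's data already exists and this lookup never falls off the edge.
        """
        key = (x // CHUNK, y // CHUNK, z // CHUNK)  # floor division,
        chunk = self.chunks.get(key)                # correct for negatives
        if chunk is None:
            return False  # outside the generated margin
        return (x % CHUNK, y % CHUNK, z % CHUNK) in chunk
```

Phase 2 (vertex-buffer building, with AO terms and hidden face removal) then queries `is_solid` freely, without caring which chunk a neighbor cube lives in.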

Sticking point 2: When modifying the cubescape (adding or removing individual cubes), I was updating only the chunk in which the cube resides, but an edit can now affect adjacent chunks as well, both for ambient occlusion and for hidden face removal. This was easily fixed by also updating the adjacent chunks whenever the modified cube lies on the surface of its chunk. I considered an optimization that inspects whether the neighboring chunk would actually see any change, but haven't bothered with that yet.
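Working out which neighbors to rebuild is just a boundary test per axis. A sketch under my assumptions (Python for illustration; the function name and offset-set return type are made up for this post):

```python
CHUNK = 32  # cubes per chunk edge

def chunks_to_rebuild(local: tuple[int, int, int]) -> set[tuple[int, int, int]]:
    """Chunk-offset deltas whose vertex buffers need rebuilding after
    editing the cube at local coords (each 0..CHUNK-1) within a chunk.

    A cube on a chunk face dirties 1 neighbor, on an edge up to 3,
    and in a corner up to 7, since those chunks share its AO and
    hidden-face neighborhood.
    """
    per_axis = []
    for c in local:
        offsets = [0]                 # the owning chunk always rebuilds
        if c == 0:
            offsets.append(-1)        # touches the low face on this axis
        if c == CHUNK - 1:
            offsets.append(1)         # touches the high face on this axis
        per_axis.append(offsets)
    return {(dx, dy, dz) for dx in per_axis[0]
                         for dy in per_axis[1]
                         for dz in per_axis[2]}
```

An interior edit yields only `{(0, 0, 0)}`, so the common case still rebuilds a single chunk.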

Vaguely related problems

  • I use the VPOS pixel-shader semantic with my deferred shaders to generate the texture coordinate for looking up the texel corresponding to the currently rendering pixel. On the Xbox, VPOS behaves strangely if predicated tiling gets enabled, which typically happens when you want a fat rendertarget due to MSAA, a GBuffer, high resolution, etc. I imagine that the viewport transform (or whatever feeds the VPOS semantic) isn't set quite right. I worked around this by disabling MSAA on my final rendertarget (the underlying GBuffer textures are lower-resolution with no MSAA anyway).
  • I was experimenting with occlusion culling to optimize my rendering in dense environments, but it appears to be essentially incompatible with deferred shading in XNA 4.0, for one flawed reason: you cannot disable color writes when a floating-point rendertarget is bound.
    • Attempting to do so produces an exception with the text “XNA Framework HiDef profile does not support alpha blending or ColorWriteChannels when using rendertarget format Single”. I assume this is an oversight in the XNA API, because I'm unaware of any reason why floating-point rendertargets cannot support disabled color writes, even in MRT situations. The relevant device cap is COLORWRITEENABLEINDEPENDENTWRITEMASKS; XNA 4.0 requires a D3D10-capable video card, and that cap is enabled on 100% of sampled D3D10 hardware.
    • My workaround for this flaw involves un-binding the offending rendertarget before issuing my occlusion queries and re-binding it afterwards, when I want to render real geometry again; this causes rendertarget toggling several times per frame, which is not ideal.
    • However, my workaround doesn’t seem to work on Xbox; the rendertarget contents preservation flag appears to be broken, so my GBuffer gets filled with bad data.
