Dealing with Shadow Map Artifacts

In a previous post on stack stabilization, the linked video showed a few major issues with shadow mapping.  These issues have plagued the technique since its inception, and while there are many methods that help alleviate them, they remain very difficult to eliminate completely.  Here we’ll review some common artifacts and discuss potential ways to squash them.

Perspective Aliasing

These types of artifacts are perhaps the simplest to alleviate.  Stair-like artifacts outlining the projected shadows generally appear when the shadow map’s resolution is too low for the screen area it covers, so that many screen pixels sample the same shadow map texel.  Compare the halves in the image below.  The top half shows a scene using a shadow map resolution of 256×256, while the bottom shows the same scene using a resolution of 2048×2048.

[Image: shadow map resolution comparison, 256×256 (top) vs. 2048×2048 (bottom)]

Unfortunately, increasing the resolution will only get us so far.  Even at high resolutions, if the viewer is close enough to the receiving surface, tiny stair-like artifacts will still be noticeable along the edges of projected shadows.  The standard solution is a technique called percentage closer filtering (PCF).  Instead of sampling at a single location, this algorithm samples several points around the initial location, weights the shadowed versus non-shadowed results, and produces soft edges in the final shadow.  The image below shows an up-close view of a 2048×2048 shadow map without and then with PCF enabled.

[Image: up-close comparison of a 2048×2048 shadow map without and with PCF]

There are several different sampling patterns that can be used for the PCF algorithm.  Currently, I’m using a simple box filter around the center location.  Other sampling patterns, such as a rotated Poisson disc, are also popular and produce varying results.
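For reference, here’s a minimal sketch of what the box filter looks like in HLSL.  The texture, comparison sampler, and texel-size parameter are illustrative stand-ins for this post, not my shader’s actual interface:

// minimal 3x3 box-filter PCF sketch; resource names are illustrative
Texture2D shadowMap : register(t0);
SamplerComparisonState shadowSampler : register(s0);

float SampleShadowPCF(float2 shadowUV, float depth, float texelSize)
{
    float lit = 0.0f;
    [unroll]
    for (int x = -1; x <= 1; ++x)
    {
        [unroll]
        for (int y = -1; y <= 1; ++y)
        {
            // each tap performs the hardware depth comparison and
            // returns a filtered 0 (shadowed) to 1 (lit) result
            lit += shadowMap.SampleCmpLevelZero(shadowSampler,
                shadowUV + float2(x, y) * texelSize, depth);
        }
    }
    return lit / 9.0f; // average the nine taps to soften the edge
}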

Shadow Acne

Another common artifact found in shadow mapping is shadow acne, or erroneous self-shadowing.  This generally occurs when the depth of the surface being shaded, as seen from the light, is so close to the depth stored in the shadow map that floating point error incorrectly causes the depth test to fail.  The image below shows an example of these artifacts present (top) and addressed (bottom).

[Image: shadow acne comparison, artifacts present (top) and addressed (bottom)]
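To make the failure mode concrete, the naive depth test looks something like this (a sketch; the sampler and variable names are illustrative):

// naive shadow test - currentDepth is the depth of the point being shaded,
// as seen from the light; when it is nearly equal to the stored depth,
// precision error can flip the comparison and the surface shadows itself
float storedDepth = shadowMap.Sample(pointSampler, shadowUV).r;
float lit = (currentDepth <= storedDepth) ? 1.0f : 0.0f;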

There are a few ways to address this issue.  It’s so prevalent that most graphics APIs provide a means to create a rasterizer state that includes both a constant depth bias and a slope-scaled depth bias.  During shadow map creation, these values combine to offset each depth value by a small amount, pushing it out of the range where floating point error would cause incorrect comparisons.  One must be careful when setting these bias values.  Too high a value can cause the next issue to be discussed, Peter Panning, while too low a value will let acne artifacts creep back into the final image.
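Sketching this in Direct3D 11: the bias values live on the rasterizer state used while rendering the shadow map.  The numbers below are placeholders that need per-scene tuning rather than recommended values, and device is assumed to be a valid ID3D11Device pointer:

#include <d3d11.h>

// rasterizer state for the shadow map pass with hardware depth biasing
D3D11_RASTERIZER_DESC rsDesc = {};
rsDesc.FillMode = D3D11_FILL_SOLID;
rsDesc.CullMode = D3D11_CULL_BACK;
rsDesc.DepthClipEnable = TRUE;
rsDesc.DepthBias = 10000;             // constant offset, in units of the depth format's smallest step
rsDesc.DepthBiasClamp = 0.0f;         // optional cap on the total bias
rsDesc.SlopeScaledDepthBias = 1.0f;   // scales the offset by the polygon's depth slope

ID3D11RasterizerState* shadowRasterizerState = nullptr;
device->CreateRasterizerState(&rsDesc, &shadowRasterizerState);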

Peter Panning

It’s frustrating when a fix for one thing breaks something else, and that’s exactly what can happen when we use depth biases for shadow maps.  Peter Panning is caused by offsetting the depth values in light space too much.  The result is that the shadow becomes detached from the object casting it.  The image below displays this phenomenon.  In both halves of the image, the blocks are resting on the ground, but in the top half the depth bias is so large that it pushes the shadow away from the casters, making the blocks appear to float.  The bottom half uses a more appropriate depth bias, and the shadow appears properly attached.

[Image: Peter Panning comparison, excessive depth bias (top) vs. appropriate depth bias (bottom)]

Bangarang!

Working in the Shader

Using hardware depth biasing in the rasterizer is nice in that it’s fast and easy to set up.  Sometimes, however, we have different needs for our shadow maps and want to delay these types of correction steps until later in the pipeline.  Though I’ve since reverted to a more basic approach, when first implementing transmittance through thin materials I switched my shadow map vertex shaders to output linear values to make the implementation a bit more straightforward.  If I used the rasterizer state offsets described above, I would have to somehow track and undo those offsets before I could use the values in my transmittance calculations, or else accept major artifacts from the depth discrepancies.  Fortunately, there are several excellent resources that describe alternative methods for getting rid of shadow artifacts (see references), and with a combination of ideas borrowed from all of them, I’ve been able to get a fairly decent implementation working.  Below is some example code in HLSL.

Storing linear values to the shadow map:

// client code
// divide the z terms of the projection matrix by the far plane so that,
// combined with the w multiply in the vertex shader below, the depth
// written to the shadow map is linear in view space
Matrix4x4f linearProjectionMtx = createPerspectiveFOVLHMatrix4x4f(fovy, aspect, nearPlane, farPlane);
linearProjectionMtx.rc33 /= farPlane;
linearProjectionMtx.rc34 /= farPlane;

// shadow map vertex shader
float4 main(VertexIn vIn) : SV_POSITION
{
 // transform to homogeneous clip space
 float4 posH = mul(float4(vIn.posL, 1.0f), worldViewProjectionMatrix);
 // store linear depth to shadow map - there is no change to the value stored for orthographic projections since w == 1
 posH.z *= posH.w;
 return posH;
}

Next, apply a scaled normal offset in the light shader before transforming the point by the shadow transform matrix.  I use a deferred shading pipeline and store data in the G-Buffer in view space, so the new position must first be transformed by the inverse of the camera view matrix to bring it back into world space:

#if DIRECTIONALLIGHT
 float3 toLightV = normalize(-light.direction);
#else
 float3 toLightV = normalize(light.position - position);
#endif
 // the offset grows as the surface turns away from the light
 // (dot(toLightV, normal) shrinks), which is where acne is worst
 float cosAngle = saturate(1.0f - dot(toLightV, normal));
 float3 scaledNormalOffset = normal * (cb_normalOffset * cosAngle * smTexelDimensions);
 // return the offset position to world space for the shadow transform
 float4 shadowPosW = mul(float4(position + scaledNormalOffset, 1.0f), inverseViewMatrix);

Once the point has been transformed by the shadow matrix, finish projecting it and apply a depth offset:

// complete projection by doing division by w
shadowPosH.xyz /= shadowPosH.w;
shadowPosH.z -= cb_depthBias * smTexelDimensions;
float depth = shadowPosH.z; // depth to use for PCF comparison

And that’s it.  The values for depth bias and normal offset have to be adjusted per light and depend on various factors, such as the light range, the shadow projection matrix, and to some extent the resolution of the shadow map.  When properly set, though, the results can be quite nice, and artifacts are almost entirely mitigated.

References

http://www.dissidentlogic.com/old/images/NormalOffsetShadows/GDC_Poster_NormalOffset.png

http://c0de517e.blogspot.co.at/2011/05/shadowmap-bias-notes.html

http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/218/Shadow-Acne.aspx

https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324%28v=vs.85%29.aspx

https://www.mvps.org/directx/articles/linear_z/linearz.htm

Stacks on Stacks

A long while back, I realized my scenes would be better served and more interesting if there was a more dynamic component to them.  Outside of the very basics, implementing a proper physics engine with accurate collision detection and response was quite foreign to me.  Therefore, I picked up Ian Millington’s book Game Physics Engine Development and got to work.  I enjoyed the author’s approachable writing style and well-explained information on both particle and rigid body dynamics.  Within about a week, I was able to integrate a fairly robust adaptation of the engine presented in the book into my own engine’s architecture.

While the information presented on physical body simulation is quite good, the book’s main shortcoming is in collision detection and resolution.  In fairness, the author calls this out and tries to set the reader’s expectations realistically, but there’s a lot left to be desired when two boxes can’t reliably be stacked on top of one another due to non-converging solutions for contact generation and impulse resolution.  Regardless, this is the approach that had lived in my engine for well over a year and still remains in the code base, although I consider it deprecated for anything beyond very simple simulations.

After a lot of research and a short back-and-forth email exchange with Randy Gaul, I tried my hand at implementing a more complex collision detection routine.  The new routine generated an entire contact manifold of up to four points per collision pair, as opposed to the old one, which only ever recorded a single contact between two objects at any given point in time.  This data, combined with a few other tricks I picked up here and there, finally allowed a small stack of boxes to sit on top of each other without shaking and falling over.
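For illustration, such a manifold boils down to a small fixed-size structure per collision pair.  The layout below is a sketch with hypothetical type and field names, not my exact implementation:

// illustrative contact manifold layout; Vector3 and RigidBody are
// stand-ins for the engine's own math and body types
struct ContactPoint
{
    Vector3 position;     // world-space contact location
    float   penetration;  // overlap depth along the manifold normal
};

struct ContactManifold
{
    RigidBody*   bodyA;        // the pair of colliding bodies
    RigidBody*   bodyB;
    Vector3      normal;       // shared collision normal for the pair
    ContactPoint points[4];    // up to four persistent contact points
    int          numPoints;
};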

Eventually, I decided I wanted an overall more robust solution for both physics simulation and collision detection and resolution, so I spent a weekend integrating the Bullet Physics library into my engine.  Bullet’s API has proven to be reasonably straightforward, and I was able to get a stable stack of boxes set up in a very short amount of time.
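For a sense of scale, here is a minimal sketch of a Bullet world with one dynamic box, using the library’s stock components.  It’s illustrative rather than my engine’s actual integration code, and the values are placeholders:

#include <btBulletDynamicsCommon.h>

// world setup: collision configuration, dispatcher, broadphase, solver
btDefaultCollisionConfiguration* config = new btDefaultCollisionConfiguration();
btCollisionDispatcher* dispatcher = new btCollisionDispatcher(config);
btBroadphaseInterface* broadphase = new btDbvtBroadphase();
btSequentialImpulseConstraintSolver* solver = new btSequentialImpulseConstraintSolver();
btDiscreteDynamicsWorld* world = new btDiscreteDynamicsWorld(dispatcher, broadphase, solver, config);
world->setGravity(btVector3(0.0f, -9.81f, 0.0f));

// a dynamic 1x1x1 box dropped from above; Bullet takes half-extents
btBoxShape* boxShape = new btBoxShape(btVector3(0.5f, 0.5f, 0.5f));
btScalar mass = 1.0f;
btVector3 inertia(0, 0, 0);
boxShape->calculateLocalInertia(mass, inertia);

btDefaultMotionState* motionState = new btDefaultMotionState(
    btTransform(btQuaternion::getIdentity(), btVector3(0.0f, 5.0f, 0.0f)));
btRigidBody* body = new btRigidBody(
    btRigidBody::btRigidBodyConstructionInfo(mass, motionState, boxShape, inertia));
world->addRigidBody(body);

// step the simulation each frame
world->stepSimulation(1.0f / 60.0f, 10);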

The video below shows the dramatic difference between the old collision resolution method and the newly integrated engine backed by Bullet.

With the old setup, I would place objects in the world with a sleep state and a tiny amount of space between each to give the appearance of a stack, but as soon as I interacted with anything in the stack, all bets were off.  With the new implementation, I can safely let objects fall into place and rest on top of each other at the start of the simulation without worrying too much about the whole thing going haywire.

(Regarding the ugly shadow artifacts in the video, those will be addressed in a follow-up post specific to the topic.)

References

http://www.randygaul.net/

http://allenchou.net/

https://code.google.com/p/box2d/downloads/list

https://github.com/bulletphysics/bullet3/releases

Bachelor Thesis Acknowledgment

I recently received an acknowledgment in Lukas Hermanns’ bachelor’s thesis, entitled Screen Space Cone Tracing for Glossy Reflections, which I thought was really cool of him.  He’s produced some great results, and I’m happy to have lent a hand in the excellent work he’s done.

The full thesis can be found here:  http://publica.fraunhofer.de/documents/N-336466.html