Dealing with Shadow Map Artifacts


In a previous post on stack stabilization, the linked video showed a few major issues with shadow mapping.  These issues have plagued the technique since its inception, and while there are many methods that help alleviate them, it's still very difficult to get rid of them completely.  Here we'll review some common artifacts and discuss potential ways to squash them.

Perspective Aliasing

These types of artifacts are perhaps the simplest to alleviate.  Stair-like artifacts outlining the projected shadows are generally caused by the resolution of the shadow map being too low.  Compare the halves in the image below.  The top half shows a scene using a shadow map resolution of 256x256, while the bottom shows the same scene using a resolution of 2048x2048.

Shadow Map Resolution Comparison

Unfortunately, increasing the resolution will only get us so far.  Even at high resolutions, if the viewer is close enough to the receiving surface, small stair-like artifacts will still be noticeable along the edges of projected shadows.  One way to address this is a technique called percentage closer filtering (PCF).  Instead of sampling the shadow map at a single location, this algorithm samples several points around the initial location, weights the shadowed versus non-shadowed results, and produces soft edges in the final image.  The image below shows an up-close view of a shadow using a 2048x2048 shadow map, first without and then with PCF enabled.

PCF Comparison

There are several different sampling patterns that can be used for the PCF algorithm.  Currently, I’m using a simple box filter around the center location.  Other sampling patterns, such as a rotated Poisson disc, are also popular and produce varying results.
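To make the idea concrete, here is roughly what that box filter can look like in HLSL.  This is only a sketch: the resource names (shadowMap, shadowSampler), the register slots, and the smTexelSize parameter (the size of one shadow map texel in UV space) are assumptions rather than the exact code from my implementation, and it assumes a comparison sampler has been created for hardware depth comparisons.

// a minimal sketch of a 3x3 box-filter PCF lookup; names and registers are placeholders
Texture2D shadowMap : register(t3);
SamplerComparisonState shadowSampler : register(s1);  // comparison sampler (e.g. LESS_EQUAL)

float samplePCF(float2 shadowUV, float depth, float smTexelSize)
{
    float lit = 0.0f;
    [unroll]
    for (int x = -1; x <= 1; ++x)
    {
        [unroll]
        for (int y = -1; y <= 1; ++y)
        {
            // each comparison returns how much of the sample passes the depth test (1 = lit, 0 = shadowed)
            float2 offset = float2(x, y) * smTexelSize;
            lit += shadowMap.SampleCmpLevelZero(shadowSampler, shadowUV + offset, depth);
        }
    }
    // average the nine comparisons to get a soft shadow factor in [0, 1]
    return lit / 9.0f;
}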

Shadow Acne

Another common artifact found in shadow mapping is shadow acne, or erroneous self-shadowing.  This generally occurs when the depth sampled from the shadow map and the depth of the surface being shaded are so close that limited precision causes the depth comparison to fail incorrectly.  The image below shows an example of these artifacts present (top) and addressed (bottom).

Shadow Acne

There are a few ways to address this issue.  It's so prevalent that most graphics APIs provide a means to create a rasterizer state that includes both a constant depth bias and a slope-scaled depth bias.  During shadow map creation, these values combine to offset each depth value written to the map, pushing it out of the range where precision issues would cause incorrect comparisons.  One must be careful when setting these bias values: too high a value can cause the next issue to be discussed, peter panning, while too low a value will let acne artifacts creep back into the final image.
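As a concrete example, here is roughly how that looks with Direct3D 11.  The D3D11_RASTERIZER_DESC fields are the real API, but the specific bias numbers below are placeholders that have to be tuned per scene and light.

// rasterizer state for the shadow map pass with hardware depth biasing
D3D11_RASTERIZER_DESC shadowRastDesc = {};
shadowRastDesc.FillMode = D3D11_FILL_SOLID;
shadowRastDesc.CullMode = D3D11_CULL_BACK;
shadowRastDesc.DepthClipEnable = TRUE;
shadowRastDesc.DepthBias = 50;                // constant offset, in units of the depth format's smallest increment
shadowRastDesc.DepthBiasClamp = 0.0f;
shadowRastDesc.SlopeScaledDepthBias = 2.0f;   // extra offset proportional to the polygon's depth slope

ID3D11RasterizerState* shadowRasterizerState = nullptr;
device->CreateRasterizerState(&shadowRastDesc, &shadowRasterizerState);
// bind with context->RSSetState(shadowRasterizerState) before rendering the shadow map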

Peter Panning

It’s frustrating when introducing a fix for one thing breaks something else.  That’s exactly what we can potentially end up with when we use depth biases for shadow maps.  Peter Panning is caused by offsetting the depth values in light space too much.  The result is that the shadow becomes detached from the object casting it.  The image below displays this phenomenon.  In both halves of the image, the blocks are resting on the ground, but in the top half the depth bias is so large that it pushes the shadow away from the caster, causing them to appear as though they could be floating.  The bottom half uses a more appropriate depth bias and the shadow appears properly attached.

Peter Panning

Bangarang!

Working in the Shader

Using hardware depth biasing in the rasterizer is nice in that it's fast and easy to set up.  Sometimes, however, we have different needs for our shadow maps and want to delay these correction steps until later in the pipeline.  Though I've since reverted to a more basic approach, when first implementing transmittance through thin materials I switched my shadow map vertex shaders to output linear depth values to make the implementation a bit more straightforward.  If I had used the rasterizer state offsets described above, I would have had to somehow track and undo those offsets before I could use the values in my transmittance calculations, or else accept major artifacts from the depth discrepancies.  Fortunately, there are several excellent resources that describe alternative methods for getting rid of shadow artifacts (see references), and with a combination of ideas borrowed from all of them, I've been able to get a fairly decent implementation working.  Below is some example code in HLSL.

Storing linear values to the shadow map:

// client code
// dividing the projection matrix's z scale and translation terms by the far plane
// makes the depth written by the shadow map vertex shader below vary linearly
// from 0 at the near plane to 1 at the far plane (see the linear z reference)
Matrix4x4f linearProjectionMtx = createPerspectiveFOVLHMatrix4x4f(fovy, aspect, nearPlane, farPlane);
linearProjectionMtx.rc33 /= farPlane;
linearProjectionMtx.rc34 /= farPlane;

// shadow map vertex shader
float4 main(VertexIn vIn) : SV_POSITION
{
    // transform to homogeneous clip space
    float4 posH = mul(float4(vIn.posL, 1.0f), worldViewProjectionMatrix);
    // store linear depth to shadow map - there is no change to the value stored for orthographic projections since w == 1
    posH.z *= posH.w;
    return posH;
}

Next, a scaled normal offset is applied in the light shader before the point is transformed by the shadow transform matrix.  I use a deferred shading pipeline and store G-Buffer data in view space, hence the offset position has to be transformed back to world space by the inverse of the camera view matrix first:

#if DIRECTIONALLIGHT
    float3 toLightV = normalize(-light.direction);
#else
    float3 toLightV = normalize(light.position - position);
#endif
    // the offset factor grows as N and L diverge (1 - N.L), so surfaces lit at grazing angles get pushed further along the normal
    float cosAngle = saturate(1.0f - dot(toLightV, normal));
    float3 scaledNormalOffset = normal * (cb_normalOffset * cosAngle * smTexelDimensions);
    // G-Buffer positions are in view space, so bring the offset point back to world space
    float4 shadowPosW = mul(float4(position + scaledNormalOffset, 1.0f), inverseViewMatrix);
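The offset world-space position is then transformed into shadow map space.  The matrix name below is an assumption: a combined light view, light projection, and texture-space scale/bias matrix for the light in question.

// shadowTransformMatrix = light view * light (linear) projection * texture scale/bias
float4 shadowPosH = mul(shadowPosW, shadowTransformMatrix);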

Once the point has been transformed by the shadow matrix, finish projecting it and apply a depth offset:

// complete projection by doing division by w
shadowPosH.xyz /= shadowPosH.w;
shadowPosH.z -= cb_depthBias * smTexelDimensions;
float depth = shadowPosH.z; // depth to use for PCF comparison
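From there, the depth is used in the PCF comparison described earlier.  With the hypothetical samplePCF helper sketched above, and assuming the shadow transform already includes the texture-space scale/bias so that shadowPosH.xy are valid UVs, the lookup is simply:

// shadow factor in [0, 1], where 1 means fully lit
float shadowFactor = samplePCF(shadowPosH.xy, depth, smTexelDimensions);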

And that’s it.  The values for depth bias and normal offset have to be adjusted per light and depend on various factors, such as the light range, the shadow projection matrix, and to some extent the resolution of the shadow map, but when properly set the results can be quite nice and artifacts are almost entirely mitigated.

References

http://www.dissidentlogic.com/old/images/NormalOffsetShadows/GDC_Poster_NormalOffset.png

http://c0de517e.blogspot.co.at/2011/05/shadowmap-bias-notes.html

http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/218/Shadow-Acne.aspx

https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324%28v=vs.85%29.aspx

https://www.mvps.org/directx/articles/linear_z/linearz.htm
