Variance Shadow Maps (VSM)

In modern real-time rendering, shadows play a crucial role in delivering visual realism. From subtle ambient occlusion to dramatic lighting, shadows give depth, context, and mood to digital scenes. However, generating smooth, soft shadows that look natural and still run efficiently remains a challenge — especially in real-time graphics like games and simulations.

Variance Shadow Maps (VSM) present an elegant solution to this challenge. Introduced by Donnelly and Lauritzen in 2006, VSM extends the traditional shadow mapping algorithm by using statistical moments of the depth distribution. Instead of storing a single depth value, VSM records both the mean and squared mean of depth in a texture, enabling efficient estimation of shadow probabilities and producing smooth penumbras.

This article explores the mathematical foundation, implementation details, and practical considerations of Variance Shadow Maps. We’ll also compare VSM with traditional shadow maps, outline its advantages, and discuss its known limitations such as light bleeding and variance artifacts.

Understanding Shadow Mapping

Before diving into VSM, it’s important to understand how traditional shadow mapping works and where it falls short.

The Basics of Shadow Mapping

A shadow map is a depth texture generated from the perspective of the light source. It stores the distance (depth) from the light to the closest surface for each pixel.

When rendering the scene from the camera’s perspective:

  • Each visible pixel’s position is transformed into the light’s coordinate space.
  • The pixel’s depth (from the light’s point of view) is compared to the corresponding value in the shadow map.
  • If the pixel’s depth is greater than the stored depth, it is in shadow (occluded by another object).
  • Otherwise, it is lit.

This method is simple and widely used but produces hard shadows — sharp edges without smooth transitions — because each pixel is either lit or not, with no in-between.
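
For reference, the whole test reduces to a single comparison in the fragment shader. The sketch below assumes a depth texture shadowMap, a varying lightSpacePos holding the fragment’s light-space position, and a small bias uniform (all names illustrative):

// Classic shadow map test: one sample, binary lit/shadowed result
vec3 proj = lightSpacePos.xyz / lightSpacePos.w; // perspective divide
proj = proj * 0.5 + 0.5;                         // NDC [-1,1] -> [0,1]
float closest = texture(shadowMap, proj.xy).r;   // nearest occluder depth
float lit = (proj.z - bias > closest) ? 0.0 : 1.0;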

Problems with Traditional Shadow Maps

While traditional shadow maps are computationally efficient, they suffer from several visual and numerical problems:

  • Aliasing: Limited shadow map resolution leads to jagged shadow edges.
  • Hard edges: No natural soft shadow transition between light and dark.
  • Filtering difficulties: Simple depth comparisons cannot be linearly filtered, making anti-aliasing expensive.
  • Precision artifacts: Depth quantization causes shimmering and acne effects.

These limitations motivated researchers to find better ways to approximate smooth penumbras efficiently — leading to techniques like Variance Shadow Maps.

The Concept of Variance Shadow Maps

Variance Shadow Maps tackle the filtering problem directly by storing statistical information — not just a single depth value.

Depth as a Random Variable

VSM treats the depth stored in a shadow map as a random variable. Instead of recording a single depth sample, it stores two statistical moments:

  • The expected depth E[z]
  • The expected squared depth E[z²]

This allows us to estimate the variance of the depth values in a region:

Var[z] = E[z²] − (E[z])²

The variance measures how spread out the depth values are within the filtered region — which corresponds to how “soft” the shadow boundary is there.
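
As a quick illustration: if a filtered texel covers two surfaces at depths 0.4 and 0.6 in equal proportion, then E[z] = 0.5, E[z²] = (0.4² + 0.6²) / 2 = 0.26, and Var[z] = 0.26 − 0.5² = 0.01. A texel covering a single flat surface would have a variance near zero.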

Storing Depth and Squared Depth

During the shadow map rendering pass (from the light’s viewpoint), each pixel stores two values in a texture (usually the RG channels of an RG32F or RG16F texture):

VSM texture = (E[z], E[z²])

When rendering from the camera’s perspective, these values are sampled and filtered (using linear or anisotropic filtering) to compute a probability that a point is in shadow.

Chebyshev’s Inequality and Shadow Probability

The Statistical Foundation

Variance Shadow Maps use Chebyshev’s inequality, a fundamental result from probability theory, to approximate the probability that a surface point is lit or shadowed.

Chebyshev’s inequality (in its one-sided form, also known as Cantelli’s inequality) states that for any random variable Z with mean μ and variance σ², and any threshold t > μ:

P(Z ≥ t) ≤ σ² / (σ² + (t − μ)²)

Here, t is the threshold (in our case, the surface point’s depth from the light).

Applying Chebyshev’s Inequality to Shadows

We interpret Z as the distribution of occluder depths over the filtered shadow-map region, and t as the depth of the surface point being shaded. The point receives light wherever Z ≥ t — that is, wherever no occluder lies between it and the light.

Chebyshev’s inequality therefore bounds the fraction of light reaching the point:

pMax = σ² / (σ² + (t − μ)²)

When t ≤ μ, the point is no farther from the light than the average occluder, and it is treated as fully lit (p = 1).

This gives a smooth, continuous shadow intensity between 0 and 1, depending on how far t is from the mean depth and how large the variance is.
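
As a quick sanity check, suppose the filtered moments give μ = 0.30 and σ² = 0.0025, and the shaded point lies at depth t = 0.50 (values chosen for illustration):

pMax = 0.0025 / (0.0025 + 0.20²) ≈ 0.06

so the point is almost fully shadowed. A point at t = 0.31 would instead get pMax ≈ 0.96, close to full light.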

Implementing Variance Shadow Maps

Rendering the VSM Texture

The implementation involves two main passes, similar to a standard shadow mapping setup:

  1. From the Light’s View (Shadow Pass):
    • Render the scene from the light’s perspective.
    • Store both the depth z and the squared depth z² into the VSM texture.
  2. From the Camera’s View (Lighting Pass):
    • Sample the VSM texture using the transformed light-space coordinates.
    • Compute the mean E[z] and variance Var[z].
    • Use Chebyshev’s inequality to estimate the visibility.

Shader Implementation Overview

Vertex Shader (Light Pass):

// Transform into the light's clip space; depth is computed per fragment
gl_Position = lightMatrix * vec4(worldPosition, 1.0);
vLightSpacePos = gl_Position; // varying passed to the fragment shader

Fragment Shader (Light Pass):

out vec2 outVSM;
float depth = vLightSpacePos.z / vLightSpacePos.w; // per-fragment divide
outVSM = vec2(depth, depth * depth); // stores (depth, depth^2)

Fragment Shader (Lighting Pass):

vec2 moments = texture(vsmTexture, lightUV).rg; // filtered (E[z], E[z^2])
float mean = moments.x;
float meanSq = moments.y;
float variance = meanSq - (mean * mean);
variance = max(variance, 0.0001); // guard against negative or zero variance

float depth = lightSpaceDepth; // receiver depth t in light space
float d = depth - mean;

// Chebyshev upper bound on the light fraction; receivers at or in front
// of the mean occluder depth are fully lit
float pMax = (depth <= mean) ? 1.0 : variance / (variance + d * d);
float shadow = clamp(pMax, 0.0, 1.0);

This shadow factor is then multiplied with the lighting contribution to produce smooth shadows.
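
For instance, the factor can scale the direct lighting terms (all lighting variables illustrative):

// Apply the visibility estimate to the direct lighting only
vec3 color = ambient + shadow * lightColor * (diffuse + specular);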

Filtering and Soft Shadows

The Power of Linear Filtering

A key advantage of Variance Shadow Maps is that the stored depth moments (E[z] and E[z²]) can be linearly filtered by the GPU’s hardware — something not possible with traditional depth comparisons.

By applying bilinear or anisotropic filtering directly on the VSM texture, we effectively compute averaged statistics over an area, leading to naturally blurred soft shadows.

This makes VSM ideal for achieving realistic penumbras, where shadows gradually soften with distance from the occluder.

Using Mipmaps for Variable Softness

To simulate the varying softness of shadows (larger penumbras for objects farther from the contact point), mipmapping can be used.

Each mip level of the VSM texture represents averaged depth moments over progressively larger regions. Sampling coarser (higher-numbered) mip levels yields wider, blurrier penumbras — all computed efficiently using built-in GPU hardware.
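
A minimal sketch of distance-based softening, assuming the VSM texture has a full mip chain and that occluderDistance, lodScale, and maxLod are illustrative values supplied by the application:

// Pick a coarser mip level the farther the receiver is from its occluder
float blurLod = clamp(occluderDistance * lodScale, 0.0, maxLod);
// Hardware trilinear filtering averages the moments over a wider area
vec2 moments = textureLod(vsmTexture, lightUV, blurLod).rg;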

Comparing VSM to Traditional Shadow Maps

Visual Differences

Below is a conceptual comparison between Standard Shadow Maps and Variance Shadow Maps:

| Technique              | Shadow Edge | Filtering | Penumbra Quality  |
|------------------------|-------------|-----------|-------------------|
| Traditional Shadow Map | Hard        | Difficult | Sharp, unnatural  |
| Variance Shadow Map    | Soft        | Easy      | Smooth, realistic |

Illustrative Diagram:

Light Source
     |
     v
  +---------------------------------+
  | Shadow Map                      |
  |---------------------------------|
  | Hard edges, aliased             |
  +---------------------------------+

  +---------------------------------+
  | Variance Shadow Map             |
  |---------------------------------|
  | Soft gradient transitions       |
  +---------------------------------+

In practice:

  • Traditional maps produce crisp boundaries with visible jaggies.
  • VSM produces blurred edges that resemble real-world penumbras.

Performance Comparison

VSM generally has a slightly higher memory cost (two-channel texture instead of one), but it benefits from:

  • Hardware filtering
  • Fewer sampling artifacts
  • Better scalability for high-quality shadows

Thus, the visual improvement often outweighs the modest resource increase.

Advantages of Variance Shadow Maps

Smooth Penumbras

Because VSM uses statistical variance and linear filtering, it naturally produces soft transitions at shadow boundaries — closely resembling real-world lighting conditions.

Efficient Filtering

VSM supports all hardware-accelerated texture filtering methods:

  • Bilinear
  • Trilinear
  • Anisotropic
  • Mipmap-based filtering

This means that soft shadows can be computed without additional sampling or blurring passes, greatly improving performance.

Reduced Aliasing

Filtering over depth variance averages out aliasing artifacts that plague traditional shadow maps, yielding visually smoother results.

GPU-Friendly Implementation

Since VSM relies on simple arithmetic operations and standard texture lookups, it fits naturally into modern GPU pipelines and can be integrated with existing shadow systems.

Limitations and Artifacts

Despite its advantages, Variance Shadow Maps are not perfect. There are specific conditions where they produce visual errors.

Light Bleeding

The main issue with VSM is light bleeding — where light appears to leak into regions that should be fully shadowed.

This happens because Chebyshev’s inequality provides only an upper bound of the true shadowing probability. In regions of high variance (large depth differences), this bound becomes loose, and falsely bright regions appear.

Example (Conceptual):

Actual Scene:         VSM Output:
█████████ Shadow      ███░░░░░░ Light Bleeding

Mitigation Techniques

Several techniques have been proposed to reduce light bleeding:

  1. Variance Clamping:
    Limit variance to a maximum threshold to prevent extreme probability estimates (a common remapping variant is sketched after this list).
  2. Depth Biasing:
    Slightly offset depth comparisons to favor occlusion in uncertain regions.
  3. Exponential Variance Shadow Maps (EVSM):
    Store exponential moments instead of raw depth moments, reducing light leakage significantly.
  4. Layered or Moment Shadow Maps:
    Extend VSM by storing more moments (e.g., 4th moment) to capture complex distributions.
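
One widely used remapping (the variant referenced in item 1) cuts small pMax values to zero and rescales the rest, trading a slightly darker penumbra for far less bleeding. A minimal sketch, with bleedReduction as a tunable constant (illustrative name, typically somewhere around 0.2–0.5):

float linstep(float lo, float hi, float v) {
    return clamp((v - lo) / (hi - lo), 0.0, 1.0);
}

// Values of pMax below the threshold become fully shadowed;
// the remainder is rescaled back to the [0, 1] range
float reduceLightBleeding(float pMax, float bleedReduction) {
    return linstep(bleedReduction, 1.0, pMax);
}

Applying this just after the Chebyshev estimate darkens the loose upper bound exactly where it tends to leak.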

Precision and Range Issues

Because VSM depends on floating-point arithmetic, numerical precision can cause problems at very large or very small depth ranges. Using floating-point textures (RG32F) helps, but precision must still be managed carefully.
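
One common precaution is to normalize depth to the light’s visible range before computing the moments, so the texture’s full representable range is used. A sketch, with viewSpaceDepth, nearPlane, and farPlane as assumed inputs:

// Map depth into [0, 1] across the light frustum before storing moments
float normDepth = (viewSpaceDepth - nearPlane) / (farPlane - nearPlane);
outVSM = vec2(normDepth, normDepth * normDepth);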

Extensions of VSM

Since its introduction, VSM has inspired several extensions and improvements to further enhance shadow quality and reduce artifacts.

Exponential Variance Shadow Maps (EVSM)

EVSM replaces the linear depth values with exponential warps e^(cz), increasing the separation between near and far values. This significantly reduces light bleeding while maintaining filterability.
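
A sketch of the light-pass change, assuming a single positive warp with constant c (practical implementations often store a negative warp as well, and c is scene-dependent):

const float c = 40.0;                    // warp exponent; needs 32-bit float storage
float warped = exp(c * depth);           // e^(cz) stretches depth separation
outEVSM = vec2(warped, warped * warped); // moments of the warped depth
// The lighting pass runs the same Chebyshev test against exp(c * t)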

Layered Variance Shadow Maps (LVSM)

By storing multiple variance layers (each representing different depth ranges), LVSM captures multimodal depth distributions, reducing bleeding in complex occlusion cases.

Moment Shadow Maps (MSM)

Moment Shadow Maps store up to four statistical moments of the depth distribution. By using higher-order moments, they more accurately reconstruct the underlying depth probability, nearly eliminating light bleeding at the cost of memory.

Practical Considerations for Game Developers

Choosing Texture Formats

  • Use RG16F for memory efficiency, or RG32F for precision.
  • Avoid integer formats, as filtering would not work correctly.

Managing Shadow Bias

  • Apply a depth bias during the shadow pass to minimize shadow acne.
  • Adjust based on light direction and scene scale.

Combining with Screen-Space Techniques

VSM can be combined with screen-space shadows or ambient occlusion to enhance realism further, especially in large outdoor environments.

Balancing Quality and Performance

For dynamic scenes:

  • Use coarser (higher-numbered) mip levels for distant shadows.
  • Use the full-resolution base level for sharp contact shadows.

This adaptive sampling maintains smooth performance and consistent visual fidelity.

Conceptual Diagram Summary

To visualize how VSM works compared to standard shadow maps, imagine the following layers:

LIGHT VIEW ------------------------------
| Object casts depth shadow             |
| Store: E[z], E[z^2]                   |
-----------------------------------------

CAMERA VIEW -----------------------------
| Transform fragment to light space     |
| Sample (E[z], E[z^2])                 |
| Compute variance                      |
| Apply Chebyshev’s inequality          |
| Result: smooth shadow intensity       |
-----------------------------------------

In contrast, a traditional shadow map only compares single depth values, resulting in binary hard-edged shadows.

Conclusion

Variance Shadow Maps (VSM) revolutionize how soft shadows are rendered in real-time graphics. By treating depth as a random variable and using Chebyshev’s inequality, VSM enables efficient, hardware-filtered shadow rendering that produces visually appealing, smooth penumbras.

Through simple extensions and careful parameter tuning, VSM provides a powerful balance between quality, performance, and implementation simplicity.

While not perfect — particularly due to light bleeding — its benefits have made VSM a foundational technique in the evolution of shadow rendering, influencing more advanced methods like EVSM and Moment Shadow Maps.

In summary:

  • Core Idea: Store both depth and squared depth.
  • Mathematical Basis: Chebyshev’s inequality for variance-based visibility estimation.
  • Advantages: Soft shadows, easy filtering, reduced aliasing.
  • Limitations: Light bleeding, variance artifacts.

For developers seeking realistic soft shadows without heavy computational cost, Variance Shadow Maps remain one of the most effective and elegant techniques available.
