
Z-Buffer Quirks: A Shallow Discussion

  • Writer: farcaptain
  • Feb 22, 2023
  • 4 min read

Abstract

In 3D rendering, the Z-buffer algorithm is commonly used to determine which objects are visible and which are obscured by other objects. This article aims to discuss the precision issue in the Z-buffer algorithm, which affects the rendering quality of the final image.

Introduction

In 3D rendering, the Z-buffer algorithm is an efficient method for determining the visibility of objects. By comparing the Z values (depth) of each pixel on the screen with the Z values of the objects in the scene, the algorithm can decide which objects are visible and which are hidden. However, the precision of the Z-buffer can affect the accuracy of the rendering, especially in scenes with overlapping objects.


Precision of Depth Value

Depth value refers to the distance between a point and the observer's viewpoint in the 3D scene. In the Z-buffer algorithm, the depth value is represented by the Z coordinate of the point. The precision of the depth value is determined by the range of values that can be stored in the Z-buffer and the resolution of the Z-buffer. The depth value precision can be affected by the number of bits used to store the Z-buffer and the scaling of the Z values.

The precision of the Z-buffer can also be affected by rounding errors. When a floating-point number is converted to an integer, the decimal part is truncated, which can cause rounding errors. Rounding errors can accumulate and cause the Z values in the Z-buffer to deviate from their true values, leading to inaccurate depth testing and rendering artifacts.

To approach this quantitatively: the Z-buffer is actually an integer array under the hood. Let's use a common 24-bit Z-buffer as an example; it can distinguish 2^24 = 16777216 depth values, ranging from 0 to 16777215 (2^24 − 1).
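To see what that integer storage means in practice, here is a minimal sketch (a hypothetical helper for illustration, assuming a 24-bit buffer with round-to-nearest conversion):

```python
# Sketch: quantizing a floating-point depth in [0, 1] into a 24-bit
# integer Z-buffer slot. Depths closer together than one step collapse
# into the same integer - the rounding error discussed above.
MAX_24BIT = 2**24 - 1  # 16777215

def quantize_depth(depth):
    """Map a normalized depth in [0, 1] to a 24-bit integer."""
    return round(depth * MAX_24BIT)

# Two distinct depths less than one step (1/16777215) apart
# end up in the same Z-buffer slot:
a = quantize_depth(0.50000000)
b = quantize_depth(0.50000002)
print(a == b)  # the fractional difference is lost
```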

This might look good enough. In fact, you may wonder: what is so important about the Z-buffer resolution, or the precision of the depth value, anyway?


Depth Conflict

Depth conflict occurs when two or more objects in a scene have the same Z value. In such cases, the Z-buffer algorithm may not be able to accurately determine which object should be displayed in front of the other. This is often referred to as "z-fighting" and is a very common artifact in 3D video games. Depth conflict can technically be alleviated by increasing the precision of the Z-buffer, but this comes at the cost of increased memory usage and processing time.

This seems bad. How do we make good use of all these 16777216 depth values without having to buff up our graphics card infinitely?


Perspective Projection

I forgot to mention: the Z-buffer algorithm can be used with both orthographic and perspective projections, but we are mainly going to talk about perspective projection today. We'll use a lot of perspective-projection terms in the following sections.

How is depth kept in Z-Buffer?

Let's say the depth value in NDC (Normalized Device Coordinates) ranges from 0 to 1, which means we want the value to land within 0~1 after projection division. Capital Z indicates the depth value after projection division; lowercase z indicates the depth value in world space.


Z = (a·z + b) / z = a + b/z    - Equation 1

This might make little sense at first glance. But before we do projection division, what is stored in the z component of the coordinates is not exactly depth but a·z + b, and the fourth component we have preserved (the w coordinate) still stores z, the original z in world space.

You might wonder: what are these a and b doing here? Let's substitute the near clip plane distance n and the far clip plane distance f into the equation. The near plane should map to Z = 0 and the far plane to Z = 1:

a + b/n = 0
a + b/f = 1

Solving these two equations gives:

a = f / (f − n)
b = −n·f / (f − n)

As we can see, a and b are constants that depend only on the near and far clipping planes.
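The derivation can be checked numerically with a small sketch (assuming the 0~1 depth convention used in this article; the function names are mine):

```python
# Sketch: compute the projection constants a and b from the near and
# far plane distances, then verify that Equation 1 maps z = n to 0
# and z = f to 1.
def projection_constants(n, f):
    a = f / (f - n)
    b = -n * f / (f - n)
    return a, b

def ndc_depth(z, n, f):
    # Equation 1: Z = (a*z + b) / z = a + b / z
    a, b = projection_constants(n, f)
    return a + b / z

n, f = 1.0, 10000.0
print(ndc_depth(n, n, f))  # 0.0 at the near plane
print(ndc_depth(f, n, f))  # ~1.0 at the far plane
```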

Now that we have Equation 1, representing the projected depth ranging from 0 to 1, the rest is to cope with the physical limitation of the Z-buffer: we have to map this 0~1 depth value onto the 16777216 actual depth values.


z-buffer = round(Z × (2^24 − 1))    - Equation 2


This is when the rounding errors we mentioned earlier kick in, as Z-buffer is an integer array after all.

Let's use a simplified diagram.

When the depth is smaller, the Z-buffer value changes faster per unit of depth. Thus the part of the world closer to the camera gets more precision, whereas the farther part gets less, which is actually quite intuitive.
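The nonlinearity is easy to see numerically. In this sketch (n = 1 and f = 100 are my example values), half of the 0~1 range is already spent by z = 2:

```python
# Sketch: projected depth Z for a few world-space depths, n = 1,
# f = 100. Most of the 0~1 range is consumed close to the camera.
n, f = 1.0, 100.0
a = f / (f - n)
b = -n * f / (f - n)
for z in [1, 2, 5, 10, 50, 100]:
    Z = a + b / z  # Equation 1 after projection division
    print(f"z = {z:3}, Z = {Z:.4f}")
```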

If we want to describe the world-space distance between two adjacent depth values in the Z-buffer, we can take the derivative of Equation 1 and divide one Z-buffer step (1 / (2^24 − 1)) by it, which gives:

Δz ≈ z² × (f − n) / (n × f × (2^24 − 1))
If we don't "manage" our Z-buffer properly, we will find ourselves running out of precision. To get a feel for how fast precision drops off, check out Learning to Love Your Z-Buffer. Let's give an example: with the near plane n = 1 and the far plane f = 10000, we apply Equation 2:

  • z = 1000, z-buffer = 16762114, 15102 values left

  • z = 9000, z-buffer = 16777029, only 187 values left

  • z = 10000, z-buffer reaches the maximum of 16777215.

As we can tell, within the range of 9000~10000 we have only 187 usable values. Roughly speaking, the z-buffer only changes once for every 5 units of z: in this range, two depths need to be about 5 units apart for the Z-buffer to tell them apart.
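These numbers can be reproduced with a short, self-contained sketch (assuming the 24-bit, round-to-nearest mapping of Equation 2; the function names are mine):

```python
# Check the worked example: n = 1, f = 10000, 24-bit Z-buffer.
MAX_24BIT = 2**24 - 1  # 16777215

def zbuffer_value(z, n=1.0, f=10000.0):
    a = f / (f - n)
    b = -n * f / (f - n)
    Z = a + b / z                # Equation 1
    return round(Z * MAX_24BIT)  # Equation 2

print(zbuffer_value(1000))   # 16762114
print(zbuffer_value(9000))   # 16777029
print(zbuffer_value(10000))  # 16777215, the maximum

# World-space distance covered by one Z-buffer step at depth z:
# delta_z ~ z^2 * (f - n) / (n * f * (2^24 - 1))
def step_size(z, n=1.0, f=10000.0):
    return z * z * (f - n) / (n * f * MAX_24BIT)

print(round(step_size(9000), 1))  # ~4.8: depths ~5 units apart collide
```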


How do we get it to be more precise?

Now that we know the cause, we could always implement our own depth testing on the CPU (good luck with that), or we can rely on some practical experience.

Shorten the distance between the near and far planes; some articles suggest that the far/near ratio should not exceed 1000 (as recommended in the book ゲームプログラマになる前に覚えておきたい技術, "Techniques You Should Know Before Becoming a Game Programmer", written in Japanese by a Sega engineer. Just trust me on this one.)
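As a quick sanity check of that advice, we can compare the spacing between adjacent 24-bit Z-buffer steps near the far plane for two far/near ratios (a sketch; the n = 10 setup is my example, not from the book):

```python
# Sketch: spacing between adjacent 24-bit Z-buffer steps at depth z,
# delta_z ~ z^2 * (f - n) / (n * f * (2^24 - 1)).
MAX_24BIT = 2**24 - 1

def step_size(z, n, f):
    return z * z * (f - n) / (n * f * MAX_24BIT)

# f/n = 10000: about 6 world units per step at the far plane.
print(round(step_size(10000, n=1, f=10000), 1))
# f/n = 1000 (pulling the near plane out to n = 10): about 0.6 units,
# roughly a tenfold improvement for free.
print(round(step_size(10000, n=10, f=10000), 1))
```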

There are more methods for specific scenarios, but the first step in tackling this is always understanding the cause and the limitations behind it. Hopefully this article helped a little bit with that.



 
 
 
