Backface culling

Backface culling can be considered part of the hidden surface removal process of a rendering pipeline. Backface culling is the process by which polygons (usually triangles) that are not facing the camera are removed from the rendering pipeline. This is done by comparing the polygon's surface normal with the vector from the camera to the polygon.

Pseudo code

Backface culling is done almost exclusively on the GPU these days, so software culling is rarely needed. However, a basic backface culling algorithm can be demonstrated by the following pseudo code:

Create a Matrix representing the negative camera position (call it Mcp)
Create a Matrix representing the negative camera orientation (call it Mco)
Create a Matrix called Mcam and set it equal to Mcp*Mco

FOR EACH Mesh IN Meshes {

  //translate and rotate all the points into (negative) camera space
  //this effectively makes the camera's location equivalent to the origin
  FOR EACH Point IN Mesh.Points
  {
     Point = Point * Mcam
     //you may wish to place the points in another list
     //   so as not to overwrite the point list.
  }
  FOR EACH Face IN Mesh.Faces
  {
     //get the points of the face
     Point p1 = Face.Point1
     Point p2 = Face.Point2
     Point p3 = Face.Point3
     
     //compute face normal from the points
     Vector Edge1 = p3-p1;
     Vector Edge2 = p3-p2;
     Vector FaceNormal = Edge1.CrossProduct(Edge2);
     //check which way the face is pointing relative
     //  to the camera (located at 0,0,0)
     float dot = p1.DotProduct(FaceNormal);
     IF dot < 0 THEN render the face
  }

}
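For the curious, here is what the same algorithm might look like in C++. This is only a minimal sketch: the Vec3, Face, and Mesh types are invented here for illustration and stand in for whatever math library and mesh format you actually use.

#include <cstddef>
#include <vector>

//minimal 3D vector type with just the operations we need
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 cross(const Vec3& o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

//a face is three indices into the mesh's point list
struct Face { std::size_t i1, i2, i3; };

struct Mesh {
    std::vector<Vec3> points;
    std::vector<Face> faces;
};

//Returns the faces that survive culling. Assumes the points have already
//been transformed into camera space (camera at the origin) and that front
//faces are wound clockwise, as in the article's left-handed example.
std::vector<Face> cullBackfaces(const Mesh& mesh) {
    std::vector<Face> visible;
    for (const Face& f : mesh.faces) {
        const Vec3& p1 = mesh.points[f.i1];
        const Vec3& p2 = mesh.points[f.i2];
        const Vec3& p3 = mesh.points[f.i3];
        //face normal from two edges, exactly as in the pseudo code
        Vec3 normal = (p3 - p1).cross(p3 - p2);
        //a negative dot product means the face points back toward the camera
        if (p1.dot(normal) < 0.0f)
            visible.push_back(f);
    }
    return visible;
}

In practice you would normally let the GPU do this instead; in OpenGL, for example, glEnable(GL_CULL_FACE) together with glCullFace(GL_BACK) discards back-facing triangles in hardware.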

Detailed explanation

For discussion, let's create a very basic rendering pipeline:

  • Polygon data is defined in object space
  • Polygons in object space are transformed into world space
  • Polygons in world space are transformed into camera space
  • Backface culling is performed
  • Front-facing polygons are rendered

There are quite a few steps that have been left out of this process, and there are a couple of alternate orders in which these steps can be performed. But this order will suffice for now. Hopefully, I will be able to add information on alternate steps and ways of doing things after the basics have been explained.

Polygon data is defined in object space

[Diagram: a sphere divided into triangles; the triangles on the back side have been culled.]

The first step in backface culling, believe it or not, is the loading of the polygon data. How the polygon is mathematically described will impact how backface culling is done. Most of the time, the data structure that holds polygons is a mesh. If you are unfamiliar with meshes, here is the basic concept:

Take a look at the sphere in the diagram and note that it has been divided into triangles. Each triangle is called a "face". Each face is made of three points (vertices). A basic mesh is a list of points and a list of faces. Each point in a mesh is included in the mesh's point list only once. A face is a reference to three of those points, typically by index.
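As a concrete illustration, here is a tiny mesh built with the hypothetical Mesh, Vec3, and Face types from the sketch above: a square split into two triangles stores its four corner points exactly once, and each face refers to its points by index.

Mesh makeSquare() {
    Mesh square;
    //four shared corner points, each stored exactly once
    square.points = { {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0} };
    //two triangular faces; note that both share points 0 and 2
    square.faces = { {0, 1, 2}, {0, 2, 3} };
    return square;
}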

Front-facing vs. back-facing

Take a look at the sphere in the diagram again. As you can see, some of the triangles that make up the sphere can be seen, and some cannot. The triangles on the front of the sphere are visible, while the triangles on the back are not. The triangles that can be seen are considered to be facing toward the camera; these are defined as "front-facing". The triangles that cannot be seen are considered to be facing away from the camera; these are called "back-facing".

Which way does a polygon face?

[Diagram: a sample triangular face.]

Consider a polygon face (modelled in camera space) made from these three points:

  • p1(1,1,0)
  • p2(2,0,0)
  • p3(0,0,0)

We can determine which direction the face is facing by finding a vector that is perpendicular to the plane on which the face lies. To find this perpendicular vector, we must compute the cross product of two of the triangle's edges (see vector math):

  • (p3 - p1) x (p3 - p2)
  • = ([0,0,0]-[1,1,0]) x ([0,0,0]-[2,0,0])
  • = [-1,-1,0] x [-2,0,0]
  • = [(-1*0)-(0*0), (0*-2)-(-1*0), (-1*0)-(-1*-2)]
  • = [0-0, 0-0, 0-2]
  • = [0, 0, -2]

There we have it: the surface normal of the face is (0,0,-2)...but what does that mean? It means that the face is "pointing" toward the negative side of the z-axis. That is valuable information, but the context must be considered.
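Plugging the sample points into the earlier C++ sketch (again, using its made-up Vec3 type) reproduces the same result:

Vec3 p1{1, 1, 0}, p2{2, 0, 0}, p3{0, 0, 0};
Vec3 normal = (p3 - p1).cross(p3 - p2);   //yields {0, 0, -2}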

Context is everything

In a left-handed coordinate system, the x-axis values increase from left to right, the y-axis values increase from bottom to top, and the z-axis values increase as objects move away from the viewer (into the screen). In order to move an object closer to you, you have to decrease its z-axis value, so moving an object in the negative z-direction brings it closer to the viewer.

If the context in which we are viewing our face is a left-handed coordinate system, then the face that we just examined is facing toward the viewer because it is facing in the negative z-direction. If the context were a right-handed coordinate system, the face would be facing away from the viewer.
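For a real-world example of why the context matters: Direct3D has traditionally used a left-handed convention, while classic OpenGL uses a right-handed one, so the same surface normal can mean "front-facing" in one API and "back-facing" in the other.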

Winding up the points

Point-winding is the method of systematically defining the points of a face in a particular order. In order for the surface normal to be computed correctly, the points that define the face have to be in the correct order. In the sample face that we used, note that the points are defined in a clockwise direction when viewing the face from its front side. This is an important point. If the points were defined in a counter-clockwise order, the surface normal's direction would be reversed, which would indicate that the surface was facing away from us (in the left-handed coordinate system). If you care to, try flipping points 2 and 3 and computing the surface normal; the result is worked out below.
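With points 2 and 3 flipped, the face is defined in the order (p1, p3, p2), so the "third" point in our edge rule is now p2:

  • (p2 - p1) x (p2 - p3)
  • = ([2,0,0]-[1,1,0]) x ([2,0,0]-[0,0,0])
  • = [1,-1,0] x [2,0,0]
  • = [(-1*0)-(0*0), (0*2)-(1*0), (1*0)-(-1*2)]
  • = [0, 0, 2]

The normal now points toward the positive side of the z-axis, which in our left-handed context means the face is pointing away from the viewer.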

When winding the points in a mesh's faces, it is important to remember that the order of the points is always defined as if you are looking at the face from its front. If you are winding the faces on the back side of a model, rotate the model so that you are looking at it from the back, and define the points of each face in order while looking straight at the face.

Please note that the winding order can actually be clockwise or counter-clockwise; you just have to be consistent and adjust your definition of "front" and "back" appropriately.
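Hardware APIs expose this choice directly. In OpenGL, for example (assuming a valid GL context), both the front-face winding and the culled side are configurable:

glFrontFace(GL_CW);    //treat clockwise-wound faces as front-facing
glCullFace(GL_BACK);   //discard the back-facing ones
glEnable(GL_CULL_FACE);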

That's almost everything...

In the sample we just looked at, the points were modelled as if they were already in camera space. This was done for the sake of explanation. In reality, the points will usually be in object space, and need to be transformed into world space and then camera space before the surface normal can be computed.

Polygons in object space are transformed into world space

There is nothing special about this step when it comes to backface culling; it is mentioned for completeness. What happens here is that the object's points are transformed into their positions in world space.

Polygons in world space are transformed into camera space

What happens here is that the object's points are transformed into their positions in camera space. This is a key transformation in that the points are moved so that the camera is effectively at point (0,0,0), and the points (and faces) are now in their final positions before rendering.
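In matrix terms, this is the Mcam = Mcp * Mco step from the pseudo code above: translate everything by the negative of the camera's position, then rotate everything by the negative of the camera's orientation.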

Backface culling is performed

At this point, the camera's position is at (0,0,0), which also means that our points' locations and faces are now where they will be during the rendering phase, with the world origin at (0,0,0).

Vector dot product

[Diagram: the vector from the camera to a point on the triangle (dark red) compared with the triangle's surface normal (green).]

The vector dot product gives us information about the angle between two vectors (see vector math). When the dot product is 0, the vectors are perpendicular. When the dot product is greater than 0, the vectors form an angle that is less than 90 degrees. When the dot product is less than 0, the vectors form an angle that is greater than 90 degrees.
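A few quick examples with simple vectors:

  • [1,0,0] . [0,1,0] = 0, so the vectors are perpendicular
  • [1,0,0] . [1,1,0] = 1 > 0, an angle of 45 degrees (less than 90)
  • [1,0,0] . [-1,1,0] = -1 < 0, an angle of 135 degrees (greater than 90)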

For any given face, the two vectors that we need to dot are the surface normal of the face and the vector from the camera to one point on the face. Because the points have been translated into camera space, a point's position is that vector. The result tells us about the angle between the viewing direction to the face and the direction the face is oriented. If the dot product is less than 0, the surface is facing us. If the dot product is greater than or equal to 0, the surface cannot be seen.

In the diagram, the camera is located at (0,0,0), facing straight down the positive z-axis. The vector from the camera to one point on the triangle is represented by the dark red arrow. The triangle surface normal is represented by the green arrow (located at the origin!). We want to compare these two vectors to see if they are facing in (generally) the same direction. That is where the dot product comes in.
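To put some (made-up) numbers on the diagram: if the point on the triangle is at (0,0,5) and the surface normal is (0,0,-2), the dot product is (0*0) + (0*0) + (5*-2) = -10. The result is negative, meaning the angle between the two vectors is greater than 90 degrees, so the face is front-facing and should be rendered.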

Application of the dot product

Once the polygon's points are in camera space, we take the surface normal and dot it with one of the polygon's points. If the dot product is less than 0, the normal points generally back toward the camera, which means the face is front-facing, and we want to render the polygon. If the value is 0 or greater, we do not render the polygon.

An alternative backface culling pipeline order

There are other, more efficient ways of performing backface culling. One way that I have been thinking about uses the following pipeline order:

  • Polygon data is defined in object space
  • Camera is positioned in world space
  • Camera is transformed into object space
  • Backface culling is performed
  • Polygons in object space are transformed into world space
  • Polygons in world space are transformed into camera space
  • Front facing polygons are rendered

In this pipeline order, we can save a lot of computation by transforming only those points/faces that will actually be rendered. A rough sketch of the idea follows.
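Here is a minimal sketch of that culling step in C++, reusing the hypothetical Vec3, Face, and Mesh types from the first example. It assumes you have already computed the camera's position in the mesh's object space (i.e. run the world-space camera position through the inverse of the mesh's world transform; that step is omitted here).

//cameraObjPos is the camera's position expressed in this mesh's object space
std::vector<Face> cullInObjectSpace(const Mesh& mesh, const Vec3& cameraObjPos) {
    std::vector<Face> visible;
    for (const Face& f : mesh.faces) {
        const Vec3& p1 = mesh.points[f.i1];
        const Vec3& p2 = mesh.points[f.i2];
        const Vec3& p3 = mesh.points[f.i3];
        Vec3 normal = (p3 - p1).cross(p3 - p2);
        //same test as before, but the view vector is now the vector
        //from the (object-space) camera position to the face
        if ((p1 - cameraObjPos).dot(normal) < 0.0f)
            visible.push_back(f);
    }
    return visible;
}

One caveat: this simple test assumes the mesh's world transform is rigid (rotation and translation only). A non-uniform scale can change which faces count as front-facing, in which case the object-space and camera-space tests may disagree.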
