Parallax Occlusion Mapping in GLSL

This lesson shows how to implement several Parallax Mapping techniques in GLSL and OpenGL (the same techniques are used to implement Parallax Mapping in DirectX). The following techniques will be described: Parallax Mapping, Parallax Mapping with Offset Limiting, Steep Parallax Mapping, Relief Parallax Mapping, and Parallax Occlusion Mapping (POM). This article also shows how to implement self-shadowing (soft shadows) in combination with Parallax Mapping. The following images depict the results of the Parallax Mapping techniques in comparison to simple lighting and Normal Mapping:

Basics of Parallax Mapping

In computer graphics, Parallax Mapping is an enhanced version of Normal Mapping that not only changes the behavior of lighting, but also creates the illusion of 3D detail on flat polygons. No additional geometry is generated. The previous images show Parallax Mapping compared to Normal Mapping. You might think that Parallax Mapping offsets the initial geometry, but instead it offsets the texture coordinates that are then used to access the textures with diffuse colors and normals.

To implement Parallax Mapping you need a heightmap texture. Each pixel of the heightmap stores the height of the surface at that point. The height from the texture may instead be interpreted as the amount of depth into the surface; in that case you have to invert the values in the initial heightmap. Parallax Mapping in this tutorial will use the values from the heightmap as depth values: black (0) in the heightmap represents the absence of a hole, and white (1) represents a hole of maximum depth.
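If your source heightmap stores height (white = high) rather than depth, the inversion can also be done at sampling time instead of preprocessing the texture. A minimal sketch; the sampler and function names are illustrative, not taken from this lesson's shaders:

```glsl
// Fragment-shader snippet: interpreting a height-storing texture as a depth map.
uniform sampler2D u_heightTexture; // illustrative name

float sampleDepth(vec2 texCoords)
{
   // Invert so that 0.0 means "no hole" (surface level)
   // and 1.0 means a hole of maximum depth.
   return 1.0 - texture(u_heightTexture, texCoords).r;
}
```

Baking the inversion into the texture itself, as this tutorial does, saves the subtraction on every sample.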

The following examples of Parallax Mapping use three textures: a heightmap, a diffuse texture, and a normal map. Usually the normal map is generated from the heightmap. In our example the heightmap is treated as a depth map, so before generating the normal map you have to invert the heightmap. You can combine the normal map and the heightmap into one texture (height in the alpha channel), but for clarity this lesson uses three separate textures. Example textures for Parallax Mapping are shown here:
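For reference, a hedged sketch of both texture layouts; the sampler and function names are assumptions for illustration only:

```glsl
// Separate textures, as used in this lesson:
uniform sampler2D u_diffuseTexture;
uniform sampler2D u_normalTexture;
uniform sampler2D u_heightTexture;

// Combined variant: normal in RGB, height/depth in A (saves one fetch).
uniform sampler2D u_normalHeightTexture;

void sampleSurface(vec2 uv, out vec3 normal, out float depth)
{
   vec4 normalHeight = texture(u_normalHeightTexture, uv);
   normal = normalize(normalHeight.rgb * 2.0 - 1.0); // unpack from [0,1] to [-1,1]
   depth  = normalHeight.a;
}
```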

The main task of the Parallax Mapping techniques is to modify the texture coordinates in such a way that a flat surface looks three-dimensional. The effect is calculated in the fragment shader for each visible fragment of the object. Look at the following image. Level 0.0 represents the absence of holes; level 1.0 represents holes of maximum depth. The real geometry of the object is unchanged and always lies at level 0.0. The curve represents the values stored in the heightmap, and how these values are interpreted.

The current fragment is highlighted with a yellow square. The fragment has texture coordinates T0. Vector V is the vector from the camera to the fragment. Sample the heightmap at texture coordinates T0 and you get the value H(T0) = 0.55. The value isn't equal to 0.0, so the fragment doesn't lie on the surface: there is a hole below it. Therefore you have to extend vector V to the closest intersection with the surface defined by the heightmap. That intersection lies at depth H(T1) and at texture coordinates T1. T1 is then used to sample the diffuse texture and the normal map.

So the main purpose of all Parallax Mapping techniques is to precisely calculate the intersection point between the camera vector V and the surface defined by the heightmap.
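The simplest technique, basic Parallax Mapping, approximates this intersection with a single heightmap sample and offsets the texture coordinates along the view vector. A minimal sketch under the depth convention above; u_parallaxScale is an assumed parameter, and the exact implementations of each technique are covered later in the lesson:

```glsl
// V: normalized tangent-space vector from the fragment toward the camera.
// T: original texture coordinates of the fragment (T0 in the figure).
uniform sampler2D u_heightTexture; // depth map: 0 = surface, 1 = deepest
uniform float u_parallaxScale;     // e.g. 0.1; controls the apparent depth

vec2 parallaxMapping(in vec3 V, in vec2 T)
{
   // One sample only: shift the coordinates proportionally to the
   // depth at T0, as a rough guess of the true intersection T1.
   float depth = texture(u_heightTexture, T).r;
   vec2 texCoordOffset = u_parallaxScale * V.xy / V.z * depth;
   return T - texCoordOffset;
}
```

The division by V.z makes the offset grow at grazing angles, which is what produces the parallax effect, but it also causes the artifacts that the later techniques (Offset Limiting, Steep Parallax Mapping, and so on) are designed to fix.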

Basic shaders for Parallax Mapping

As with Normal Mapping, the Parallax Mapping calculation is performed in tangent space, so the vectors to the light (L) and to the camera (V) have to be transformed into tangent space. After the new texture coordinates are calculated by a Parallax Mapping technique, those coordinates are used to calculate self-shadowing, to fetch the fragment's color from the diffuse texture, and for Normal Mapping.
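A possible vertex-shader sketch of this tangent-space transformation; the attribute and uniform names here are assumptions for illustration and are not necessarily those used by this lesson's shaders:

```glsl
#version 330 core

layout(location = 0) in vec3 a_position;
layout(location = 1) in vec3 a_normal;
layout(location = 2) in vec3 a_tangent;
layout(location = 3) in vec2 a_texCoord;

uniform mat4 u_modelViewMatrix;
uniform mat4 u_projectionMatrix;
uniform mat3 u_normalMatrix;   // inverse-transpose of the model-view 3x3
uniform vec3 u_lightPosition;  // in view space; the camera sits at the origin

out vec2 v_texCoord;
out vec3 v_toLightTangent;
out vec3 v_toCameraTangent;

void main()
{
   vec4 positionView = u_modelViewMatrix * vec4(a_position, 1.0);

   // Build the TBN basis in view space.
   vec3 N = normalize(u_normalMatrix * a_normal);
   vec3 T = normalize(u_normalMatrix * a_tangent);
   vec3 B = cross(N, T);

   // View-space vectors from the vertex to the light and to the camera.
   vec3 toLight  = u_lightPosition - positionView.xyz;
   vec3 toCamera = -positionView.xyz;

   // Dotting with T, B, N applies the transpose of the TBN matrix,
   // which transforms the vectors into tangent space.
   v_toLightTangent  = vec3(dot(toLight, T),  dot(toLight, B),  dot(toLight, N));
   v_toCameraTangent = vec3(dot(toCamera, T), dot(toCamera, B), dot(toCamera, N));

   v_texCoord  = a_texCoord;
   gl_Position = u_projectionMatrix * positionView;
}
```

The fragment shader then normalizes the interpolated v_toLightTangent and v_toCameraTangent before using them as L and V.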

In this tutorial, Parallax Mapping is implemented in the shader function parallaxMapping(), self-shadowing in parallaxSoftShadowMultiplier(), and Blinn-Phong lighting with Normal Mapping in normalMappingLighting(). The following vertex and fragment shaders may be used as a base for Parallax Mapping and self-shadowing. The vertex shader transforms the vectors to the light and to the camera into tangent space. The fragment shader calls the Parallax Mapping technique, then calculates the self-shadowing factor, and finally calculates the lighting: