Rendering equation

In computer graphics, materials describe the way light interacts with the surface of an object. The physics of light reflection, refraction, scattering and the other phenomena involved, all the way down to quantum mechanics, can be extremely complex. Computer graphics simplifies this into a number of models that describe some of these components using simple functions, either data-driven or procedural.

One of the most generic descriptions of the material-light interaction is the rendering equation:

    \[ L_o = L_e + \int_\Omega \, f_r \, L_i \, \cos \theta_i \, d\omega_i \]

It describes the material-light interaction in terms of incoming radiance, outgoing radiance and material reflectance, each of which depends on a number of parameters.

Components of the rendering equation

The rendering equation consists of a number of components, each of which is a function itself:

[Figure: Rendering equation]

Outgoing radiance

The output function L_o represents the outgoing radiance: the amount of light of a particular wavelength \lambda leaving a point \textbf{x} on the surface in a particular direction \omega_o. It is therefore defined as a function:

    \[ L_o = f ( \textbf{x}, \omega_o, \lambda ) \]

Emitted radiance

The L_e term represents the amount of light of wavelength \lambda emitted from a point \textbf{x} on the surface in a particular direction \omega_o:

    \[ L_e = f ( \textbf{x}, \omega_o, \lambda ) \]

In practical applications, this term is:

  • zero for any surface that does not represent a light source (lights are often represented as point sources, either explicitly placed in the scene or generated by sampling non-point lights; in this case the term is always zero)
  • an approximation of environment light bouncing off other surfaces, which would otherwise be expensive to compute.

Integral over hemisphere

The second part of the equation represents the overall reflected light radiance L_r, as an integral over a hemisphere \Omega, centered on a point \textbf{x} on the surface, and oriented along the surface normal \textbf{n}:

    \[ L_r = \int_\Omega \, f(\omega_i) \, d\omega_i \]

This integral determines the amount of light that is reflected in direction \omega_o from all light hitting a particular point \textbf{x} on a surface — the hemisphere describes all the directions that are not shadowed by the surface itself.


In practice, this integral is usually approximated using a normalised sum of samples:

    \[ L_r \approx \frac{1}{n} \sum_{k=1}^{n} f(\omega_k) \]

Each sample direction \omega_k is determined using a form of importance sampling on the terms being integrated over.
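
As an illustration, here is a minimal Python sketch of such an estimator. It assumes uniform hemisphere sampling (with pdf 1/2\pi) rather than full importance sampling, and the division by the pdf appears as the 2\pi factor in the normalisation:

    import math
    import random

    def sample_uniform_hemisphere(normal):
        # Rejection-sample a uniform direction on the unit sphere, then flip
        # it into the hemisphere oriented along `normal` (pdf = 1 / 2pi).
        while True:
            d = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
            r2 = sum(x * x for x in d)
            if 1e-6 < r2 <= 1.0:
                break
        r = math.sqrt(r2)
        d = tuple(x / r for x in d)
        if sum(a * b for a, b in zip(d, normal)) < 0.0:
            d = tuple(-x for x in d)
        return d

    def estimate_reflected_radiance(f, normal, n=4096):
        # Average the integrand over n sampled directions and divide by the
        # sampling pdf; for uniform sampling this is a factor of 2pi.
        total = sum(f(sample_uniform_hemisphere(normal)) for _ in range(n))
        return total * (2.0 * math.pi) / n

    # Sanity check: integrating cos(theta) over the hemisphere should give pi.
    print(estimate_reflected_radiance(lambda w: max(w[2], 0.0), (0.0, 0.0, 1.0)))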

BRDF

The Bidirectional Reflectance Distribution Function is a function describing what proportion of the incoming light of wavelength \lambda from direction \omega_i is reflected in direction \omega_o, given the surface material at point \textbf{x}:

    \[ f_r = f(\textbf{x}, \omega_i, \omega_o, \lambda) \]

In practice, this function is often represented procedurally, using the Lambert or Phong reflection models. More complex representations include the Disney material model or even physically measured values.
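
For example, here is a small Python sketch of the two classic procedural models: a Lambertian BRDF and the specular lobe of the Phong model (shown unnormalised; the helper names are mine):

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def lambert_brdf(albedo):
        # A Lambertian surface reflects uniformly in all directions; the 1/pi
        # factor keeps the material energy-conserving for albedo <= 1.
        def f_r(w_i, w_o):
            return albedo / math.pi
        return f_r

    def phong_specular(w_i, w_o, normal, shininess):
        # Specular lobe of the Phong model: reflect the incoming direction
        # about the normal and raise its alignment with w_o to a power.
        r = tuple(2.0 * dot(w_i, normal) * n - wi for n, wi in zip(normal, w_i))
        return max(dot(r, w_o), 0.0) ** shininess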

The BRDF has a number of interesting properties [1]:

  • it is always positive: f_r(\omega_i, \omega_o) \ge 0
  • it is symmetric (i.e., it obeys Helmholtz reciprocity): f_r(\omega_i, \omega_o) = f_r(\omega_o, \omega_i)
  • and it conserves energy (at no point can reflection produce additional light): \int_\Omega \, f_r \, \cos \theta_i \, d\omega_i \le 1
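
The energy conservation property is easy to verify numerically for a direction-independent BRDF such as the Lambertian model above. A small sketch (the spherical-coordinates parameterisation d\omega = \sin\theta \, d\theta \, d\phi is standard; the function name is mine):

    import math

    def hemisphere_albedo(f_r_value, steps=4096):
        # Integrate f_r * cos(theta) over the hemisphere in spherical
        # coordinates; for a constant f_r the phi integral is a factor 2pi.
        dtheta = (math.pi / 2.0) / steps
        total = 0.0
        for i in range(steps):
            theta = (i + 0.5) * dtheta
            total += f_r_value * math.cos(theta) * math.sin(theta) * dtheta
        return 2.0 * math.pi * total

    # A Lambertian BRDF with albedo 0.8 reflects exactly 80% of the incoming
    # light: the result is <= 1, so energy is conserved.
    print(hemisphere_albedo(0.8 / math.pi))  # ~0.8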

Incoming radiance

The term L_i describes the radiance of light of wavelength \lambda arriving at point \textbf{x} on the surface from direction \omega_i:

    \[ L_i = f(\textbf{x}, \omega_i, \lambda) \]

While the incoming radiance L_i is a continuous function, in practical applications it is often replaced by discrete rays determined using importance sampling of the sources of light (or reflections) in the scene.

Incidence angle factor

The incidence angle \theta_i represents the angle between the incoming ray direction \omega_i and the surface normal \textbf{n} at point \textbf{x}.

The cosine of this angle then describes the attenuation of the incoming radiance caused by the relative orientation of the surface to the incoming ray. Equivalently, when both \omega_i and \textbf{n} are unit vectors, this factor can be computed as their dot product:

    \[ \cos \theta_i = \omega_i \cdot \textbf{n} \]
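
In code this is a one-liner; the sketch below (assuming unit-length vectors) also clamps the result to zero, a practical detail not present in the equation itself, so that light arriving from below the surface contributes nothing:

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def cos_theta_i(w_i, normal):
        # Valid only when both vectors are unit length; the clamp discards
        # light arriving from below the surface.
        return max(dot(w_i, normal), 0.0)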

Parameters of the rendering equation

The rendering equation has a large number of inputs and parameters. In practice, both its terms and its parameters are often simplified or approximated to make the equation practical for use in computer graphics.

This section attempts to describe some of them, and to provide more details about how they are used in practice.

Implicit time parameter

One parameter that has not been mentioned in the equation above is time — as all of the terms of the rendering equation can be time-dependent, it is implied in all parts of the equation.

Practical implementations of rendering algorithms in computer graphics almost always quantise time, making a continuous time-dependent function (i.e., animation) into a series of discrete frames. The time value can then be handled in two ways:

  • dealing with a frame as a static image representing the scene at a particular time t. This effectively simulates a camera with an infinitely fast shutter speed, freezing the motion into a perfectly sharp image.
  • simulating a real camera shutter by implementing motion blur. This turns the time parameter into a continuous interval, which has to be discretized using sampling to produce an approximate result, making the rendering significantly more computationally expensive (see the sketch below).
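
A minimal sketch of the second approach, assuming a hypothetical shade(t) function that returns the pixel value with the scene posed at time t:

    import random

    def pixel_with_motion_blur(shade, shutter_open, shutter_close, samples=16):
        # Approximate the integral over the shutter interval by averaging
        # the scene evaluated at randomly jittered times.
        total = 0.0
        for _ in range(samples):
            total += shade(random.uniform(shutter_open, shutter_close))
        return total / samples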

Wavelength and colours

Another parameter present in all terms of the rendering equation is the wavelength of light. The distribution (spectrum) of wavelengths in incoming light determines the colour of the light, and the distribution of reflected wavelengths of a material determines its visible colour.

In rendering, we are interested only in the wavelengths of visible light, which is a small portion of the full electromagnetic spectrum. However, even then it is a continuous interval which requires sampling.

[Figure: Visible spectrum]

Fortunately, human colour vision relies on three types of cone cells, each responding to a particular range of wavelengths:

[Figure: Cone response]

These are then approximated by three discrete primary colours (red, green and blue), leading to a relatively simple additive colour model. These three colours define the 3-dimensional RGB colour space, which contains “every colour” that can be represented with the given set of primaries [2]. For the great majority of cases, we can then approximate the continuous spectrum by three discrete wavelengths/colours, one for each primary [3].
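
Under this approximation, evaluating the interaction of a light colour with a surface colour reduces to a component-wise product, as in this small sketch:

    def reflect_rgb(light_rgb, surface_rgb):
        # With the spectrum reduced to three samples, reflection is evaluated
        # independently per primary.
        return tuple(l * s for l, s in zip(light_rgb, surface_rgb))

    # A white light on a reddish surface:
    print(reflect_rgb((1.0, 1.0, 1.0), (0.9, 0.3, 0.2)))  # (0.9, 0.3, 0.2)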

Spatial position

All terms of the rendering equation have the spatial position \textbf{x} of the surface sample as one of their parameters, either explicitly, or implicitly through its use in the computation of other parameters.

The ray directions \omega_i and \omega_o are computed by intersecting a ray with the surface, by rasterizing an object, or by determining a light vector from the point on the surface. Apart from special cases (e.g., distant lights), these direction vectors are implicitly dependent on the spatial position \textbf{x}.

The incoming radiance L_i depends on the spatial position explicitly. This allows it to account for the distance from the light (i.e., the inverse square law) and for the participating medium (i.e., attenuation). Similarly, the incidence angle factor \cos \theta_i and the hemisphere integration domain \Omega depend on the surface normal \textbf{n}, which is in turn associated with the point \textbf{x} on the rendered surface.

Finally, the BRDF f_r describes the surface’s light-interaction properties at point \textbf{x} (see below).

Lights

Even though the integral over the hemisphere \Omega is continuous, in practice it is almost always evaluated using discrete sampling, converting the continuous integral into a weighted sum of discrete values [4].

In simple cases (e.g., the legacy OpenGL lighting), light sources are described as ideal points, effectively determining a single sample per light. In more complex cases (e.g., area lights, or environment lighting), importance sampling of light sources needs to be used to approximate the result of the integration over \Omega.
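
The point-light case makes the single-sample evaluation explicit. A minimal sketch (function names are mine; the inverse-square falloff folds the distance dependence into L_i):

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def direct_point_light(point, normal, light_pos, intensity, f_r):
        # A point light collapses the hemisphere integral to a single sample:
        # L_r = f_r * L_i * cos(theta_i), with L_i attenuated by the squared
        # distance to the light.
        to_light = tuple(l - p for l, p in zip(light_pos, point))
        dist2 = dot(to_light, to_light)
        w_i = tuple(x / math.sqrt(dist2) for x in to_light)
        cos_theta = max(dot(w_i, normal), 0.0)
        return f_r * (intensity / dist2) * cos_theta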

BRDF (bidirectional reflectance distribution function)

The BRDF of a surface point \textbf{x} is a 4-dimensional function describing the surface’s reflectance properties. As the BRDF can also vary across the surface of the object (e.g., patterns, different materials, colours) and with wavelength, the total number of dimensions needed to fully describe a surface is 7.

As a 7D function is hard to represent explicitly, the BRDF is often represented as a composite function, with each element based on a reduced number of parameters. There are several ways these parameters can be described:

  • using a volumetric (3D) function, explicitly defining the parameter’s properties in 3D space. This can be represented either procedurally or using a volumetric texture.
  • as the surface of an object is usually a 2D manifold in 3D space, it can be mapped onto a 2D texture, with the reflectance parameters represented as a simple 2D image (explicit or procedural); see the sketch after this list.
  • the simplest lighting models, such as the Lambertian shading model or the elements of the Phong shading model, convert the incoming parameters into a simple 1D function, approximating an isotropic material (one whose reflectance is symmetric under rotation about the surface normal)
  • and, finally, many of its parameters are often uniform across the surface (e.g., colour or emission)
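
As a sketch of the second option, here is a texture lookup returning a spatially-varying BRDF parameter (nearest-neighbour filtering; the names are hypothetical):

    def sample_texture(texture, u, v):
        # Nearest-neighbour lookup of a per-texel parameter value; `texture`
        # is a list of rows and (u, v) are coordinates in [0, 1).
        h, w = len(texture), len(texture[0])
        return texture[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

    # A 2x2 checkerboard of diffuse albedos; the Lambertian BRDF at (u, v)
    # would then be sample_texture(albedo_map, u, v) / pi.
    albedo_map = [[0.2, 0.8],
                  [0.8, 0.2]]
    print(sample_texture(albedo_map, 0.75, 0.25))  # 0.8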

The BRDF of an isotropic material can also be measured and represented in a compact form [5].

Limitations

The rendering equation is quite a generic description of material properties, but it still has a number of limitations. The most serious is probably its inability to represent subsurface scattering and light transmission. More complex models exist for these, but they might be a subject for another article.


  1. Simulating specific physical phenomena, such as fluorescence, may require the simulation to break some of these properties.
  2. This is still a significant simplification: colour models, primaries, their representation and their properties are a very complex topic.
  3. This model is sufficient for most light interaction required in computer graphics. Examples of phenomena that cannot be represented this way include rainbows, thin-film interference and iridescence.
  4. Frequency-based or area-based approximations can also be used for representing lights, such as spherical harmonics, which operate in the frequency domain, or radiosity, which operates on flat finite elements of the surfaces in the scene.
  5. Several databases are available online for research purposes, such as the CUReT database and the MERL BRDF database. Finally, a full reflectance field can also be captured, allowing a captured object to be rendered from an arbitrary viewpoint under arbitrary lighting.