In computer graphics, materials describe the way *light interacts with the surface* of an object. The physics of light reflection, refraction, scattering and the other phenomena involved, all the way down to quantum mechanics, can be extremely complex. Computer graphics approaches simplify it to a number of models describing some of these components using simple functions — either data-driven or procedural.

One of the most generic descriptions of the material-light interaction is the rendering equation:

$$L_o(x, \omega_o, \lambda) = L_e(x, \omega_o, \lambda) + \int_\Omega f_r(x, \omega_i, \omega_o, \lambda)\, L_i(x, \omega_i, \lambda)\, (\omega_i \cdot n)\, d\omega_i$$

It describes the material-light interaction in terms of **incoming irradiance** $L_i$, **outgoing radiance** $L_o$ and **material reflectance** $f_r$, each of which depends on a number of parameters.

## Components of the rendering equation

The rendering equation consists of a number of components, each of which is a function in its own right:

### Outward radiance

The output function represents the outward radiance — the **amount of light of a particular wavelength $\lambda$ leaving a point $x$ on the surface** in a particular direction $\omega_o$. It is therefore defined as a function:

$$L_o(x, \omega_o, \lambda)$$

### Emitted radiance

The term $L_e$ represents the amount of light of wavelength $\lambda$ **emitted from a point $x$ on the surface** in a particular direction $\omega_o$:

$$L_e(x, \omega_o, \lambda)$$

In practical applications, this term is either:

- **zero** for any surface that does not represent a light source (lights are often represented as point sources, either explicitly placed in the scene or generated by sampling any non-point lights; in this case the term is always zero), or
- an approximation of **environment light** bouncing from other surfaces, which would otherwise be expensive to compute.

### Integral over hemisphere

The second part of the equation represents the overall reflected light radiance, as an **integral over a hemisphere** $\Omega$, centered on a point $x$ on the surface and oriented along the surface normal $n$:

$$\int_\Omega f_r(x, \omega_i, \omega_o, \lambda)\, L_i(x, \omega_i, \lambda)\, (\omega_i \cdot n)\, d\omega_i$$

This integral determines the amount of light that is reflected in direction $\omega_o$ from all light hitting a particular point $x$ on the surface — the hemisphere describes all the directions that are not shadowed by the surface itself.


In practice, this integral is usually approximated using a **normalised sum** of $N$ samples:

$$\int_\Omega f(\omega)\, d\omega \approx \frac{1}{N} \sum_{k=1}^{N} \frac{f(\omega_k)}{p(\omega_k)}$$

with each sample direction $\omega_k$ drawn with probability density $p$, determined using a form of importance sampling on the terms integrated over.
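As an illustration (a minimal sketch, not tied to any particular renderer), the following Python snippet estimates a hemisphere integral with this kind of normalised sum, using uniform hemisphere sampling as the simplest possible sampling strategy. The integrand is just the cosine term, whose hemisphere integral is known to be exactly π:

```python
import math
import random

def sample_hemisphere():
    """Uniformly sample a direction on the unit hemisphere around +z.

    The probability density of this sampling is constant: 1 / (2*pi).
    """
    u1, u2 = random.random(), random.random()
    z = u1                                  # cos(theta), uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_hemisphere_integral(integrand, n_samples=100_000):
    """Monte Carlo estimator: (1/N) * sum of integrand(w) / pdf(w)."""
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(n_samples):
        w = sample_hemisphere()
        total += integrand(w) / pdf
    return total / n_samples

# The hemisphere integral of cos(theta) is exactly pi.
random.seed(42)
estimate = estimate_hemisphere_integral(lambda w: w[2])  # w[2] = cos(theta)
print(estimate)
```

With importance sampling matched to the integrand (e.g., cosine-weighted sampling), the same estimator converges with far fewer samples.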

### BRDF

The Bidirectional Reflectance Distribution Function $f_r$ describes what proportion of the incoming light of wavelength $\lambda$ arriving from direction $\omega_i$ is reflected in direction $\omega_o$, given the surface material at point $x$:

$$f_r(x, \omega_i, \omega_o, \lambda)$$

In practice, this function is often represented procedurally, using the Lambert or Phong reflection models. More complex representations include the Disney material model or even physically-measured values.

The BRDF has a number of interesting properties^{1}:

- it is always *positive* — $f_r(x, \omega_i, \omega_o, \lambda) \ge 0$
- it is *symmetric* (i.e., obeys Helmholtz reciprocity) — $f_r(x, \omega_i, \omega_o, \lambda) = f_r(x, \omega_o, \omega_i, \lambda)$
- and it *conserves energy* (at no point can reflection produce additional light) — $\int_\Omega f_r(x, \omega_i, \omega_o, \lambda)\, (\omega_i \cdot n)\, d\omega_i \le 1$
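As a minimal concrete example (the function name here is illustrative, not from any particular API), the ideal diffuse (Lambertian) BRDF is simply the constant $\rho/\pi$ for albedo $\rho$, which makes all three properties easy to check:

```python
import math

def lambertian_brdf(albedo, w_i, w_o):
    """Ideal diffuse (Lambertian) BRDF: a constant, independent of directions."""
    return albedo / math.pi

albedo = 0.8
w_i = (0.0, 0.0, 1.0)          # incoming direction (unit vector)
w_o = (0.6, 0.0, 0.8)          # outgoing direction (unit vector)

f = lambertian_brdf(albedo, w_i, w_o)
assert f >= 0.0                                # positivity
assert f == lambertian_brdf(albedo, w_o, w_i)  # Helmholtz reciprocity
# Energy conservation: the hemisphere integral of f * cos(theta) is
# (albedo / pi) * pi = albedo, which is <= 1 for physical albedo values.
assert (albedo / math.pi) * math.pi <= 1.0
print(f)
```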

### Incoming irradiance

The term $L_i$ describes the light intensity of wavelength $\lambda$ hitting the surface at point $x$ from direction $\omega_i$:

$$L_i(x, \omega_i, \lambda)$$

While the incoming irradiance is a continuous function, in practical applications it is often replaced by discrete rays determined using importance sampling of the sources of light (or reflections) in the scene.

### Incidence angle factor

The incidence angle $\theta_i$ represents the angle between the **incoming ray direction** $\omega_i$ and the **surface normal** $n$ at point $x$.

The cosine of this angle then describes the **attenuation of the incoming radiance** caused by the relative orientation of the surface to the incoming ray. An alternative representation of this factor uses the dot product between the (unit) direction vector and surface normal to determine the relative attenuation:

$$\cos\theta_i = \omega_i \cdot n$$
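As a small sketch (plain Python, assuming unit-length vectors), the factor can be computed directly from the dot product, clamping negative values for directions below the surface:

```python
def cosine_factor(w_i, n):
    """cos(theta_i) via the dot product of unit vectors; directions below
    the surface (negative dot product) contribute no light, so clamp to 0."""
    dot = sum(a * b for a, b in zip(w_i, n))
    return max(0.0, dot)

n = (0.0, 0.0, 1.0)                        # surface normal
print(cosine_factor((0.0, 0.0, 1.0), n))   # light from straight above: 1.0
print(cosine_factor((1.0, 0.0, 0.0), n))   # grazing light: 0.0
print(cosine_factor((0.0, 0.0, -1.0), n))  # light from below: clamped to 0.0
```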

## Parameters of the rendering equation

The rendering equation has a large number of inputs and parameters. In practice, both its terms and its parameters are often simplified and/or approximated to make the equation more suitable for use in computer graphics.

This section attempts to describe some of them, and to provide more details about how they are used in practice.

### Implicit time parameter

One parameter that has not been mentioned in the equation above is *time* $t$ — as all of the terms of the rendering equation can be time-dependent, it is implied in all parts of the equation.

Practical implementations of rendering algorithms in computer graphics almost always **quantise time**, making a continuous time-dependent function (i.e., animation) into a series of **discrete frames**. The time value can then be handled in two ways:

- dealing with a frame as a *static image* representing the scene at a particular time $t$. This effectively simulates a camera with an infinitely fast shutter, freezing the motion into a perfectly sharp image.
- simulating a real camera shutter by implementing *motion blur*. This turns the time parameter into a continuous interval, which has to be discretized using sampling to produce an approximate result (making the rendering significantly more computationally expensive).
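The second approach can be sketched as follows (a toy example: `render_at_time` is a hypothetical stand-in for rendering the scene at a fixed time, reduced here to a single brightness value):

```python
import random

def render_with_motion_blur(render_at_time, shutter_open, shutter_close,
                            n_samples=64):
    """Approximate motion blur by averaging point-in-time renders
    over the shutter interval [shutter_open, shutter_close]."""
    total = 0.0
    for _ in range(n_samples):
        # Sample a random time within the shutter interval;
        # stratified sampling would reduce noise further.
        t = random.uniform(shutter_open, shutter_close)
        total += render_at_time(t)
    return total / n_samples

# A toy "scene" whose brightness rises linearly with time; the blurred
# result converges to the average brightness over the shutter interval.
random.seed(0)
blurred = render_with_motion_blur(lambda t: t, 0.0, 1.0)
print(blurred)
```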

### Wavelength and colours

Another parameter present in all terms of the rendering equation is the wavelength of light $\lambda$. The distribution (spectrum) of wavelengths in the incoming light determines the colour of the light, and the distribution of wavelengths reflected by a material determines its visible colour.

In rendering, we are interested only in the wavelengths of visible light, which is a small portion of the full electromagnetic spectrum. However, even then it is a continuous interval which requires sampling.

Fortunately, human colour vision is limited to 3 types of cones, each responding to a particular range of wavelengths.

These are then approximated by **three discrete primary colours** (red, green and blue), leading to a relatively simple additive colour model. These three colours define the 3-dimensional RGB colour space, which contains “every colour” that is possible to represent with a given set of primaries^{2}. For the great majority of cases, we can then approximate the continuous spectrum by 3 discrete wavelengths/colours, one for each primary^{3}.

### Spatial position

All terms of the rendering equation have the **spatial position** $x$ of the surface sample as one of their parameters, either explicitly, or implicitly by using it in the computation of other parameters.

The **ray directions** $\omega_i$ and $\omega_o$ are computed by intersecting a ray with the surface, by rasterizing an object, or by determining a light vector from the point on the surface. Apart from special cases (e.g., distant lights), these direction vectors are implicitly dependent on the spatial position of the point $x$.

The **incoming radiance** $L_i$ depends on the spatial position explicitly. This allows it to depend on the distance from the light (i.e., the inverse square law) and on the participating medium (i.e., attenuation). Similarly, the incidence angle factor and the hemisphere integration domain $\Omega$ depend on the surface normal $n$, which is in turn associated with the point $x$ on the rendered surface.

Finally, the BRDF $f_r$ describes the *surface’s light interaction properties* at point $x$ (see below).

### Lights

Even though the integral over the hemisphere $\Omega$ is continuous, its evaluation in practice is almost always performed using discrete sampling, converting the continuous integral into a weighted sum of discrete values^{4}.

In simple cases (e.g., the legacy OpenGL lighting model), light sources are described as ideal points, effectively determining a single sample per light. In more complex cases (e.g., area lights, or environment lighting), importance sampling of the light sources needs to be used to approximate the result of the integration over $\Omega$.
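A single-sample point-light evaluation along these lines might look like this (a simplified sketch using a Lambertian BRDF; the function and parameter names are illustrative, not from any particular API):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def direct_light(point, normal, light_pos, light_intensity, albedo):
    """One sample per point light: Lambertian BRDF * incoming radiance
    (with inverse-square falloff) * cosine factor."""
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    dist_sq = sum(x * x for x in to_light)
    w_i = normalize(to_light)
    cos_theta = max(0.0, sum(a * b for a, b in zip(w_i, normal)))
    incoming = light_intensity / dist_sq        # inverse square law
    return (albedo / math.pi) * incoming * cos_theta

# A light directly above the shaded point, 2 units away:
radiance = direct_light((0, 0, 0), (0, 0, 1), (0, 0, 2), 10.0, 0.8)
print(radiance)
```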

### BRDF (bidirectional reflectance distribution function)

The BRDF of a surface point is a 4-dimensional function describing the surface’s reflectance properties (two angles for each of the two directions). As a BRDF can also vary across the surface of the object (e.g., patterns, different materials, colours), the total number of dimensions for fully describing a surface is 7.

As a 7D function is hard to represent explicitly, the BRDF is often represented as a composite function, with each element based on a reduced number of parameters. There are several ways these parameters can be described:

- using a *volumetric (3D) function*, explicitly defining the parameter’s properties in 3D space. This can be represented either procedurally, or using a volumetric texture.
- as the surface of an object is usually a 2D manifold in 3D space, its surface can be *mapped onto a 2D texture*, with the reflectance parameters represented using a simple 2D image (explicitly or procedurally).
- most simple lighting models, such as the [Lambertian shading model] or the elements of the Phong shading model, convert the incoming parameters into a simple *1D function*, approximating an isotropic material (symmetric along particular dimensions, points or vectors).
- and, finally, many of its parameters are often *uniform* (e.g., colour or emission).
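The 2D-texture representation can be sketched in a few lines: a reflectance parameter (here a hypothetical albedo map) stored as a tiny image and looked up by texture coordinates using nearest-neighbour sampling:

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of a reflectance parameter in a 2D texture;
    (u, v) are texture coordinates in [0, 1]."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 "albedo texture": a checkerboard of dark and light patches.
albedo_map = [
    [0.1, 0.9],
    [0.9, 0.1],
]
print(sample_texture(albedo_map, 0.25, 0.25))  # 0.1
print(sample_texture(albedo_map, 0.75, 0.25))  # 0.9
```

Real renderers would add filtering (bilinear, mipmapping) on top of this lookup, but the principle of mapping surface points to stored parameters is the same.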

The BRDF of an isotropic material can also be measured and represented in a compact form^{5}.

## Limitations

The rendering equation is quite a generic description of material properties, but it still has a number of limitations. The most serious one is probably its inability to represent subsurface scattering and light transmission. More complex models do exist for these, but they might be subjects for another article.

- Simulating specific physical phenomena, such as fluorescence, might require the simulation to break some of these properties. ↩
- This is still a significant simplification – colour models, primaries, their representation and their properties are a very complex topic. ↩
- This model is sufficient for most light interactions required in computer graphics. Examples of phenomena that cannot be represented in this way include a rainbow, thin-film interference and iridescence. ↩
- Frequency-based or area-based approximations can also be used for representing lights, such as spherical harmonics, which operate in the frequency domain, or radiosity, which operates on flat finite elements of surfaces in the scene. ↩
- Several databases are available for research purposes online, such as the CUReT database and the MERL BRDF database. Finally, a full reflectance field can also be captured, allowing a captured object to be represented from an arbitrary viewpoint with arbitrary lighting. ↩