Geomerics has launched a new generation of technology that, we hope, will inspire developers of open world games to make lighting central to the experience. This technology makes dynamic lighting effects, such as time of day, possible in open world games. The performance cost of global illumination updates in large scenes has been cut in half; with this extra performance, studios can include more dynamic lighting effects, improve quality or increase the map size.
Global illumination is complex. It not only needs to deal with direct light, that is, light coming from a light source, hitting a surface and being reflected into the camera, but also with light that is reflected multiple times around the scene (indirect light). Together, these are too expensive for real-time rendering in full, so they must be broken into components with different rendering paths and then combined in the final shading.
Global illumination consists of three main elements:
- Direct lighting is the general input lighting and shadowing.
- Indirect lighting is the bounced light that is used to fill a scene (what we’ll focus on).
- Reflections, highlights and other view-dependent elements, which we’ll lump together under the term specular reflection.
These combine together in the shader to give you the final scene.
Direct lighting is well covered by the algorithms and hardware used by current game engines. Light coming from the scene and reflected off a specular surface, such as a mirror or shiny metal, can be well approximated using screen-space reflections or cube maps as reflection probes.
This leaves us with indirect light, i.e. light that is bounced around the scene by mostly diffuse surfaces before reaching the camera. It is one of the most computationally expensive parts of rendering, and it is what Enlighten solves in real time.
The Global Illumination Problem
This blog isn’t going into any deep technical discussion, but the rendering equation gives us the correct result we are aiming for and we’ll (very) briefly review it, before going back to practical examples.
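For reference, a common form of the rendering equation is:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here L_o is the outgoing radiance at surface point x, L_e the emitted radiance, f_r the reflectance of the material (the BRDF), and the integral gathers the incoming light L_i over the hemisphere Ω above the surface.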
This intimidating equation is generally the correct way of lighting a scene and is the basis of virtually all rendering techniques. It expresses the light (or more correctly, the radiance) leaving a surface point as the sum of the light emitted from that point and the integral (which is an infinite sum) of all the light arriving at that point and being reflected, each contribution multiplied by the reflectance of the material. This would be impossible to compute in real time.
Yet, like all good engineers, we can develop workarounds. By assuming that we have a finite set of elements and diffuse transport only, things become much simpler.
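Under those assumptions the rendering equation reduces to the classic radiosity form, which for a single cluster i can be written as:

```latex
B_i = L_e + \rho_i \sum_{j=1}^{n} F_{ij} L_j
```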
This equation describes the solving of an individual patch, also known as a cluster. We are trying to calculate the radiosity (Bi), which is the sum of the light from all the clusters it can see (Lj), each multiplied by its importance, or contribution to the scene, known as a form factor (Fij). In other words, the more of the field of view a cluster occupies, the more impact it has on the result.
n is the number of form factors, a key parameter when setting the budget for the optimal quality/performance trade-off. Gathering these contributions together might tell us, for example, that a strong red light arrives at the cluster; we then multiply that gathered light by the material properties of the surface (ρi). Lastly, we simply add any light the cluster itself emits (Le) to the output, which solves one cluster. We then solve the light values of every cluster by repeating the process over and over again. This solution is also entirely frame-rate independent; hence you can run it on background threads, spread out the load or solve different areas at different frequencies.
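A minimal sketch of that repeated per-cluster solve, assuming the form factors have already been precomputed. This is illustrative Python, not Enlighten's implementation; `solve_radiosity` and its parameter names are our own:

```python
def solve_radiosity(emission, albedo, form_factors, iterations=8):
    """Jacobi-style iteration of B_i = Le_i + rho_i * sum_j(F_ij * B_j).

    emission[i]        -- light emitted by cluster i (Le)
    albedo[i]          -- diffuse reflectivity of cluster i (rho_i)
    form_factors[i][j] -- F_ij, weight of cluster j as seen from cluster i
    """
    radiosity = list(emission)  # first pass: clusters only emit
    n = len(emission)
    for _ in range(iterations):
        # Each pass gathers one more bounce of indirect light, reading
        # only the previous pass's values (so passes can be spread out)
        radiosity = [
            emission[i] + albedo[i] * sum(form_factors[i][j] * radiosity[j]
                                          for j in range(n))
            for i in range(n)
        ]
    return radiosity
```

Because each pass reads only the previous pass's cluster values, the solve is naturally incremental, which is what allows it to run on background threads at its own cadence.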
So how does Enlighten work?
Enlighten works by precomputing the visibility of static geometry in the scene (i.e. the form factors), which then makes it possible to modify the lighting in real time, even on mobile platforms. Light sources, environment lighting and material properties (diffuse reflectivity and surface emission) can all be changed dynamically at runtime.
Similar to traditional global illumination solutions, Enlighten generates three types of output:
- Light maps are generated as 2D textures
- Light probes are generated as Spherical Harmonics
- Reflection captures are generated as cube maps
We have described these output types in detail in our previous blog.
Let’s look at a simple example below to explore the details of how Enlighten performs this precompute.
The precomputation phase automatically breaks the scene up into segments (or clusters, as mentioned earlier). In this scene, we have a wall, a floor and a cube. We subdivide the wall and floor into four clusters each and superimpose them onto the scene, as visualised in figure 4.
For each individual pixel, Enlighten casts rays into the scene to calculate which clusters affect it, i.e. its visibility. Thousands of rays are cast from each pixel to determine which cluster each ray hits first and the density of rays hitting each cluster. The weighted sum of all the rays hitting one cluster gives the form factor for that cluster at that pixel.
In this example, pixel “X” is affected by (or “sees”) the entire face of the cube (D) and parts of clusters A, B and C. D only covers part of the hemisphere, so with weighting applied, this gives us a form factor of 0.2. Although A, B and C are visible to the pixel, parts of those clusters are hidden by the cube, so we end up with smaller form factors for them depending on the visible surface area – 0.05, 0.1 and 0.05 respectively.
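That ray-counting step can be sketched as a small Monte Carlo routine. This is a hypothetical illustration rather than Enlighten's code; the `trace` callback, which returns the first cluster a ray hits (or `None` for a miss), is an assumption:

```python
import math
import random

def estimate_form_factors(trace, num_rays=10000, seed=42):
    """Estimate per-cluster form factors for one pixel by ray counting."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(num_rays):
        # Cosine-weighted hemisphere sample around the surface normal (+z),
        # so the angular weighting is baked into the ray distribution itself
        u1, u2 = rng.random(), rng.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        direction = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
        cluster = trace(direction)  # first cluster hit, or None for a miss
        if cluster is not None:
            counts[cluster] = counts.get(cluster, 0) + 1
    # The weighted hit fraction per cluster is its form factor at this pixel
    return {cluster: hits / num_rays for cluster, hits in counts.items()}
```

Using cosine-weighted samples means each ray counts equally, and the fraction of rays landing on a cluster directly estimates how much of the weighted hemisphere it covers.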
Assume a bright spotlight shines directly onto the front face of the cube (D), which is scarlet red. The only light reflected towards X would come from that face (and so also be scarlet red), with zero contribution from the wall, as indicated by the black colour (not the true colour of the wall), since no light is hitting it.
At run time, Enlighten simply needs to sum the colour value stored per cluster multiplied by the corresponding form factor. X would then be 0.2 multiplied by the cube face’s light value/radiance, resulting in a darker red tone.
In another case, let’s change the colour of the cube face and the wall to carmine red. If we widen the aperture of the spotlight but don’t change the light intensity it emits into the scene, then all clusters (A, B, C and D) will emit a bounce and contribute to the colour of pixel X. X would then be 0.4 multiplied by the light value/radiance, again lighting it with a darker red tone, as before.
Conversely, if we have three different spotlights hitting different parts of the wall (clusters A, B and C) and not the black cube face (D), then the pixel (X) should be greyish, as the light bounced off the wall reaches X. However, the green section of the wall (cluster B) has a much greater impact on X, as a larger surface area is visible to it (hence the higher form factor of 0.1), giving the pixel a slight green tint.
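The run-time gather described in the cases above can be sketched as follows. The function name, the cluster ids and the RGB values are illustrative assumptions; only the form factors come from the worked example:

```python
def gather_indirect(form_factors, cluster_radiosity):
    """Sum each visible cluster's radiosity weighted by its form factor."""
    pixel = [0.0, 0.0, 0.0]
    for cluster, weight in form_factors.items():
        colour = cluster_radiosity.get(cluster, (0.0, 0.0, 0.0))  # unlit: black
        for channel in range(3):
            pixel[channel] += weight * colour[channel]
    return tuple(pixel)

# First worked example: only the scarlet cube face (D) is lit
print(gather_indirect({"A": 0.05, "B": 0.1, "C": 0.05, "D": 0.2},
                      {"D": (1.0, 0.0, 0.0)}))  # (0.2, 0.0, 0.0)
```

Note how cheap this per-pixel work is: a handful of multiply-adds over the precomputed form factors, which is why the gather can run every frame while the cluster solve updates at its own rate.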
In essence, Enlighten precomputes the visibility of static geometry in a scene, then uses that data to simulate global illumination at run time. During this phase, Enlighten automatically breaks the scene up into clusters to calculate the relationships between them (the form factors). As shown in the examples above, once we know the form factors, we only need the illumination of each cluster to calculate the colour of a pixel, allowing us to achieve dynamic global illumination in real time. At run time, the amount of indirect light transferred between clusters can also be controlled. This enables artists to fade in bounce between clusters, which can be used to achieve effects such as opening doors or destroying a wall, with the indirect lighting from one part of the level flooding in and updating in real time.