Tuesday October 13th, 2015

Virtual Reality

Virtual Reality Headset

Virtual Reality (VR) is a fully immersive, computer-simulated environment that gives a user the perception and feeling of being in that environment instead of the one they are actually in. This technology has unprecedented capabilities with numerous potential applications.

VR has been envisioned for decades; back in the 1990s there was a push to make this form of technology mainstream with the likes of Virtuality, Sega VR and the Nintendo Virtual Boy. In those days, however, the hardware was both bulky and expensive, and the platforms themselves were simply not up to the task of delivering a good VR experience, offering only limited graphics capabilities. It is fair to say that VR pretty much failed back then.

Today things are very different.

Only recently has our ever-increasing technological capability finally made VR accessible, with the emergence of high-performing hardware capable of efficiently handling various types of workloads. This is mainly due to:

  • High-resolution displays offering crisper images and greater realism.
  • Sensor technology, such as gyroscopes and accelerometers, enabling more immersive experiences and control.
  • Advanced microprocessors capable of executing complex calculations and delivering energy-efficient, visually stunning graphics.

Today, VR is touted as one of the ‘most disruptive’ technologies of the next decade, with a plethora of competing devices already in development, including the Oculus Rift, Sony VR, HTC Vive, OSVR Razer and Samsung Gear VR, all vying to perfect this revolutionary new technology and make it ubiquitous.

This immersive technology represents the next generation of gaming, where the traditional ethos of bodiless minds communicating via keyboard and screen is replaced with the notion that the senses are the primary causes of how and what we know, think and imagine.

At the centre of this is light: lighting is the most fundamental element of any game environment. It serves as a critical component of a realistic experience, where even minor artefacts will make an entire scene seem odd and out of place, detracting from the intended immersion.

Global illumination brings realism to a scene by simulating the real-world behaviour of light, where objects and lights are interdependent, rather than limiting the simulation to the direct light hitting a surface straight from a light source. It can bring three-dimensional life to the geometry, amplify the emotional feel of a scene and draw the eye to important areas of focus, thus creating a more accurate and realistic VR experience. For high-end gaming, this feeling of actually being in a scene is paramount.

Under the hood

So how does VR work? The headset uses a stereoscopic display to give the wearer a sense of depth and make what is seen three-dimensional, while sensor input tracks head movement for realistic visual feedback. Finally, a controller is usually included for interaction with the virtual world.

Regular/traditional scene rendering

The traditional non-stereoscopic rendering process begins with processing the sensor/controller input, which gives you your position in the world or environment. Updating the scene graph then sets the state of the scene, applying the textures, the shaders and all the assets associated with that scene. The final step is rendering the scene itself, with the GPU doing the work to get an image into the framebuffer before managing post-processing and display. An example of a single image render is shown in the figure below.
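Expressed as code, that per-frame flow might look like the following minimal sketch. All the type and function names here (pollSensors, updateSceneGraph and so on) are illustrative placeholders, not any real engine’s API.

```cpp
// Minimal sketch of a traditional (non-stereoscopic) frame. All types
// and function names are illustrative placeholders, not a real engine API.
struct Pose  { float position[3]; float orientation[4]; };
struct Scene { /* scene graph: geometry, textures, shaders, assets */ };
struct Image { /* framebuffer contents */ };

Pose  pollSensors()                         { return {}; }  // read sensor/controller input
void  updateSceneGraph(Scene&, const Pose&) {}              // apply state, textures, shaders
Image renderScene(const Scene&)             { return {}; }  // GPU renders into the framebuffer
Image postProcess(Image img)                { return img; } // GPU post-processing
void  present(const Image&)                 {}              // scan out to the display

void renderFrame(Scene& scene)
{
    Pose pose = pollSensors();       // 1. where is the user in the world?
    updateSceneGraph(scene, pose);   // 2. update the scene graph state once
    Image img = renderScene(scene);  // 3. the GPU renders a single image
    present(postProcess(img));       // 4. post-process, then display
}
```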

Regular Scene Rendering

VR rendering

Stereoscopic rendering is a process required for VR in which a scene must be rendered for each eye to accommodate the two different viewpoints. These are separated by the interpupillary distance (IPD): the distance between the two eyes, typically around 63 mm. From a rendering point of view this means that the sensor/controller input per render remains the same, as a person’s head is in one position. The textures, various objects and assets don’t really change either, hence the scene graph is only processed once.

The position of the objects relative to each eye is different, however, hence there is a need to do all of the rendering for each eye separately. This rendering phase is where most of the work happens, and it places increased demands on the GPU. Even from this very simplistic view, it is evident that the workload is significantly higher for the GPU, as we need to render each scene twice.
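As a rough sketch of what this means for the camera setup, a renderer can derive two eye positions from a single head pose by offsetting along the head’s local x-axis by half the IPD per eye. The names and the simplified vector maths below are illustrative only; a real engine would build full per-eye view and projection matrices.

```cpp
// Illustrative per-eye camera offset from a single head pose. A real
// engine would build full per-eye view/projection matrices; here we just
// shift the camera along the head's local x-axis by half the IPD.
struct Vec3 { float x, y, z; };

constexpr float kIPD = 0.063f;  // interpupillary distance, ~63 mm

// 'right' is the head's local x-axis (a unit vector from its orientation).
Vec3 eyePosition(const Vec3& head, const Vec3& right, bool leftEye)
{
    const float half = (leftEye ? -0.5f : 0.5f) * kIPD;
    return { head.x + right.x * half,
             head.y + right.y * half,
             head.z + right.z * half };
}

// The scene graph is updated once, then the scene is rendered once per eye:
//   renderScene(scene, eyePosition(head, right, true));   // left half of the panel
//   renderScene(scene, eyePosition(head, right, false));  // right half of the panel
```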

Interpupillary Distance (IPD)

Stereoscopic Rendering

Furthermore, many modern-day VR solutions use lenses within the hardware to give the user a wider field of view and make the experience even more immersive. Typically they use a single display panel (rather than a separate panel for each eye) and use the lenses to effectively combine the two separate images into one single view.

Lens Distortion

The two images are rendered side by side, yet the lenses introduce some distortion. As a regular image passes through the lens, the output appears “pincushioned”. To correct this distortion, the input image needs to be inversely distorted as part of the rendering process: the opposite type of distortion, known as barrel distortion, is applied to the rendered image. This barrel-distorted image then passes through the lens and the output is the image as we intended to see it. The barrel distortion is applied in the post-processing stage on the GPU.
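One common way to pre-apply the correction is a radial polynomial remap of the texture coordinates in the post-processing pass, as in the sketch below. The coefficients k1 and k2 are example values only; real ones are calibrated per headset.

```cpp
// Illustrative barrel-distortion remap for the post-processing pass.
// uv is in [-1, 1] relative to the lens centre; k1 and k2 are example
// radial distortion coefficients (real values are headset-specific).
struct UV { float u, v; };

UV barrelDistort(UV uv, float k1 = 0.22f, float k2 = 0.24f)
{
    const float r2 = uv.u * uv.u + uv.v * uv.v;      // squared distance from centre
    const float f  = 1.0f + k1 * r2 + k2 * r2 * r2;  // radial scale factor
    return { uv.u * f, uv.v * f };                   // sample further out: image squeezed inward
}
// The post-process shader samples the rendered image at barrelDistort(uv);
// the lens's pincushion distortion then cancels this out, so the user
// sees the image as intended.
```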

The use of lenses also introduces an effect called chromatic aberration (or colour fringing), a common optical problem that occurs when a lens is either unable to bring all wavelengths of colour to the same focal plane, or focuses them at different positions in the focal plane. Hence, colour correction is also applied in the post-processing stage to remedy the issue, adding further workload to the GPU.
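Extending the previous sketch, the aberration correction can be folded into the same pass by sampling each colour channel at a slightly different radius. The per-channel scale factors below are again illustrative, not values from any real device.

```cpp
// Extends the sketch above: correct chromatic aberration by sampling each
// colour channel at a slightly different radius, so the lens's wavelength-
// dependent refraction re-converges the channels. Scale factors are
// example values only; real ones are calibrated per lens.
struct RGB { float r, g, b; };

RGB sampleImage(UV uv) { return { uv.u, uv.v, 0.0f }; }  // placeholder image fetch

RGB correctedSample(UV uv)
{
    const UV base = barrelDistort(uv);  // shared barrel distortion (see above)
    RGB out;
    out.r = sampleImage({ base.u * 0.994f, base.v * 0.994f }).r;  // red sampled closer in
    out.g = sampleImage(base).g;                                  // green as the reference
    out.b = sampleImage({ base.u * 1.006f, base.v * 1.006f }).b;  // blue sampled further out
    return out;
}
```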

Final post-processing step

Lastly, delivering the ultimate VR experience requires extremely low latency from sensors to pixels on screen, together with extremely high frame rates. A screen update that lags behind the tracked head movement is perceived as lag and judder, and it doesn’t need to lag by much for the experience to be ruined. Furthermore, the traditional 30-60 fps is simply not sufficient for a seamless VR experience: VR manufacturers are looking at refresh rates that far exceed 60 fps, and even over 100 fps is not uncommon.
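To put that budget in numbers: at a traditional 30 fps a frame may take 1000/30 ≈ 33 ms, but at a 90 fps target the entire pipeline (sensor input, scene update, rendering both eyes and all post-processing) must complete in roughly 1000/90 ≈ 11 ms.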

Enlighten in Virtual Reality

So how does Enlighten come into this?

As explained above, VR is quite taxing and requires a capable GPU as, in essence, you are almost doubling the rendering workload; the high frame-rate requirement places yet further demand on the GPU’s resources. High-quality global illumination is an essential aspect of a scene that usually demands a large share of the GPU’s budget, hence there is a need for a more holistic, system-wide approach.

Enlighten works by taking the location of the static geometry in the scene, or rather the surface-to-surface visibility of that static geometry, precomputing it and compressing the data for use at runtime. The Enlighten runtime then uses the precomputed data to compute the Enlighten output in real time. The Enlighten output changes depending on both the configuration of the lights and the diffuse colours of the surfaces in the scene, information for which is provided by the game engine.
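Conceptually, the split might be sketched as below. This is a purely illustrative pseudo-implementation, not the actual Enlighten API: an offline step stores, for each coarse surface patch, which other patches it can see and with what weight, and the runtime gathers bounced light across those fixed links as the lights and surface colours change.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Purely illustrative sketch of the precompute/runtime split, NOT the
// real Enlighten API. Surfaces are treated as coarse patches; the
// precompute stores, for each patch, which other patches it can see and
// with what weight (a form factor). The runtime gathers bounced light
// over those fixed links every frame.
struct Link    { std::uint32_t patch; float weight; };     // visibility + form factor
struct Precomp { std::vector<std::vector<Link>> links; };  // per-patch visible set
struct RGB     { float r, g, b; };

// Runtime inputs supplied by the game engine each frame:
//   direct[i] - direct light arriving at patch i (lights may move)
//   albedo[i] - diffuse surface colour of patch i (materials may change)
// Output: one bounce of indirect light per patch.
std::vector<RGB> gatherIndirect(const Precomp& pre,
                                const std::vector<RGB>& direct,
                                const std::vector<RGB>& albedo)
{
    std::vector<RGB> indirect(direct.size(), RGB{0.0f, 0.0f, 0.0f});
    for (std::size_t i = 0; i < pre.links.size(); ++i)
        for (const Link& l : pre.links[i])       // links were precomputed offline
        {
            indirect[i].r += l.weight * direct[l.patch].r * albedo[l.patch].r;
            indirect[i].g += l.weight * direct[l.patch].g * albedo[l.patch].g;
            indirect[i].b += l.weight * direct[l.patch].b * albedo[l.patch].b;
        }
    return indirect;                             // light bounced off visible patches
}
```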

Friction-free rendering

The game engine renderer runs on the GPU while the Enlighten runtime usually runs on the CPU. Enlighten does, however, give you options as to where the computation takes place.

The Enlighten runtime is designed to run entirely asynchronously alongside the main renderer (friction-free rendering), without causing blockages. As Enlighten can compute dynamic indirect lighting on the CPU, independent of the GPU, the system can “compute once, render twice”, providing consistent results with no screen-space artefacts. This system-wide approach allows for a more intelligent allocation of computation for high-quality global illumination.
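A minimal sketch of the “compute once, render twice” idea, under the assumption of a simple snapshot handoff (not Enlighten’s actual mechanism): a worker thread solves indirect lighting on the CPU, and each frame the renderer takes the most recently completed result and uses it for both eye views.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Illustrative snapshot handoff between a CPU lighting thread and the
// render thread; not Enlighten's actual mechanism. The GI solve runs
// asynchronously, and each frame the renderer uses the latest completed
// result for BOTH eye views: compute once, render twice.
struct GIResult { std::vector<float> lightmap; };

GIResult solveIndirectLighting() { return {}; }  // placeholder CPU GI solve
void renderEye(int /*eye*/, const GIResult&) {}  // placeholder per-eye GPU render

int main()
{
    std::mutex mtx;
    GIResult latest;      // most recently completed GI result (guarded by mtx)
    bool running = true;  // also guarded by mtx

    std::thread lighting([&] {  // CPU thread, independent of the GPU
        for (;;) {
            GIResult r = solveIndirectLighting();  // may span several frames
            std::lock_guard<std::mutex> lock(mtx);
            if (!running) return;
            latest = std::move(r);                 // publish without blocking render
        }
    });

    for (int frame = 0; frame < 1000; ++frame) {
        GIResult gi;
        {
            std::lock_guard<std::mutex> lock(mtx);
            gi = latest;  // cheap snapshot; the renderer never waits on the solve
        }
        renderEye(0, gi); // left eye, and
        renderEye(1, gi); // right eye, from identical GI data
    }

    { std::lock_guard<std::mutex> lock(mtx); running = false; }
    lighting.join();
}
```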

By offloading aspects of the lighting from the GPU to the CPU, Enlighten empowers developers to increase the complexity of effects and achieve even higher performance. Enlighten also supports the high frame-rate global illumination that VR requires, making it an ideal tool for VR applications.

Enlighten’s friction-free rendering, along with its high frame-rate support, makes seamless and immersive virtual experiences possible. Reality, it seems, finally has competition!

To learn more about Enlighten and its capabilities click here, or request an evaluation.

To find out more about ARM, click here.

Written by Alex Mercer.