Simulation that replicates what AV and ADAS cameras ‘see’


Simulation is key to subjecting autonomous vehicles (AVs) and ADAS to the large number of edge cases required to train AI systems and prove they are safe for use on public roads. Automotive engineering software specialist rFpro has developed a simulation technology for AV and ADAS development that it claims is the first to accurately simulate how a vehicle’s sensor system perceives the world, enabling sensor systems to be fully developed in simulation before physical prototypes are available.

Key to the technology is ray tracing rendering, used within a software-in-the-loop (SIL) system designed to generate synthetic training data. The system traces multiple light rays through the scene to capture the nuances of the real world accurately. According to rFpro, this multi-path technique can reliably simulate the huge number of reflections and shadows that occur around a sensor, especially in low-light scenarios or environments with multiple light sources. Examples include multi-storey car parks, illuminated tunnels with bright ambient daylight at their exits, and urban night driving under multiple street lights.
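The multi-path idea can be illustrated with a deliberately simplified sketch. The toy Monte Carlo path tracer below is not rFpro’s implementation; the scene, material constants and light-hit probability are all assumptions chosen only to show how averaging many stochastic ray paths accumulates light that has bounced several times before reaching a sensor.

```python
import random

ALBEDO = 0.7          # assumed fraction of light a surface reflects per bounce
EMITTED = 1.0         # assumed radiance returned when a ray reaches a light
HIT_LIGHT_PROB = 0.3  # toy stand-in for a ray/light-source intersection test

def trace_path(rng, max_bounces=4):
    """Follow one ray; return the radiance it carries back to the sensor."""
    throughput = 1.0
    for _ in range(max_bounces):
        if rng.random() < HIT_LIGHT_PROB:   # ray reached a light source
            return throughput * EMITTED
        throughput *= ALBEDO                # each diffuse bounce attenuates the ray
    return 0.0                              # path terminated without finding a light

def render_pixel(samples=1000, seed=0):
    """Average many stochastic paths; more samples means less image noise."""
    rng = random.Random(seed)
    return sum(trace_path(rng) for _ in range(samples)) / samples
```

Because each path can survive several bounces before terminating, the averaged result includes indirect light that a single-bounce model would miss, which is why multi-path rendering handles scenes such as tunnels and car parks.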

Many of the physical test systems used in the automotive industry incorporate HDR (high dynamic range) cameras that capture multiple exposures of varying duration, such as a short, medium and long exposure per frame. The rFpro system simulates the functions of these cameras with a multi-exposure camera API (application programming interface) that ensures the simulated images contain accurate blurring, such as that caused by fast vehicle motion or road vibration, alongside physically modelled rolling shutter effects, to accurately replicate what cameras ‘see’.
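A toy model, assuming nothing about rFpro’s actual API, can show why multiple exposures matter. In the sketch below, each exposure integrates time-varying scene radiance over its window (which is what naturally produces motion blur), saturates at a full-well limit, and the frame is fused by keeping the longest unsaturated exposure. All constants and the fusion rule are illustrative assumptions.

```python
FULL_WELL = 1.0  # assumed sensor charge at which the pixel clips

def expose(radiance_at, t_start, duration, steps=100):
    """Integrate time-varying radiance over one exposure window.
    Integrating over time is what blurs fast-moving objects."""
    dt = duration / steps
    charge = sum(radiance_at(t_start + i * dt) * dt for i in range(steps))
    return min(charge, FULL_WELL)           # clipping models saturation

def hdr_frame(radiance_at, t0, exposures=(0.001, 0.004, 0.016)):
    """Capture short, medium and long exposures; fuse by keeping the
    longest exposure that did not saturate (a common, simple heuristic)."""
    best = 0.0
    for T in exposures:                     # ordered short -> long
        charge = expose(radiance_at, t0, T)
        if charge < FULL_WELL:              # unsaturated: trust this reading
            best = charge / T               # normalise back to radiance units
    return best
```

The long exposure gives low-noise readings of dim regions while the short exposure preserves bright highlights, which is how a single frame can span a tunnel interior and the daylight at its exit.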

Motion blur in cameras is accurately simulated, as illustrated by these traffic cones

The ray tracing rendering is applied to every element in a simulated scene, each physically modelled with accurate material properties to create high-fidelity images. As this process is computationally demanding, it can be decoupled from real-time. The rate of frame rendering is adjusted to suit the level of detail required, which enables high-fidelity rendering to be carried out overnight and then played back in subsequent real-time runs. This function is designed to overcome the trade-off between rendering quality and running speed.
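The decoupling pattern described above can be sketched in a few lines. This is a generic offline-render/real-time-playback skeleton, not rFpro’s code; the function names and the fixed-rate playback loop are assumptions.

```python
import time

def render_offline(scene_states, render_frame):
    """Render every frame ahead of time; each call may take seconds or
    minutes, since quality is not constrained by a real-time budget."""
    return [render_frame(state) for state in scene_states]

def play_back(frames, fps=60.0, show=lambda frame: None):
    """Replay the cached frames at a fixed real-time rate, handing each
    one to the consumer (e.g. a software-in-the-loop perception stack)."""
    period = 1.0 / fps
    for frame in frames:
        start = time.perf_counter()
        show(frame)
        elapsed = time.perf_counter() - start
        time.sleep(max(0.0, period - elapsed))  # pad out to the frame period
```

The key property is that rendering cost is paid once, offline, while every subsequent playback run meets real-time deadlines regardless of scene complexity.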

The ray tracing graphics engine is a high-fidelity image rendering system that sits alongside rFpro’s existing rasterization-based rendering engine. Rasterization simulates light with a single bounce through a simulated scene, quickly enough to enable real-time applications such as driver-in-the-loop (DIL) testing.
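The practical difference between the two engines can be reduced to a toy shading comparison, under the simplifying (and assumed) model that rasterization-style shading sees only direct light while ray tracing also gathers light that has already bounced off other surfaces:

```python
def shade_direct(albedo, direct_light):
    """Single-bounce, rasterization-style shading: direct light only."""
    return albedo * direct_light

def shade_with_indirect(albedo, direct_light, indirect_light):
    """Ray-traced shading also includes indirect illumination, i.e. light
    that reaches the point after bouncing off other surfaces, which is
    significant in enclosed scenes such as tunnels and car parks."""
    return albedo * (direct_light + indirect_light)
```

In a brightly lit open road the two results are close; in scenes dominated by reflected light, the indirect term is what separates a usable synthetic camera image from an unrealistically dark one.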

“Ray tracing provides such high-quality simulation data that it enables sensors to be trained and developed before they physically exist,” stated Matt Daley, operations director at rFpro. “As a result, it removes the need to wait for a real sensor before collecting data and starting development. This will significantly accelerate the advancement of AVs and sophisticated ADAS technologies and reduce the requirement to drive so many developmental vehicles on public roads.”


About Author

Adam divides his time as an editor between the worlds of aviation and motoring. These worlds may seem rather different today, but autonomous technology and future urban mobility are bringing them ever closer. Adam is also chairman of the Vehicle Dynamics International Awards.