This article illustrates how Speos and optiSLang can be used to analyze and assess the performance of a LiDAR system in a scenario driven by multiple parameters. The example covers a workflow based on four parameters: fog density, the distance from the ego car (the car carrying the LiDAR sensor) to a van (the target), the LiDAR wavelength, and the target van's reflectivity. The decision metric is tied to LiDAR detection quality: the average distance value computed over a fixed set of pixels in an XMP depth map.
Software Prerequisites
To use this example, the following tools and assets need to be installed on your computer:
- Ansys Speos 2023R2 or later
- Ansys optiSLang 2023R2 or later
Overview
Understand the simulation workflow and key results
Since 2019, we have seen exponential growth in the presence of LiDAR (Light Detection And Ranging) systems in the automotive field. Today some cars are equipped with this technology, but mostly premium/luxury models. Nevertheless, future fully and partially autonomous vehicles (cars and trucks) will be equipped with LiDARs. Some companies are already stating that a fully autonomous vehicle should carry up to five LiDARs.
As we know, the goal of many sensing systems, such as cameras, radar, ultrasonic sensors, and LiDAR, is to provide the vehicle (and the driver) with information about its surrounding environment. Making sure the vehicle can "see" what is happening around it is critical from a safety point of view. How can we make sure that the LiDAR will be reliable in all situations? What are its limits? Can we identify which property of the LiDAR is limiting? In each scenario, which parameter will be the most impactful? These questions can be tricky to answer and would require long hours of field testing. Simulation is a definitive way to support field testing by digging into edge-case scenarios. Here, the edge-case scenario tests the detection of a van in a foggy environment. Fog is a limiting factor for Time-of-Flight LiDAR technology; however, depending on the wavelength and the target's reflectivity, the target may still be detected easily. This article shows an example of a workflow analyzing LiDAR performance based on a multi-parametrized scene.
Step 1: Parametric scene and LiDAR simulation with Speos
Understand Speos model
In this article, we demonstrate how to use the Speos LiDAR simulation to obtain the depth map distance detection and how to publish parameters for metamodel creation. The Speos model is quite simple, containing four elements: an ego car (where the LiDAR is mounted), a fog block, a van (the target), and a road. Of course, users can add as many geometries as they want to push the study further. The model is parametrized by four parameters:
- Lidar to Target distance
- Fog Density
- Lidar Tx wavelength
- Target Reflectivity
Step 2: LiDAR metamodel creation and sensitivity analysis with optiSLang
Understand optiSLang model
In this step, we demonstrate the design exploration capabilities by performing a sensitivity study to identify important input parameters and to create metamodels showing the relationship between input and output parameters.
Run and Results
Instructions for running the model and discussion of key results
Step 1: Parametric scene and LiDAR simulation with Speos
Scene models are composed of two kinds of geometries: dynamic and static. The static ones are all grouped into a Speos Light Box. The main benefit of this format, compared to raw geometry, is that it skips re-meshing the geometry at each simulation iteration. The Speos Light Box contains the mesh triangles of all geometries and is used directly in the simulation.
We are using a Flash (Static) LiDAR here, which means all Transmitter (Tx) beams are emitted at once and the Receiver (Rx) captures the whole return signal in a single acquisition.
The Tx is a Gaussian 120x30 degree beam. The Rx uses a Reduced Order Model (ROM) of its lens system that includes the distortion effect. The sensing part of the Rx is a flat, ideal 16x9 mm sensor with 128x96 pixels.
Parametrization
The following section describes how the Speos setup has been parametrized. In order to connect Speos to optiSLang for metamodel generation, parametric variables have to be created within the Speos model. These parameters will be read afterwards by optiSLang. Parameters coming from Speos can be published with the following instructions:
- Open the Ansys Speos model [UseCase.scdocx], located in [.\[START]Bad_Weather_Condition_LiDAR\01_Reference]
- Perform a local Compute of the Static.LiDAR simulation.
- Select the Workbench tab.
- Click on Publish Parameters.
- Select Static.LiDAR (under Car_LiDAR node).
- Check "Source – Spectrum/Wavelength".
Parameters generated directly from SpaceClaim are published automatically and will be detected by optiSLang without further action.
Automation script
To automate the XMP depth map post-processing, a Python script is used. This script is run directly from the optiSLang Python virtual environment, but it can also be launched in a standalone manner. The script follows these steps (a minimal sketch is shown after the list):
- Get XMP map available in working directory.
- Open Virtual Photometric Lab instance (equivalent to opening an empty Virtual Photometric Lab manually).
- Open XMP file available in the working directory.
- Create a square-shaped measurement area (0.25x0.25 mm).
- Get the average distance value in this area.
- Export the XMP map as an image.
- Close the Virtual Photometric Lab instance.
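For reference, the sketch below outlines what such a script can look like. The Virtual Photometric Lab interaction is hidden behind hypothetical placeholder functions (open_vpl, open_xmp, measure_average_distance, export_image, close_vpl), since the exact automation calls depend on your Speos installation; only the file handling and the overall flow are meant literally.

```python
import glob
import os

# Hypothetical placeholders for the Virtual Photometric Lab automation calls.
# Replace their bodies with the API exposed by your Speos installation.
def open_vpl(): raise NotImplementedError
def open_xmp(vpl, path): raise NotImplementedError
def measure_average_distance(xmp, width_mm, height_mm): raise NotImplementedError
def export_image(xmp, png_path): raise NotImplementedError
def close_vpl(vpl): raise NotImplementedError


def postprocess(working_dir, area_size_mm=0.25):
    """Reproduce the steps listed above and return the averaged distance."""
    # 1. Get the XMP map available in the working directory.
    xmp_files = glob.glob(os.path.join(working_dir, "*.xmp"))
    if not xmp_files:
        raise FileNotFoundError("No XMP depth map found in " + working_dir)

    # 2./3. Open a Virtual Photometric Lab instance and load the XMP file.
    vpl = open_vpl()
    try:
        xmp = open_xmp(vpl, xmp_files[0])

        # 4./5. Create a square measurement area (0.25 x 0.25 mm) and read
        # the average distance value inside it.
        avg_distance = measure_average_distance(xmp, width_mm=area_size_mm,
                                                height_mm=area_size_mm)

        # 6. Export the XMP map as an image for the optiSLang post-processing.
        export_image(xmp, os.path.join(working_dir, "depth_map.png"))
    finally:
        # 7. Close the Virtual Photometric Lab instance.
        close_vpl(vpl)

    # Expose the scalar where optiSLang can register it as a response.
    with open(os.path.join(working_dir, "average_distance.txt"), "w") as f:
        f.write(str(avg_distance))
    return avg_distance
```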
Above is a typical result obtained from the Speos simulation, with the default measurement area used for this case.
Below are the two extended outputs generated by the automation script: a parametric scene screenshot and an XMP map screenshot. Both images are in PNG format.
Step 2: LiDAR metamodel creation and sensitivity analysis with optiSLang
In this section, we perform the sensitivity analyses. The optiSLang workflow is composed of four main blocks:
- "Text Input" block: Contains the .simplescattering file which is used as the van's optical property.
- "Ansys Speos" block: Contains the Speos scene model parameters described in the section just above.
- "Ansys Speos Core" block: Contains the Speos simulation parameters.
- "Python" block: Contains the Speos simulation result post-processing.
A first sensitivity analysis can be performed on the full parameter ranges; these ranges are displayed in the picture below. This analysis provides a global understanding of the parametric scene. In this example, the full-range analysis gives us an idea of where the edge-case area is located.
Full Range Sensitivity Analysis step by step:
- Open "Bad_weather_LiDAR_Workflow.opf" located in ".\Bad_Weather_Condition_LiDAR".
The "AMOP (Global search)" system shows the necessary solver chain to perform the LiDAR performance analysis. It is used to run the sensitivity analysis to get a design understanding and to identify important parameters.
Double-click on the "AMOP (Global search)" module to see the defined sampling method, parameter ranges and decision criteria.
The system is defined to include pictures and to keep the result files of each simulation. Due to their size, these pictures and result files are not delivered with this example. To generate them, you may re-run the study.
- Optional: To re-run the sensitivity study, right-click on "AMOP (Global search)" and select Start from here. If a message pops up, please click on Reset before continuing.
HINT: The step-by-step tutorial shows how to create the automated workflow (solver chain) and how to set up the sensitivity analysis.
- If you don't want to run the calculation, [END]Bad_Weather_Condition_LiDAR contains post-processing reports.
- To review the results of the sensitivity analysis, go to [.\[END]Bad_Weather_Condition_LiDAR] and open Bad_weather_LiDAR_Workflow.opf.
- Right-click on "AMOP (Global search)" and select Show Postprocessing. The file already contains the results for the metamodel.
The main concern when working with metamodels is the prediction quality: how well the model can predict the outputs based on new input values. optiSLang captures this in the Coefficient of Optimal Prognosis (COP). The best way to review the sensitivity study is to review the COP matrix. The COP matrix shows which outputs are well predicted (see the "Total" column). The Van_distance parameter has been detected as the most impactful and dominates all other defined parameters.
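Conceptually, the COP behaves like a coefficient of determination evaluated on designs the metamodel was not trained on. The toy sketch below illustrates that idea with scikit-learn as a stand-in (this is not optiSLang's implementation); the sampling ranges and the synthetic response are invented for illustration only.

```python
# Toy illustration of a COP-like quality measure: the prediction quality of a
# surrogate evaluated on designs it was not trained on.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Synthetic sampling: e.g. target distance [20, 100] m and a fog factor [0, 1].
X = rng.uniform(low=[20.0, 0.0], high=[100.0, 1.0], size=(100, 2))
# Synthetic detection response dominated by distance, plus noise.
y = 1.0 / (1.0 + np.exp(0.2 * (X[:, 0] - 70.0))) + 0.05 * rng.normal(size=100)

model = GaussianProcessRegressor(normalize_y=True, alpha=1e-3)
y_pred = cross_val_predict(model, X, y, cv=5)   # predictions on held-out folds only
cop_like = r2_score(y, y_pred)                  # variance explained on unseen designs
print(f"COP-like prediction quality: {cop_like:.2f}")
```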
To highlight more parameters and bring more accuracy to the metamodel, we propose a second sensitivity analysis. This time, the Van_distance parameter range is reduced to 65 m to 75 m. This range corresponds to the area where the LiDAR starts to lose detection of the target.
- In "Bad_weather_LiDAR_Workflow.opf", located in ".\Bad_Weather_Condition_LiDAR":
Double-click on the “AMOP (Local search) (200_designs)” module to see the defined sampling method, parameter ranges and decision criteria.
In the parameter section, we see the updated parameters already set.
- Optional: To re-run the sensitivity study, right-click on "AMOP (Local search) (200_designs)" and select "Start from here".
HINT: The step-by-step tutorial shows how to create the automated workflow (solver chain) and how to set up the sensitivity analysis.
- To review the results of the sensitivity analysis, go to [.\[END]Bad_Weather_Condition_LiDAR] and open Bad_weather_LiDAR_Workflow.opf.
- Right-click on "AMOP (Local search) (200_designs)" and select "Show Postprocessing". The file already contains the results for the metamodel.
The new COP matrix now includes the Wavelength parameter, meaning this parameter has become impactful in the metamodel.
The Metamodel of Optimal Prognosis (MOP) approximates the response as a function of all important input parameters. Clicking on one of the "Total" blocks updates the 3D response surface plot. This plot is a representation of the metamodel as a function of the main contributors, with the remaining parameters set to fixed values. By using the sliders, you can see the influence of the other dimensions.
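To make the "main contributors vary, the rest stays fixed" idea concrete, here is a generic sketch (not optiSLang's MOP itself) that slices any surrogate exposing a scikit-learn-style predict method along two chosen parameters while holding the others at the slider values.

```python
import numpy as np

def response_slice(surrogate, x_fixed, i, j, grid_i, grid_j):
    """Evaluate `surrogate` on a 2D grid of parameters i and j, keeping all
    other parameters at the values in `x_fixed` (the equivalent of the
    sliders in the optiSLang response surface plot)."""
    Gi, Gj = np.meshgrid(grid_i, grid_j)
    X = np.tile(np.asarray(x_fixed, dtype=float), (Gi.size, 1))
    X[:, i] = Gi.ravel()
    X[:, j] = Gj.ravel()
    # Surface to display, e.g. with matplotlib's plot_surface.
    Z = surrogate.predict(X).reshape(Gi.shape)
    return Gi, Gj, Z
```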
In the picture above, the left side shows the MOP of the global search covering the full Van_Distance range. On the right side is the local search, with a limited range for Van_Distance. Basically, we can consider it a "zoom" into a local area of the global search MOP.
The global search MOP offers a coefficient of prognosis of 99%, which means the model predictivity is good. On the other hand, the local search MOP gives a coefficient of prognosis of 38%; in this case, the predictivity can be considered poor. The residual plot highlights that the detection value returned by the perception script does not match well with the Van_distance used as input:
Magenta dots represent failing designs and black dots successful ones. Many of these points lie far away from each other and generate a noisy distribution.
This shows that the perception algorithm used to extract the detected distance value from the LiDAR's depth map reaches its limit. As mentioned in the "Automation script" section, the perception algorithm proposed here returns the average value of all pixels inside a delimited area. As fog generates noise, the value on each pixel can fluctuate considerably, which leads to a wrong averaged value.
The downside is that even if a majority of pixels give a correct distance, a few of them give values far from the correct one. The averaged value then falls outside our +/-5% global acceptance threshold, which leads to the target being flagged as not detected even if, visually speaking, the van is still detectable.
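This effect is easy to reproduce numerically. In the illustrative sketch below (all numbers invented), most pixels in the measurement area report the true 70 m distance, a few fog-corrupted pixels report much shorter ranges, and the mean drifts outside a +/-5% acceptance band while a median-based estimate would not:

```python
import numpy as np

true_distance = 70.0                       # m, actual van distance (illustrative)
pixels = np.full(100, true_distance)       # most pixels in the measurement area are correct
pixels[:10] = 10.0                         # a few fog-induced outlier pixels (illustrative)

mean_value = pixels.mean()                 # what the current perception script returns
median_value = np.median(pixels)           # a more robust alternative

def detected(value, reference, tolerance=0.05):
    """Accept the detection if the value is within +/-tolerance of the reference."""
    return abs(value - reference) <= tolerance * reference

print(f"mean   = {mean_value:.1f} m -> detected: {detected(mean_value, true_distance)}")
print(f"median = {median_value:.1f} m -> detected: {detected(median_value, true_distance)}")
```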
Of course, the acceptance threshold can be changed by following these steps:
- Double-click on the "AMOP" workflow box in the optiSLang scenery view.
- Go to the "Criteria" tab, next to "Start designs".
- Change the values in the "Limit" column.
For example, you can put 1.10 for "Detection_contraint_1" and 0.90 for "Detection_constraint_2". It changes the acceptance threshold from +/-5% to +/-10%.
The "Taking the model further" section gives some hints to improve the perception algorithm. Please remember, however, that our purpose here is to show a complete workflow; having the best perception algorithm is not part of this purpose. We highly encourage you to test your own perception algorithm in this workflow.
Important model settings
Description of important objects and settings used in this model
Speos
Speos scene static elements, such as the road, road paint, etc., have been grouped into a Light Box, which is a pre-meshed body in Speos native format. This helps to shorten the initialization time between optiSLang iterations. The Speos LiDAR simulation contains native geometric elements, such as the fog volume, ego car and van, and pre-meshed geometries, such as the road.
The simulation output used in this case is a depth map in XMP file format. Only the Flash LiDAR can natively produce this kind of output. If you modify the content, make sure the LiDAR model type is set to "Static".
optiSLang
Speos node settings:
Double-click on the "Ansys Speos" node to change the settings:
- Parametrization Tab:
  - Define the parameters that should be considered in the variation analysis.
- Execution settings Tab:
  - Define the Speos version to use (minimum version is 2023R2).
  - Choose batch mode or GUI mode (GUI mode might be useful for debugging).
  - Add a Python script which executes before or after the design is updated (e.g., needed for coupling Speos with CAD tools like Catia/Inventor/NX/Creo).
- Speos Simulation Tab:
  - Select the simulations to be exported in the design directory and solved by the Speos Core node.
- Export Tab:
  - Selected formats are written as result files in the design directory (e.g., export an image file of the design and include it in the optiSLang post-processing to increase design understanding).
Speos Core node settings:
- Number of cores used.
- Activate GPU solver (cannot be used for LiDAR simulation).
- Change number of rays/passes.
Updating the model with your parameters
Instructions for updating the model based on your device parameters
Another useful parameter to add to this project is the LiDAR Tx beam distribution. Having a collimated or spread beam has an impact on target detection in a foggy environment. To implement it, two options are available:
- Stay with the Static (Flash) LiDAR model: In this case, an .ies file needs to be managed as an input of Speos. A new text reader node can be added to the optiSLang workflow to link certain values in the .ies file to an optiSLang parameter (see the sketch after this list).
- Switch to a Scan LiDAR: Here, the model can natively manage a Gaussian beam distribution. In this case, the connection to optiSLang is straightforward and follows the same steps as described above in the Parametrization section.
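As a hypothetical illustration of the first option, the pre-processing sketch below rebuilds the candela table of a template .ies file from a divergence parameter that optiSLang would drive; the template file name, the {CANDELA_TABLE} placeholder and the Gaussian profile are assumptions for illustration, not part of the delivered example.

```python
# Hypothetical pre-processing step: regenerate the candela table of a template
# .ies file from a "divergence" parameter driven by optiSLang. The template
# (template.ies) is assumed to contain a {CANDELA_TABLE} placeholder where the
# intensity values normally go; header and angle grids stay in the template.
import numpy as np

def write_ies(divergence_deg, template_path="template.ies", output_path="beam.ies",
              angles_deg=np.arange(0.0, 91.0, 1.0), peak_candela=1000.0):
    # Gaussian angular intensity profile: I(theta) = I0 * exp(-theta^2 / (2*sigma^2))
    sigma = divergence_deg / 2.355                          # FWHM -> standard deviation
    candela = peak_candela * np.exp(-(angles_deg ** 2) / (2.0 * sigma ** 2))

    with open(template_path) as f:
        template = f.read()
    table = " ".join(f"{c:.1f}" for c in candela)
    with open(output_path, "w") as f:
        f.write(template.replace("{CANDELA_TABLE}", table))

# Example: one call per optiSLang design point, e.g. write_ies(divergence_deg=5.0)
```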
The standardized fog models used here can also be replaced. We chose to highlight our material library, as these fog materials have been certified for certain detection distances. You can find, in the data ZIP file, a PDF called "ModellingReport_FOG_VOP" providing additional information about the fog models used in this example. Nothing prevents you from implementing a new model relying on Speos User Material's MIE theory.
As displayed in the image above, with this material definition the particle concentration can now be part of the metamodel, as it is a continuous value optiSLang can work on. As delivered, the fog materials are encrypted and cannot be used as a variable input parameter in the metamodel.
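To give an idea of the physics such a user material would encode, the sketch below links a droplet concentration to an extinction coefficient using the large-particle limit of Mie theory; the droplet radius, the concentration and the Koschmieder visibility estimate are illustrative values, not the certified fog models shipped with this example.

```python
import numpy as np

# Illustrative link between fog droplet concentration and optical extinction.
# For droplets much larger than the wavelength, the Mie extinction efficiency
# tends towards Q_ext ~ 2; a full Mie solver would refine this value.
wavelength = 905e-9        # m, typical LiDAR Tx wavelength
radius = 5e-6              # m, illustrative fog droplet radius
concentration = 1e8        # droplets per m^3, the value optiSLang would vary

x = 2.0 * np.pi * radius / wavelength              # size parameter (>> 1 for fog at 905 nm)
q_ext = 2.0                                        # large-particle limit of the extinction efficiency
beta = concentration * q_ext * np.pi * radius**2   # extinction coefficient, 1/m
visibility = 3.912 / beta                          # Koschmieder relation, m
print(f"size parameter = {x:.0f}, extinction = {beta:.4f} 1/m, visibility ~ {visibility:.0f} m")
```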
Taking the model further
Information and tips for users that want to further customize the model
There are multiple ways to take the model further. Of course, you have total freedom here, and adding new things on top of what we propose is possible. Below is a non-exhaustive list of items that could be brought in to enhance the model:
- Rework the perception script by performing a sorting operation on the measurement area's pixel values instead of an averaging one.
- Convert the Static Flash LiDAR to a Scanning one: this adds the time dimension.
- Add .OPTTimeOfFlight as a simulation output: it allows you to generate a .pcd file and perform detection on a point cloud instead of a depth map. You can also perform signal post-processing operations, such as converting energy values into photon counts (see the sketch after this list), which opens the door to detector (CMOS/SPAD) performance management.
- Add a custom fog model based on MIE theory. As mentioned in the previous section, in this case the fog becomes part of the metamodel, and we no longer need one metamodel per fog model.
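As an illustration of the energy-to-photon-count conversion mentioned in the list above, here is a minimal sketch; the wavelength and energy values are only examples.

```python
H = 6.62607015e-34   # Planck constant, J*s
C = 299792458.0      # speed of light, m/s

def photon_count(energy_j, wavelength_m=905e-9):
    """Convert the optical energy received by a pixel into a photon count,
    the quantity a CMOS/SPAD detector model would actually work with."""
    return energy_j / (H * C / wavelength_m)

# Example: 1 fJ of return energy at 905 nm corresponds to roughly 4.6e3 photons.
print(photon_count(1e-15))
```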
Computing all these design points may take a certain amount of time (count between several hours and a few days, depending on the computing power available). To bypass this limitation when showing virtual LiDAR performance in a given environment, during a meeting for example, one can even think about creating a Digital Twin based on the metamodel design points.
Additional resources
Additional documentation, examples and training material