In this example, we demonstrate how an OLED stack designed with Lumerical combines with the Human Vision capability of Speos to analyze display performance in a lit environment, at the cell-phone level, as perceived by a human observer.
This example is based on the simulation shown in the webinar “Optimize Your Display with Ansys optiSLang, Lumerical and Speos”, available for free on our website. Please see that webinar for more details on this simulation workflow.
Overview
Understand the simulation workflow and key results
In this example, Lumerical STACK is used for the simulation of the light emission from the OLED stack. The STACK simulation results are also exported in a format that can be imported by Ansys Speos for device visualization and photometric analysis.
Step 1: Lumerical to Speos coupling
In this step, we convert the data from Lumerical STACK to the Speos format. The converted data will be used as a light source in Speos.
Step 2: Display emission setting
Having converted the data from STACK, we now define the display source by a brightness value and a spectral intensity distribution.
Step 3: Display calibration
Now we must calibrate the display for comparison and realistic simulation. We adjust the average luminance to a brightness value typical of cell phones currently on the market. Then we calibrate the white color by changing the ratio between the red, green, and blue intensities, adjusting them to obtain the same chromaticity coordinates as the white illuminant D65.
Step 4: Human Vision result with white screen
For the analysis of the simulation results, we use a unique feature of Speos known as Human Vision, which shows how your product will look through the eyes of an observer.
With Human Vision, we can simulate how the eye perceives colors, textures, and glare effects, and perform accurate visibility studies for displays.
In this step, we show Human Vision results with a white screen at different viewing angles and examine the color variation across those angles.
Step 5: Human Vision result with Image screen
In this step, we analyze the Human Vision result for the display with an image on the screen.
With an image on the display, the color variation is still visible especially at high incidence angles.
With Human Vision, we can also simulate the view of someone with color blindness, which affects about 8% of the male population and impairs the ability to see colors or distinguish between them.
We use the legibility tool in Ansys Speos to determine whether the message on the display can be read.
Run and results
Instructions for running the model and discussion of key results
Step 1: Lumerical to Speos coupling
- Open ProcessRGBTxtFiles.py
- Modify the storage path to “**\Display_Project_2022R1\SPEOS input files”
- Run the Python script
The script file contains the required ANSYS Speos version, Python version, and the required Python library files.
Running the script converts “SPEOS_file_B.txt”, “SPEOS_file_G.txt”, and “SPEOS_file_R.txt” into spectral intensity distribution maps that can be read by Speos. It generates three new files for Speos: “SPEOS_file_B.xmp”, “SPEOS_file_G.xmp”, and “SPEOS_file_R.xmp”.
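The batch conversion performed by the script can be sketched as follows. This is only an illustrative outline: the actual txt-to-xmp conversion is done by the Speos Python libraries, and `convert_to_xmp` here is a hypothetical placeholder for that call.

```python
from pathlib import Path

def convert_to_xmp(txt_path: Path) -> Path:
    """Hypothetical stand-in for the Speos conversion routine."""
    xmp_path = txt_path.with_suffix(".xmp")
    # ... the real script calls the Speos libraries here to write the map ...
    return xmp_path

# "**" stands for your local storage path, as in the instructions above.
storage = Path(r"**\Display_Project_2022R1\SPEOS input files")
for channel in ("B", "G", "R"):
    print(convert_to_xmp(storage / f"SPEOS_file_{channel}.txt"))
```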
Open the XMP files with Virtual Photometric Lab 2022R1 from the Start menu and you will see the following results.
Figure: Speos spectral intensity distributions (cd/nm) for the blue, green, and red sources.
Step 2: Display emission setting
In Ansys Speos, the emission characteristics of the display are determined by two parameters: the light output and the spectral intensity distribution. During initialization, we normalize the output power of each light source to 1 W; different light outputs can then be adjusted in the Virtual Lighting Controller of Ansys Speos. The spectral intensity distribution describes the angular distribution of light in space, and it is set as follows:
- Open A01_Human Vision_Final Data_ASP.scdocx
- Double-click “Screen_Blue_from_Lumerical” in Sources in the Speos Simulation panel, choose “library” for the type option, choose “SPEOS_file_B.xmp” for “Intensity file”, then click “compute”; the distribution pattern appears in the model, as figure 3 shows.
- Double-click “Screen_Green_from_Lumerical” in Sources in the Speos Simulation panel, choose “library” for the type option, choose “SPEOS_file_G.xmp” for “Intensity file”, then click “compute”; the distribution pattern appears in the model, as figure 3 shows.
- Double-click “Screen_Red_from_Lumerical” in Sources in the Speos Simulation panel, choose “library” for the type option, choose “SPEOS_file_R.xmp” for “Intensity file”, then click “compute”; the distribution pattern appears in the model, as figure 3 shows.
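The 1 W normalization mentioned above can be illustrated with a short sketch; the wavelength grid and spectrum are placeholder values, not the Lumerical STACK data.

```python
import numpy as np

# Sketch of the 1 W normalization applied to each light source during
# initialization. The grid and spectrum are illustrative placeholders.
wavelengths = np.linspace(380.0, 780.0, 401)             # nm, uniform grid
spectrum = np.exp(-((wavelengths - 460.0) / 20.0) ** 2)  # arbitrary blue SPD, W/nm

dlam = wavelengths[1] - wavelengths[0]    # grid spacing, nm
total_power = spectrum.sum() * dlam       # integrated power, W
normalized = spectrum / total_power       # now integrates to 1 W
```

Different light outputs are then obtained by simply rescaling this unit-power source in the Virtual Lighting Controller.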
If you open the project, you will find the preset scene along with all the material properties and light sources. We also set five sensors at different points of view to check the color variation and perceive the cell phone with Human Vision, as shown in the following figure.
3D Scene
Now let’s look at how we model the display. Once you have completed the steps above, you will see the following in the 3D window.
Figure: Blue, green, and red intensity distributions in the 3D window.
The method used is to convert the data from STACK, and use the intensity distributions for the red, green, and blue to define an RGB white source for the display.
This effectively creates a reduced order model of the full system: the layer information is no longer present, but the correct color and angular distribution information is preserved.
So, Lumerical to Speos coupling offers some IP protection.
Finally, on top of the RGB white source, we apply a texture to produce an image on the display for a better Human Vision experience, as shown below.
Display Reduced Order Model
Step 3: Display calibration
Now we must calibrate the display for comparison and realistic simulation. Open the normal-viewpoint simulation result and perform the following operations:
- Run Direct.Lumerical.Uniform
- Click Direct.Lumerical.Uniform.Radiance Sensor_Normal.xmp
- Open “Virtual lighting controller”
- Adjust ratio of three light sources
- Open measurement tool
- Adjust the measurement area to cover as much of the display's emitting area as possible
- Get the chromaticity coordinates and average luminance from the measurement tool, as the figure below shows.
Display Calibration
We adjust the average luminance to a brightness value typical of cell phones currently on the market: we set the brightness to 975 cd/m².
Then we calibrate the white color by changing the ratio between the red, green, and blue brightness, adjusting them to obtain the same chromaticity coordinates as the white illuminant D65.
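The white-point calibration amounts to solving a small linear system: find the R, G, and B luminances whose mixture has D65 chromaticity at the target luminance. A minimal sketch, using illustrative (sRGB) primary chromaticities rather than the measured Lumerical data:

```python
import numpy as np

# Illustrative (sRGB) primary chromaticities, not the measured OLED data.
primaries = {                 # (x, y) chromaticity of each source
    "R": (0.640, 0.330),
    "G": (0.300, 0.600),
    "B": (0.150, 0.060),
}
x_t, y_t = 0.3127, 0.3290     # D65 white point
Y_total = 975.0               # target average luminance, cd/m^2

# Each column of M holds the XYZ of one primary per unit luminance Y.
M = np.array([[x / y, 1.0, (1 - x - y) / y] for x, y in primaries.values()]).T
target = np.array([x_t / y_t, 1.0, (1 - x_t - y_t) / y_t]) * Y_total

Y_rgb = np.linalg.solve(M, target)   # required luminance of R, G, B sources
print(dict(zip(primaries, Y_rgb.round(1))))
```

In the workflow itself, these ratios are set by hand in the Virtual Lighting Controller while watching the measurement tool, but the underlying arithmetic is the same.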
Step 4: Human Vision result with white screen
Here we run an inverse simulation. We do not add the display's light source in the inverse simulation, because that was covered in the previous step; we only need to simulate the cell phone under ambient light and combine that result with the previous one. Perform the following operations:
- Open A01_Human Vision_Final Data_ASP.scdocx
- Run Inverse.Lumerical.Ambient
- Click the “Viewers” command in the toolbar and select “Virtual Photometric Calc”
- In the “source file” column, add the two files to combine: the normal-viewpoint result of the direct simulation and the normal-viewpoint result of the inverse simulation computed earlier. Select the “map union” method in the “operation” column, choose a directory for the output in the result column, and click “process”
- Click the “Viewers” command in the toolbar and select the “virtual human vision lab” program to open the file generated in the previous step for human visual evaluation
- Using the same method, process the results for the 25-degree and 50-degree viewing angles
Map Union
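Conceptually, the map union simply adds the two radiance maps pixel by pixel; a minimal sketch with placeholder arrays (not real Speos maps):

```python
import numpy as np

# Placeholder radiance maps standing in for the Speos XMP results.
rng = np.random.default_rng(0)
direct_map = rng.uniform(0.0, 975.0, size=(480, 270))   # display emission, cd/m^2
inverse_map = rng.uniform(0.0, 50.0, size=(480, 270))   # ambient reflections, cd/m^2

# "Map union": combined scene radiance seen by the sensor.
union_map = direct_map + inverse_map
print(union_map.shape)
```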
Now, let’s look at the results with Human Vision as below image shows.
Figure: Human Vision at 0°, 25°, and 50° viewing angles.
For each result, we can see the contribution of the calibrated screen with white color plus the contribution of the ambient lighting reflected on the protective glass of the cell phone.
Because it is daylight, it is photopic vision.
The first column is the point of view at normal incidence, the second column is for 25 degrees incidence, and the third column is for 50 degrees incidence.
We can see that the display appears greenish as the incidence angle increases.
We use the observer sensor to obtain metrics of the color variation, so we can see the change of color on the chromaticity diagram from normal incidence to 80 degrees incidence in 10-degree steps.
Follow the below operations:
- Open A01_Human Vision_Final Data_ASP.scdocx
- Run Direct.Lumerical.Uniform.80deg
- Run Inverse.Lumerical.Ambient.80deg
- Click the “Viewers” command in the toolbar and select “Virtual Reality Lab”
- Click the “Creation” command in the toolbar and choose “Operations with Speos360 Files”, as below image shows
- Add above two results and do the “Union” operation, then click “Create”
Create Union Map of Speos 360 File
The simulation result is as follows:
Color shift over viewing angles in Human Vision
The chromaticity diagram as below shows:
CIE 1976 Chromaticity Diagram
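The color shift read off the CIE 1976 diagram is the distance Δu′v′ between the chromaticity at normal incidence and at each angle. A minimal sketch with illustrative XYZ values:

```python
import math

# CIE 1976 UCS chromaticity coordinates from tristimulus values XYZ.
def uv_prime(X, Y, Z):
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

# Illustrative tristimulus values, not measured results.
u0, v0 = uv_prime(95.0, 100.0, 105.0)   # near-white at normal incidence
u1, v1 = uv_prime(90.0, 100.0, 95.0)    # slightly greenish at a high angle

delta = math.hypot(u1 - u0, v1 - v0)    # color shift Delta u'v'
print(round(delta, 4))
```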
Step 5: Human Vision result with image screen
- Open A01_Human Vision_Final Data_ASP.scdocx
- Run Direct.Lumerical.Image
- Click the “Viewers” command in the toolbar and select “Virtual Photometric Calc”
- In the “source file” column, add the two files to combine: the normal-viewpoint result of the direct simulation and the normal-viewpoint result of the inverse simulation computed earlier. Select the “map union” method in the “operation” column, choose a directory for the output in the result column, and click “process”
- Click the “Viewers” command in the toolbar and select the “Virtual Human Vision lab” program to open the file generated in the previous step for human visual evaluation.
- Using the same method, process the results for the 25-degree and 50-degree viewing angles
According to the combination of R, G and B in the above “Virtual Lighting Controller”, we get the following human vision results:
Figure: Human Vision at different viewing angles (normal position, 25°, 50°).
The operation steps are as follows:
- Open A01_Human Vision_Final Data_ASP.scdocx
- Run Direct.Lumerical.Chart
- Repeat steps 3 to 5 from the procedure above
- Click the “Tools” command and select the “Vision parameters”, then choose “Observer” tab
In the Color deficiency area, you can try the six types of color deficiency below.
Color Deficiency
The first simulation result shows normal vision, the following four show red-green deficiencies, and the last shows a blue-yellow deficiency.
All 3 cones Protanomaly Protanopia
Deuteranomaly Deuteranopia Tritanopia
Results of Different Color Deficiency
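Speos's internal color-deficiency model is not published here, but dichromacy simulation is commonly done in LMS cone space; the sketch below follows the widely circulated Viénot, Brettel, and Mollon (1999) matrices for protanopia.

```python
import numpy as np

# Linear RGB <-> LMS matrices as published by Vienot et al. (1999).
RGB_TO_LMS = np.array([[17.8824,   43.5161,  4.11935],
                       [3.45565,   27.1554,  3.86714],
                       [0.0299566, 0.184309, 1.46709]])
LMS_TO_RGB = np.linalg.inv(RGB_TO_LMS)

# Protanope: the missing L-cone response is reconstructed from M and S
# so that neutral (gray) colors are left unchanged.
PROTANOPIA = np.array([[0.0, 2.02344, -2.52581],
                       [0.0, 1.0,      0.0],
                       [0.0, 0.0,      1.0]])

def simulate_protanopia(rgb):
    """rgb: linear RGB triple in [0, 1]; returns the protanope's percept."""
    lms = RGB_TO_LMS @ np.asarray(rgb, dtype=float)
    return LMS_TO_RGB @ (PROTANOPIA @ lms)

print(simulate_protanopia([1.0, 0.0, 0.0]))  # a protanope's percept of pure red
```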
Human Vision: Legibility
- Open A01_Human Vision_Final Data_ASP.scdocx
- Run Inverse.Lumerical.Ambient.Far
- Perform a map union of the two result files “Inverse.Lumerical.Ambient.Far.Radiance Sensor.Far.xmp” and “Direct.Lumerical.Image.Radiance Sensor.Far.xmp”
- Open the resulting file with the “Virtual Human Vision Lab” program
Now, we use the legibility tool to figure out whether we can read the message on the display.
The technique is to run a close-up simulation, then measure the letter luminance, the surrounding background luminance, and the angular size of the letter in arcminutes.
Legibility Analysis
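The two quantities the legibility tool works from can be computed directly: the luminance contrast of the letter against its background and the letter's angular size in arcminutes. The values below are illustrative, not measured results.

```python
import math

# Illustrative placeholder values, not measured Speos results.
letter_luminance = 975.0      # cd/m^2, white text on the display
background_luminance = 120.0  # cd/m^2, surrounding screen area
letter_height = 0.003         # m (3 mm character)
viewing_distance = 0.35       # m, typical phone reading distance

# Weber-style contrast of the letter against its background.
contrast = (letter_luminance - background_luminance) / background_luminance

# Angular subtense of the letter, converted to arcminutes.
angular_size_arcmin = math.degrees(
    2.0 * math.atan(letter_height / (2.0 * viewing_distance))) * 60.0

print(round(contrast, 2), round(angular_size_arcmin, 1))
```

Doubling the letter size or changing the luminances updates both quantities, which is exactly what the legibility tool does interactively.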
Human Vision computes a graph of the Relative Visual Performance (RVP) over age. RVP is a CIE-standardized metric that essentially gives the perceived contrast as a function of age.
Assuming our target user is 35 years old, we obtain the contrast and RVP values. We can use the contrast threshold for daylight from an ISO standard for automotive interior displays and try to increase the RVP value.
For this, the legibility tool tells us we can lower the background luminance and increase the letter luminance or increase the letter size.
We can double the size of the letters or change the lighting conditions, for example an overcast day with a low amount of natural ambient light. These modifications automatically update the result, so we can see how they improve the contrast.
In this way, you can tell whether someone between 20 and 100 years old can read what is written on the display.
Important Model Settings
Description of important objects and settings used in this model
Texture Normalization Setting
Texture application can have an impact on the simulation results. To control what is taken into account for the simulation, a texture normalization must be selected: open a simulation, right-click it and click Options; in the Optical Properties section, check Texture; from the Texture normalization drop-down list, select None in this case. More info in the Speos User Guide: Texture Normalization.
Geometrical Distance Tolerance (GDT) Management
GDT defines the maximum distance at which two faces are considered tangent. When the distance between two faces is smaller than this maximum, the faces are treated as tangent. This setting is especially important for optical components that are very close to each other. More detailed explanations can be found in Tangent Bodies Management.
XML Template
The XML template in the sensor definition is used to export the Measurements.
Additional resources
Additional documentation, examples, and training material
See also
Relevant Ansys Learning Hub courses on Speos:
Lumerical part of the example:
- Planar OLED Microcavities - Color Shift and Extraction Efficiency
- Planar OLED Optimization - optiSLang Interoperability