Corner cases and the long-tail effect are the main barriers to high-level autonomous driving. This article analyzes what corner cases are and how the industry currently handles them, and includes an exclusive interview with an industry insider.
Auto x unmanned vehicle
In the autonomous driving sector, the Society of Automotive Engineers (SAE) has published guidelines describing the different levels of autonomy in driverless cars. There are six levels in total, from Level 0, the most basic, to Level 5, the most advanced. SAE L0 means no driving automation at all; SAE L1 and L2 are known as advanced driver assistance systems (ADAS) and involve only partial automation. SAE L3, defined as conditional driving automation, still requires the human driver to intervene when requested, while L4 and L5 are high levels of driving automation.
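For readers who prefer code to prose, the taxonomy can be restated in a short sketch. The enum below is purely illustrative, paraphrasing the SAE J3016 levels described above; the helper function and its name are assumptions for the example.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, as summarized in the text above."""
    L0 = 0  # No driving automation
    L1 = 1  # Driver assistance (ADAS)
    L2 = 2  # Partial automation (ADAS)
    L3 = 3  # Conditional automation: human must take over when requested
    L4 = 4  # High automation within a defined operating domain
    L5 = 5  # Full automation in all conditions

def requires_human_fallback(level: SAELevel) -> bool:
    """Levels up to L3 still rely on a human driver as the fallback."""
    return level <= SAELevel.L3

if __name__ == "__main__":
    print(requires_human_fallback(SAELevel.L3))  # True
    print(requires_human_fallback(SAELevel.L4))  # False
```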
To realize SAE L4 and L5 autonomous driving, the long-tail effect is a major barrier that must be overcome. The concept of the long tail was first introduced by Chris Anderson in 2004 to describe and measure the business models of Amazon and Netflix. Today, the global autonomous driving industry faces its own long-tail challenge.
As shown in the figure above, commercialization forms the long tail of this phenomenon and technology forms the head. On the technology side, the cooperative vehicle infrastructure system (CVIS), also known as vehicle-to-everything (V2X), and the highly intelligent vehicle are the two routes to L4 driving automation. CVIS, or V2X, connects vehicles to everything around them, including roads, traffic signals, other vehicles, and cloud computing, to help the vehicle drive itself. The highly intelligent vehicle, by contrast, does not need to interact with infrastructure; instead, it requires higher onboard computing power and more intelligent sensors such as radar, LiDAR, and cameras. When the vehicle drives on the road, it uses its own sensors to detect the scenarios around it. For example, a highly intelligent vehicle can predict whether a pedestrian will cross the road or whether the car in front will change lanes, based on experience and data accumulated through machine learning (ML). However, the cost rises sharply as more sensors are added to reach a higher level of autonomy. Apollo's robotaxi is the cheapest L4 solution in China, but at a cost of around CNY 500,000 (USD 71,656), it is hard to put into mass production.
Whichever of the two routes is taken to reach L4 or L5 autonomous driving, V2X or the highly intelligent vehicle, the key technological obstacle that remains to be solved is the corner case.
Pareto Principle (80/20 Rule) of Corner Cases
The 80/20 rule states that 80% of outcomes come from 20% of causes. In autonomous driving, 80% of situations are common cases and 20% are corner cases. If auto companies cannot solve that 20% of corner cases, L4 and L5 autonomous driving will not be commercialized or mass-produced.
The chart above shows that corner cases span the sensor layer, the content layer, and the temporal layer. Sensor-layer corner cases occur at the hardware and physical level, such as laser errors, pixel errors, black cars "disappearing" from the sensor's view, dirt on the lens, and impulse errors. Content-layer corner cases concern the domain, the object, and the scene, such as an unusual road shape, a dust cloud, or a sweeper cleaning the sidewalk. The temporal layer, finally, covers corner cases that only emerge across a sequence of scenarios over time.
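To make the three layers concrete, here is a minimal, hypothetical sketch of how corner cases could be catalogued by layer. The class names and example records are illustrative only and are not taken from any vendor's tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    SENSOR = "sensor"      # hardware/physical level: laser error, dirt on the lens
    CONTENT = "content"    # domain, object, and scene: dust cloud, sweeper on sidewalk
    TEMPORAL = "temporal"  # behavior that only appears across a sequence of scenes

@dataclass
class CornerCase:
    name: str
    layer: Layer
    description: str

# Illustrative catalogue entries echoing the examples in the text.
CATALOG = [
    CornerCase("black_car_disappearing", Layer.SENSOR,
               "Low-reflectivity vehicle returns too little signal to the sensor"),
    CornerCase("dust_cloud", Layer.CONTENT,
               "Airborne dust misread as a solid obstacle"),
    CornerCase("pedestrian_hesitation", Layer.TEMPORAL,
               "Pedestrian repeatedly steps toward and away from the curb"),
]

if __name__ == "__main__":
    for case in CATALOG:
        print(f"[{case.layer.value}] {case.name}: {case.description}")
```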
Some Ethical Issues Involved in Corner Cases
The self-driving vehicle is going through a momentous technological transformation. However, ethical issues also emerge in this process when dealing with corner cases. For example, imagine an L4 autonomous vehicle driving behind a large truck; the truck suddenly stops, and the autonomous car, suffering a brake problem, cannot stop in time. It then has to decide whether to hit a person on the left side of the road or a deer on the right side. The AI system would probably choose to hit the deer to avoid crashing into the truck. But if it were not a deer but two pedestrians, one on each side of the road, how would the AI system decide? Moreover, no regulations governing high-level autonomous driving vehicles have been released so far. As the industry develops, such regulations will need to be introduced and improved step by step.
Kunyi Electronic (Chinese: 昆易电子), established in 2011, is dedicated to researching and developing embedded hardware and software testing, autonomous driving closed-loop datasets, and virtual simulation test equipment for companies in the automotive and rail transport industries.
Zhigang Fang, product manager at Kunyi Electronic, shared his views on how to solve corner cases in an interview with EqualOcean. According to Fang, there are two sensor approaches to corner cases: installing only cameras, as Tesla does, or multi-sensor fusion (MSF). In China, most OEMs and autonomous driving solution providers believe MSF represents the future. A camera passively receives light, so at night or in very dark scenes the data quality will not improve even if the camera's sensitivity is increased. Moreover, the fact that camera-based systems can make predictions with algorithms does not eliminate the risks brought by low-quality data. MSF is therefore a natural complementary solution.
How to Collect Data
Robotaxi companies such as Pony.ai, WeRide, and Baidu Apollo use virtual simulation testing software and operate their own fleets to collect data. By the end of 2021, the three dominant players in the Robotaxi sector were Baidu Apollo, with a fleet of 400+ vehicles and 16 million test kilometers, Pony.ai, with 200+ vehicles and 8 million kilometers, and WeRide, with 300+ vehicles and 7 million kilometers.
Besides collecting data with their fleets, these companies also use virtual simulation software to reproduce corner cases that rarely happen in the real world. For example, it is highly unlikely that a wall would stand in the middle of a road blocking vehicles, yet it is not impossible. A corner case like that can be tested in simulation, lowering the cost and risk of fleet road tests.
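The "wall across the lane" example can be written down as a simulation scenario. The sketch below is a rough illustration under assumed names (Scenario, Obstacle, run_scenario) and a crude braking-distance check; it is not the API of any real simulator.

```python
from dataclasses import dataclass, field

@dataclass
class Obstacle:
    kind: str
    position_m: tuple  # (x, y) in the ego vehicle's lane frame, metres ahead
    size_m: tuple      # (width, height) in metres

@dataclass
class Scenario:
    name: str
    ego_speed_kph: float
    obstacles: list = field(default_factory=list)

def run_scenario(scenario: Scenario) -> bool:
    """Placeholder check: a real simulator would step vehicle physics and the
    driving stack, then report whether the ego vehicle avoided the obstacle.
    Here we only test whether braking distance (roughly v^2 / (2 * 0.7g))
    is shorter than the distance to the nearest obstacle."""
    braking_distance = (scenario.ego_speed_kph / 3.6) ** 2 / (2 * 0.7 * 9.81)
    nearest = min(o.position_m[0] for o in scenario.obstacles)
    return braking_distance < nearest

wall_case = Scenario(
    name="wall_across_lane",
    ego_speed_kph=60,
    obstacles=[Obstacle(kind="wall", position_m=(40.0, 0.0), size_m=(3.5, 2.0))],
)
print(run_scenario(wall_case))  # True: under this crude model the car can stop in time
```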
Simulation software is becoming increasingly important in finding corner case solutions: the more efficiently corner-case data can be generated and tested in simulation, the more valuable that data is. Simulation will not substitute for fleet testing, though. Corner cases are unpredictable, and engineers cannot enumerate and cover every scenario without road tests in the real world. The ultimate goal for Robotaxi companies is the mass production of L4 vehicles, and road testing remains a compulsory requirement for vehicle manufacturing. Real-world road tests also help promote a company's brand.
How to Make MSF Work as a Whole with the Collected Data
A driverless car's decision-making is built on road-test data. The car autonomously finds the past corner case most similar to the current situation, and with well-coordinated sensor fusion algorithms fed by sensory data, the car's "perception" can be greatly improved. These algorithms combine data from multiple sensors, each with its own pros and cons, to determine the most accurate information about surrounding objects. For example, if LiDAR contributed the most accurate data the last time a similar corner case occurred, this time the decision may weight the fused result with 80% of the data coming from LiDAR, 10% from radar, and 10% from the camera.
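The weighting idea can be illustrated in a few lines of code. The sketch below assumes each sensor produces an estimate of the same quantity (distance to the object ahead) and fuses them with the 80/10/10 weights mentioned above; the function name, the numbers, and the structure are illustrative assumptions, not an actual fusion stack.

```python
def fuse_estimates(estimates: dict, weights: dict) -> float:
    """Weighted average of per-sensor estimates of the same quantity
    (here, distance to the object ahead, in metres)."""
    total_weight = sum(weights[s] for s in estimates)
    return sum(estimates[s] * weights[s] for s in estimates) / total_weight

# Per-sensor distance estimates for the obstacle ahead (metres).
estimates = {"lidar": 24.8, "radar": 25.6, "camera": 23.9}

# Weights favouring the sensor that performed best in similar past cases.
weights = {"lidar": 0.8, "radar": 0.1, "camera": 0.1}

print(f"Fused distance: {fuse_estimates(estimates, weights):.1f} m")  # 24.8 m
```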
“To verify the data collected, Kunyi Electronic developed a verification tool to analyze data collected by both the fleet and the simulation software,” said Fang. “We cannot save all the data in one file; it would be too risky. So we cut the data into pieces, five minutes per piece. Then we use the verification tool to analyze these pieces one by one, find the moments with the largest steering angle and the highest speed, and project the LiDAR point cloud onto the camera image to verify precision. This is how we test whether the data is qualified or not.”
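Fang's description suggests a three-step pipeline: segment the recording, locate the extreme moments, and project LiDAR points into the camera image. The sketch below is a hypothetical reconstruction of that workflow; the field names, the pinhole projection, and the data layout are assumptions for illustration, not Kunyi Electronic's actual tool.

```python
import numpy as np

SEGMENT_S = 5 * 60  # five minutes per piece, as described in the interview

def split_segments(timestamps: np.ndarray) -> list:
    """Return index arrays covering consecutive 5-minute windows."""
    start, slices = timestamps[0], []
    while start < timestamps[-1]:
        mask = (timestamps >= start) & (timestamps < start + SEGMENT_S)
        slices.append(np.where(mask)[0])
        start += SEGMENT_S
    return slices

def key_moments(steering_deg: np.ndarray, speed_mps: np.ndarray, seg: np.ndarray):
    """Indices of the largest steering angle and the fastest speed in one segment."""
    return seg[np.argmax(np.abs(steering_deg[seg]))], seg[np.argmax(speed_mps[seg])]

def project_lidar_to_image(points_xyz: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project LiDAR points (already in the camera frame) with a pinhole model,
    returning pixel coordinates (u, v) for an overlay check against the image."""
    in_front = points_xyz[points_xyz[:, 2] > 0]   # keep points in front of the camera
    uvw = (K @ in_front.T).T
    return uvw[:, :2] / uvw[:, 2:3]
```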
Need to Upgrade
Currently, the way Kunyi Electronic verifies data is representative of the mainstream method in China's autonomous driving market. It is accurate, but it takes a lot of time: the analysis cannot be finished in seconds and instead runs at roughly real time, taking as long as the original recording. The efficiency of the process still needs to be improved.
Conclusion
All in all, China's autonomous driving sector is in a state of homogeneous competition. As the cost of sensors decreases, L4 vehicles will certainly become cheaper. But ultimately, the auto companies able to conquer the last 20%, the corner cases, will be the winners.