PhiGent Robotics, a self-driving startup focused on vision-based 3D perception, has closed three rounds of financing within six months of its founding. Why are investors drawn to this 10-month-old startup?
The evolution of self-driving technology continues. On the one hand, leaders like Tesla firmly believe in the cost-effectiveness and commercial viability of cars equipped with computer-vision-empowered cameras. On the other hand, LiDAR and other sensors are becoming more advanced, prompting researchers to deploy them and push the performance boundary of autonomous vehicles.
Regarding this issue, we interviewed Dr. Du Dalong, the co-founder of PhiGent Robotics (in Chinese: 鉴智机器人), a less-than-one-year-old startup that has just finished three rounds of financing. It bagged USD 30 million in its latest Series A funding round, led by Ince Capital with participation from Atypical Ventures, 5Y Capital and GSR Ventures.
When sharing these stories with investors, Du said three "bullets" helped achieve the goal. "Firstly, as pioneers in AI algorithms, software, and chips in China, the founding team of PhiGent Robotics has worked together for eight years and accumulated experience in starting a business and breaking into the market. The core team members have profound technical strength and cross-disciplinary advantages. We also continue to attract engineering experts who have served top car companies, which enables us to quickly and solidly build excellent engineering capabilities on top of our technical advantages and to provide complete solutions tailored to clients' needs."
"Secondly, we achieve technology breakthroughs and innovations quickly and iterate continuously in application scenarios. We have launched an end-to-end, data-driven self-driving solution based on a Software 2.0 architecture and visual 3D understanding. BEVDet, a new self-driving perception paradigm developed independently by us, continues to hold the world's No. 1 ranking on nuScenes (an authoritative autonomous-driving benchmark). This paradigm achieves higher compute utilization with lower data requirements. The leading edge of our core technology differentiates us from other self-driving companies and makes us more likely to achieve a technological leapfrog and mass production," said Du.
He said the third "bullet" is commercialization capability: "Within six months, we have already established deep cooperation with first-tier OEMs and mainstream Tier 1 suppliers, and have won commercial projects. This is due to the mature solutions we have developed, with features such as controllable cost, high adaptability, and modular delivery. It means we have quickly turned technology into product strength, closing the loop of technology, product and business and iterating on it. Our strategy has now entered the stage of high-quality mass production and wider commercial deployment."
Du's vision of democratizing AI hinges on solving the key problems of self-driving. "The critical issue at the heart of developing driverless cars is achieving general artificial intelligence in two steps: understanding the physical 3D world, and finding the path to reach that understanding."
He added, "We believe only AI paradigms based on a Software 2.0 architecture (in which programmers, instead of writing code themselves, entrust the job to AI) can keep tackling new perception tasks. This is because a Software 2.0 architecture lets data-driven AI algorithms better mine the information generated by the cameras." PhiGent Robotics adopts a new paradigm of end-to-end, pre-fusion 3D perception based on multiple cameras, which greatly improves the robustness of perception and scalability across scenes, and has gradually progressed to 4D perception. It can completely and continuously detect objects around the vehicle, realizing interaction between the machine and the physical world.
He continued, "Besides, our full-stack capability in software-hardware co-optimization gives our solutions low computing-power requirements, support for heterogeneous platforms, and modular delivery, so we can meet clients' needs more flexibly."
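The multi-camera, pre-fusion idea Du describes, transforming each camera's observations into one shared bird's-eye-view (BEV) frame before fusing and detecting, can be illustrated with a toy sketch. The following minimal Python example is purely hypothetical: the camera poses, grid size, and helper names are invented for illustration and do not reflect PhiGent's actual pipeline.

```python
import math

def make_extrinsic(yaw_deg, tx, ty):
    """2D rigid transform (rotation about z, then translation)
    from a camera's frame into the ego-vehicle frame."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    return [[c, -s, tx],
            [s,  c, ty]]

def cam_to_ego(pt, ext):
    """Map a (x, y) point from the camera frame into the ego frame."""
    x, y = pt
    return (ext[0][0] * x + ext[0][1] * y + ext[0][2],
            ext[1][0] * x + ext[1][1] * y + ext[1][2])

def splat_to_bev(points_per_cam, extrinsics, grid=8, cell=1.0):
    """Pre-fusion: accumulate every camera's observations into a single
    ego-centric BEV occupancy grid BEFORE any detection step."""
    bev = [[0] * grid for _ in range(grid)]
    half = grid * cell / 2  # the ego vehicle sits at the grid center
    for pts, ext in zip(points_per_cam, extrinsics):
        for pt in pts:
            x, y = cam_to_ego(pt, ext)
            col = int((x + half) // cell)
            row = int((y + half) // cell)
            if 0 <= row < grid and 0 <= col < grid:
                bev[row][col] += 1
    return bev

# Front camera looks forward (yaw 0); rear camera looks backward (yaw 180).
# Each camera sees one obstacle 2 m ahead of itself; the fused grid places
# them on opposite sides of the ego vehicle.
bev = splat_to_bev(
    points_per_cam=[[(2.0, 0.0)], [(2.0, 0.0)]],
    extrinsics=[make_extrinsic(0, 0, 0), make_extrinsic(180, 0, 0)],
)
```

Real BEV systems such as BEVDet lift dense image features with predicted depth distributions rather than known points, but the fusion step follows the same shape: every camera's evidence lands in one shared ego-centric grid before detection runs.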
He added, "Our ultimate vision is to promote the use of general AI robots to enhance the overall welfare and efficiency of society. Self-driving is currently the field with the largest market scale, the most extensive application scenarios and the highest ceiling. This is why we choose to enter the self-driving field."
Du is confident of realizing the 3D vision goal in self-driving.
As for sensors such as LiDAR, Du said LiDAR is a measurement tool that provides scanning results, which means most researchers have equal access to it; the differences between companies lie in how they extract 3D information. Animal vision has evolved over millions of years and is proven to work in the real world. At present, inefficient AI algorithms and limited computing power constrain the experience that vision solutions can deliver. Suppose the information to be found totals 100: we have discovered only 30 so far, and the remaining 70 has yet to be unlocked. The difference between self-driving solutions thus lies in how they use visual information, and Du believes the key to realizing self-driving is to focus on AI rather than on sensors.
Meanwhile, Du said that PhiGent Robotics' solution is vision-based but also compatible with flexible configurations of radar, LiDAR and other sensors. It can therefore accommodate clients' dual needs for cost-effectiveness and reliability, and meet the functional requirements of different scenarios.
The achievements of self-driving research should be shared globally, and PhiGent Robotics expects to expand overseas. "We currently focus on China, where we have seen a flywheel of deploying self-driving programs, and we want to improve our products before promoting them elsewhere. Self-driving is universal, just as a Chinese driver knows how to drive in the US. Road conditions do not differ much between countries; in a similar vein, self-driving solutions are scalable."
Self-driving is an important component in the process of realizing general AI. Du predicts that all new cars will have self-driving capability within five to ten years, and that some of these cars, equipped with "brains", will be able to process visual information in a manner analogous to, or even better than, humans.