The Intersection of LiDAR, Data and Coronavirus
April 22, 2020—Based in Shenzhen, China, RoboSense is one of many companies producing LiDAR systems that could serve as the eyes of autonomous vehicles. RoboSense says it is helping to bring down the cost of these expensive systems, one hurdle to making the technology viable for consumer vehicles.
One way it's doing this is by combining the "vision" and "computing" parts of the system into a single package—sort of like having a webcam integrated into a computer or mobile device. RoboSense was an honoree for 2019 and 2020 CES Innovation Awards, and it also won a 2020 Edison Award for its LiDAR system. The organization behind the Edison Awards noted the product's "streamlined design, low cost, and high stability and manufacturability," all of which will be crucial to wider adoption of autonomy.
In this ADAPT Q&A, RoboSense Overseas PR & Event Operation specialist Yuki Chen talked about the company's LiDAR applications for vehicles, including during this time of pandemic.
The company says that the RS-LiDAR-M1 sensor has data analysis capabilities in addition to information gathering. What sorts of analyses can it perform in real-world driving scenarios?
For the environment perception of autonomous driving, the sensor hardware only collects data, so AI perception algorithms are needed for data analysis. The RoboSense RS-LiDAR-M1 is the world's first and smallest MEMS Smart LiDAR Sensor to incorporate sensor hardware, AI perception algorithms, and IC chipsets. It transforms the traditional, overpriced LiDAR system from a pure information collector into a full data analysis and comprehension system.
In the LiDAR perception system for real-world driving scenarios, RS-LiDAR-Algorithms analyze point cloud data in real time, outputting obstacle detection, dynamic object tracking, obstacle classification and recognition, accessible road-area detection, lane-marking detection, and more to the automated driving system. This speeds up the analysis and processing of environmental data in real time for the self-driving car's decision making.
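To make the idea of "semantic-level" output concrete, here is a minimal sketch of the kind of per-frame data structures a LiDAR perception stack like this might emit. The class and field names are illustrative assumptions, not RoboSense's actual API:

```python
from dataclasses import dataclass
from enum import Enum

class ObstacleClass(Enum):
    """Coarse semantic classes a perception stack typically distinguishes."""
    VEHICLE = "vehicle"
    PEDESTRIAN = "pedestrian"
    CYCLIST = "cyclist"
    UNKNOWN = "unknown"

@dataclass
class TrackedObstacle:
    """One detected and tracked object from a single LiDAR sweep."""
    track_id: int                             # stable ID across frames (dynamic tracking)
    cls: ObstacleClass                        # classification/recognition result
    position_m: tuple                         # (x, y, z) in the vehicle frame, metres
    velocity_mps: tuple                       # estimated ground-plane (vx, vy)
    bbox_lwh_m: tuple                         # bounding-box length, width, height

@dataclass
class PerceptionFrame:
    """Semantic-level output for one sweep: obstacles, lanes, drivable area."""
    timestamp_s: float
    obstacles: list
    lane_markings: list                       # polylines of detected lane markings
    drivable_area: list                       # polygon of accessible road area

# Example: one frame containing a single tracked vehicle 12.5 m ahead.
frame = PerceptionFrame(
    timestamp_s=1587500000.0,
    obstacles=[TrackedObstacle(7, ObstacleClass.VEHICLE,
                               (12.5, -1.2, 0.0), (8.3, 0.1), (4.5, 1.9, 1.6))],
    lane_markings=[[(0.0, 1.75), (30.0, 1.75)]],
    drivable_area=[(0.0, -3.5), (30.0, -3.5), (30.0, 3.5), (0.0, 3.5)],
)
print(frame.obstacles[0].cls.value)  # -> vehicle
```

The point of such output is that the downstream planner receives objects with identity, class, and motion rather than a raw point cloud.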
What are other ways that LiDAR can change the way vehicles operate on the road?
First, as a key redundancy in environment perception, LiDAR can be integrated into the sensing system of self-driving passenger cars to ensure passenger safety. LiDAR is also changing the commercial models of mobility and transportation, including MaaS (Mobility as a Service) offerings like RoboTaxi and RoboBus and TaaS (Transportation as a Service) offerings like RoboTruck and automated logistics ports. RoboSense's Smart LiDAR Sensor meets every automotive-grade mass-production requirement for these industries, including MaaS and TaaS: intelligence, low cost, stability, a simplified structure and small size, vehicle-body design friendliness, and algorithm-processed semantic-level perception output.
Moreover, LiDAR is also essential to V2R solutions, the data interaction between the "vehicle end" and the "roadside," with sensors installed along smart-city roads so that each base station covers a monitoring range. For smart-city transportation development, RoboSense's V2R solution, the RS-LiDAR-V2R, is consistently reliable, meeting automotive requirements with three core elements: LiDAR, an AI perception algorithm, and an SoC. From a bird's-eye perspective, it outputs the real-time traffic conditions at a base station, providing real-time perception of the classification, movement speed, and direction of movement of traffic participants, which the base station then distributes to surrounding vehicles over the 5G network. The RS-LiDAR-V2R outputs rich and accurate environmental information regardless of severe weather, helping to ensure the safety of automated driving.
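As a rough illustration of what a roadside station might broadcast to nearby vehicles, here is a hypothetical snapshot message with the fields described above (classification, speed, heading). The message schema and names are assumptions for illustration, not the actual RS-LiDAR-V2R protocol:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class Participant:
    """One road user observed by the roadside LiDAR."""
    kind: str            # classification, e.g. "vehicle", "pedestrian", "cyclist"
    speed_mps: float     # movement speed
    heading_deg: float   # direction of movement, clockwise from north
    lat: float           # position in the station's bird's-eye map
    lon: float

@dataclass
class V2RBroadcast:
    """Per-station traffic snapshot a base station could push over 5G."""
    station_id: str
    timestamp_s: float
    participants: list = field(default_factory=list)

    def to_json(self) -> str:
        # asdict() recursively converts nested dataclasses for serialization.
        return json.dumps(asdict(self))

# Example: a station reports one pedestrian crossing near the intersection.
msg = V2RBroadcast("station-042", 1587500000.0,
                   [Participant("pedestrian", 1.4, 90.0, 22.5431, 114.0579)])
payload = msg.to_json()
print(json.loads(payload)["participants"][0]["kind"])  # -> pedestrian
```

A vehicle receiving such a message can react to road users that its own sensors cannot yet see, which is the core value of the vehicle–roadside data exchange.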
In 2019, Shenzhen Urban Transportation Planning and Design Research Center took the lead in applying the RS-LiDAR-V2R for use in city roads. The RS-LiDAR-V2R has also been used by China’s largest car-hailing platform for DiDi’s Cooperative Vehicle Infrastructure System.
We've seen some articles from RoboSense about the technology helping to mitigate the spread of COVID-19. Can you expand on that idea?
During the coronavirus outbreak, there is a shortage of frontline staff and a high risk of cross-infection from close interactions between people, while unmanned vehicles and robots can replace human beings on the front lines. To reduce that risk while the virus spreads in Asia, RoboSense has cooperated with nearly 20 partners, including Neolix, Gaussian, Alibaba's Cainiao Robotics, Unity Drive Innovation, Zhen Robotics, and others, on unmanned vehicles and robots that deliver goods and disinfect and clean streets 24 hours non-stop. Below are some examples of our LiDAR's use cases:
1. More than 20 hospitals, including Beijing Union Medical College Hospital, Shanghai Children's Hospital, and Shanghai Public Health Service Center, along with many other public service sites, are using Gaussian autonomous robots to clean and disinfect. The robots have also been used in Singapore and were included in Singapore's national "productivity improvement subsidy" project.
2. To assist medical personnel with epidemic treatment and prevention, Candela deployed millions of medical disinfection and distribution robots to the emergency field hospitals built in response to the coronavirus pandemic (Wuhan Leishenshan Hospital and Wuhan Huoshenshan Hospital) and to other hospitals.
3. Gosuncn's patrol robot has an epidemic prevention and control mode. Through infrared body-temperature measurement and screening, intelligent mask-wearing identification, remote voice intercom, remote command dispatching, historical information backtracking, and other functions, it reduces the workload of frontline security personnel, the risk of cross-infection, and the burden of information recording. It has been used in many streets, airports, and train stations across China.
RoboSense LiDAR provides these robots with perception ability that outperforms human eyes. To support these urgent life-saving projects, we worked 24 hours non-stop providing frontline technical support to keep operations running, and we prepared alternative plans in advance for our global partners to avoid potential supply-chain issues.
In the future, as demand for autonomous robots continues to rise, we aim to use our proprietary embedded AI perception algorithm, which generates real-time semantic-level structural environmental information, to help autonomous robots make faster and more precise decisions.