Recently, autonomous driving developer QCraft released the third generation of hardware based on its Driven-by-QCraft solution. According to the company, the new generation adopts Nvidia’s Drive Orin system-on-a-chip (SoC), which it hopes will accelerate the development of its L4 autonomous driving solution.
QCraft currently operates a fleet of around 100 autonomous vehicles, all powered by the same hardware solution. The company owns the full tech stack for its onboard software, including perception, mapping and localization, route planning, decision making and control.
To perceive traffic participants more reliably, QCraft notes that it has adopted a multisensor fusion approach, constructing a sensor system that achieves 360° perception without blind spots. The multisensor fusion suite uses a modular design comprising two long-range measurement lidars (main lidars), three short-range blind-spot-filling lidars (blind-spot-area lidars), four millimeter-wave radars, nine cameras and one IMU.
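The suite described above can be sketched as a simple configuration structure. This is purely illustrative: the class and field names are assumptions for the sake of the example, not QCraft's actual schema.

```python
from dataclasses import dataclass

# Hypothetical representation of the modular sensor suite described in the
# article; names and grouping are illustrative assumptions, not QCraft's code.
@dataclass
class SensorSuite:
    main_lidars: int = 2        # long-range measurement lidars
    blind_spot_lidars: int = 3  # short-range blind-spot-filling lidars
    radars: int = 4             # millimeter-wave radars
    cameras: int = 9
    imus: int = 1               # one IMU set

    def total_sensors(self) -> int:
        return (self.main_lidars + self.blind_spot_lidars
                + self.radars + self.cameras + self.imus)

suite = SensorSuite()
print(suite.total_sensors())  # 19 sensors in the modular suite
```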
The suite also features left-right mutual redundancy. With three groups of sensors, even if one or two of them fail, the autonomous driving system can keep the perception module operating normally and bring the vehicle to a safe stop.
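The redundancy behavior described above can be sketched as simple decision logic. The function names and health model here are assumptions for illustration, not QCraft's implementation.

```python
# Illustrative sketch of the sensor-group redundancy described above; the
# health model and mode names are hypothetical, not QCraft's actual logic.
def perception_available(group_health: list[bool]) -> bool:
    """Perception stays up as long as at least one sensor group is healthy."""
    return any(group_health)

def plan_action(group_health: list[bool]) -> str:
    if all(group_health):
        return "normal_driving"
    if perception_available(group_health):
        # Degraded mode: keep perceiving and bring the vehicle to a safe stop.
        return "safe_stop"
    return "emergency_brake"

# Three sensor groups; one fails -> the system still perceives and stops safely.
print(plan_action([True, False, True]))  # -> safe_stop
```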
The lidars of each sensor group always rotate simultaneously in the same direction, keeping them tightly synchronized. As a result, misalignment and ghosting in the point cloud are avoided when dynamic objects are nearby, and all the point cloud data can be collected and processed at the same time, maximizing the use of the available information.
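Because the lidars sweep in lockstep, points captured at the same instant can be fused directly. A minimal sketch of such time-synchronized merging, with illustrative data structures that are assumptions rather than QCraft's actual formats:

```python
from collections import defaultdict

# Hedged sketch of time-synchronized point-cloud merging: sweeps sharing a
# timestamp are concatenated into one cloud. Formats here are assumptions.
def merge_synchronized_sweeps(sweeps):
    """Group (timestamp, lidar_id, points) sweeps by timestamp and merge
    the point lists captured at the same instant."""
    merged = defaultdict(list)
    for timestamp, lidar_id, points in sweeps:
        merged[timestamp].extend(points)
    return dict(merged)

sweeps = [
    (100, "main_left",  [(1.0, 2.0, 0.5)]),
    (100, "main_right", [(1.1, -2.0, 0.5)]),
    (200, "main_left",  [(1.2, 2.1, 0.5)]),
]
clouds = merge_synchronized_sweeps(sweeps)
print(len(clouds[100]))  # both lidars contribute to the t=100 cloud
```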
Meanwhile, the system’s cameras adapt automatically to changing environmental conditions. Using advanced software algorithms, the system can compensate for overexposure or underexposure under different lighting conditions and can mitigate the smearing caused by motion blur while driving. For example, the camera specifically designed to identify traffic lights can accurately recognize their shape and color from 150m away at night.
Using seven surround-view 5MP cameras, QCraft has also expanded the vertical perception range by rotating the cameras 90°, reducing camera blind spots by more than 90%. The cameras can distinguish small objects at close range, such as traffic cones or children. Mounting the cameras this way also keeps their line-by-line exposure direction consistent with the scanning direction of the lidars, improving the early-fusion performance of cameras and lidars.
Finally, the computing platform of the QCraft solution includes a central computing unit, a backup computing unit and an onboard computing unit. Under normal circumstances, the central computing unit runs the autonomous driving software. If it fails for any reason, the backup computing unit immediately takes over vehicle control and determines the vehicle’s movement. This redundant design enables the vehicle’s protection mechanisms to pull over to the side of the road or brake in an emergency.
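The primary/backup failover described above can be sketched as a simple health-check handover. The class and function names are hypothetical, introduced only for this example.

```python
# Minimal sketch of the central/backup compute failover described above;
# the ComputeUnit class and heartbeat model are illustrative assumptions.
class ComputeUnit:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def heartbeat(self) -> bool:
        return self.healthy

def active_controller(central: ComputeUnit, backup: ComputeUnit) -> str:
    """The central unit drives under normal circumstances; on failure the
    backup takes over immediately so the vehicle can pull over or brake."""
    if central.heartbeat():
        return central.name
    if backup.heartbeat():
        return backup.name
    return "minimal_risk_braking"  # last resort if both units are down

central, backup = ComputeUnit("central"), ComputeUnit("backup")
print(active_controller(central, backup))  # central drives normally
central.healthy = False
print(active_controller(central, backup))  # backup takes over immediately
```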