Sensor Fusion Driving ADAS Advances

Sensor fusion, the merging of data from multiple sensors or sources to produce more accurate and reliable information, improves on the earlier reliance on individual sensor data. As systems such as autonomous vehicles advance, a single sensor cannot match the value of sensor fusion, especially when fast, critical decisions or actions are required.

Sensor fusion is driving innovations in smart devices across multiple applications and is reducing cost, complexity, and the number of components used—all with increased accuracy.

Prime Example – ADAS

One of the best examples of sensor fusion is its role in enabling advanced driver assistance systems (ADAS) and autonomous vehicles. Integrating data from radar, LiDAR, cameras, and other sensors allows vehicles to make decisions that improve safety and efficiency.

Strategically mounted sensors on the vehicle each have a role: cameras provide visual imagery, LiDAR measures distance, and radar detects objects and their speed. The raw data gathered is then combined and processed. Machine learning functions detect free driving space, identify obstructions, and make predictions. Data points are segmented, and similar points are clustered into categories such as other vehicles, traffic signs, and pedestrians. The system then decides which objects are relevant, monitors them continuously, and brings its planning capabilities into play. When the control module takes over, it sends precise commands to the vehicle's systems. The process emulates human sensory and cognitive functions, enhanced by sensor fusion technologies.
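That sense-fuse-filter-plan-control loop can be summarized in code. The sketch below is a heavily simplified, hypothetical illustration in Python; the class and function names, thresholds, and placeholder fusion logic are assumptions for illustration, not a real ADAS stack.

```python
# Minimal sketch of the perception-to-control flow described above.
# All class and function names here are illustrative, not a real ADAS API.
from dataclasses import dataclass, field

@dataclass
class Detection:
    position: tuple            # (x, y) in vehicle coordinates, metres
    velocity: tuple            # (vx, vy) in m/s, mainly from radar
    category: str = "unknown"  # e.g. "vehicle", "pedestrian", "sign"

@dataclass
class FusedFrame:
    detections: list = field(default_factory=list)

def fuse(camera_frame, lidar_points, radar_tracks) -> FusedFrame:
    """Combine raw sensor inputs into one list of detections (placeholder logic)."""
    frame = FusedFrame()
    for track in radar_tracks:                      # radar supplies range and speed
        frame.detections.append(
            Detection(position=track["pos"], velocity=track["vel"]))
    # In a real system, LiDAR clustering and camera classification would
    # refine position, shape, and category here.
    return frame

def select_relevant(frame: FusedFrame, max_range_m: float = 80.0) -> list:
    """Keep only objects close enough to matter for planning."""
    return [d for d in frame.detections
            if (d.position[0] ** 2 + d.position[1] ** 2) ** 0.5 <= max_range_m]

def plan_and_control(relevant: list) -> dict:
    """Turn the tracked objects into a simple control command."""
    nearest = min((d.position[0] for d in relevant), default=float("inf"))
    return {"brake": nearest < 10.0, "target_speed_mps": 0.0 if nearest < 10.0 else 13.9}

# One cycle of the loop: sense -> fuse -> filter -> plan -> actuate.
command = plan_and_control(select_relevant(fuse(camera_frame=None,
                                                lidar_points=[],
                                                radar_tracks=[{"pos": (8.0, 0.5),
                                                               "vel": (-2.0, 0.0)}])))
print(command)   # {'brake': True, 'target_speed_mps': 0.0}
```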

Using neural networks in sensor fusion for autonomous vehicles is a significant development. Separate neural networks process each sensor's signal, and their individual outputs are combined in a higher-level neural network. Neural networks are becoming smaller and more specialized, yet they remain powerful enough to process complex sensor data.
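One plausible reading of this arrangement is sketched below in PyTorch: a small encoder per sensor feeds a higher-level fusion head. The input sizes, layer widths, and class list are illustrative assumptions, not a published architecture.

```python
# Hedged sketch of a per-sensor-network plus fusion-head design, using PyTorch.
# Input sizes and layer widths are illustrative assumptions, not a production design.
import torch
import torch.nn as nn

class SensorEncoder(nn.Module):
    """Small per-sensor network that maps raw features to a compact embedding."""
    def __init__(self, in_dim: int, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class FusionHead(nn.Module):
    """Higher-level network that combines the per-sensor embeddings."""
    def __init__(self, embed_dim: int = 32, n_sensors: int = 3, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(embed_dim * n_sensors, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))
    def forward(self, embeddings):
        return self.net(torch.cat(embeddings, dim=-1))

camera_enc, lidar_enc, radar_enc = SensorEncoder(128), SensorEncoder(64), SensorEncoder(16)
head = FusionHead()

# One batch of pre-processed sensor features (random placeholders).
camera_feat, lidar_feat, radar_feat = torch.randn(8, 128), torch.randn(8, 64), torch.randn(8, 16)
logits = head([camera_enc(camera_feat), lidar_enc(lidar_feat), radar_enc(radar_feat)])
print(logits.shape)  # torch.Size([8, 4]): e.g. vehicle / pedestrian / sign / free space
```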

Occupancy grid mapping is used in the navigation and localization of autonomous vehicles, especially in dynamic and complex environments. The process creates a grid-based representation of the environment in which each cell indicates whether a particular area is occupied, free, or unknown. Sensor fusion is pivotal in achieving this.
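A common way to fuse measurements from multiple sensors into such a grid is the standard log-odds update, sketched minimally below; the cell size, grid extent, and sensor-model probabilities are assumed values for illustration.

```python
# A minimal occupancy-grid sketch using the standard log-odds update; cell size,
# sensor-model probabilities, and grid extent are assumptions for illustration.
import numpy as np

CELL_M = 0.5                 # each cell covers 0.5 m x 0.5 m
GRID = np.zeros((200, 200))  # log-odds; 0.0 means "unknown" (p = 0.5)

L_OCC = np.log(0.7 / 0.3)    # log-odds increment when a sensor reports "occupied"
L_FREE = np.log(0.3 / 0.7)   # log-odds increment when a sensor reports "free"

def update_cell(ix: int, iy: int, occupied: bool) -> None:
    """Fuse one measurement into a cell by adding its log-odds evidence."""
    GRID[ix, iy] += L_OCC if occupied else L_FREE

def cell_probability(ix: int, iy: int) -> float:
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(GRID[ix, iy]))

# Example: LiDAR and radar both report the same cell as occupied, so the
# evidence accumulates and the probability rises above either reading alone.
update_cell(100, 120, occupied=True)   # LiDAR return
update_cell(100, 120, occupied=True)   # radar return
print(round(cell_probability(100, 120), 3))   # ~0.845
```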

Moving object detection and tracking is crucial. Autonomous vehicles usually employ a comprehensive sensor suite in which each sensor contributes unique and valuable data. Moving object detection and tracking has typically involved fusing sensor data and integrating it with insights from a Simultaneous Localization and Mapping (SLAM) system. However, improved strategies now perform initial detection with radar and LiDAR data and then funnel regions of interest from LiDAR point clouds into camera-based classifiers. In this way, the approach plays to each sensor's strengths.
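The sketch below illustrates that hand-off under simplifying assumptions: a hypothetical camera projection matrix maps a LiDAR cluster into the image, the resulting region of interest is padded, and a stand-in classifier (in practice, a CNN) labels the crop.

```python
# Sketch of the strategy described above: radar/LiDAR propose regions of interest,
# which are projected into the image and handed to a camera-based classifier.
# The projection matrix and classifier are stand-ins, not a calibrated pipeline.
import numpy as np

# Assumed 3x4 camera projection matrix (intrinsics x extrinsics) for illustration.
P = np.array([[700.0, 0.0, 640.0, 0.0],
              [0.0, 700.0, 360.0, 0.0],
              [0.0,   0.0,   1.0, 0.0]])

def project_to_image(point_xyz: np.ndarray) -> tuple:
    """Project a 3D LiDAR point (camera frame, metres) to pixel coordinates."""
    p = P @ np.append(point_xyz, 1.0)
    return (p[0] / p[2], p[1] / p[2])

def roi_from_cluster(cluster: np.ndarray, pad_px: int = 10) -> tuple:
    """Bound a LiDAR cluster in the image to form a region of interest."""
    pixels = np.array([project_to_image(pt) for pt in cluster])
    (u0, v0), (u1, v1) = pixels.min(axis=0), pixels.max(axis=0)
    return (u0 - pad_px, v0 - pad_px, u1 + pad_px, v1 + pad_px)

def classify_roi(image, roi) -> str:
    """Placeholder for the camera-based classifier (e.g. a CNN) applied to the crop."""
    return "vehicle"

# A small LiDAR cluster roughly 20 m ahead, slightly left of centre.
cluster = np.array([[-1.0, 0.0, 20.0], [-0.5, 0.2, 20.5], [-0.8, -0.1, 19.8]])
roi = roi_from_cluster(cluster)
print(classify_roi(image=None, roi=roi), [round(v, 1) for v in roi])
```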

Sensor fusion spans a range of levels. Low-level fusion typically handles the preliminary merging of radar and LiDAR data, focusing on aspects such as localization and mapping without delving into detailed feature extraction or object identification. High-level fusion, by comparison, incorporates camera inputs, bringing detailed detection and classification into the mix.
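The contrast can be made concrete with a hypothetical sketch: low-level fusion merges raw range measurements with no notion of objects, while high-level fusion merges already-classified, object-level outputs. The data structures and function names below are illustrative only.

```python
# Illustrative contrast between the two fusion levels described above; the
# data structures and function names are hypothetical simplifications.

def low_level_fusion(lidar_points, radar_returns):
    """Merge raw range measurements into a single point set for localization
    and mapping, without trying to identify what the points belong to."""
    return list(lidar_points) + [r["point"] for r in radar_returns]

def high_level_fusion(lidar_objects, radar_objects, camera_detections):
    """Merge already-classified, object-level outputs, keeping the camera's labels."""
    fused = []
    for cam in camera_detections:
        fused.append({"label": cam["label"],   # classification from the camera
                      "range_m": radar_objects.get(cam["id"], {}).get("range_m"),
                      "extent_m": lidar_objects.get(cam["id"], {}).get("extent_m")})
    return fused

# Object-level example: the camera labels object 7, radar and LiDAR add geometry.
print(high_level_fusion({7: {"extent_m": 4.5}},
                        {7: {"range_m": 23.0}},
                        [{"id": 7, "label": "vehicle"}]))
```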

Challenges remain, such as sensor failure caused by hardware defects or environmental conditions. Integrating multiple complementary sensors helps overcome the limitations of any individual sensor operating on its own. Other challenges involve integrating data from sensors built on different substrates or technologies; combining sensors on silicon substrates with sensors using gallium arsenide, for example, requires dedicated design approaches for seamless data fusion. Non-MEMS sensors such as millimeter-wave radar are also being studied for their high precision and unique functionality.

The future includes developing new algorithms for consistent environmental context detection to enhance obstacle detection and overall vehicle safety. A key development area is the balance between edge computing capabilities and sensor fusion. Edge computing enables data processing near the sensor rather than relaying data to a central processor. This reduces latency, enabling near-real-time processing. Ultimately, the challenge is to manage cost and performance tradeoffs while ensuring long-term maintenance and servicing of the software over the vehicle’s lifespan.

In 2023, a Sensor Fusion Development Kit launched by TIER IV made it easier for engineers to integrate and process sensor data and streamline the development of these systems.

Announcements Continue

Sensor fusion continues to make headlines. For example, Boeing will demonstrate sensor fusion technology that could enhance military situational awareness by combining data from airborne and space-based sensors. The fused sensor data could be delivered to operators on the ground or in cockpits, leveraging data from the E-7 command-and-control aircraft that Boeing makes for the U.S. Air Force and data from missile-tracking satellites being developed by Boeing’s subsidiary, Millennium Space, for the U.S. Space Force. This addresses a longstanding challenge the military faces: delivering timely and relevant data to operational units. The company also plans to integrate medium Earth orbit (MEO) missile-warning satellites that Millennium is building under a $500 million contract with the U.S. Space Force.

Lattice Semiconductor also just announced a new 3D sensor fusion reference design to accelerate advanced autonomous application development. Combining a low-power, low-latency, deterministic Lattice Avant-E FPGA with Lumotive’s Light Control Metasurface (LCM) programmable optical beamforming technology, the reference design enables enhanced perception, robust reliability, and simplified autonomous decision-making in complex environments, including industrial robotics, automotive, and smart city infrastructure.

Features of the new sensor fusion reference design include class-leading perception through low-power edge AI inferencing based on Lattice FPGA solutions; sensor fusion processing and synchronization across all sensors, combining LiDAR, radar, and cameras; and advanced solid-state LiDAR with beam steering powered by Lumotive’s LCM technology, delivering superior 3D sensing.
