ADAS & Autonomous Vehicle International
AI & Sensor Fusion

Ritsumeikan University research team develops Dynamic Point-Pixel Feature Alignment Network to improve 3D object detection

By Anthony James | January 9, 2024 | 4 Mins Read

Researchers at Japan’s Ritsumeikan University have developed a network that combines 3D lidar and 2D image data to enable more robust detection of small objects.

Robots and autonomous vehicles can use 3D point clouds from lidar sensors and camera images to perform 3D object detection. However, current techniques that combine both types of data struggle to accurately detect small objects. Now, researchers from Japan have developed DPPFA-Net, an innovative network that overcomes challenges related to occlusion and noise introduced by adverse weather. Their findings will pave the way for more perceptive and capable autonomous systems.

Robotics and autonomous vehicles are among the most rapidly growing domains in the technological landscape, with the potential to make work and transportation safer and more efficient. Since both robots and self-driving cars need to accurately perceive their surroundings, 3D object detection methods are an active area of study. Most 3D object detection methods employ lidar sensors to create 3D point clouds of their environment. Simply put, lidar sensors use laser beams to rapidly scan and measure the distances of objects and surfaces around the source. However, using lidar data alone can lead to errors due to the high sensitivity of lidar to noise, especially in adverse weather conditions such as rainfall.
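
To illustrate the basic geometry, each lidar return (a measured range plus the beam's azimuth and elevation) maps to a Cartesian point. The short sketch below is not taken from the paper; the scan pattern and the Gaussian noise term standing in for weather-induced range error are assumptions for illustration only.

```python
import numpy as np

def lidar_returns_to_points(ranges, azimuths, elevations, range_noise_std=0.0):
    """Convert lidar range/angle returns to an (N, 3) Cartesian point cloud.

    ranges, azimuths, elevations: 1D arrays of equal length (meters, radians).
    range_noise_std: optional Gaussian range noise, loosely mimicking the
    scatter that rain can introduce (illustrative; real weather effects are
    more complex than additive noise).
    """
    r = ranges + np.random.normal(0.0, range_noise_std, size=ranges.shape)
    x = r * np.cos(elevations) * np.cos(azimuths)
    y = r * np.cos(elevations) * np.sin(azimuths)
    z = r * np.sin(elevations)
    return np.stack([x, y, z], axis=-1)

# Example: a single horizontal sweep of 360 beams at a 10m range
az = np.linspace(0, 2 * np.pi, 360, endpoint=False)
points = lidar_returns_to_points(np.full(360, 10.0), az, np.zeros(360), range_noise_std=0.05)
```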

To tackle this issue, scientists have developed multi-modal 3D object detection methods that combine 3D lidar data with 2D RGB images taken by standard cameras. While the fusion of 2D images and 3D lidar data leads to more accurate 3D detection results, it still faces its own set of challenges, with accurate detection of small objects remaining difficult. The problem mainly lies in properly aligning the semantic information extracted independently from the 2D and 3D datasets, which is hard due to issues such as imprecise calibration or occlusion.
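
The geometric link between the two modalities is the camera calibration: lidar points are projected into the image plane through an extrinsic transform and the camera intrinsics. The sketch below shows this standard projection step (the matrix names and 4x4/3x3 conventions are generic assumptions, not the paper's notation); small errors in these matrices are one source of the 2D/3D misalignment described above.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project (N, 3) lidar points into pixel coordinates.

    T_cam_from_lidar: 4x4 extrinsic transform (lidar frame -> camera frame).
    K: 3x3 camera intrinsic matrix.
    Returns (M, 2) pixel coordinates and a mask of points in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])  # (N, 4) homogeneous
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]                          # (N, 3) in camera frame
    in_front = pts_cam[:, 2] > 0.1            # discard points behind the camera
    pts_img = (K @ pts_cam[in_front].T).T     # (M, 3) homogeneous pixels
    uv = pts_img[:, :2] / pts_img[:, 2:3]     # perspective division -> (u, v)
    return uv, in_front
```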

Against this backdrop, a research team led by Professor Hiroyuki Tomiyama from Ritsumeikan University, Japan, has developed an innovative approach to make multi-modal 3D object detection more accurate and robust. The proposed scheme, called the ‘Dynamic Point-Pixel Feature Alignment Network’ (DPPFA-Net), is described in the team’s paper published in IEEE Internet of Things Journal on November 3, 2023.

The model comprises an arrangement of multiple instances of three novel modules: the Memory-based Point-Pixel Fusion (MPPF) module, the Deformable Point-Pixel Fusion (DPPF) module, and the Semantic Alignment Evaluator (SAE) module. The MPPF module is tasked with performing explicit interactions between intra-modal features (2D with 2D and 3D with 3D) and cross-modal features (2D with 3D). The use of the 2D image as a memory bank reduces the difficulty in network learning and makes the system more robust against noise in 3D point clouds. Moreover, it promotes the use of more comprehensive and discriminative features.
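
The published MPPF layer is not reproduced here, but the general idea of treating the 2D image as a memory bank can be sketched as cross-attention in which 3D point features query 2D image features. The module structure, feature dimensions and use of standard multi-head attention below are assumptions made for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusionSketch(nn.Module):
    """Illustrative point-to-image cross-attention; not the published MPPF module."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Intra-modal mixing (3D with 3D), then cross-modal attention (3D queries 2D)
        self.point_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, point_feats, image_feats):
        # point_feats: (B, N_points, dim); image_feats: (B, N_pixels, dim) "memory bank"
        p, _ = self.point_self_attn(point_feats, point_feats, point_feats)
        fused, _ = self.cross_attn(query=p, key=image_feats, value=image_feats)
        return self.norm(p + fused)  # residual keeps the point branch dominant

# Usage with dummy tensors
fusion = CrossModalFusionSketch()
out = fusion(torch.randn(2, 1024, 128), torch.randn(2, 400, 128))  # -> (2, 1024, 128)
```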

In contrast, the DPPF module performs interactions only at pixels in key positions, which are determined via a smart sampling strategy. This allows for feature fusion in high resolutions at a low computational complexity. Finally, the SAE module helps ensure semantic alignment between both data representations during the fusion process, which mitigates the issue of feature ambiguity.
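
A rough sketch of such sparse fusion follows: a learned per-pixel score selects a small set of key pixels and attention is computed only over that set, so the cost grows with the number of selected pixels rather than with the full image. The top-k selection rule here is an assumed stand-in for the paper's sampling strategy.

```python
import torch
import torch.nn as nn

class SparsePixelFusionSketch(nn.Module):
    """Illustrative sparse fusion: attend only to a few high-scoring pixels."""

    def __init__(self, dim=128, heads=4, k=64):
        super().__init__()
        self.k = k
        self.pixel_score = nn.Linear(dim, 1)  # scores each pixel's importance
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, point_feats, image_feats):
        # image_feats: (B, N_pixels, dim); keep only the top-k scoring pixels
        scores = self.pixel_score(image_feats).squeeze(-1)            # (B, N_pixels)
        topk = scores.topk(self.k, dim=1).indices                     # (B, k)
        idx = topk.unsqueeze(-1).expand(-1, -1, image_feats.size(-1))
        key_pixels = image_feats.gather(1, idx)                       # (B, k, dim)
        fused, _ = self.cross_attn(point_feats, key_pixels, key_pixels)
        return point_feats + fused  # cost scales with k, not with the full image
```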

The researchers tested DPPFA-Net by comparing it to the top performers on the widely used KITTI Vision Benchmark. Notably, the proposed network achieved average precision improvements as high as 7.18% under different noise conditions. To further test the capabilities of their model, the team created a new noisy dataset by introducing artificial multi-modal noise in the form of rainfall to the KITTI dataset. The research team say the results show that the proposed network performed better than existing models not only in the face of severe occlusions but also under various levels of adverse weather conditions. “Our extensive experiments on the KITTI dataset and challenging multi-modal noisy cases reveal that DPPFA-Net reaches a new state-of-the-art,” remarked Prof. Tomiyama.
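
As a rough illustration of how such a noisy variant of a dataset can be built, the sketch below combines random point dropout, coordinate jitter and spurious near-sensor returns. This generic corruption recipe is an assumption; it is not the team's actual rainfall simulation.

```python
import numpy as np

def add_rain_like_noise(points, drop_prob=0.1, jitter_std=0.02, n_spurious=200, rng=None):
    """Corrupt an (N, 3) point cloud with simple rain-like artifacts."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(points.shape[0]) > drop_prob                       # random point dropout
    noisy = points[keep] + rng.normal(0, jitter_std, (keep.sum(), 3))    # small positional jitter
    spurious = rng.uniform(-5, 5, (n_spurious, 3))                       # early returns near the sensor
    return np.vstack([noisy, spurious])
```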

There are various ways in which accurate 3D object detection methods could improve our lives. Self-driving cars, which rely on such techniques, have the potential to reduce accidents and improve traffic flow and safety. Furthermore, the implications for the field of robotics should not be underestimated. “Our study could facilitate a better understanding and adaptation of robots to their working environments, allowing a more precise perception of small targets,” explained Prof. Tomiyama. “Such advancements will help improve the capabilities of robots in various applications.”

Another use for 3D object detection networks is the pre-labeling of raw data for deep-learning perception systems. This would greatly reduce the cost of manual annotation, accelerating developments in the field, according to the team.
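
As an illustration of what such pre-labeling can look like in practice, the helper below writes a single detection as a KITTI-style label line that an annotator could then review and correct. The helper itself is hypothetical; only the field order follows the public KITTI label format.

```python
def to_kitti_label_line(cls, bbox2d, hwl, xyz, yaw, score):
    """Format one detection as a KITTI-style label line for pre-annotation."""
    truncated, occluded, alpha = 0.0, 0, -10.0  # placeholders for fields the detector does not predict
    fields = [cls, truncated, occluded, alpha, *bbox2d, *hwl, *xyz, yaw, score]
    return " ".join(str(round(f, 2)) if isinstance(f, float) else str(f) for f in fields)

# Example: one pedestrian detection written as a draft label for human review
print(to_kitti_label_line("Pedestrian", (610.5, 170.2, 650.0, 300.1),
                          (1.75, 0.6, 0.8), (2.1, 1.6, 13.4), 1.57, 0.91))
```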

 
