Sensing, Perception, and Navigation in Space Robotics
Our research focuses on advancing the field of space robotics through the development of cutting-edge methods for robot navigation, encompassing visual and LiDAR Simultaneous Localization and Mapping (SLAM). We place significant emphasis on the metrological characterization of these methods to ensure their reliability and efficacy in space-related applications. A notable aspect of our work involves multi-sensor fusion, such as LiDAR-visual SLAM.
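To illustrate one common building block of such fusion, the sketch below combines translation increments from a LiDAR front end and a visual front end by inverse-covariance weighting. This is a minimal, generic example, not our actual pipeline; the function name, frame conventions, and numerical values are illustrative assumptions.

```python
import numpy as np

def fuse_odometry(t_lidar, cov_lidar, t_visual, cov_visual):
    """Fuse two translation increments by inverse-covariance weighting.

    t_*   : 3-vector translation estimate from each front end
    cov_* : 3x3 covariance of that estimate
    Returns the fused translation and its covariance.
    """
    info_l = np.linalg.inv(cov_lidar)   # information = inverse covariance
    info_v = np.linalg.inv(cov_visual)
    cov_fused = np.linalg.inv(info_l + info_v)
    t_fused = cov_fused @ (info_l @ t_lidar + info_v @ t_visual)
    return t_fused, cov_fused

# Illustrative case: LiDAR is more accurate in range, the camera in bearing,
# so each sensor dominates the axes where its covariance is smallest.
t_l = np.array([1.02, 0.01, 0.00])
t_v = np.array([0.98, 0.03, 0.01])
P_l = np.diag([0.01, 0.04, 0.04])
P_v = np.diag([0.05, 0.01, 0.01])
t_fused, P_fused = fuse_odometry(t_l, P_l, t_v, P_v)
```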
In the ever-evolving landscape of autonomous robotics, mapping hazards in unstructured environments is paramount. To address this challenge, our team has devised a mapping algorithm based on occupancy grids, augmented with pre-trained neural networks for image analysis.
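A minimal sketch of the underlying idea, assuming a standard log-odds occupancy grid driven by per-point hazard scores from a pre-trained segmentation network, is shown below. The class, parameter names, and update weights are illustrative assumptions, not our deployed implementation.

```python
import numpy as np

LOG_ODDS_HAZARD = np.log(0.7 / 0.3)   # applied when the network flags a hazard
LOG_ODDS_FREE   = np.log(0.3 / 0.7)   # applied when it reports safe terrain

class HazardGrid:
    """Minimal log-odds occupancy grid storing a hazard belief per cell."""

    def __init__(self, size=200, resolution=0.1):
        self.res = resolution                   # metres per cell
        self.log_odds = np.zeros((size, size))  # 0 log-odds = p(hazard) = 0.5

    def update(self, points_xy, hazard_probs, threshold=0.5):
        """points_xy: Nx2 terrain points in the grid frame (non-negative);
        hazard_probs: per-point hazard score from the network in [0, 1]."""
        idx = (points_xy / self.res).astype(int)
        for (i, j), p in zip(idx, hazard_probs):
            if 0 <= i < self.log_odds.shape[0] and 0 <= j < self.log_odds.shape[1]:
                self.log_odds[i, j] += (LOG_ODDS_HAZARD if p > threshold
                                        else LOG_ODDS_FREE)

    def hazard_map(self):
        """Convert log-odds back to hazard probabilities in [0, 1]."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```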
Furthermore, a significant portion of our research is dedicated to enhancing position and orientation measurement methods tailored specifically for embedded systems. We have directed our efforts towards developing and testing solutions that integrate seamlessly with the Robot Operating System (ROS), one of the most widely used frameworks in robotics today. Our aim is to ensure compatibility and ease of implementation, thereby facilitating the adoption of our methodologies across diverse robotic platforms.
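As a concrete example of this kind of integration, the ROS 1 (rospy) node below publishes a pose estimate on a standard topic so that any GNC or visualization node can consume it. The node and topic names are hypothetical, and the identity pose is a placeholder for a real estimator output.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def publish_pose():
    """Publish a pose estimate on a ROS topic at a fixed rate."""
    rospy.init_node('pose_estimator')          # hypothetical node name
    pub = rospy.Publisher('/rover/pose_estimate', PoseStamped, queue_size=10)
    rate = rospy.Rate(10)                       # 10 Hz, plausible for embedded use
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'map'
        # In a real node the pose would come from the estimator;
        # the identity orientation here is only a placeholder.
        msg.pose.orientation.w = 1.0
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        publish_pose()
    except rospy.ROSInterruptException:
        pass
```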
LiDAR SLAM algorithm tested onboard the MORPHEUS rover for navigation
Collaborating closely with industrial partners such as ALTEC S.p.A. within the ExoMars program, we have developed methods for precise rover positioning and attitude estimation. Our suite encompasses a wide range of techniques, including sun-based attitude estimation and absolute localization, landmark-based localization, skyline absolute localization, and the novel WISDOM Grid Relative Localization. The latter enables the automatic selection of optimal viewpoints, thereby ensuring accurate trajectory correction through the onboard Guidance, Navigation, and Control (GNC) system.
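To give a flavour of the sun-based approach, the sketch below recovers a rover heading by comparing the sun direction measured in the body frame with the sun azimuth predicted from ephemeris. This is a deliberately simplified, flat-terrain illustration with an assumed axis convention (x forward, y left, azimuth counter-clockwise), not the flight method.

```python
import numpy as np

def heading_from_sun(sun_dir_body, sun_azimuth_ephemeris):
    """Estimate rover yaw from a sun-vector measurement (level-rover case).

    sun_dir_body          : unit sun vector measured in the body frame,
                            e.g. from a sun sensor or camera
    sun_azimuth_ephemeris : sun azimuth in the local site frame, predicted
                            from ephemeris, in radians
    Returns the rover yaw in the site frame, in radians.
    """
    # Azimuth of the sun as seen in the body frame
    # (assumed convention: x forward, y left, angles counter-clockwise).
    az_body = np.arctan2(sun_dir_body[1], sun_dir_body[0])
    # The offset between the predicted and the observed sun azimuth
    # is the rover heading, assuming a level rover.
    yaw = sun_azimuth_ephemeris - az_body
    return (yaw + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
```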