Vision-based navigation

Vision-based proximity navigation between satellites

In recent years, interest in applying Artificial Intelligence algorithms to space applications has grown steadily. Among the available Machine Learning techniques, Convolutional Neural Networks (CNNs) are a deep learning approach well suited to supporting Guidance, Navigation, and Control (GNC) systems during close-proximity operations between satellites.

Our research group has been developing and validating a computer-vision pipeline for satellite relative navigation. The pipeline computes the measurement vector fed to the navigation filter (an Extended Kalman Filter, EKF), which estimates the relative motion between a chaser satellite hosting a stereo camera and an uncooperative target satellite.
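
Since the vision measurements feed an Extended Kalman Filter, the sketch below shows one predict/update cycle in Python. This is a minimal illustration, not the group's implementation: it assumes a discretized double-integrator relative-motion model, a measurement vector containing only the stereo-triangulated relative position, and placeholder noise tunings; with these linear models the EKF reduces to a standard Kalman filter.

```python
import numpy as np

dt = 0.1  # filter time step in seconds (assumed)

# State: relative position and velocity of the target w.r.t. the chaser.
x = np.zeros(6)   # [rx, ry, rz, vx, vy, vz]
P = np.eye(6)     # state covariance

# Constant-velocity transition (discretized double integrator).
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)

Q = 1e-4 * np.eye(6)                          # process noise (assumed)
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # position-only measurement
R = 1e-2 * np.eye(3)                          # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle given a vision measurement z (3-vector)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Example: one hypothetical stereo-triangulated relative position [m].
x, P = kf_step(x, P, np.array([10.0, 0.5, -0.2]))
```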

Figure: Experimental setup used to validate neural networks for proximity navigation between satellites.

Our research group is focused on the following activities:

    • Training of state-of-the-art CNNs (e.g., YOLOv7) both with datasets available on the web (pretraining with SPEED and/or COCO) and with two datasets, one for object detection and one for segmentation, produced in the laboratory using the SPARTANS facility and data augmentation methods (a sketch of such augmentations follows this list);
    • Development and use of image analysis algorithms (e.g., ORB) for key-point detection and relative pose estimation between uncooperative satellites (see the ORB sketch after this list);
    • Testing and validation of object detection and segmentation CNNs using the representative SPARTANS facility with a 2-unit CubeSat target mock-up and a chaser mock-up.
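
The data augmentation mentioned in the first activity can be illustrated with a few common transforms. The sketch below uses OpenCV with illustrative parameter ranges that are assumptions, not values from the group's pipeline; for the object detection dataset, bounding boxes would need to be transformed consistently with the geometric operations.

```python
import random
import cv2
import numpy as np

def augment(image: np.ndarray) -> np.ndarray:
    """Apply simple photometric and geometric augmentations (illustrative)."""
    h, w = image.shape[:2]
    # Random contrast/brightness, mimicking variable illumination in orbit.
    alpha = random.uniform(0.6, 1.4)   # contrast gain
    beta = random.uniform(-30, 30)     # brightness offset
    image = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
    # Random in-plane rotation about the image centre.
    angle = random.uniform(-15.0, 15.0)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    image = cv2.warpAffine(image, M, (w, h))
    # Random horizontal flip.
    if random.random() < 0.5:
        image = cv2.flip(image, 1)
    return image
```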
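
The second activity can be illustrated with a minimal ORB-based sketch: key points are detected and matched between two frames, and the relative rotation and translation direction are recovered from the essential matrix. File names and camera intrinsics below are placeholders; a monocular two-view solution recovers translation only up to scale, which is the ambiguity a stereo camera resolves.

```python
import cv2
import numpy as np

# Placeholder intrinsics of the chaser camera (assumed values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

# Placeholder file names for two consecutive frames.
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB key points and compute binary descriptors in both frames.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors with Hamming distance; keep the best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Relative rotation and translation direction from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Relative rotation:\n", R, "\nTranslation direction:", t.ravel())
```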

    Figure: (Left) the target satellite mock-up identified by the CNN (violet box), with the key points (red crosses) used for relative motion estimation; the violet box is the ROI output by the object detection task. (Right) output of the segmentation task, which recognizes the contour of the target mock-up.