– Improving non-visual sensing by learning from visual sensors:
With the Sound2Vision algorithm, we will develop an end-to-end technique that learns mappings between camera images and SONAR data and uses these mappings to generate high-resolution images from purely acoustic (SONAR) input. In this way, Sound2Vision will create visual representations of objects and scenes from sonar data alone.
These images can be interpreted by human operators, for example to monitor the status of a diver at work (as in UC1). The algorithm will also be optimized to run in real time on the on-board CPU of an underwater vehicle, so that the images are available to the mission- and path-planning algorithms of an AUV or a hybrid ROV.
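As a minimal illustration of learning a sonar-to-image mapping from paired data, the sketch below fits a ridge-regression map from flattened sonar patches to co-registered camera patches; in Sound2Vision this linear map would be replaced by a deep encoder-decoder network. The data, array sizes, and the linear model itself are illustrative assumptions, not the project's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired training data: N sonar patches (flattened) and the co-registered
# camera patches. In the real system these would come from a calibrated
# sonar/camera rig; here they are synthetic stand-ins.
N, D_SONAR, D_IMG = 200, 64, 256
sonar = rng.standard_normal((N, D_SONAR))
true_map = rng.standard_normal((D_SONAR, D_IMG))
images = sonar @ true_map + 0.01 * rng.standard_normal((N, D_IMG))

# Learn the sonar-to-image mapping by ridge regression:
#   W = argmin ||S W - I||^2 + lam ||W||^2
lam = 1e-3
W = np.linalg.solve(sonar.T @ sonar + lam * np.eye(D_SONAR), sonar.T @ images)

# "Render" an image patch from a new, purely acoustic observation.
new_sonar = rng.standard_normal(D_SONAR)
predicted_image = new_sonar @ W
print(predicted_image.shape)  # (256,)
```

The same train-once, render-from-acoustics-only structure carries over to the deep-learning version; only the mapping function changes.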
– Multi-Modal Combination of Sensor Data from Different Viewpoints:
The final objective of the EagleEye algorithm is enhanced object detection: given objects detected by the forward-looking sonar (FLS), the algorithm will be trained to find, in real time, the corresponding objects in the forward-looking camera (FLC) image.
This correspondence will allow the AUV to understand the visual scene ahead in real time, enabling more careful navigation and obstacle avoidance, and thus the mapping of complex areas and fragile ecosystems such as coral reefs.
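The FLS-to-FLC correspondence can be sketched geometrically: an FLS detection (range, bearing) is projected into camera pixel coordinates, and a window around the projection is handed to the visual detector. The intrinsics and the co-located, axis-aligned rig below are illustrative assumptions, not the project's actual calibration.

```python
import numpy as np

# Assumed pinhole intrinsics for the FLC, and an idealized rig in which the
# FLS is co-located and axis-aligned with the camera. Values are illustrative.
FX, FY = 800.0, 800.0          # focal lengths [px]
CX, CY = 640.0, 360.0          # principal point [px]

def fls_detection_to_flc_pixel(range_m, bearing_rad, elev_rad=0.0):
    """Project an FLS detection (range, bearing, optional elevation)
    into FLC pixel coordinates."""
    # Camera frame: +z forward, +x right, +y down.
    x = range_m * np.sin(bearing_rad) * np.cos(elev_rad)
    y = -range_m * np.sin(elev_rad)
    z = range_m * np.cos(bearing_rad) * np.cos(elev_rad)
    return FX * x / z + CX, FY * y / z + CY

def roi_around(u, v, half=40, w=1280, h=720):
    """Clip a search window around the projected point; a visual detector
    then confirms the object inside this ROI."""
    u0, u1 = max(0, int(u - half)), min(w, int(u + half))
    v0, v1 = max(0, int(v - half)), min(h, int(v + half))
    return u0, v0, u1, v1

# Object seen by the FLS at 10 m range, 5 degrees to starboard.
u, v = fls_detection_to_flc_pixel(10.0, np.deg2rad(5.0))
roi = roi_around(u, v)
print(roi)
```

In practice the learned component replaces this fixed geometry: the network absorbs calibration errors, refraction, and the sonar's elevation ambiguity, which the idealized projection above ignores.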
– Automated Real-Time Seafloor Classification with Sidescan Sonar:
The SmartSeafloorScan algorithm will combine data from different sources into a new pipeline for seabed classification, based on a novel inter-sensor approach that has not yet been explored.
To exploit the richness and detail of the optical data, deep-learning techniques will be developed and adapted to produce ground truth for training a new, likewise deep-learning-based classifier that uses acoustic data as its only input and works in real time.
This approach will address two longstanding problems: the scarcity of labelled data for training deep-learning algorithms, and the lack of a method for on-the-fly seabed classification on board an AUV.
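The inter-sensor pipeline can be sketched on synthetic stand-in data: a classifier on the optical features (here a nearest-centroid stand-in for a deep network) generates pseudo-labels for co-registered patches, which then train an acoustic-only classifier. All features, class counts, and both classifiers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Co-registered survey patches: optical features (rich, easy to label) and
# sidescan-sonar features of the same seafloor locations; synthetic stand-ins.
N, D_OPT, D_AC, K = 300, 32, 16, 3
true_class = rng.integers(0, K, N)
optical = rng.standard_normal((N, D_OPT)) + 3.0 * true_class[:, None]
acoustic = rng.standard_normal((N, D_AC)) + 3.0 * true_class[:, None]

# Step 1: an optical classifier, fitted on labelled optical data, produces
# pseudo-labels (ground truth) for every co-registered patch.
opt_centroids = np.stack([optical[true_class == k].mean(0) for k in range(K)])
pseudo = np.argmin(((optical[:, None] - opt_centroids) ** 2).sum(-1), axis=1)

# Step 2: train an acoustic-only classifier on the pseudo-labels, so that at
# sea the AUV classifies the seabed from sidescan sonar alone, in real time.
ac_centroids = np.stack([acoustic[pseudo == k].mean(0) for k in range(K)])

def classify_acoustic(x):
    """Seabed class from acoustic features only (no optical input at sea)."""
    return int(np.argmin(((x - ac_centroids) ** 2).sum(-1)))

acc = np.mean([classify_acoustic(a) == c for a, c in zip(acoustic, true_class)])
print(f"acoustic-only accuracy: {acc:.2f}")
```

The key property of the design is that optical data is needed only during training; the deployed classifier consumes acoustic input exclusively.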