
DeeperSense - Deep-Learning for Multimodal Sensor Fusion

The DeeperSense Challenge:

Underwater sensing relies largely on acoustics, because cameras deliver little useful information in turbid waters with low visibility.

DeeperSense addresses three typical use cases for underwater sensing.

UC1: Diver Monitoring

Professional divers often work under low-visibility conditions. This makes it difficult for operation control to monitor their movements, state, and position. Subsea robots equipped with cameras provide images, but only when visibility is good. Acoustic sensors (sonar), on the other hand, work even in the dark, but are much less precise.

UC2: Obstacle Recognition

Autonomous Underwater Vehicles (AUVs) need reliable and fast obstacle recognition to fly safely through coral reefs and other complex submarine environments. Acoustic sensors can look far ahead, but with low resolution; visual sensors provide high resolution, but cannot look very far.

UC3: Seabed Classification

Detailed maps of the seabed that include information about topography, sediment types, and benthic fauna and flora are important for the exploration and exploitation of the world’s oceans. Seabed classification with sonar is fast, but needs to be calibrated with visual data to yield good results.

The DeeperSense Solution:

DeeperSense develops solutions for improved underwater sensing by enabling different sensor modalities to learn from each other. 

Sound2Vision Algorithm

Sound2Vision enables a sonar to learn from an HD camera. Once trained, the sonar is able to produce camera-like images, even in very low visibility.
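
Conceptually, this is a cross-modal image-to-image translation task. The sketch below illustrates the idea under stated assumptions, not the actual Sound2Vision architecture: a small convolutional encoder-decoder in PyTorch is trained on paired sonar/camera images to map a one-channel sonar image to a three-channel camera-like image. The network name, layer sizes, and loss are illustrative.

    import torch
    import torch.nn as nn

    class SonarToImageNet(nn.Module):
        """Illustrative encoder-decoder; not the real Sound2Vision model."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 1x256x256 -> 32x128x128
                nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 64x64x64
                nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> 32x128x128
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # -> 3x256x256
                nn.Sigmoid(),  # camera-like pixel values in [0, 1]
            )

        def forward(self, sonar):
            return self.decoder(self.encoder(sonar))

    model = SonarToImageNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()  # pixel-wise loss against the paired camera image

    # One training step on a dummy paired batch (sonar input, camera target).
    sonar = torch.rand(8, 1, 256, 256)
    camera = torch.rand(8, 3, 256, 256)
    loss = loss_fn(model(sonar), camera)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()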

EagleEye Algorithm

EagleEye combines the long range of acoustic obstacle detection with the high resolution of visual obstacle detection.
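
As a rough illustration of such a combination (not the actual EagleEye algorithm, which is not described here), the Python sketch below applies a simple late-fusion rule: trust precise camera detections within the visibility range and fall back to long-range sonar detections beyond it. All names and thresholds are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        range_m: float      # distance to obstacle in meters
        bearing_deg: float  # direction relative to vehicle heading
        source: str         # "camera" or "sonar"

    def fuse(camera_dets, sonar_dets, visibility_m=5.0):
        """Keep camera detections inside visibility range, sonar beyond it."""
        fused = [d for d in camera_dets if d.range_m <= visibility_m]
        fused += [d for d in sonar_dets if d.range_m > visibility_m]
        return sorted(fused, key=lambda d: d.range_m)

    cam = [Detection(2.1, -10.0, "camera")]
    son = [Detection(2.3, -9.0, "sonar"), Detection(18.0, 25.0, "sonar")]
    print(fuse(cam, son))  # camera detection at 2.1 m, sonar detection at 18 m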

SmartSeaBottomScan Algorithm

The SmartSeaBottomScan algorithm teaches an acoustic sensor (side-scan sonar) to classify sediments on the sea floor. The training data are sediment classifications derived from visual samples.
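
A minimal sketch of this training setup, assuming labelled side-scan sonar patches: a small PyTorch CNN is trained to predict a sediment class per patch, where the labels come from visual ground truth. The class list, patch size, and network are illustrative, not the actual SmartSeaBottomScan design.

    import torch
    import torch.nn as nn

    SEDIMENT_CLASSES = ["sand", "mud", "gravel", "rock"]  # assumed label set

    classifier = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, len(SEDIMENT_CLASSES)),  # for 64x64 patches
    )
    optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One training step: sonar patches paired with visually derived labels.
    patches = torch.rand(16, 1, 64, 64)                       # side-scan patches
    labels = torch.randint(0, len(SEDIMENT_CLASSES), (16,))   # from visual survey
    loss = loss_fn(classifier(patches), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()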

The DeeperSense Consortium