The DeeperSense Solution - Concept:
The underlying concept of DeeperSense is to combine the strengths and advantages of different sensor modalities by using state-of-the-art methods from Artificial Intelligence (Machine Learning) to improve non-visual robotic perception. The methods we develop in DeeperSense are generic and can be applied to all robot application areas. In this project, however, they are demonstrated and validated in the domain of underwater service robots.
The DeeperSense approach is based on Inter-Sensoric Learning: a sensor modality B learns from a sensor modality A so that sensor B can deliver output similar to that of sensor A, in both accuracy and output type, or so that the interpretation of sensor A's data can be improved using data from sensor B. Typically, sensor A provides high-definition data but is sensitive to environmental conditions, whereas sensor B is robust against environmental disturbances but provides lower-resolution information.
By using data from one sensor modality to train other sensors, one can refine and improve the perception capabilities of those sensors, for example to provide feature-rich information to a robot's control and decision-making algorithms.
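The cross-modal training idea can be illustrated with a minimal sketch. Everything below is hypothetical (synthetic data, a simple linear model trained by gradient descent), not the project's actual networks: high-quality output from "sensor A" serves as the training target for a model that consumes only robust, lower-resolution "sensor B" measurements.

```python
import numpy as np

# Minimal sketch of Inter-Sensoric Learning (hypothetical data and model):
# sensor A (e.g. a camera) provides high-definition target signals; sensor B
# (e.g. a sonar) provides robust but lower-resolution input. A model learns
# to predict A-like output from B alone.

rng = np.random.default_rng(0)

# Synthetic paired recordings: 200 samples, sensor B has 8 coarse channels,
# sensor A has 32 fine-grained channels related to B by a hidden mapping.
true_map = rng.normal(size=(8, 32))                 # hidden B -> A relation
sensor_b = rng.normal(size=(200, 8))                # robust, low-res input
sensor_a = sensor_b @ true_map + 0.05 * rng.normal(size=(200, 32))

# Train a linear model W by gradient descent so that sensor_b @ W ~ sensor_a.
W = np.zeros((8, 32))
lr = 0.05
for _ in range(500):
    pred = sensor_b @ W
    grad = sensor_b.T @ (pred - sensor_a) / len(sensor_b)
    W -= lr * grad

# After training, sensor B alone yields A-like output even in conditions
# where sensor A would be degraded (e.g. a camera in turbid water).
residual = np.mean((sensor_b @ W - sensor_a) ** 2)
print(f"mean squared error of A-like output from B: {residual:.4f}")
```

In the real system a deep network would replace the linear model, but the supervision pattern is the same: paired recordings from both modalities during training, sensor B alone at deployment time.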
In DeeperSense, this concept is realized in three core algorithms developed to fulfil the needs of the use cases.
The algorithms are named Sound2Vision, EagleEye and SmartSeafloorScan, and are described under the menu item Algorithms.