- University of Ljubljana
- Faculty of Computer and Information Science
- Faculty of Electrical Engineering
- Sirio d.o.o.
- ARRS (J2-2506)
A crucial element for autonomous operation is environment perception, which still lags far behind control and hardware research. Perception capability is further limited by the physical constraints of small-sized USVs, which prohibit the use of heavy, power-consuming sensors. Cameras, as lightweight, low-power and information-rich sensors, have attracted considerable attention both on their own and in combination with other modalities such as LIDAR and RADAR.
In the closely related field of autonomous vehicles (AV), recent perception advancements have been driven primarily by the deep learning paradigm. The paradigm allows unification of individual perception tasks, leading to substantial improvements across them. However, state-of-the-art (SOTA) deep models developed for AVs underperform in the maritime environment even when re-trained on a large maritime dataset. New maritime-specific deep architectures are thus required that adapt to the highly dynamic maritime environment and allow low-effort transfer of USVs trained in one maritime scene to another.
The project’s overarching goal is to develop next-generation maritime environment perception methods that harness the power of end-to-end trainable deep models. The models will address challenges essential for safe USV operation, such as general obstacle detection, long-term tracking with re-identification, implicit detection of hazardous areas, and sensor fusion for improved detection. Particular focus will be placed on the adaptivity of the models and on self-supervised tuning to new environments. New multisensor datasets will be recorded to facilitate this research.
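To make the obstacle-detection objective concrete, the sketch below shows one illustrative post-processing step (not the project's actual method): given a per-pixel segmentation mask, connected obstacle pixels are grouped into axis-aligned bounding boxes, turning a dense prediction into discrete obstacle hypotheses. The mask encoding (0 = water/sky, 1 = obstacle) and 4-connectivity are assumptions for this example.

```python
from collections import deque

def obstacle_boxes(mask):
    """Group 4-connected obstacle pixels (value 1) in a 2-D mask
    into axis-aligned bounding boxes (row0, col0, row1, col1)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] == 1 and not seen[r][c]:
                # flood-fill one connected component, tracking its extent
                q = deque([(r, c)])
                seen[r][c] = True
                r0, c0, r1, c1 = r, c, r, c
                while q:
                    y, x = q.popleft()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes

# toy mask: 0 = water/sky, 1 = obstacle
mask = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
]
print(obstacle_boxes(mask))  # → [(0, 1, 1, 2), (2, 4, 2, 4)]
```

In a real pipeline the mask would come from a trained segmentation network and small components would typically be filtered by area before being reported as obstacles.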
The work is divided into six work packages:
- Deep models for robust obstacle detection with scene adaptation capabilities (WP1).
- Segmentation-based tracking algorithms compatible with the deep obstacle detection architectures (WP2).
- Deep trainable multimodal methods for environment perception (WP3).
- Annotated multimodal USV datasets for training and objective evaluation of deep networks in realistic scenarios (WP4).
- Support activities such as results dissemination and project management (WP5, WP6).
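The tracking work in WP2 builds on associating obstacle detections across frames. As a minimal illustrative sketch (an assumption for exposition, not WP2's actual algorithm), detections can be greedily matched to previous-frame track boxes by descending intersection-over-union (IoU); the box format and threshold here are hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thr=0.3):
    """Greedily match previous-frame track boxes to current detections
    by descending IoU; unmatched detections start new tracks."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matched_t, matched_d, matches = set(), set(), {}
    for score, ti, di in pairs:
        if score < thr:
            break  # remaining pairs overlap too little to match
        if ti not in matched_t and di not in matched_d:
            matches[ti] = di
            matched_t.add(ti)
            matched_d.add(di)
    new = [di for di in range(len(detections)) if di not in matched_d]
    return matches, new

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(1, 1, 11, 11), (50, 50, 60, 60)]
print(associate(tracks, dets))  # → ({0: 0}, [1])
```

Long-term tracking with re-identification, as targeted by WP2, would additionally use appearance features so that a track lost for many frames can be re-attached, which pure geometric overlap cannot do.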
- Year 1: Activities on work packages WP1, WP2, WP4, WP5, WP6
- Year 2: Activities on work packages WP1, WP2, WP3, WP4, WP5, WP6
- Year 3: Activities on work packages WP1, WP3, WP4, WP5, WP6