<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Maritime Obstacle Detection on ViCoS Lab</title>
    <link>/tags/maritime-obstacle-detection/</link>
    <description>Recent content in Maritime Obstacle Detection on ViCoS Lab</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="/tags/maritime-obstacle-detection/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>1st Workshop on Maritime Computer Vision (MaCVi) 2023: Challenge Results</title>
      <link>/publications/kiefer20231st/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/kiefer20231st/</guid>
      <description>&lt;p&gt;The 1st Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at &lt;a href=&#34;https://seadronessee.cs.uni-tuebingen.de/macvi&#34;&gt;https://seadronessee.cs.uni-tuebingen.de/macvi&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>2nd Workshop on Maritime Computer Vision (MaCVi) 2024: Challenge Results</title>
      <link>/publications/kiefer20242nd/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/kiefer20242nd/</guid>
      <description>&lt;p&gt;The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024 addresses maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV). Three challenge categories are considered: (i) UAV-based Maritime Object Tracking with Re-identification, (ii) USV-based Maritime Obstacle Segmentation and Detection, (iii) USV-based Maritime Boat Tracking. The USV-based Maritime Obstacle Segmentation and Detection category features three sub-challenges, including a new embedded challenge addressing efficient inference on real-world embedded devices. This report offers a comprehensive overview of the findings from the challenges. We provide both statistical and qualitative analyses, evaluating trends from over 195 submissions. All datasets, evaluation code, and the leaderboard are available to the public at &lt;a href=&#34;https://macvi.org/workshop/macvi24&#34;&gt;https://macvi.org/workshop/macvi24&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>eWaSR — An Embedded-Compute-Ready Maritime Obstacle Detection Network</title>
      <link>/publications/tersek2023ewasr/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/tersek2023ewasr/</guid>
      <description>&lt;p&gt;Maritime obstacle detection is critical for safe navigation of autonomous surface vehicles (ASVs). While the accuracy of image-based detection methods has advanced substantially, their computational and memory requirements prohibit deployment on embedded devices. In this paper, we analyze the current best-performing maritime obstacle detection network, WaSR. Based on the analysis, we propose replacements for the most computationally intensive stages and present its embedded-compute-ready variant, eWaSR. In particular, the new design follows the most recent advancements in transformer-based lightweight networks. eWaSR achieves detection results comparable to the state-of-the-art WaSR with only a 0.52% F1 score performance drop and outperforms other state-of-the-art embedded-ready architectures by over 9.74% in F1 score. On a standard GPU, eWaSR runs 10× faster than the original WaSR (115 FPS vs. 11 FPS). Tests on a real embedded sensor, the OAK-D, show that, while WaSR cannot run due to memory restrictions, eWaSR runs comfortably at 5.5 FPS. This makes eWaSR the first practical embedded-compute-ready maritime obstacle detection network. The source code and trained eWaSR models are publicly available.&lt;/p&gt;</description>
    </item>
    <item>
      <title>LaRS: A Diverse Panoptic Maritime Obstacle Detection Dataset and Benchmark</title>
      <link>/publications/zust2023lars/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/zust2023lars/</guid>
      <description>&lt;p&gt;Progress in maritime obstacle detection is hindered by the lack of a diverse dataset that adequately captures the complexity of general maritime environments. We present LaRS, the first maritime panoptic obstacle detection benchmark, featuring scenes from Lakes, Rivers and Seas. Our major contribution is the new dataset, which boasts the largest diversity in recording locations, scene types, obstacle classes, and acquisition conditions among related datasets. LaRS is composed of over 4000 per-pixel labeled key frames, each with nine preceding frames to allow utilization of temporal texture, amounting to over 40k frames. Each key frame is annotated with 8 thing classes, 3 stuff classes and 19 global scene attributes. We report the results of 27 semantic and panoptic segmentation methods, along with several performance insights and future research directions. To enable objective evaluation, we have implemented an online evaluation server. The LaRS dataset, evaluation toolkit and benchmark are publicly available at &lt;a href=&#34;https://lojzezust.github.io/lars-dataset&#34;&gt;https://lojzezust.github.io/lars-dataset&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Learning Maritime Obstacle Detection from Weak Annotations by Scaffolding</title>
      <link>/publications/zust2022learning/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/zust2022learning/</guid>
      <description>&lt;p&gt;Autonomous boats in coastal waters rely on robust perception methods for obstacle detection and timely collision avoidance. The current state of the art is based on deep segmentation networks trained on large datasets. Per-pixel ground-truth labeling of such datasets, however, is labor-intensive and expensive. We observe that far less information is required for practical obstacle avoidance &amp;ndash; the location of the water edge on static obstacles such as the shore, and the approximate location and bounds of dynamic obstacles in the water, are sufficient to plan a reaction.&#xA;We propose a new scaffolding learning regime (SLR) that allows training obstacle detection segmentation networks from such weak annotations only, thus significantly reducing the cost of ground-truth labeling. Experiments show that maritime obstacle segmentation networks trained using SLR substantially outperform the same networks trained with dense ground-truth labels. Thus, accuracy is not sacrificed for labeling simplicity but is in fact improved, which is a remarkable result.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
