<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Detection on ViCoS Lab</title>
    <link>/tags/detection/</link>
    <description>Recent content in Detection on ViCoS Lab</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="/tags/detection/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Towards automated scyphistoma census in underwater imagery: a useful research and monitoring tool</title>
      <link>/publications/vodopivec2018towards/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/vodopivec2018towards/</guid>
      <description>&lt;p&gt;Manual annotation and counting of entities in underwater photographs is common in many branches of marine biology. With a marked increase in jellyfish populations worldwide, understanding the dynamics of the polyp (scyphistoma) stage of their life cycle is becoming increasingly important. In-situ studies of polyp population dynamics are scarce due to the small size of the polyps and the tedious manual work required to annotate and count large numbers of items in underwater photographs. We devised an experiment that shows a large variance between human annotators, as well as in annotations made by the same annotator. We tackled this problem, which is present in many areas of marine biology, by developing a method for automated detection and counting. Our polyp counter (PoCo) uses a two-stage approach with a fast detector (Aggregated Channel Features) and a precise classifier consisting of a pre-trained Convolutional Neural Network and a Support Vector Machine. PoCo was tested on a year-long image dataset and performed with accuracy comparable to human annotators, but with a 70-fold reduction in time. The algorithm can be used in many marine biology applications, vastly reducing the amount of manual labor and enabling the processing of much larger datasets. The source code is freely available on GitHub.&lt;/p&gt;</description>
    </item>
    <item>
      <title>TransFusion – A Transparency-Based Diffusion Model for Anomaly Detection</title>
      <link>/publications/fucka2024transfusion/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/fucka2024transfusion/</guid>
      <description>&lt;p&gt;Surface anomaly detection is a vital component in manufacturing inspection. Current discriminative methods follow a two-stage architecture composed of a reconstructive network followed by a discriminative network that relies on the reconstruction output. Currently used reconstructive networks often produce poor reconstructions that either still contain anomalies or lack details in anomaly-free regions. Discriminative methods are robust to some reconstructive network failures, suggesting that the discriminative network learns a strong normal appearance signal that the reconstructive networks miss. We reformulate the two-stage architecture into a single-stage iterative process that allows the exchange of information between the reconstruction and localization. We propose a novel transparency-based diffusion process where the transparency of anomalous regions is progressively increased, restoring their normal appearance accurately while maintaining the appearance of anomaly-free regions using localization cues of previous steps. We implement the proposed process as TRANSparency DifFUSION (TransFusion), a novel discriminative anomaly detection method that achieves state-of-the-art performance on both the VisA and the MVTec AD datasets, with an image-level AUROC of 98.5% and 99.2%, respectively. Code: &lt;a href=&#34;https://github.com/MaticFuc/ECCV_TransFusion&#34;&gt;https://github.com/MaticFuc/ECCV_TransFusion&lt;/a&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Video segmentation of water scenes using semi-supervised learning</title>
      <link>/publications/cesnik2021video/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/cesnik2021video/</guid>
      <description>&lt;p&gt;Obstacle detection is a crucial component of unmanned surface vehicles, needed to prevent collisions and unnecessary stops caused by false detections. Autonomous vessels are a relatively unexplored area compared to autonomous ground vehicles, so far fewer densely annotated datasets are available for training modern obstacle detectors. Since manual acquisition of ground-truth segmentation data is time-consuming and expensive, training with minimal supervision is a viable alternative. We evaluate unsupervised domain adaptation methods, which train on a labeled source dataset and an unlabeled target dataset. Four modern adaptation methods are tested (Intra-domain adaptation, Fourier domain adaptation, Instance matching and Bidirectional learning) for training the semantic segmentation network WaSR, which is currently the state of the art for maritime obstacle detection. We consider both the original WaSR and a modified version. Fourier domain adaptation applied to the modified WaSR version outperforms the non-adapted original WaSR by 6.3% in F-measure.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
