<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>3-DOF on ViCoS Lab</title>
    <link>/tags/3-dof/</link>
    <description>Recent content in 3-DOF on ViCoS Lab</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="/tags/3-dof/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Center Direction Network for Grasping Point Localization on Cloths</title>
      <link>/publications/tabernik2024center/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/tabernik2024center/</guid>
      <description>&lt;p&gt;Object grasping is a fundamental challenge in robotics and computer vision, critical for advancing robotic manipulation capabilities. Deformable objects, like fabrics and cloths, pose additional challenges due to their non-rigid nature. In this work, we introduce CeDiRNet-3DoF, a deep-learning model for grasp point detection, with a particular focus on cloth objects. CeDiRNet-3DoF employs center direction regression alongside a localization network, attaining first place in the perception task of ICRA 2023&amp;rsquo;s Cloth Manipulation Challenge. Recognizing the lack of standardized benchmarks in the literature, which hinders effective method comparison, we present the ViCoS Towel Dataset. This extensive benchmark dataset comprises 8,000 real and 12,000 synthetic images, serving as a robust resource for training and evaluating contemporary data-driven deep-learning approaches. Extensive evaluation revealed CeDiRNet-3DoF&amp;rsquo;s robust real-world performance, outperforming state-of-the-art methods, including the latest transformer-based models. Our work bridges a crucial gap, offering a robust solution and benchmark for cloth grasping in computer vision and robotics. Code and dataset are available at: &lt;a href=&#34;https://github.com/vicoslab/CeDiRNet-3DoF&#34;&gt;https://github.com/vicoslab/CeDiRNet-3DoF&lt;/a&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Localization and pose estimation of objects in three degrees of freedom using center direction vectors</title>
      <link>/publications/tabernik2023lokalizacija/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/tabernik2023lokalizacija/</guid>
      <description>&lt;p&gt;In this paper, we propose an approach to localize and estimate the pose of objects in three degrees of freedom (3-DOF). Our method is based on point localization combined with regression of the orientation angle for each detected object. We extend an existing point localization method to estimate the orientation of all detected objects in an image. The orientation regression is parameterized with trigonometric functions, in the same way as the direction to the object center. We evaluate our method on the proposed screw dataset, composed of a training set containing synthetic images with photorealistic appearance and a test set containing real images of screws. Compared to the state-of-the-art 6-DOF pose estimation method applied to the 3-DOF problem, our approach achieves comparable results at a significantly lower computational cost.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
