<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Mobile Robotics on ViCoS Lab</title>
    <link>/tags/mobile-robotics/</link>
    <description>Recent content in Mobile Robotics on ViCoS Lab</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="/tags/mobile-robotics/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Adaptive Dynamic Window Approach for Local Navigation</title>
      <link>/publications/dobrevski2020adaptive/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/dobrevski2020adaptive/</guid>
      <description>&lt;p&gt;Local navigation is an essential ability of any mobile robot working in a real-world environment. One of the most commonly used methods for local navigation is the Dynamic Window Approach (DWA), which heavily depends on the settings of the parameters in its cost function. Since the optimal choice of the parameters depends on the environment, which may vary significantly and change at any time, the parameters should be chosen dynamically in a data-driven way. To address this problem, we propose a novel deep convolutional neural network that dynamically predicts these parameters from the sensor readings. The network is trained using a state-of-the-art reinforcement learning algorithm. In this way, we combine the power of data-driven learning with the dynamic model of the robot, enabling adaptation to the current environment while guaranteeing collision-free movement and smooth trajectories of the mobile robot. The experimental results show that the proposed method outperforms the DWA method as well as its recent extension.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deep reinforcement learning for map-less goal-driven robot navigation</title>
      <link>/publications/dobrevski2021deep/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/dobrevski2021deep/</guid>
      <description>&lt;p&gt;Mobile robots that operate in real-world environments need to be able to safely navigate their surroundings. Obstacle avoidance and path planning are crucial capabilities for achieving autonomy of such systems. However, in new or dynamic environments, navigation methods that rely on an explicit map of the environment can be impractical or even impossible to use. We present a new local navigation method for steering the robot to global goals without relying on an explicit map of the environment. The proposed navigation model is trained in a deep reinforcement learning framework based on the Advantage Actor–Critic method and directly translates robot observations into movement commands. We evaluate and compare the proposed navigation method with standard map-based approaches on several navigation scenarios in simulation and demonstrate that our method can navigate the robot even without a map, or when the map becomes corrupted, where the standard approaches fail. We also show that our method can be directly transferred to a real robot.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
