Roland Perko, PhD
A wide range of algorithms has been proposed to detect objects in still images. However, most current approaches rely purely on local appearance and ignore the context in which these objects are embedded. Within our research on visual context, we propose a general approach to extract, learn, and use contextual information from images to increase the performance of classical object detection methods.
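One simple way such contextual information can be combined with a local detector is to blend each detection's appearance score with a learned contextual prior for that image location. The following is only an illustrative sketch, not the method from this work; the function names and the geometric-blend rule are assumptions for the example.

```python
# Hypothetical sketch: fusing a local-appearance detection score with a
# learned contextual prior (e.g., how likely a car is at this image
# position). Both inputs are assumed to lie in [0, 1].

def fuse_scores(appearance_score, context_prior, alpha=0.5):
    """Geometric blend of detector confidence and context prior.

    alpha = 0 ignores context entirely; alpha = 1 trusts context alone.
    """
    return (appearance_score ** (1.0 - alpha)) * (context_prior ** alpha)

# The same local score is boosted in a plausible location and
# suppressed in an implausible one.
road_det = fuse_scores(0.6, 0.9)  # detection on a road surface
sky_det = fuse_scores(0.6, 0.1)   # detection in the sky
```

With such a fusion, two detections with identical local appearance scores are ranked differently once their context is taken into account.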
We explore two modes of positioning in a challenging real-world scenario: single-snapshot-based positioning, improved by a novel high-dimensional feature matching method, and continuous positioning, enabled by the combination of snapshot and incremental positioning. Interestingly, vision enables localization accuracies comparable to GPS.
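The combination of the two modes can be pictured as follows: incremental positioning propagates the pose by relative motion, while an occasional absolute snapshot fix corrects the accumulated drift. The constant-gain filter below is a minimal sketch assumed for illustration, not the estimator used in this work.

```python
# Hedged sketch: combining incremental (dead-reckoning) updates with
# absolute snapshot fixes via a simple constant-gain correction.

def step(estimate, delta, snapshot=None, gain=0.3):
    """Propagate the 2-D estimate by the motion delta; if an absolute
    snapshot fix is available, pull the estimate toward it by `gain`."""
    x, y = estimate
    dx, dy = delta
    x, y = x + dx, y + dy
    if snapshot is not None:
        sx, sy = snapshot
        x += gain * (sx - x)
        y += gain * (sy - y)
    return (x, y)

pos = (0.0, 0.0)
pos = step(pos, (1.0, 0.0))                        # incremental only
pos = step(pos, (1.0, 0.0), snapshot=(2.2, 0.1))   # fused with a fix
```

Without the snapshot fixes, errors in the incremental deltas accumulate without bound; the absolute fixes keep the long-term error bounded.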
While the traditional goal of image segmentation is to provide a figure/ground segmentation for object recognition, or a semantic segmentation to assist humans, we propose to use image segmentation to boost the performance of local invariant feature detectors. In particular, we analyze the performance of the MSER feature detector and show that pruning all features detected on vegetation yields a 67% speed-up without decreasing image matching accuracy.
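The pruning step itself reduces to a mask lookup: features whose center falls on a vegetation segment are discarded before descriptors are computed and matched. The sketch below uses toy data and an assumed feature representation; it is not the implementation from this work.

```python
# Illustrative sketch: drop local features (e.g., MSER regions) whose
# center lies on a vegetation segment, so fewer descriptors need to be
# computed and matched. Mask and features here are toy data.

def prune_features(features, vegetation_mask):
    """Keep only features whose (row, col) center lies off vegetation."""
    kept = []
    for f in features:
        r, c = f["center"]
        if not vegetation_mask[r][c]:
            kept.append(f)
    return kept

mask = [
    [False, True,  True],
    [False, False, True],
    [False, False, False],
]
feats = [{"center": (0, 0)}, {"center": (0, 1)}, {"center": (2, 2)}]
pruned = prune_features(feats, mask)  # feature at (0, 1) is on vegetation
```

Since vegetation regions are highly unstable under viewpoint and seasonal change, removing them costs little matching accuracy while shrinking the candidate set.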