Research/Context aware object detection

A wide range of algorithms has been proposed to detect objects in still images. However, most current approaches rely purely on local appearance and ignore the context in which the objects are embedded. Within our research on visual context we propose a general approach to extracting, learning, and using contextual information from images to improve the performance of classical object detection methods. Two important properties of the proposed approach are that it can be combined with any existing object detection method and that it provides a general framework not limited to one specific object category.

File:Perko research context workflow.png

Figure: Concept of context awareness for object detection, illustrated for pedestrian detection: context probability maps are extracted from the input image, and from them a context confidence score is estimated for one specific object category. In parallel, a standard object recognition method detects objects of that category. In the final step both results are fused, improving object detection accuracy.
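The fusion step in the figure can be sketched as follows. This is a minimal illustration, not the exact combination rule used in our system: it assumes the base detector returns scored bounding boxes and that the context probability map is a per-pixel prior for the object category; `alpha` is a hypothetical weighting parameter.

```python
import numpy as np

def fuse_detections(detections, context_map, alpha=0.5):
    """Fuse detector confidences with a context probability map.

    `detections` is a list of (x, y, w, h, score) boxes from any base
    detector; `context_map` is an H x W array of per-pixel context
    probabilities for the object category. `alpha` weights appearance
    against context (an illustrative parameter, not from the paper).
    """
    fused = []
    for (x, y, w, h, score) in detections:
        # Context confidence: mean prior over the box footprint.
        patch = context_map[y:y + h, x:x + w]
        context_score = float(patch.mean()) if patch.size else 0.0
        # Weighted combination of the appearance and context cues.
        fused.append((x, y, w, h, alpha * score + (1 - alpha) * context_score))
    return fused
```

Because the fusion only rescores boxes, any existing detector can be plugged in unchanged, which reflects the first property claimed above.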

Publications


Videos

These videos show the priors for pedestrian occurrence for short sequences captured in the center of Ljubljana. Each video shows the input frame, the feature maps for geometry and texture, and the priors based on geometry, on texture, and on both feature maps combined.

  • This short sequence demonstrates the robustness of our approach to in-plane rotation and tilting of the camera. The prior based on geometry is rather unstable, whereas the prior based on texture provides an accurate focus of attention throughout. video (avi) video (mpeg) video (divX)
  • For this video I walked through Copova ulica, a pedestrian zone in the center of Ljubljana. The prior for pedestrian occurrence correctly highlights regions containing pedestrians whenever any are present in the current frame. Again, the prior based on texture features is more stable than the one based on geometry. video (avi) video (mpeg) video (divX)
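The videos above show a prior built from both feature maps together. One simple way such a joint prior could be formed (an illustrative assumption, not necessarily the combination rule used in the videos) is a normalized element-wise product of the geometry-based and texture-based priors, which keeps only regions supported by both cues:

```python
import numpy as np

def combined_prior(geometry_prior, texture_prior, eps=1e-8):
    """Combine two per-pixel priors into one joint pedestrian prior.

    Both inputs are H x W arrays of non-negative values. The element-wise
    product suppresses regions that either cue rejects; the result is
    normalized to sum to one so it can be read as a probability map.
    """
    joint = geometry_prior * texture_prior
    # eps guards against division by zero when both priors are empty.
    return joint / (joint.sum() + eps)
```

A product combination also explains the behavior seen in the first video: when the geometry-based prior becomes unstable, regions it wrongly suppresses can only be recovered by the texture cue, so the stability of the texture-based prior dominates the joint result.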