Projects

Current projects

The project aims at a holistic approach towards learning, detection, and recognition/categorisation of visual motion and the phenomena derived from it. The approach is based on a novel and powerful paradigm of learning multi-layer compositional hierarchies. While individual ingredients, such as hierarchical processing, compositionality, and incremental learning, have already been subjects of research, they have, to the best of our knowledge, never been treated in a unified motion-related framework. Such a framework is crucial for robustness, versatility, ease of learning and inference, generalisation, real-time performance, transfer of knowledge, and scalability across a variety of cognitive vision tasks. ARRS project. Project duration: 2011 - 2014.
The research group is involved in basic research in computer vision, with an emphasis on visually enabled cognitive systems involving visual learning and recognition. Topics include recognition and tracking of objects, scenes, and activities in visual cognitive tasks, such as smart vision-based detection and positioning using wearable computing, as well as for mobile robots and cognitive assistants.
Project duration: 2009-2014.

Past projects

Our challenge is to develop a methodology that would bridge the gap between the computer-centered low-level image features and the high-level human-centered semantic meanings.
ARRS project (J2-3607)
Project duration: 2010 - 2013.
The high level aim of this project was to develop a unified theory of self-understanding and self-extension with a convincing instantiation and implementation of this theory in a robot. By self-understanding we mean that the robot has representations of gaps in its knowledge or uncertainty in its beliefs. By self-extension we mean the ability of the robot to extend its own abilities or knowledge by planning learning activities and carrying them out. The project involved six universities and about 30 researchers.
POETICON is a research project in the Seventh Framework Programme that explores the “poetics of everyday life”, i.e., the synthesis of sensorimotor representations and natural language in everyday human interaction.
We are developing methods for image interpretation on mobile platforms with the specific aim of direct interaction with smart objects.
The MOBVIS project identified the application of context to otherwise intractable vision tasks as the key issue for realising smart mobile vision services. To achieve this challenging goal, MOBVIS proposed that three components, (1) multi-modal context awareness, (2) vision-based object recognition, and (3) intelligent map technology, be combined for the first time into a completely innovative system: the attentive interface.
The Visiontrain project addressed the problem of understanding vision from both computational and cognitive points of view. The research approach was based on formal mathematical models and on the thorough experimental validation of these models. Eleven academic partners worked cooperatively on a number of targeted research objectives: (i) computational theories and methods for low-level vision, (ii) motion understanding from image sequences, (iii) learning and recognition of shapes, objects, and categories, (iv) cognitive modelling of the action of seeing, and (v) functional imaging for observing and modelling brain activity.
The main goal of the project was to advance the science of cognitive systems through a multi-disciplinary investigation of requirements, design options, and trade-offs for human-like, autonomous, integrated, physical (e.g., robot) systems, including requirements for architectures, forms of representation, perceptual mechanisms, learning, planning, reasoning and motivation, and action and communication.
The main objective of the EU FP5 project CogVis was to provide the methods and techniques that enable the construction of vision systems capable of task-oriented categorisation and recognition of objects and events in the context of an embodied agent.