# Incremental subspace learning of invariant representations

Most generative learning methods can be adapted to novel data by an incremental update step. However, as the parameters of the learned model change, the representation of the original data changes as well. If the original data has been discarded (as is usually the case in online learning), the new representation must be computed in a way that minimizes information loss. We introduce a method for incrementally updating the popular subspace representation computed by Incremental Principal Component Analysis (IPCA). We also develop a method for efficient PCA of rotated templates that avoids decomposing the covariance matrix. To achieve invariance to illumination changes, we apply robust retrieval of coefficients using a set of basis images filtered with steerable filters; this technique preserves the informative power of the basis set while achieving significant robustness of localization even under severe changes in illumination.
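To make the incremental-update idea concrete, the following is a minimal sketch of a one-sample IPCA step in the style of classical incremental eigenspace updates (augment the basis with the residual of the new sample, then solve a small eigenproblem). This is an illustrative assumption about the update scheme, not the paper's exact algorithm; the function name `ipca_update` and all variable names are hypothetical.

```python
import numpy as np

def ipca_update(mean, U, S, n, x):
    """One-sample incremental PCA update (illustrative sketch).

    mean : (d,)   current sample mean
    U    : (d, k) current orthonormal eigenbasis
    S    : (k,)   current eigenvalues (variances along U)
    n    : number of samples seen so far
    x    : (d,)   new observation
    """
    x_c = x - mean
    g = U.T @ x_c                  # coefficients of x in the current basis
    r = x_c - U @ g                # residual orthogonal to the subspace
    r_norm = np.linalg.norm(r)
    if r_norm > 1e-10:
        # augment the basis with the normalized residual direction
        U_aug = np.hstack([U, (r / r_norm)[:, None]])
        g_aug = np.append(g, r_norm)
    else:
        U_aug, g_aug = U, g
    # project the updated covariance onto the (k+1)-dim augmented basis:
    # C' = n/(n+1) C + n/(n+1)^2 (x - mean)(x - mean)^T
    k = U_aug.shape[1]
    D = np.zeros((k, k))
    D[:len(S), :len(S)] = np.diag(S) * n / (n + 1)
    D += np.outer(g_aug, g_aug) * n / (n + 1) ** 2
    vals, vecs = np.linalg.eigh(D)          # small (k+1)x(k+1) problem
    order = np.argsort(vals)[::-1]
    # rotate the augmented basis into the new eigenbasis; the smallest
    # component can be truncated here to keep the dimension bounded
    return (mean + x_c / (n + 1),           # updated mean
            U_aug @ vecs[:, order],         # updated eigenbasis
            vals[order],                    # updated eigenvalues
            n + 1)
```

Because the old data enter only through `mean`, `U`, and `S`, the update runs without access to the original samples, which is exactly the constraint the online setting imposes.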