Lifted Semantic Graph Embedding for Omnidirectional Place Recognition
Authors: Chao Zhang, Ignas Budvytis, Stephan Liwicki and Roberto Cipolla
Abstract: Typical place recognition depends on the visual appearance and camera position of query images, without explicit use of domain knowledge or of the geometric relationships between key features in the scene. We exploit semantic grouping of pixels and camera-pose-robust scene graphs to perform structure-based visual localization for place recognition. In particular, we first formulate place recognition as an image retrieval task. Then, we lift the omnidirectional input images into 3D space and compute a rotation- and translation-invariant semantic graph embedding to encode query and reference images. Finally, place information is obtained through graph similarity matching. Our graph representation is a simple addition to standard image embeddings with minimal overhead, yet it is aware of objects and their geometric relationships. In our experiments, we show improvement over typical place recognition, especially in environments with repetitions and dynamic appearance changes.
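To illustrate why a graph built in 3D can be rotation- and translation-invariant, the sketch below encodes a scene as a histogram of pairwise distances between semantic node centroids and retrieves the best-matching reference by cosine similarity. This is a minimal illustration of the general idea, not the authors' method: the function names, the histogram binning, and the toy data are all assumptions introduced here.

```python
import numpy as np

def graph_embedding(positions, labels, num_classes, bins=8, max_dist=10.0):
    """Histogram of inter-node distances per (class, class) pair.

    positions: (N, 3) array of 3D node centroids (e.g. lifted from an image).
    labels:    (N,) integer semantic class per node.
    Pairwise Euclidean distances are unchanged by any rigid camera motion,
    so the embedding is rotation- and translation-invariant by construction.
    (Illustrative sketch only, not the paper's embedding.)
    """
    emb = np.zeros((num_classes, num_classes, bins))
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = np.linalg.norm(positions[i] - positions[j])
            b = min(int(d / max_dist * bins), bins - 1)
            a, c = sorted((labels[i], labels[j]))  # order-free class pair
            emb[a, c, b] += 1
    v = emb.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def retrieve(query_emb, reference_embs):
    """Index of the best-matching reference by cosine similarity
    (embeddings are already L2-normalised)."""
    return int(np.argmax(reference_embs @ query_emb))

# Toy usage: a rigidly transformed copy of a scene should match itself.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, size=(6, 3))
lab = np.array([0, 0, 1, 1, 2, 2])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
pos_rot = pos @ R.T + np.array([1.0, -2.0, 0.5])  # rotation + translation

refs = np.stack([graph_embedding(pos, lab, 3),
                 graph_embedding(rng.uniform(0, 5, (6, 3)), lab, 3)])
print(retrieve(graph_embedding(pos_rot, lab, 3), refs))  # matches index 0
```

Because rigid motion preserves inter-point distances, the transformed scene produces the same embedding as the original and is retrieved despite the changed camera pose.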