Lifted Semantic Graph Embedding for Omnidirectional Place Recognition

Authors:

Chao Zhang, Ignas Budvytis, Stephan Liwicki and Roberto Cipolla

Abstract:

Typical place recognition depends on the visual appearance and camera position of query images, without explicitly using domain knowledge or the geometric relationships between key features in the scene. We exploit semantic grouping of pixels and camera-pose-robust scene graphs to perform structure-based visual localization for place recognition. In particular, we first formulate place recognition as an image retrieval task. We then lift the omnidirectional input images into 3D space and compute a rotation- and translation-invariant semantic graph embedding to encode query and reference images. Finally, place information is obtained through graph similarity matching. Our graph representation is a simple addition to standard image embeddings, adding minimal overhead while capturing objects and their geometric relationships. In our experiments, we show improvements over typical place recognition, especially in environments with repetitions and dynamic appearance changes.
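As a purely illustrative reading of the pipeline the abstract outlines, the Python sketch below lifts an equirectangular depth map to 3D, builds one graph node per semantic class, and encodes pairwise centroid distances, which are invariant to rotation and translation by construction. Everything here is an assumption rather than the authors' method: the spherical projection model, the centroid-distance embedding, the fusion weight alpha, and all function names (lift_equirectangular, semantic_graph_embedding, fused_score) are hypothetical.

```python
import numpy as np
from itertools import combinations

def lift_equirectangular(depth):
    """Back-project an equirectangular depth map (H x W) to one 3D point
    per pixel, assuming a spherical camera model (an assumption; the paper
    may lift the omnidirectional input differently)."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    lon = (u / W) * 2.0 * np.pi - np.pi        # azimuth in [-pi, pi)
    lat = np.pi / 2.0 - (v / H) * np.pi        # elevation in [-pi/2, pi/2]
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None]             # (H, W, 3) points

def semantic_graph_embedding(points, labels, num_classes):
    """One graph node per semantic class (its 3D centroid); the embedding
    is the vector of pairwise centroid distances, indexed by class pair.
    Pairwise Euclidean distances are unchanged by any rigid rotation or
    translation of the scene, so this code is pose-invariant by design."""
    centroids = {c: points[labels == c].mean(axis=0) for c in np.unique(labels)}
    emb = []
    for a, b in combinations(range(num_classes), 2):
        if a in centroids and b in centroids:
            emb.append(np.linalg.norm(centroids[a] - centroids[b]))
        else:
            emb.append(0.0)                    # class pair absent in this scene
    return np.asarray(emb)

def cosine(q, r):
    """Cosine similarity between two embedding vectors."""
    return float(q @ r) / (np.linalg.norm(q) * np.linalg.norm(r) + 1e-8)

def fused_score(app_q, app_r, graph_q, graph_r, alpha=0.5):
    """Combine a standard appearance embedding with the graph embedding,
    reflecting the abstract's framing of the graph representation as an
    addition to image embeddings. alpha is a guessed weight, not from
    the paper."""
    return alpha * cosine(app_q, app_r) + (1.0 - alpha) * cosine(graph_q, graph_r)
```

Under this sketch, retrieval ranks reference scans by fused_score against the query and returns the argmax; using only pairwise distances sidesteps any need to align the query and reference coordinate frames.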


  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration July 30, 2021
Paper submission July 30, 2021
Supplementary August 8, 2021
Tutorial submission August 15, 2021
Tutorial notification August 31, 2021
Rebuttal period September 16-22, 2021
Paper notification October 1, 2021
Camera ready October 15, 2021
Demo submission Nov 15, 2021
Demo notification Nov 19, 2021
Tutorial November 30, 2021
Main conference December 1-3, 2021
