L2D2: Learnable Line Detector and Descriptor

Authors:

Hichem Abdellali, Robert Frohlich, Viktor Vilagos and Zoltan Kato

Abstract:

A novel learnable line segment detector and descriptor is proposed that allows efficient extraction and matching of 2D lines via the angular distance of 128-dimensional unit descriptor vectors. While many handcrafted and deep features have been proposed for keypoints, only a few methods exist for line segments. Line segments, however, are commonly found in man-made environments, in particular urban scenes, which makes them important for applications such as pose estimation, visual odometry, and 3D reconstruction. Our method relies on a two-stage deep convolutional neural network architecture: in stage 1, candidate 2D line segments are detected, and in stage 2, a descriptor is generated for the extracted lines. The network is trained in a self-supervised way using an automatically collected dataset of matching and non-matching line segments across (substantially) different views of 3D lines. Experimental results confirm the state-of-the-art performance of the proposed L2D2 network on two well-known autonomous driving datasets, both in terms of detected line matches and when used for line-based camera pose estimation and tracking.
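
The abstract states that line matching is performed via the angular distance between 128-dimensional unit descriptor vectors. The following minimal sketch illustrates what such matching could look like; the mutual nearest-neighbour strategy, the 0.5 rad threshold, and all function names are illustrative assumptions and are not taken from the released L2D2 code.

    import numpy as np

    def angular_distance_matrix(desc_a, desc_b):
        """Pairwise angular distance between two sets of unit descriptors.

        desc_a: (N, 128) array, rows assumed L2-normalized.
        desc_b: (M, 128) array, rows assumed L2-normalized.
        Returns an (N, M) matrix of angles in radians.
        """
        # For unit vectors the cosine similarity is a plain dot product;
        # clipping guards against values slightly outside [-1, 1] due to rounding.
        cos_sim = np.clip(desc_a @ desc_b.T, -1.0, 1.0)
        return np.arccos(cos_sim)

    def match_lines(desc_a, desc_b, max_angle=0.5):
        """Mutual nearest-neighbour matching under an angular-distance threshold
        (the matching rule and threshold are assumptions for illustration)."""
        dist = angular_distance_matrix(desc_a, desc_b)
        nn_ab = dist.argmin(axis=1)   # best match in B for each line in A
        nn_ba = dist.argmin(axis=0)   # best match in A for each line in B
        matches = []
        for i, j in enumerate(nn_ab):
            if nn_ba[j] == i and dist[i, j] < max_angle:
                matches.append((i, j))
        return matches

    # Toy usage with random descriptors; real descriptors would come from
    # the stage-2 network described in the paper.
    rng = np.random.default_rng(0)
    da = rng.normal(size=(40, 128)); da /= np.linalg.norm(da, axis=1, keepdims=True)
    db = rng.normal(size=(55, 128)); db /= np.linalg.norm(db, axis=1, keepdims=True)
    print(len(match_lines(da, db)))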


  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration July 30, 2021 (extended from July 23)
Paper submission July 30, 2021
Supplementary August 8, 2021
Tutorial submission August 15, 2021
Tutorial notification August 31, 2021
Rebuttal period September 16-22, 2021
Paper notification October 1, 2021
Camera ready October 15, 2021
Demo submission November 15, 2021 (extended from July 30)
Demo notification November 19, 2021 (extended from October 1)
Tutorial November 30, 2021
Main conference December 1-3, 2021

  Sponsors