Humans perceive their visual surroundings through the projection of light rays through the pupil onto the retina. Aided by motion, they gain an understanding of their environment, as well as of their location within it. The goal of image-based 3D reconstruction is to imbue machines with similar capabilities. The most prominent paradigm for image-based 3D reconstruction is Structure-from-Motion (SfM). Traditionally, SfM has been approached through handcrafted algorithms, which are brittle when their assumptions do not hold. Humans, in contrast, understand their environment intuitively and show remarkable robustness in their ability to localize themselves in, and map, the world.
The main purpose of this thesis is the development of a set of methods that strive toward the next generation of SfM, imbued with intelligence and robustness. In particular, we propose methods operating in 2D, namely learned keypoint detectors, feature descriptors, and dense feature matching, and in 3D, namely threshold-robust relative pose estimation and registration of SfM maps.
First, we develop models to detect keypoints, producing a set of 2D image coordinates, and models to describe the image, producing features. One of our key contributions is decoupling these tasks, which have typically been learned jointly, into distinct objectives, resulting in major performance gains as well as increased modularity. Paper A introduces this decoupled framework, and Paper B further develops the keypoint objective. In Paper C we revisit the keypoint objective from an entirely self-supervised reinforcement learning perspective, yielding several insights and further performance gains.
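To illustrate the decoupled design, a minimal sketch follows, assuming two independent networks: one producing a keypoint score map and one producing a dense descriptor map, with descriptors sampled at the detected locations. Both networks here are untrained placeholders, not the architectures of Papers A-C.

```python
# Sketch of a decoupled detect-then-describe pipeline (placeholder nets).
import torch
import torch.nn.functional as F

detector = torch.nn.Conv2d(3, 1, 3, padding=1)      # -> keypoint score map
descriptor = torch.nn.Conv2d(3, 128, 3, padding=1)  # -> dense descriptor map

image = torch.rand(1, 3, 480, 640)
scores = detector(image)[0, 0]                       # (H, W) keypoint scores
desc_map = descriptor(image)                         # (1, 128, H, W)

# Take the top-k scoring pixels as keypoints.
k = 512
flat = scores.flatten().topk(k).indices
ys, xs = flat // scores.shape[1], flat % scores.shape[1]
kpts = torch.stack([xs, ys], dim=-1).float()         # (k, 2) pixel coords

# Sample descriptors at the keypoint locations (bilinear interpolation).
grid = kpts.clone()
grid[:, 0] = 2 * grid[:, 0] / (scores.shape[1] - 1) - 1
grid[:, 1] = 2 * grid[:, 1] / (scores.shape[0] - 1) - 1
desc = F.grid_sample(desc_map, grid.view(1, 1, -1, 2), align_corners=True)
desc = F.normalize(desc[0, :, 0].T, dim=-1)          # (k, 128), unit norm
print(kpts.shape, desc.shape)
```

Because the two networks share no weights or loss, either component can be swapped independently, which is the modularity the decoupling buys.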
We further develop methods for dense feature matching, i.e., matching every pixel between two images. In Paper D we propose the first dense feature matcher capable of outperforming sparse matching for relative pose estimation. This is significant, as previous work had generally indicated that the sparse or semi-dense paradigm was preferable. In Paper E we greatly improve on almost all components of the method from Paper D, resulting in an extremely robust dense matcher, capable of matching almost any pair of images.
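A dense matcher outputs one match per pixel, typically as a warp together with a per-pixel certainty. The sketch below, with random placeholder tensors standing in for a matcher's output, shows how such a warp is commonly thinned to a set of confident correspondences for downstream pose estimation.

```python
# Sketch: turning a dense warp (one match per pixel) into correspondences.
import numpy as np

rng = np.random.default_rng(0)
H, W = 480, 640
# warp[y, x] = (x', y'): where pixel (x, y) in image A lands in image B.
warp = np.stack(np.meshgrid(np.arange(W), np.arange(H)), axis=-1).astype(np.float32)
warp += rng.normal(0, 2.0, warp.shape)            # placeholder "motion"
certainty = rng.uniform(0, 1, (H, W))             # placeholder confidence

# Keep only confident matches, then subsample for pose estimation.
ys, xs = np.nonzero(certainty > 0.5)
idx = rng.choice(len(xs), size=min(5000, len(xs)), replace=False)
pts_a = np.stack([xs[idx], ys[idx]], axis=-1).astype(np.float32)
pts_b = warp[ys[idx], xs[idx]]
print(pts_a.shape, pts_b.shape)  # (N, 2) correspondences, ready for RANSAC
```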
We then lift our eyes from the 2D image plane into 3D, and investigate relative pose estimation and registration of SfM maps. Relative pose estimation is a difficult task, as non-robust estimation fails in the presence of outliers. Random Sample Consensus (RANSAC), the gold-standard robust estimation method, requires an outlier threshold that is non-trivial to set, and poor choices result in significantly worse performance. In Paper F, we develop an algorithm that automatically estimates this threshold from an initial guess, in a manner less biased than previous approaches, leading to robust performance.
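To make the threshold sensitivity concrete, the following sketch estimates a relative pose with OpenCV's RANSAC-based essential matrix solver on synthetic correspondences, sweeping the inlier threshold. The scene, noise level, and outlier ratio are illustrative assumptions, not data or the algorithm from Paper F.

```python
# Sweep the RANSAC inlier threshold on synthetic two-view correspondences.
import cv2
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])

# Ground-truth relative pose: 5 degree rotation about y, translation along x.
a = np.deg2rad(5.0)
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.0, 0.0])

def project(X, R, t):
    Xc = X @ R.T + t
    x = Xc @ K.T
    return x[:, :2] / x[:, 2:3]

X = rng.uniform([-2, -2, 4], [2, 2, 8], size=(200, 3))  # 3D points
pts1 = project(X, np.eye(3), np.zeros(3)) + rng.normal(0, 0.5, (200, 2))
pts2 = project(X, R, t) + rng.normal(0, 0.5, (200, 2))
pts2[:60] = rng.uniform([0, 0], [640, 480], size=(60, 2))  # 30% outliers

for thresh in [0.25, 1.0, 4.0, 16.0]:
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=thresh)
    if E is None:
        continue
    E = E[:3]  # findEssentialMat may stack several candidate solutions
    n_in = int(np.count_nonzero(mask))
    _, R_est, _, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    err = np.rad2deg(np.arccos(np.clip((np.trace(R_est @ R.T) - 1) / 2, -1, 1)))
    print(f"threshold={thresh:5.2f}px  inliers={n_in:3d}  rot. error={err:.2f} deg")
```

A threshold far below the pixel noise starves the solver of inliers, while one far above it admits outliers; both degrade the estimated rotation, which is the failure mode Paper F targets.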
Finally, we investigate registering SfM maps together. This is particularly interesting in distributed settings where, e.g., robots need to localize with respect to each other's reference frames in order to collaborate. However, image-based localization approaches come with downsides in this setting: computational complexity, compatibility issues, and privacy concerns severely limit the deployability of such systems. In Paper G we propose a new paradigm for registering SfM maps through point cloud registration, circumventing the above limitations. Finding that existing registration models trained on 3D scan data fail on this task, we develop a dataset for SfM registration. Training on the proposed dataset greatly improves performance on the task, showing the potential of the proposed paradigm.
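A minimal sketch of the map-registration idea follows, using Open3D's classical feature-based RANSAC registration as a stand-in for a learned registration model; the file names and parameters are hypothetical.

```python
# Align two SfM point clouds directly, without image-based localization.
import open3d as o3d

def preprocess(pcd, voxel):
    pcd = pcd.voxel_down_sample(voxel)
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return pcd, fpfh

source = o3d.io.read_point_cloud("map_a.ply")  # hypothetical SfM map A
target = o3d.io.read_point_cloud("map_b.ply")  # hypothetical SfM map B
voxel = 0.05
src, src_f = preprocess(source, voxel)
tgt, tgt_f = preprocess(target, voxel)

result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_f, tgt_f, mutual_filter=True,
    max_correspondence_distance=1.5 * voxel,
    # SfM maps are defined only up to scale, so allow a scaled estimate.
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(True),
    ransac_n=3,
    checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
print("Estimated transform:\n", result.transformation)
```

Operating on points rather than images sidesteps exchanging imagery between agents, which is where the privacy and compatibility advantages of the paradigm come from.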
Linköping: Linköping University Electronic Press, 2025, p. 121.