LoFTR in TensorFlow: A Re-implementation of Detector-Free Local Feature Matching with Transformers

Robot Perception (ROB-GY 6203) Fall 2022


Rohan Gangakhedkar1*, Fady Algyar1*, Miles Kilcourse1*, Suraj Reddy1*

1New York University   
* denotes equal contribution

Abstract


TL;DR: LoFTR can extract high-quality semi-dense matches even in indistinctive regions with low texture, motion blur, or repetitive patterns. Because of its robustness and strong performance relative to other neural-network methods, we re-implemented LoFTR in TensorFlow to further expand the algorithm's userbase and reach.

LoFTR in TensorFlow matches

This project presents a re-implementation of the LoFTR algorithm, a state-of-the-art deep-neural-network method for detecting feature matches between images. Originally published in PyTorch, a library common in research, the model's complex architecture is re-implemented here in TensorFlow, a framework widely used in industry. Although time constraints prevented a full training run, the outcomes show that this implementation functions as expected; with a training schedule closer to the original paper's, the results are expected to align more closely as well.
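To illustrate the kind of component translated from PyTorch to TensorFlow, the sketch below shows LoFTR's coarse-level dual-softmax matching step expressed with TensorFlow ops. The function name, tensor shapes, and default hyperparameters are illustrative assumptions, not the exact code in this repository.

```python
import tensorflow as tf

def dual_softmax_matching(feat_a, feat_b, temperature=0.1, threshold=0.2):
    """Coarse dual-softmax matching in the style of the LoFTR paper (sketch).

    feat_a: (N, L, C) flattened coarse features of image A
    feat_b: (N, S, C) flattened coarse features of image B
    Returns the (N, L, S) confidence matrix and a boolean mask of
    mutual-nearest-neighbor matches whose confidence exceeds `threshold`.
    """
    # Normalize features, then compute the pairwise similarity matrix.
    feat_a = tf.math.l2_normalize(feat_a, axis=-1)
    feat_b = tf.math.l2_normalize(feat_b, axis=-1)
    sim = tf.einsum('nlc,nsc->nls', feat_a, feat_b) / temperature

    # Dual-softmax: softmax over both matching dimensions, multiplied together.
    conf = tf.nn.softmax(sim, axis=1) * tf.nn.softmax(sim, axis=2)

    # Keep mutual nearest neighbours with confidence above the threshold.
    mask = conf > threshold
    mask = tf.logical_and(
        mask,
        tf.logical_and(
            conf >= tf.reduce_max(conf, axis=2, keepdims=True),
            conf >= tf.reduce_max(conf, axis=1, keepdims=True),
        ),
    )
    return conf, mask
```

The dual-softmax formulation lets the coarse matcher be trained end-to-end, which is one reason the layer-by-layer translation between frameworks has to preserve the exact matching behavior.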


Presentation video (5 min)



Acknowledgements


We would like to specially thank the teaching team for Robot Perception Fall 2022, without whom this project could not have been completed. We would also like to thank NYU for graciously allowing us to use its High Performance Computing (HPC) resources for training our models.