Our work began with analysing only the performance of 6DoF camera tracking in a known 3D rigid scene under varying camera frame-rate (or, strictly, exposure time in computational-photography parlance). This was driven mostly by our intuition that a high frame-rate should be better: image motion between consecutive frames shrinks considerably when the frame-rate setting of the camera is turned up, and any tracking algorithm aimed at real-time performance would obviously prefer images on which it has to do less work. Additionally, since many direct tracking algorithms work by linearising the cost function to obtain a convex approximation, these linearisations become increasingly valid at high frame-rates thanks to the small-motion assumption.

At that point we only wanted to answer a few very simple questions. In doing so, however, we quickly realised that there are a few more parameters we can change that affect the tracker's performance, and that these parameters are intertwined with frame-rate when it comes to performance evaluation. The first that springs to mind is image resolution. We therefore made our questions more specific. Keeping that in mind, we then needed a framework in which we could vary all the parameters continuously and compare performance against perfect ground truth, in order to judge which frame-rate is optimal.

After debating whether to use a real or a synthetic framework for the experiments, we realised that collecting real image data is not easy for several practical reasons. Given these limitations, a synthetic framework was the obvious first choice, as it allows us to exercise full control over all the parameters we are interested in varying and to work out the answers to our questions (which we can verify in real experiments later on). Our main concern was then to make sure that the synthetic images look as realistic as possible, which chiefly means adding realistic motion blur and camera noise to the images.
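To make the small-motion linearisation argument concrete, a common form of the direct-tracking photometric cost and its first-order expansion can be written as follows. The notation here is generic and not taken from the text: $I_t$ is the current image, $w(x;\xi)$ a warp of pixel $x$ parameterised by the 6DoF increment $\xi$.

```latex
\[
E(\xi) \;=\; \sum_{x}\Big( I_t\big(w(x;\xi)\big) - I_{t-1}(x) \Big)^2,
\qquad
I_t\big(w(x;\xi)\big) \;\approx\; I_t(x) \;+\; \nabla I_t(x)^{\top}\,
  \frac{\partial w}{\partial \xi}\bigg|_{\xi=0}\, \xi .
\]
```

The first-order expansion is accurate only when $\xi$ is small, which is precisely what a high frame-rate enforces: less motion between consecutive frames means the quadratic approximation of the cost is trusted over a region that actually contains the solution.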
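The last point, synthesising realistic motion blur and camera noise, can be sketched as temporal averaging of sub-frame renders over the exposure window followed by additive read noise. This is a simplification of a full sensor model; `render` and the pose list are hypothetical stand-ins for a real renderer:

```python
import numpy as np

def synth_frame(render, poses_in_exposure, noise_std=0.01, rng=None):
    """Emulate a blurred, noisy camera frame.

    render            : callable, pose -> HxW float image in [0, 1]
                        (hypothetical renderer interface, an assumption here)
    poses_in_exposure : camera poses sampled across the exposure window
    noise_std         : std-dev of additive Gaussian read noise (simplified
                        noise model; a real sensor also has shot noise etc.)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Temporal average of sub-frame renders approximates motion blur.
    frames = [render(p) for p in poses_in_exposure]
    blurred = np.mean(frames, axis=0)
    # Additive Gaussian noise stands in for camera read noise.
    noisy = blurred + rng.normal(0.0, noise_std, blurred.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Longer exposures are modelled simply by sampling more poses (more motion) inside the window, so the frame-rate/blur trade-off falls out of the same routine.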

Remarks: Camera tracking forms the front end of many systems in vision, notably real-time SLAM. This applicability across many different domains is probably the raison d'être of camera tracking as a still-active research problem, and the source of its widespread popularity. It is therefore imperative that, if we are using camera tracking, we know where it works best and where it breaks. The main goal of our research is to develop systems that can track super-fast motion and can later be used to build models of the scene, even as we throw the camera into it.