We present a method for optimization-based recovery of eye motion from rolling shutter video of the retina. Our approach formulates eye tracking as an optimization problem that jointly estimates the retina's motion and appearance, solved with convex optimization and a constrained variant of gradient descent. By incorporating the rolling shutter imaging model into this joint optimization, we achieve state-of-the-art accuracy both offline and in real time. We apply our method to retinal video captured with an adaptive optics scanning laser ophthalmoscope (AOSLO), demonstrating eye tracking at 1 kHz with accuracies below one arcminute, more than an order of magnitude finer than conventional eye tracking systems.
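To make the formulation concrete, below is a minimal sketch of a joint motion/appearance optimization of this flavor, written in JAX. The purely translational per-scanline motion model, the toy image sizes, and every name in it (sample_bilinear, render_frame, loss) are illustrative assumptions, not the paper's implementation; the sketch only shows how a rolling shutter forward model can be differentiated through so that gradient descent updates the retinal appearance and the per-scanline eye trajectory jointly.

import jax
import jax.numpy as jnp

H, W = 64, 64        # scanlines per frame, pixels per scanline (toy sizes)
N_FRAMES = 4

def sample_bilinear(img, ys, xs):
    """Bilinearly sample img at fractional (ys, xs) coordinates."""
    y0 = jnp.clip(jnp.floor(ys).astype(jnp.int32), 0, img.shape[0] - 2)
    x0 = jnp.clip(jnp.floor(xs).astype(jnp.int32), 0, img.shape[1] - 2)
    dy, dx = ys - y0, xs - x0
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x0 + 1]
    bot = (1 - dx) * img[y0 + 1, x0] + dx * img[y0 + 1, x0 + 1]
    return (1 - dy) * top + dy * bot

def render_frame(appearance, frame_traj):
    """Simplified rolling shutter forward model (an assumption, not the
    paper's): scanline r is imaged while the eye sits at the 2D pixel
    offset frame_traj[r] for that scanline's capture time."""
    def render_scanline(r):
        ys = jnp.broadcast_to(r + frame_traj[r, 0], (W,))
        xs = jnp.arange(W, dtype=jnp.float32) + frame_traj[r, 1]
        return sample_bilinear(appearance, ys, xs)
    return jax.vmap(render_scanline)(jnp.arange(H))

def loss(params, frames):
    """Photometric error between observed frames and frames re-rendered
    from the current appearance and trajectory estimates."""
    appearance, traj = params  # traj: (N_FRAMES, H, 2) per-scanline offsets
    rendered = jax.vmap(render_frame, in_axes=(None, 0))(appearance, traj)
    return jnp.mean((rendered - frames) ** 2)

# Toy observations and zero initialization; real inputs would be AOSLO video.
frames = jnp.zeros((N_FRAMES, H, W))
params = (jnp.zeros((H, W)), jnp.zeros((N_FRAMES, H, 2)))

grad_fn = jax.jit(jax.grad(loss))
lr = 1e-1
for _ in range(100):  # plain gradient descent on both unknowns jointly
    g_app, g_traj = grad_fn(params, frames)
    params = (params[0] - lr * g_app, params[1] - lr * g_traj)

Because each scanline is rendered through the trajectory sample at its own capture time, the gradient ties intra-frame eye motion directly to image evidence, which is the property the rolling shutter formulation exploits.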
@inproceedings{shenoy2021rslam,
  title={R-SLAM: Optimizing Eye Tracking from Rolling Shutter Video of the Retina},
  author={Shenoy, Jay and Fong, James and Tan, Jeffrey and Roorda, Austin and Ng, Ren},
  booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}
This work was supported by a Hellman Fellowship, by the Air Force Office of Scientific Research under award number FA9550-20-1-0195, and by National Institutes of Health (NIH) grant R01EY023591.