J. Moravec, R. Šára
Proceedings of the 23rd Computer Vision Winter Workshop, Český Krumlov, Czech Republic, pp. 27-35. February 5-7, 2018.
We present a novel method for online tracking and refinement of LiDAR–camera system calibration. The method is correspondence-free and formulated as a maximum-likelihood learning task. It is based on the consistency between projected LiDAR point-cloud corners and optical image edges. The likelihood function is robustified using a model in which the inlier/outlier label of each image edge pixel is marginalized out. The learning is performed by a stochastic online algorithm that includes a delayed learning mechanism improving its stability. Ground-truth experimental results are shown on KITTI sequences with known reference calibration. Assuming motion-compensated LiDAR data, the method is able to track synthetic rotation calibration drift with an accuracy of about 0.06 degrees in the yaw and roll angles and 0.1 degrees in the pitch angle. The basin of attraction of the optimization is about ±1.2 degrees. The method is able to track rotation calibration parameter drift of 0.02 degrees per measurement mini-batch. Full convergence occurs after about 50 mini-batches. We conclude that the method is suitable for real-scene driving scenarios.
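The robustification described above, in which a binary inlier/outlier label per edge pixel is marginalized out of the likelihood, can be sketched as a standard Gaussian-plus-uniform mixture. The sketch below is illustrative only: the function name, the residual model, and all parameter values (`sigma`, `inlier_prior`, `outlier_density`) are assumptions, not quantities from the paper.

```python
import numpy as np

def robust_log_likelihood(residuals, sigma=1.0, inlier_prior=0.8,
                          outlier_density=0.01):
    """Log-likelihood with the per-pixel inlier/outlier label summed out.

    Each residual r_i (e.g. a projected-corner-to-edge distance) is
    explained either by an inlier model (zero-mean Gaussian) or an
    outlier model (uniform density u), and the hidden binary label is
    marginalized:  p(r_i) = w * N(r_i; 0, sigma^2) + (1 - w) * u.
    All names and parameter values here are illustrative assumptions.
    """
    residuals = np.asarray(residuals, dtype=float)
    # Inlier branch: Gaussian density weighted by the inlier prior w.
    inlier = (inlier_prior
              * np.exp(-0.5 * (residuals / sigma) ** 2)
              / (sigma * np.sqrt(2.0 * np.pi)))
    # Outlier branch: constant uniform density weighted by (1 - w).
    outlier = (1.0 - inlier_prior) * outlier_density
    # Marginal log-likelihood over all residuals.
    return np.sum(np.log(inlier + outlier))
```

The constant outlier term bounds the penalty that any single gross residual can contribute, which is what makes such a marginalized likelihood robust compared to a pure Gaussian model.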