Overview of the List of Publications

The UP-Drive project resulted in 45 scientific publications in journals and at international conferences and workshops.

The blog entries below provide an overview of these publications, including links to their open-access versions.



Motion Prediction Influence on the Pedestrian Intention Estimation Near a Zebra Crossing

J. Škovierová, A. Vobecký, M. Uller, R. Škoviera, V. Hlaváč

4th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS 2018)

The reported work contributes to self-driving car efforts, more specifically to scenario understanding from the ego-car's point of view. We focus on estimating the intentions of pedestrians near a zebra crossing. First, we predict the future motion of detected pedestrians over a three-second time horizon. Second, we estimate the intention of each pedestrian to cross the street using a Bayesian network. Results indicate that the dependence between the error rate of motion prediction and that of intention estimation is sub-linear. Thus, despite the lower performance of motion prediction for time scopes larger than one second, the intention estimation remains relatively stable.
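The Bayesian update at the heart of such an intention estimator can be sketched as a simple recursive application of Bayes' rule over the predicted motion. This is a minimal illustration only; the probabilities and the single "approaching the crossing" observation below are made-up placeholders, not the structure or values of the network used in the paper.

```python
def update_intention(prior_cross, approaching,
                     p_approach_given_cross=0.8, p_approach_given_not=0.3):
    """Update P(cross) after observing whether the predicted motion
    brings the pedestrian closer to the zebra crossing (toy model)."""
    if approaching:
        l_cross, l_not = p_approach_given_cross, p_approach_given_not
    else:
        l_cross, l_not = 1 - p_approach_given_cross, 1 - p_approach_given_not
    post = l_cross * prior_cross
    post /= post + l_not * (1 - prior_cross)
    return post

# Fuse three one-second prediction steps that all approach the crossing:
# the belief in "will cross" rises monotonically from the 0.5 prior.
p = 0.5
for observation in [True, True, True]:
    p = update_intention(p, observation)
```

A sub-linear dependence, as reported above, means that even noisy long-horizon motion predictions shift this posterior only gradually, so the final intention estimate stays comparatively stable.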

Paper (.pdf)

Robust Maximum-likelihood On-Line LiDAR-to-Camera Calibration Monitoring and Refinement

J. Moravec, R. Sara

Proceedings of the 23rd Computer Vision Winter Workshop, Cesky Krumlov, Czech Republic, pp. 27-35. February 5-7, 2018.

We present a novel method for online LiDAR–camera calibration tracking and refinement. The method is correspondence-free and formulated as a maximum-likelihood learning task. It is based on the consistency of projected LiDAR point-cloud corners with optical image edges. The likelihood function is robustified using a model in which the inlier/outlier label for each image edge pixel is marginalized out. The learning is performed by a stochastic online algorithm that includes a delayed-learning mechanism improving its stability. Ground-truth experimental results are shown on KITTI sequences with known reference calibration. Assuming motion-compensated LiDAR data, the method is able to track synthetic rotation calibration drift with about 0.06-degree accuracy in the yaw and roll angles and 0.1-degree accuracy in the pitch angle. The basin of attraction of the optimization is about ±1.2 degrees. The method is able to track rotation calibration parameter drift of 0.02 degrees per measurement mini-batch. Full convergence occurs after about 50 mini-batches. We conclude that the method is suitable for real-scene driving scenarios.
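The robustification described above, marginalizing out the inlier/outlier label, amounts to evaluating each residual under a mixture of a Gaussian inlier density and a flat outlier density. The following one-dimensional sketch shows the idea on a toy "yaw search" whose residual model, mixture weights, and grid are all illustrative assumptions, not the paper's actual formulation:

```python
import math

def robust_nll(residuals, sigma=1.0, inlier_prior=0.8, outlier_density=0.05):
    """Negative log-likelihood with the inlier/outlier label marginalized out:
    p(r) = w * N(r; 0, sigma^2) + (1 - w) * U  (U a constant outlier density)."""
    nll = 0.0
    for r in residuals:
        gauss = math.exp(-0.5 * (r / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        nll -= math.log(inlier_prior * gauss + (1 - inlier_prior) * outlier_density)
    return nll

def residuals(yaw):
    """Hypothetical projection errors of edge points for a given yaw offset."""
    edges = [0.0, 1.0, -1.0, 30.0]   # the last value is a gross outlier
    return [e - yaw for e in edges]

# Grid search over a ±1.2 degree window (mirroring the reported basin of
# attraction): the flat outlier term caps the gross error's influence, so
# the minimizer stays at the true offset of 0.
best = min((robust_nll(residuals(y / 100.0)), y / 100.0) for y in range(-120, 121))
best_yaw = best[1]
```

With a plain (non-robust) Gaussian likelihood, the single 30-unit outlier would drag the estimate toward it; the marginalized mixture gives every residual a bounded penalty instead.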

Paper (.pdf)

Estimation of Absolute Scale in Monocular SLAM Using Synthetic Data

Danila Rukhovich, Daniel Mouritzen, Ralf Kaestner, Martin Rufli, Alexander Velizhev

International Conference on Computer Vision (ICCV) 2019 – Workshop on Computer Vision for Road Scene Understanding and Autonomous Driving

This paper addresses the problem of scale estimation in monocular SLAM by estimating absolute distances between the camera centers of consecutive image frames. These estimates would improve the overall performance of classical (not deep) SLAM systems and allow metric feature locations to be recovered from a single monocular camera. We propose several network architectures that improve scale estimation accuracy over the state of the art. In addition, we exploit the possibility of training the neural network only with synthetic data derived from a computer graphics simulator. Our key insight is that, using only synthetic training inputs, we can achieve scale estimation accuracy similar to that obtained from real data. This indicates that fully annotated simulated data is a viable alternative to existing deep-learning-based SLAM systems trained on real (unlabeled) data. Our experiments with unsupervised domain adaptation also show that the difference in visual appearance between simulated and real data does not affect scale estimation results. Our method operates on low-resolution images (0.03 MP), which makes it practical for real-time SLAM applications with a monocular camera.
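To see how per-frame absolute distance estimates resolve the scale ambiguity, consider rescaling an up-to-scale monocular trajectory step by step. This is a minimal 2-D sketch under assumed inputs; the trajectory, the per-step metric lengths, and the per-step (rather than global) scaling are illustrative choices, not the paper's pipeline:

```python
import math

def rescale_trajectory(positions, abs_step_lengths):
    """Scale each SLAM step so its length matches the estimated metric
    inter-frame distance (as a scale-estimation network would provide)."""
    metric = [positions[0]]
    for i in range(1, len(positions)):
        dx = positions[i][0] - positions[i - 1][0]
        dy = positions[i][1] - positions[i - 1][1]
        norm = math.hypot(dx, dy) or 1.0    # guard against zero-length steps
        s = abs_step_lengths[i - 1] / norm  # per-step scale factor
        metric.append((metric[-1][0] + s * dx, metric[-1][1] + s * dy))
    return metric

slam_positions = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1)]  # arbitrary scale
step_lengths_m = [1.5, 1.5]                            # assumed network output
metric_positions = rescale_trajectory(slam_positions, step_lengths_m)
```

Each reconstructed step now has the estimated metric length, turning the up-to-scale trajectory into a metric one while preserving its shape per step.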

Paper (.pdf)

A Decentralized Trust-minimized Cloud Robotics Architecture

Alessandro Simovic, Ralf Kaestner and Martin Rufli

International Conference on Intelligent Robots and Systems (IROS) 2017 – Poster Track

We introduce a novel, decentralized architecture facilitating consensual, blockchain-secured computation and verification of data/knowledge. Through the integration of (i) a decentralized content-addressable storage system, (ii) a decentralized communication and time stamping server, and (iii) a decentralized computation module, it enables a scalable, transparent, and semantically interoperable cloud robotics ecosystem, capable of powering the emerging internet of robots.

Paper (.pdf)   Poster (.pdf)