Overview of the List of Publications

The UP-Drive project resulted in 45 scientific publications in journals, international conferences and workshops.

The blog entries below provide an overview of these publications including links to their open access versions.

V-shaped interval insensitive loss for ordinal classification

K. Antoniuk, V. Franc, V. Hlaváč

Machine Learning

We address the problem of learning ordinal classifiers from partially annotated examples. We introduce a V-shaped interval-insensitive loss function to measure the discrepancy between the predictions of an ordinal classifier and a partial annotation provided in the form of intervals of candidate labels. We show that, under reasonable assumptions on the annotation process, the Bayes risk of the ordinal classifier can be bounded by the expectation of an associated interval-insensitive loss. We propose several convex surrogates of the interval-insensitive loss, which are used to formulate convex learning problems. We describe a variant of the cutting plane method which can solve large instances of the learning problems. Experiments on a real-life application, human age estimation, show that an ordinal classifier learned from cheap, partially annotated examples can match the accuracy of the supervised methods used so far, which require expensive, precisely annotated examples.
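The core idea of the loss can be illustrated with a minimal sketch (a generic illustration of a V-shaped interval-insensitive loss, not the paper's exact formulation): the loss is zero whenever the prediction falls inside the annotated interval of candidate labels, and grows linearly with the distance to the nearest interval endpoint.

```python
def interval_insensitive_loss(y_pred: int, lo: int, hi: int) -> int:
    """V-shaped interval-insensitive loss: zero when the predicted label
    lies inside the candidate interval [lo, hi], otherwise growing linearly
    with the distance to the nearest interval endpoint."""
    if y_pred < lo:
        return lo - y_pred
    if y_pred > hi:
        return y_pred - hi
    return 0
```

For a fully annotated example the interval degenerates to a single label (lo == hi) and the loss reduces to the ordinary absolute-error loss used in ordinal classification.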

Paper (.pdf)

Single Arm Robotic Garment Folding Path Generation

V. Petrík, V. Smutný, P. Krsek, V. Hlaváč

Advanced Robotics

We address accurate single-arm robotic garment folding. The folding capability is influenced mostly by the folding path performed by the robotic arm. This paper presents a new method for folding path generation based on the static equilibrium of forces. The existing approach based on a similar principle was confirmed to be accurate for one-dimensional strips only. We generalize the method to two-dimensional shapes by modeling the garment as an elastic shell. The path generated by our method prevents the garment from slipping while it is folded on a low-friction surface. We demonstrate the accuracy of this approach by comparing our paths (a) with the existing method on modeled one-dimensional strips of different materials, and (b) experimentally in real robotic folding.
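The no-slip requirement that the generated path must satisfy can be illustrated roughly as follows (a generic Coulomb-friction check, not the paper's elastic-shell model):

```python
def stays_put(tangential_force: float, normal_force: float, mu: float) -> bool:
    """Coulomb no-slip condition: the garment does not slide on the table
    as long as the tangential load stays inside the friction cone mu * N."""
    return abs(tangential_force) <= mu * normal_force
```

On a low-friction surface mu is small, so the folding path must keep the tangential forces exerted on the garment correspondingly small at every point of the motion.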

Paper (.pdf)

Multi-view facial landmark detector learned by the Structured Output SVM

M. Uřičář, V. Franc, D. Thomas, A. Sugimoto, V. Hlaváč

Image and Vision Computing

We propose a real-time multi-view landmark detector based on Deformable Part Models (DPMs). The detector is composed of a mixture of tree-based DPMs, each component describing landmark configurations in a specific range of viewing angles. The use of view-specific DPMs makes it possible to capture a large range of poses and to deal with the problem of self-occlusion. The parameters of the detector are learned from annotated examples by the Structured Output Support Vector Machines algorithm, with a learning objective directly related to the performance measure used for detector evaluation. The tree-based DPM allows a globally optimal landmark configuration to be found by dynamic programming. We propose a coarse-to-fine search strategy which enables real-time processing by dynamic programming even on high-resolution images. Empirical evaluation on “in the wild” images shows that the proposed detector is competitive with state-of-the-art methods in terms of speed and accuracy, yet, unlike the other methods, it retains the guarantee of finding a globally optimal estimate.
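The global-optimality guarantee rests on max-sum dynamic programming over the tree structure: messages are passed from the leaves towards the root, so the best joint configuration is found without exhaustive search. A minimal sketch of this idea (generic tree max-sum over discrete candidate positions; the names and data layout are illustrative, not the paper's implementation):

```python
def tree_max_sum(children, unary, pairwise, root=0):
    """Best total score of a joint configuration in a tree-structured model.

    children[i]            -- list of child node ids of node i
    unary[i][p]            -- score of placing node i at candidate position p
    pairwise[(i, j)][p][q] -- score of edge i->j with i at p and j at q
    """
    def best(node, parent, ppos):
        # Best achievable score of the subtree rooted at `node`,
        # conditioned on the parent's chosen position `ppos`.
        scores = []
        for p, u in enumerate(unary[node]):
            s = u if parent is None else u + pairwise[(parent, node)][ppos][p]
            s += sum(best(c, node, p) for c in children[node])
            scores.append(s)
        return max(scores)

    return best(root, None, None)
```

Because each subtree is optimized exactly for every parent position, the cost is linear in the number of edges and quadratic in the number of candidate positions per landmark, which is what the coarse-to-fine search keeps small.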

Paper (.pdf)

Motion Prediction Influence on the Pedestrian Intention Estimation Near a Zebra Crossing

J. Škovierová, A. Vobecký, M. Uller, R. Škoviera, V. Hlaváč

4th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS 2018)

The reported work contributes to self-driving car efforts, more specifically to scenario understanding from the ego-car point of view. We focus on estimating the intentions of pedestrians near a zebra crossing. First, we predict the future motion of detected pedestrians within a three-second time horizon. Second, we estimate the intention of each pedestrian to cross the street using a Bayesian network. The results indicate that the dependence between the error rate of the motion prediction and the intention estimation is sub-linear. Thus, despite the lower performance of motion prediction for time horizons longer than one second, the intention estimation remains relatively stable.
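In its simplest form, the Bayesian-network step reduces to a single Bayes update of the crossing probability given one observed cue (a toy sketch; the paper's network combines more variables and evidence than this):

```python
def posterior_crossing(prior: float, lik_cross: float, lik_not_cross: float) -> float:
    """Bayes update of P(pedestrian intends to cross | observed cue),
    e.g. the cue 'predicted trajectory heads towards the curb'.

    prior         -- P(cross) before seeing the cue
    lik_cross     -- P(cue | cross)
    lik_not_cross -- P(cue | not cross)
    """
    evidence = lik_cross * prior + lik_not_cross * (1.0 - prior)
    return lik_cross * prior / evidence
```

The sub-linear dependence reported above means that even when the cue derived from the motion prediction becomes less reliable beyond one second, the posterior degrades gracefully rather than proportionally.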

Paper (.pdf)

Robust Maximum-likelihood On-Line LiDAR-to-Camera Calibration Monitoring and Refinement

J. Moravec, R. Šára

Proceedings of the 23rd Computer Vision Winter Workshop, Český Krumlov, Czech Republic, pp. 27-35, February 5-7, 2018.

We present a novel method for online LiDAR–camera system calibration tracking and refinement. The method is correspondence-free and formulated as a maximum-likelihood learning task. It is based on the consistency of projected LiDAR point-cloud corners and optical image edges. The likelihood function is robustified using a model in which the inlier/outlier label for each image edge pixel is marginalized out. The learning is performed by a stochastic online algorithm that includes a delayed-learning mechanism improving its stability. Ground-truth experimental results are shown on KITTI sequences with known reference calibration. Assuming motion-compensated LiDAR data, the method is able to track synthetic rotation calibration drift with about 0.06-degree accuracy in the yaw and roll angles and 0.1-degree accuracy in the pitch angle. The basin of attraction of the optimization is about ±1.2 degrees. The method is able to track rotation calibration parameter drift of 0.02 degrees per measurement mini-batch. Full convergence occurs after about 50 mini-batches. We conclude that the method is suitable for real-scene driving scenarios.
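The robustification described above, where the inlier/outlier label is marginalized out, amounts to scoring each edge residual under a mixture of an inlier and an outlier density. A minimal sketch under that assumption (a Gaussian/uniform mixture with illustrative parameter names, not the paper's exact model):

```python
import math

def robust_loglik(residuals, sigma=1.0, inlier_prob=0.9, outlier_density=0.05):
    """Log-likelihood with the per-pixel inlier/outlier label marginalized out:
    each residual e follows  inlier_prob * N(e; 0, sigma) +
    (1 - inlier_prob) * uniform outlier density."""
    total = 0.0
    for e in residuals:
        gauss = math.exp(-0.5 * (e / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        total += math.log(inlier_prob * gauss + (1.0 - inlier_prob) * outlier_density)
    return total
```

The outlier component puts a floor under each term, so a few gross mismatches between projected corners and image edges cannot drive the likelihood to minus infinity and derail the stochastic updates.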

Paper (.pdf)