A Decentralized Trust-minimized Cloud Robotics Architecture

Alessandro Simovic, Ralf Kaestner and Martin Rufli

International Conference on Intelligent Robots and Systems (IROS) 2017 – Poster Track

We introduce a novel, decentralized architecture facilitating consensual, blockchain-secured computation and verification of data/knowledge. Through the integration of (i) a decentralized content-addressable storage system, (ii) a decentralized communication and timestamping server, and (iii) a decentralized computation module, it enables a scalable, transparent, and semantically interoperable cloud robotics ecosystem capable of powering the emerging internet of robots.
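The core idea behind component (i) is content addressing: a datum's cryptographic hash doubles as its storage key, so any peer can verify retrieved data without trusting the provider. A minimal in-memory sketch (names hypothetical; the paper's system is decentralized, not a local dictionary):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive a content address: the data's own hash serves as its key."""
    return hashlib.sha256(data).hexdigest()

class ContentAddressableStore:
    """Toy in-memory content-addressable store (illustrative only; the
    architecture in the paper distributes storage across peers)."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = content_address(data)
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blobs[key]
        # Trust-minimized retrieval: any peer can recompute the hash
        # and check it against the requested key.
        assert content_address(data) == key
        return data
```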

Paper (.pdf)   Poster (.pdf)

Map Management for Efficient Long-Term Visual Localization in Outdoor Environments

Mathias Buerki, Marcyn Dymczyk, Igor Gilitschenski, Cesar Cadena, Roland Siegwart, and Juan Nieto

IEEE Intelligent Vehicles Symposium (IV) 2018

We present a complete map management process for a visual localization system designed for multi-vehicle long-term operations in resource-constrained outdoor environments. Outdoor visual localization generates large amounts of data that need to be incorporated into a lifelong visual map in order to allow localization at all times and under all appearance conditions. Processing these large quantities of data is nontrivial, as it is subject to limited computational and storage capabilities both on the vehicle and on the mapping back-end. We address this problem with a two-fold map update paradigm capable of either adding new visual cues to the map or updating co-observation statistics. The former, in combination with offline map summarization techniques, allows enhancing the appearance coverage of the lifelong map while keeping the map size limited. The latter, on the other hand, is able to significantly boost the appearance-based landmark selection for efficient online localization without incurring any additional computational or storage burden. Our evaluation in challenging outdoor conditions shows that our proposed map management process allows building and maintaining maps for precise visual localization over long time spans in a tractable and scalable fashion.
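The two-fold update paradigm can be sketched as follows, assuming hypothetical names and a simple keep-the-most-co-observed summarization rule (the paper's actual summarization techniques are more involved):

```python
class LifelongMap:
    """Toy sketch of a two-fold map update: (1) add new visual cues and
    summarize to bound map size, (2) update co-observation statistics
    without growing the map. Illustrative, not the authors' code."""
    def __init__(self, max_landmarks):
        self.max_landmarks = max_landmarks
        self.landmarks = {}   # landmark id -> descriptor
        self.coobs = {}       # landmark id -> co-observation count

    def add_landmarks(self, new):
        """Update type 1: incorporate new visual cues, then summarize."""
        self.landmarks.update(new)
        for lid in new:
            self.coobs.setdefault(lid, 0)
        if len(self.landmarks) > self.max_landmarks:
            # Offline summarization: keep the most co-observed landmarks.
            keep = sorted(self.landmarks, key=lambda l: self.coobs[l],
                          reverse=True)[:self.max_landmarks]
            self.landmarks = {l: self.landmarks[l] for l in keep}
            self.coobs = {l: self.coobs[l] for l in keep}

    def update_coobservations(self, observed_ids):
        """Update type 2: refresh statistics at no extra storage cost."""
        for lid in observed_ids:
            if lid in self.coobs:
                self.coobs[lid] += 1
```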

pdf   video

@inproceedings{BuerkiIV2018,
  title={Map Management for Efficient Long-Term Visual Localization in Outdoor Environments},
  author={M. Buerki and M. Dymczyk and I. Gilitschenski and C. Cadena and R. Siegwart and J. Nieto},
  fullauthor={Mathias Buerki and Marcyn Dymczyk and Igor Gilitschenski and Cesar Cadena and Roland Siegwart and Juan Nieto},
  booktitle={{IEEE} Intelligent Vehicles Symposium ({IV})},
  month={June},
  year={2018},
}

Design of an autonomous racecar: Perception, state estimation and system integration

Miguel Valls, Hubertus Hendrikx, Victor Reijgwart, Fabio Meier, Inkyu Sa, Renaud Dube, Abel Gawel, Mathias Bürki and Roland Siegwart

IEEE International Conference on Robotics and Automation (ICRA) 2018

This paper introduces fluela driverless: the first autonomous racecar to win a Formula Student Driverless competition. In this competition, among other challenges, an autonomous racecar is tasked to complete 10 laps of a previously unknown racetrack as fast as possible, using only onboard sensing and computing. The key components of fluela’s design are its modular, redundant sub-systems that allow robust performance despite challenging perceptual conditions or partial system failures. The paper presents the integration of the key components of our autonomous racecar, i.e., system design, EKF-based state estimation, LiDAR-based perception, and particle filter-based SLAM. We perform an extensive experimental evaluation on real-world data, demonstrating the system’s effectiveness by outperforming the next-best ranking team by almost half the time required to finish a lap. The autonomous racecar reaches lateral and longitudinal accelerations comparable to those achieved by experienced human drivers.
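The EKF-based state estimation mentioned above follows the standard predict/update cycle; a generic NumPy sketch is given below (the racecar's actual process and measurement models are not reproduced here, so f, h and their Jacobians are left as caller-supplied assumptions):

```python
import numpy as np

def ekf_step(x, P, f, F, h, H, z, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    x: state estimate, P: state covariance,
    f/h: process/measurement models, F/H: their Jacobians,
    z: measurement, Q/R: process/measurement noise covariances."""
    # Predict: propagate state and covariance through the process model.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With an identity process and measurement model the filter simply pulls the estimate toward the measurement, weighted by the noise covariances.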

pdf   video


@inproceedings{valls2018design,
  title={Design of an autonomous racecar: Perception, state estimation and system integration},
  author={Valls, Miguel I and Hendrikx, Hubertus FC and Reijgwart, Victor JF and Meier, Fabio V and Sa, Inkyu and Dub{\'e}, Renaud and Gawel, Abel and B{\"u}rki, Mathias and Siegwart, Roland},
  booktitle={2018 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={2048--2055},
  year={2018},
  organization={IEEE}
}

Traffic Scene Segmentation based on Boosting over Multimodal Low, Intermediate and High Order Multi-range Channel Features

Arthur D. Costea and Sergiu Nedevschi

Proceedings of 2017 IEEE Intelligent Vehicles Symposium (IV), Redondo Beach, CA, USA, June 11-14, 2017, pp. 74-81

In this paper we introduce a novel multimodal boosting based solution for semantic segmentation of traffic scenarios. Local structure and context are captured from both monocular color and depth modalities in the form of image channels. We define multiple channel types at three different levels: low, intermediate and high order channels. The low order channels are computed using a multimodal multiresolution filtering scheme and capture structure and color information from lower receptive fields. For the intermediate order channels, we employ deep convolutional channels that are able to capture more complex structures, having a larger receptive field. The high order channels are scale invariant channels that consist of spatial, geometric and semantic channels. These channels are enhanced by additional pyramidal context channels, capturing context at multiple levels. The semantic segmentation is achieved by a boosting based classification scheme over superpixels using multi-range channel features and pyramidal context features. A presegmentation is used to generate semantic channels as input for more powerful final segmentation. The final segmentation is refined using a superpixel-level dense CRF. The proposed solution is evaluated on the Cityscapes segmentation benchmark and achieves competitive results at low computational costs. It is the first boosting based solution that is able to keep up with the performance of deep learning based approaches.
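One building block described above is pooling channel values over neighborhoods of increasing radius ("multi-range"). A simplified sketch, assuming hypothetical names and plain mean pooling over square windows centred on each superpixel:

```python
import numpy as np

def multirange_channel_features(channels, superpixel_mask, ranges=(1, 4, 16)):
    """Pool each image channel over windows of growing radius centred on a
    superpixel, yielding a multi-range feature vector. Illustrative only;
    the paper uses richer low/intermediate/high order channels.
    channels: (C, H, W) float array, superpixel_mask: (H, W) bool array."""
    ys, xs = np.nonzero(superpixel_mask)
    cy, cx = int(ys.mean()), int(xs.mean())   # superpixel centroid
    H, W = superpixel_mask.shape
    feats = []
    for r in ranges:
        y0, y1 = max(cy - r, 0), min(cy + r + 1, H)
        x0, x1 = max(cx - r, 0), min(cx + r + 1, W)
        # Mean of every channel over the window; larger r captures
        # more context around the superpixel.
        feats.extend(channels[:, y0:y1, x0:x1].mean(axis=(1, 2)))
    return np.array(feats)
```

Such per-superpixel feature vectors would then feed a boosted classifier; the boosting stage itself is omitted here.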

pdf

Appearance-Based Landmark Selection for Efficient Long-Term Visual Localization

Mathias Buerki, Igor Gilitschenski, Elena Stumm, Roland Siegwart, and Juan Nieto

International Conference on Intelligent Robots and Systems (IROS) 2016

We present an online landmark selection method for efficient and accurate visual localization under changing appearance conditions. The wide range of conditions encountered during long-term visual localization by, e.g., fleets of autonomous vehicles offers the potential to exploit redundancy and reduce data usage by selecting only those visual cues which are relevant at the given time. Therefore, co-observability statistics guide landmark ranking and selection, significantly reducing the amount of information used for localization while maintaining or even improving accuracy.
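Co-observability-driven selection can be sketched as follows, assuming each landmark stores the set of sessions in which it was observed (illustrative scoring, not the paper's exact formulation):

```python
from collections import Counter

def rank_landmarks(coobservations, current_sessions, k):
    """Rank map landmarks by how often they were observed in the sessions
    matching the current appearance condition, and keep only the top k.
    coobservations: dict landmark id -> set of session ids it appeared in.
    current_sessions: set of session ids relevant right now."""
    scores = Counter()
    for lid, sessions in coobservations.items():
        # Score = overlap with the currently relevant sessions.
        scores[lid] = len(sessions & current_sessions)
    return [lid for lid, _ in scores.most_common(k)]
```

Only the selected landmarks would then be loaded for online localization, reducing data usage while keeping the cues most likely to match.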

pdf   video

@inproceedings{BuerkiIROS2016,
  title={Appearance-Based Landmark Selection for Efficient Long-Term Visual Localization},
  author={M. Buerki and I. Gilitschenski and E. Stumm and R. Siegwart and J. Nieto},
  fullauthor={Mathias Buerki and Igor Gilitschenski and Elena Stumm and Roland Siegwart and Juan Nieto},
  booktitle={{IEEE/RSJ} International Conference on Intelligent Robots and Systems ({IROS})},
  address={Daejeon, Korea},
  month={October},
  year={2016},
}