Visual-Inertial Navigation: Challenges and Applications
IROS 2019 Full-day Workshop: November 8, 2019, Macau, China; Room: LG-R16 @ Venetian Macao Resort Hotel
Updates
(7/26) Page limits do not include references (say, n pages of references), i.e., 6+n pages for research papers, 4+n for field reports, and 2+n for demo papers.
Overview
As cameras and IMUs become ubiquitous, visual-inertial navigation systems (VINS), which provide high-precision 3D motion estimation, hold great potential in a wide range of applications, from augmented reality (AR) and aerial navigation to autonomous driving, in part because of the complementary sensing capabilities and the decreasing cost and size of these sensors. While visual-inertial navigation, alongside SLAM, has witnessed tremendous progress in the past decade, certain critical aspects of the design of visual-inertial systems remain poorly explored, greatly hindering the widespread deployment of these systems in practice. For example, many VINS algorithms are not yet robust to high dynamics and poor lighting conditions; they are not yet accurate enough for long-term, large-scale operation, particularly in life-critical scenarios; and they cannot yet provide the semantic and cognitive understanding needed to support high-level decision making. This workshop brings together researchers in robotics, computer vision, and AI, from both academia and industry, to share their insights and thoughts on the research and development of VINS. The goal of this workshop is to bring forward the latest breakthroughs and cutting-edge research on visual-inertial navigation and beyond, to open discussion of the technical challenges and future research directions for the community, and to identify new applications of this emerging technology.
Call for Contributions
We welcome submissions of papers describing VINS-related work in progress, preliminary results, novel concepts, and industry experiences.
All submitted papers will be reviewed by at least two experts (see Program Committee below) on the basis of technical quality, relevance, significance, and clarity.
Topics of interest to this workshop include, but are not limited to:
Visual-inertial odometry
Visual-inertial perception
Visual SLAM
Sensor calibration
High-speed visual control and estimation of aerial vehicles
Deep learning for visual SLAM
Cooperative visual-inertial navigation
Multi-sensor fusion
Co-design of hardware and software of VINS
Simulations and benchmarking of visual-inertial navigation
Visual perception in challenging and dynamic environments
Human motion modeling
Field robotics
AR/VR
We accept the following forms of contributions:
Research papers (up to 6 pages)
Field reports (2-4 pages)
Live demos of working systems (up to 2 pages)
All accepted papers will appear on the workshop website.
Note that authors retain all intellectual property rights to their contributions to the workshop.
We are also exploring the possibility of a journal special issue for the best contributions to the workshop.
Important Dates
August 1, 2019 → August 15, 2019: Paper submission deadline
September 1, 2019 → September 8, 2019: Notification of acceptance
October 1, 2019: Final version due
November 8, 2019: Workshop at IROS 2019
Organizers
Invited Speakers (confirmed)
Program Committee
Kevin Eckenhoff, University of Delaware / Facebook
Chao Guo, Google
Shoudong Huang, University of Technology Sydney
Mingyang Li, Alibaba
Yasir Latif, University of Adelaide
Yong Liu, Zhejiang University
Agostino Martinelli, INRIA Rhône-Alpes
Benzun Pious Wisely Babu, Bosch
Yue Wang, Zhejiang University
Poster Papers
Dengshen Chen, Yuanlong Yu, and Xiang Gao: Semi-Supervised Deep Learning Framework for Monocular Visual Odometry
Geoff Fink, and Claudio Semini: Proprioceptive Sensor Dataset for Quadrupeds
Wanlong Li, Yu Tang, Chao Ding, Xueshi Li, and Feng Wen: Visual-Inertial Ego-Motion Estimation using Rolling-Shutter Camera in Autonomous Driving
Jiajun Lyu, Jinhong Xu, Xingxing Zuo, and Yong Liu: An Efficient LiDAR-IMU Calibration Method Based on Continuous-Time Trajectory
Yongseok Lee, Hanbyeol Yoon, Jinuk Heo, WonHa Lee, and Dongjun Lee: Wearable Visual-Inertial Hand Tracking Interface Regardless of Environment and Occlusion
Patrick Geneva, Kevin Eckenhoff, Woosik Lee, Yulin Yang, and Guoquan Huang: OpenVINS: A Research Platform for Visual-Inertial Estimation
Ziqiang Wang, Chengcheng Guo, Lin Zhao, Mei Li, and Xinyu Qi: Direct Sparse Visual-Inertial Odometry with Stereo Cameras
He Zhang, Lingqiu Jin, and Cang Ye: A Depth-Enhanced Visual Inertial Odometry for a Robotic Navigation Aid for Blind People (LORD Best Paper)
Joshua Jaekel, and Michael Kaess: Robust Multi-Stereo Visual-Inertial Odometry
Program
9:00-9:05AM: Welcome and Introduction
9:05-9:45AM: Stergios Roumeliotis (UMN): A Short Tutorial on VINS (slides)
9:45-10:15AM: Davide Scaramuzza (Zurich): Visual Inertial SLAM: Current Status and the Road Ahead (slides)
10:15-10:45AM: Laurent Kneip (Shanghai Tech): Dimensionality reduction in visual-inertial SLAM (slides)
10:45-11:15AM: Maurice Fallon (Oxford): VILENS - the Challenge of Visual Navigation on Quadruped Robots (slides)
11:15-11:45AM: COFFEE BREAK
11:45-12:15PM: Luca Carlone (MIT): Chasing a Chimera: from VIN to real-time high-level understanding (slides)
12:15-1:00PM: Poster Spotlight (5 minutes per paper)
1:00-2:00PM: LUNCH; Poster Setup
2:00-2:30PM: Guofeng Zhang (ZJU): Robust VI-SLAM and HD-Map Reconstruction for Location-based Augmented Reality (slides)
2:30-3:00PM: Giuseppe Loianno (NYU): Challenges and Opportunities for Visual Inertial Navigation of Aerial Robots (slides)
3:00-3:30PM: Paloma Sodhi / Michael Kaess (CMU): Robust Multi-Stereo Visual-Inertial Odometry (slides)
3:45-4:15PM: COFFEE BREAK
4:15-4:45PM: Ross Hartley (Amazon): Contact-aided Invariant Extended Kalman Filtering for Legged Robot State Estimation (slides)
4:45-5:30PM: Poster Session
5:30-5:45PM: Concluding Remarks (incl. LORD best paper award announcement)