Abstract
In this article, an approach to a mobile handheld 3-D scanner with additional sensory information is proposed. It builds a multi-view 3-D scan fully automatically. Conventionally, complex post-processing or expensive position trackers are used to realize such a process. Therefore, a combined visual and inertial motion tracking system is developed to handle position tracking. Both sensors are integrated into the 3-D scanner, and their data are fused for robustness during swift scanner movements and for long-term stability. This article presents an overview of the system architecture, the navigation process, surface registration aspects, and measurement results.
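The combination of high-rate inertial measurements (robust during swift movements, but drifting) with absolute visual pose estimates (slower, but stable in the long term) can be pictured with a simple complementary filter, as in the minimal Python sketch below. The filter, the class and parameter names, and the update rates are illustrative assumptions; the article does not prescribe this particular fusion scheme.

```python
import numpy as np

# Minimal complementary filter fusing gyroscope integration (fast, drift-prone)
# with absolute orientation estimates from the visual tracker (drift-free).
# Small-angle Euler integration is used to keep the sketch short.

class VisualInertialFusion:
    def __init__(self, alpha=0.98):
        # alpha close to 1: trust the high-rate inertial prediction between
        # visual updates; the remainder pulls the estimate toward the visual pose.
        self.alpha = alpha
        self.orientation = np.zeros(3)  # roll, pitch, yaw in radians

    def predict(self, gyro_rates, dt):
        """Integrate angular rates from the IMU (called at the IMU rate)."""
        self.orientation += gyro_rates * dt
        return self.orientation

    def correct(self, visual_orientation):
        """Blend in an absolute orientation from the visual tracker
        (called whenever a camera-based pose estimate is available)."""
        self.orientation = (self.alpha * self.orientation
                            + (1.0 - self.alpha) * visual_orientation)
        return self.orientation


# Example: 200 Hz gyroscope samples corrected by 30 Hz visual pose updates.
fusion = VisualInertialFusion(alpha=0.98)
pose = fusion.predict(gyro_rates=np.array([0.01, 0.0, 0.02]), dt=1 / 200)
pose = fusion.correct(visual_orientation=np.array([0.0, 0.0, 0.0]))
```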
| Original language | English |
|---|---|
| Pages (from-to) | 313-325 |
| Number of pages | 13 |
| Journal | International Journal of Optomechatronics |
| Volume | 8 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 31 Oct 2014 |
Bibliographical note
Funding Information: This approach inspired the work reported in this article. The NavOScan project,[5] funded by the European Union’s Seventh Framework Programme, led to an add-on navigation unit for conventional 3-D scanners based on structured light projection. This includes fringe projection scanners that are able to provide millions of 3-D point measurements (at low measurement uncertainty) per single acquisition. By extending the 3-D scanning pipeline with automatic scan alignment based on robust sensor pose estimation and 3-D data matching, an easy-to-use 3-D scanning experience can be realized.

Funding Information: The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 262516.

Publisher Copyright: © 2014, Copyright Taylor & Francis Group, LLC.
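One way to picture this coarse-to-fine alignment is sketched below in Python: the fused sensor pose places a new scan roughly in the common coordinate frame (coarse registration), and a single SVD-based (Kabsch) point-to-point ICP step stands in for the fine 3-D data matching. The function names, the use of SciPy's kd-tree, and the one-iteration refinement are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def apply_pose(points, pose):
    """Transform an (N, 3) point cloud by a 4x4 rigid pose."""
    return points @ pose[:3, :3].T + pose[:3, 3]

def refine_rigid(source, target):
    """One ICP iteration: match nearest neighbors, then solve for the best
    rigid transform with the SVD-based Kabsch algorithm."""
    _, idx = cKDTree(target).query(source)
    matched = target[idx]
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    pose = np.eye(4)
    pose[:3, :3], pose[:3, 3] = R, t
    return pose

# new_scan, model: (N, 3) arrays; sensor_pose: 4x4 pose from the fused tracker.
# coarse = apply_pose(new_scan, sensor_pose)   # coarse registration from the pose
# fine_pose = refine_rigid(coarse, model)      # fine registration via 3-D data matching
# aligned = apply_pose(coarse, fine_pose)
```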
Other keywords
- 3-D scanning pipeline
- 3-D scanning system
- coarse registration
- multiple view registration
- visual feature tracking
- visual inertial navigation