Institute of Assembly Technology and Robotics Research Publications
Autonomous Sensing and Localization of a Mobile Robot for Multi-step Additive Manufacturing in Construction

Categories Conference (reviewed)
Year 2022
Authors Lachmayer, L.; Recker, T.; Dielemans, G.; Dörfler, K.; Raatz, A.
Published in Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B1-2022, pp. 453–458
Description

In contrast to stationary systems, mobile robots have an arbitrarily expandable workspace. As a result, the spatial dimensions of the task play only a subordinate role and can be scaled as needed. For the construction industry in particular, which requires the handling and production of substantial components, mobile robots offer a workspace that can be expanded through their mobility, and thus increased flexibility. The greatest challenge in mobile robotics lies in the discrepancy between the precision required for the task and the achievable positioning accuracy. External localization systems show significant potential for improvement in this respect but, in many cases, require a line of sight between the measurement system and the robot or a time-consuming calibration of markers. Therefore, this article presents an approach for an onboard localization system for use in a multi-step additive manufacturing process for building construction. While a SLAM algorithm provides an initial estimate of the robot's base pose at the work site, in a refined estimation step, the positioning accuracy is enhanced using a 2D laser scanner. This 2D scanner is used to create a 3D point cloud of the 3D-printed component each time a print job of one segment has been completed and before a print job is continued from a new location, to enable printing of layers on top of each other with sufficient accuracy over many repositioning maneuvers. When the robot returns to a position to continue printing, the initial and the new point clouds are registered using an ICP algorithm, and the resulting transformation is used to refine the robot's pose estimate relative to the 3D-printed building component. While initial experiments demonstrate the approach's potential, transferring it to large-scale 3D-printed components presents additional challenges, which are highlighted in this paper.
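
The ICP-based refinement step can be illustrated with a short sketch. The paper does not specify an implementation, so the following is a minimal, hypothetical example using the open-source Open3D library; the file names, the SLAM pose, and the correspondence threshold are placeholder assumptions, not values from the paper.

    # Minimal sketch of ICP pose refinement with Open3D (assumed library;
    # file names, threshold, and pose values are illustrative placeholders).
    import numpy as np
    import open3d as o3d

    # Reference cloud scanned before the robot left the print location,
    # and a new cloud scanned after it returned (hypothetical files).
    target = o3d.io.read_point_cloud("component_before_move.pcd")
    source = o3d.io.read_point_cloud("component_after_return.pcd")

    # Coarse base pose from SLAM serves as the initial guess (placeholder).
    slam_pose = np.eye(4)

    # Point-to-point ICP; the 2 cm correspondence threshold is an assumption.
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.02,
        init=slam_pose,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # The resulting transformation corrects the coarse SLAM estimate,
    # yielding a refined pose relative to the printed component.
    refined_pose = result.transformation @ slam_pose
    print(refined_pose)

Point-to-point ICP is used here for simplicity; a point-to-plane variant with estimated normals often converges faster on the largely planar, layered geometry of printed components.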

DOI 10.5194/isprs-archives-XLIII-B1-2022-453-2022