Research on key technologies for lightweight maintenance operations of the remote handling system for a fusion reactor
Yin, Ruochen (2023-11-17)
Doctoral dissertation
Lappeenranta-Lahti University of Technology LUT
Acta Universitatis Lappeenrantaensis
School of Energy Systems
School of Energy Systems, Mechanical Engineering
All rights reserved.
The permanent address of the publication is
https://urn.fi/URN:ISBN:978-952-412-007-4
Abstract
During the operation of the China Fusion Engineering Test Reactor (CFETR), the internal components of the fusion reactor must withstand extreme environmental conditions and are susceptible to damage, necessitating regular maintenance. The hazardous and challenging environment within the vacuum vessel of the fusion reactor requires that this maintenance be performed remotely. Currently, the conventional maintenance approach for fusion devices in operation worldwide depends on inefficient remote operation by a human operator. Therefore, an automated maintenance strategy is preferable for future fusion experimental reactors.
In routine maintenance and updates, remote handling systems are tasked with manipulating both heavy and lightweight objects. Common operations include component grasping, bore assembly, and transfer, which correspond to the established research topics of robotic grasping, peg-in-hole assembly, and robotic pick-and-place. Despite extensive work in these areas, research has primarily been conducted in ordinary everyday environments. The unique conditions within the vacuum vessel of a fusion reactor, such as the absence of distinct target features, the limited accuracy of sensor data, and stringent reliability requirements, have not been adequately considered. As a result, the 3D data-based deep learning algorithms developed for these tasks cannot be directly applied to autonomous remote maintenance within the vacuum vessel and require refinement. Therefore, the following lines of research are pursued in this thesis:
(1) In the domain of robot grasping, the accuracy of depth-camera data is severely limited by the smooth metallic surfaces present within vacuum vessels, which degrades the performance of current mainstream end-to-end algorithms. To address this challenge, this thesis proposes a novel robot grasping approach that combines deep learning with traditional algorithms. The proposed method first performs a preliminary segmentation of the 3D data using deep learning, and then refines the segmentation results with traditional computer vision algorithms such as clustering and geometric feature extraction. This approach prevents the propagation of uncertainty in the observed data, effectively overcoming the systematic errors in 3D sensor data specific to the fusion reactor environment. The proposed approach enables highly reliable robotic arm grasping operations within the vacuum vessel, demonstrating excellent performance.
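The refinement stage described above can be sketched in miniature. The following is an illustrative sketch only, not the thesis implementation: a mocked "segmentation output" (object points plus sensor-noise outliers, standing in for unreliable depth data on reflective metal) is cleaned by Euclidean clustering, and a grasp pose is fitted geometrically via the centroid and principal axis. All thresholds and data are invented for the example.

```python
import numpy as np

def euclidean_cluster(points, radius=0.02):
    """Greedy single-linkage clustering; returns the largest cluster."""
    remaining = list(range(len(points)))
    best = []
    while remaining:
        seed = remaining.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in remaining
                    if np.linalg.norm(points[i] - points[j]) < radius]
            for j in near:
                remaining.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) > len(best):
            best = cluster
    return points[best]

def grasp_pose(points):
    """Centroid plus principal axis via SVD: a minimal geometric grasp fit."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]          # grasp point, approach-aligned axis

# Mock segmentation output: a dense bar of object points plus outliers
# standing in for depth-sensor noise on smooth metallic surfaces.
rng = np.random.default_rng(0)
bar = np.c_[np.linspace(0, 0.1, 200),
            rng.normal(0, 0.002, 200),
            rng.normal(0, 0.002, 200)]
outliers = rng.uniform(0.5, 1.0, (20, 3))
seg = np.vstack([bar, outliers])

obj = euclidean_cluster(seg)        # clustering drops the stray points
center, axis = grasp_pose(obj)      # geometric fit on the cleaned cluster
```

The key design point is that the learned stage only proposes candidate points, while the final pose comes from deterministic geometry, so sensor uncertainty does not propagate into the grasp command.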
(2) In the realm of high-precision peg-in-hole assembly, the assembly task that immediately follows grasping poses several challenges: the non-rigid connection between the gripper and the part, the random relative position of the parts, and the robot arm's repeatability being insufficient for the assembly tolerances. Additionally, the assembly positions are distributed across the inner wall of the vacuum vessel, rendering existing methods inadequate for this task. To address this, this thesis proposes a novel unsupervised learning algorithm that fuses monocular camera and force sensor data to solve the peg-in-hole assembly problem within the vacuum vessel. The combination of vision and force sensing enables the robotic arm to perform the assembly task in a manner similar to human hand-eye coordination, making the algorithm insensitive to the assembly environment and to the initial relative position of the peg and hole, and thus giving it good generality. Moreover, a Three-layer Cushioning Structure (TCS) is proposed at the control level to avoid violent contact and collisions during the robot arm's exploration, facilitating high-precision peg-in-hole assembly in the fusion reactor environment.
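To make the force-guided exploration idea concrete, here is a toy sketch, not the TCS or the thesis algorithm: a spiral search over hole candidates in which any excessive simulated contact force triggers a cushioned lift-off rather than pressing through the contact, loosely echoing the soft-contact behaviour described above. The hole position, contact model, and gains are all invented for the example.

```python
import math

HOLE = (0.004, -0.003)   # true hole centre (m), unknown to the search
CLEARANCE = 0.0005       # peg drops in when within this radius

def contact_force(x, y):
    """Toy contact model: zero inside the clearance, large outside."""
    d = math.hypot(x - HOLE[0], y - HOLE[1])
    return 0.0 if d < CLEARANCE else 50.0 + 1e4 * d

def spiral_search(b=5e-5, dtheta=0.05, f_max=80.0, max_steps=5000):
    """Archimedean spiral probe. Excessive force triggers a cushioned
    lift-off (counted here) instead of forcing the contact."""
    lifts = 0
    for k in range(max_steps):
        theta = dtheta * k
        r = b * theta
        x, y = r * math.cos(theta), r * math.sin(theta)
        f = contact_force(x, y)
        if f == 0.0:
            return (x, y), lifts   # peg aligned with hole: insert
        if f > f_max:
            lifts += 1             # back off along z, keep searching
    return None, lifts

pose, lifts = spiral_search()
```

The spiral pitch is chosen smaller than the clearance so the probe cannot skip over the hole, and the force threshold plays the role of the cushioning layer: contact is sensed and yielded to, never fought.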
(3) In the field of automated pick-and-place, similar to the challenges mentioned in the first point, reliable 3D observation data cannot be obtained with depth cameras in the vacuum vessel environment, so current mainstream algorithms that rely heavily on 3D data face significant limitations. To overcome these, this thesis proposes a novel monocular camera-based deep reinforcement learning algorithm for the automated pick-and-place problem. The algorithm exploits the fact that monocular cameras function effectively in the fusion reactor environment, together with temporal-difference optimization in reinforcement learning: 3D spatial information is recovered from 2D images combined with reinforcement learning. To avoid potential risks and to address the time-consuming nature of reinforcement learning training, a virtual training environment with multiple sub-environments is established. This approach allows the automated part pick-and-place task to be completed successfully in the challenging fusion reactor environment.
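The temporal-difference idea underlying the learner can be illustrated on a deliberately tiny problem. This is a toy, not the thesis algorithm: the image encoder is replaced by a discrete position index, and a tabular Q-learning agent learns, from randomized starting configurations in a simulated line world, to move a part toward a fixed place location. All sizes and hyperparameters are invented.

```python
import random

N = 6             # positions in a 1D line world; place location is N - 1
ACTIONS = (-1, 1) # move left / move right

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.3, seed=1):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N)]      # Q(state, action) table
    goal = N - 1
    for _ in range(episodes):
        s = rng.randrange(N)                # randomized start: cheap stand-in
        for _ in range(4 * N):              # for varied sub-environments
            if s == goal:
                break
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            s2 = min(N - 1, max(0, s + ACTIONS[a]))
            r = 1.0 if s2 == goal else -0.1
            # TD(0) update: bootstrap from the next state's best value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
```

After training, the greedy policy moves right from every non-goal state; the same bootstrapped-update principle scales up when the tabular state is replaced by a learned image encoding and the simulation by the virtual training environment.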
In this thesis, research is carried out on lightweight autonomous remote handling maintenance tasks in the vacuum vessel, and the related research tasks are accomplished by adapting and improving existing methods to suit the characteristics of the vacuum vessel environment. Moreover, this study can provide ideas for further research on autonomous remote handling maintenance tasks in the fusion reactor environment.