TR2024-137
Insert-One: One-Shot Robust Visual-Force Servoing for Novel Object Insertion with 6-DoF Tracking
- "Insert-One: One-Shot Robust Visual-Force Servoing for Novel Object Insertion with 6-DoF Tracking", 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), October 2024.BibTeX TR2024-137 PDF
@inproceedings{Chang2024oct,
  author = {Chang, Haonan and Boularias, Abdeslam and Jain, Siddarth},
  title = {Insert-One: One-Shot Robust Visual-Force Servoing for Novel Object Insertion with 6-DoF Tracking},
  booktitle = {2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024)},
  year = 2024,
  month = oct,
  url = {https://www.merl.com/publications/TR2024-137}
}
MERL Contact:
Research Areas:
Abstract:
Recent advancements in autonomous robotic assembly have shown promising results, especially in addressing the precision insertion challenge. However, achieving adaptability across diverse object categories and tasks often necessitates a learning phase that requires costly real-world data collection. Moreover, previous research often assumes either the rigid attachment of the inserted object to the robot’s end-effector or relies on precise calibration within structured environments. We propose a one-shot method for high-precision, contact-rich assembly tasks, enabling a robot to perform insertions of new objects from randomly presented orientations using just a single demonstration image. Our method incorporates a hybrid framework that blends 6-DoF visual tracking-based iterative control and impedance control, facilitating high-precision tasks with real-time visual feedback. Importantly, our approach requires no pre-training and demonstrates resilience against uncertainties arising from camera pose calibration errors and disturbances in the object’s in-hand pose. We validate the effectiveness of the proposed framework through extensive experiments in real-world scenarios, encompassing various high-precision assembly tasks.
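
The abstract mentions a hybrid framework that blends 6-DoF visual-tracking-based iterative control with impedance control. The paper itself provides no code; the sketch below is only a rough illustration of how such a blend can be structured, in Python, using hypothetical helper names (pose_error, hybrid_step), arbitrary gains, and an assumed external 6-DoF tracker and wrist force/torque sensor. It is not the authors' controller.

# Illustrative sketch only (not the paper's implementation): one step of a
# hybrid controller combining a proportional visual-servoing term on the
# tracked 6-DoF pose error with a compliant (impedance-style) correction from
# measured contact forces. Gains and thresholds are arbitrary placeholders.
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_error(T_goal, T_cur):
    """6-vector error [dx, dy, dz, rx, ry, rz] taking T_cur toward T_goal,
    expressed in the current end-effector frame."""
    dT = np.linalg.inv(T_cur) @ T_goal
    translation = dT[:3, 3]
    rotation = R.from_matrix(dT[:3, :3]).as_rotvec()
    return np.concatenate([translation, rotation])

def hybrid_step(T_goal, T_cur, wrench,
                k_visual=0.5, k_force=0.002, contact_threshold=2.0):
    """One control step: visual-servo twist plus a compliant force correction.

    T_goal, T_cur : 4x4 homogeneous poses (goal from the demonstration image,
                    current pose from the 6-DoF tracker).
    wrench        : 6-vector [fx, fy, fz, tx, ty, tz] from a wrist F/T sensor.
    """
    # Visual term: proportional feedback on the tracked pose error.
    twist = k_visual * pose_error(T_goal, T_cur)
    # Impedance-style term: once contact is detected, yield along the measured
    # force so that small in-hand pose errors do not cause jamming.
    if np.linalg.norm(wrench[:3]) > contact_threshold:
        twist[:3] += k_force * wrench[:3]
    return twist  # commanded end-effector twist for the low-level controller

if __name__ == "__main__":
    # Toy example: goal at the origin, current pose 1 cm above it, with a
    # 5 N contact force pushing back along +z.
    T_goal = np.eye(4)
    T_cur = np.eye(4)
    T_cur[2, 3] = 0.01
    wrench = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0])
    print(hybrid_step(T_goal, T_cur, wrench))

In the actual system the visual term would be driven by the 6-DoF tracker described in the paper and the compliant behavior by the robot's impedance controller; the split shown here is only meant to make the "blend" described in the abstract concrete.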