An Adaptable Robot Vision System Performing Manipulation Actions With Flexible Objects

Leon Bodenhagen, Andreas R. Fugl, Andreas Jordt, Morten Willatzen, Knud A. Andersen, Martin M. Olsen, Reinhard Koch, Henrik G. Petersen, Norbert Krüger

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

This paper describes an adaptable system which is able to perform manipulation operations (such as Peg-in-Hole or Laying-Down actions) with flexible objects. As such objects easily change their shape significantly during the execution of an action, traditional strategies, e.g., for solving path-planning problems, are often not applicable. It is therefore required to integrate visual tracking and shape reconstruction with a physical modeling of the materials and their deformations, as well as with action learning techniques. All these submodules have been integrated into a demonstration platform operating in real time. Simulations have been used to bootstrap the learning of optimal actions, which are subsequently improved through real-world executions. To achieve reproducible results, we demonstrate this for cast silicone test objects of regular shape.

Note to Practitioners: The aim of this work was to facilitate the setup of robot-based automation for the delicate handling of flexible objects consisting of a uniform material. As examples, we have considered how to optimally maneuver flexible objects through a hole without collisions and how to place flexible objects on a flat surface while introducing minimal internal stresses in the object. Given the material properties of the object, we have demonstrated in these two applications how the system can be programmed with minimal human intervention. Rather than being an integrated system, with the lack of flexibility such systems entail, our system should be viewed as a library of new technologies that have been proven to work under close-to-industrial conditions. As a basic but necessary part, we provide a technology for determining the shape of the object while it passes on, e.g., a conveyor belt prior to being handled. The main technologies applicable to the manipulated objects are: a method for real-time tracking of the flexible objects during manipulation, a method for model-based offline prediction of the static deformation of grasped flexible objects and, finally, a method for optimizing specific tasks based on both simulated and real-world executions.
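The abstract describes a two-stage optimization: action parameters are first bootstrapped with cheap simulated rollouts and then refined with a small number of real-world executions. The following is a minimal sketch of that loop, not the authors' implementation; the objective, the simple stochastic hill climbing, and all function names (simulate_action, execute_on_robot, hill_climb) are hypothetical placeholders standing in for the paper's deformation simulation and robot executions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_action(params):
    """Hypothetical physics simulation scoring an action (higher is better)."""
    # Stand-in objective: smooth, with an optimum at (0.3, -0.2).
    return -np.sum((params - np.array([0.3, -0.2])) ** 2)

def execute_on_robot(params):
    """Hypothetical real-world execution: the simulated score plus a small
    systematic bias and measurement noise, mimicking the sim-to-real gap."""
    return simulate_action(params) - 0.05 + rng.normal(scale=0.01)

def hill_climb(score_fn, params, iters, step):
    """Simple stochastic hill climbing over the action parameters."""
    best, best_score = params, score_fn(params)
    for _ in range(iters):
        candidate = best + rng.normal(scale=step, size=best.shape)
        score = score_fn(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# 1) Bootstrap: many inexpensive simulated rollouts from a naive initialization.
params_sim, _ = hill_climb(simulate_action, np.zeros(2), iters=500, step=0.1)

# 2) Refine: few costly real-world executions around the simulated optimum.
params_real, score = hill_climb(execute_on_robot, params_sim, iters=20, step=0.02)
print(f"refined action parameters: {params_real}, score: {score:.4f}")
```

The design point this sketch illustrates is the one the abstract makes: the simulation absorbs the bulk of the search, so only a short, local refinement must be paid for on the physical system.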
Original language: English
Journal: IEEE Transactions on Automation Science and Engineering
Volume: 11
Issue number: 3
Pages (from-to): 749-765
ISSN: 1545-5955
DOIs
Publication status: Published - 2014

Keywords

  • AUTOMATION
  • DEFORMABLE OBJECTS
  • MODEL
  • GENERATION
  • TRACKING
  • Action learning
  • deformation modeling
  • flexible objects
  • shape tracking
  • 3D-modeling
  • control engineering computing
  • deformation
  • flexible manipulators
  • image reconstruction
  • learning (artificial intelligence)
  • object tracking
  • robot vision
  • shape recognition
  • Robotics and Control Systems
  • action execution
  • action learning techniques
  • adaptable robot vision system
  • bootstrap learning
  • casted silicone test objects
  • Deformable models
  • demonstration platform
  • flexible object handling
  • flexible object maneuver
  • internal stresses
  • laying-down actions
  • manipulation actions
  • manipulation operations
  • material properties
  • Mathematical model
  • model-based offline prediction
  • object shape
  • optimal actions
  • peg-in-hole actions
  • physical modeling
  • real-time tracking
  • real-world executions
  • robot-based automation
  • Robots
  • Shape
  • shape reconstruction
  • Splines (mathematics)
  • static deformation
  • Surface reconstruction
  • Surface topography
  • visual tracking

