Towards Plug-n-Play robot guidance: Advanced 3D estimation and pose estimation in Robotic applications

Thomas Sølund

Research output: Book/Report › Ph.D. thesis › Research


Abstract

Robots are a key technology in the quest for higher productivity in Denmark and Europe. Robots have existed for many years as part of production lines, where they have solved monotonous and repetitive tasks in mass-production industries. Typically, the programming of these robots is handled by engineers with specialist knowledge, which has often raised the price of applying robots to a given production task. If robots are to be applicable for small and medium-sized enterprises, where production tasks change often and batch sizes are below 50 products, the staff must be capable of re-programming the robot themselves.

During the last five years, a number of collaborative robots have been introduced on the market, e.g. Universal Robots, which enable a production worker to program the robot to solve simple tasks. With a collaborative robot, the production worker can make the robot grind, mill, weld and move objects that are physically located in the same positions. To place objects in the same position each time, custom-made mechanical fixtures and aligners are constructed to ensure that the objects do not move. These fixtures are expensive to design and build, and they make it difficult to change quickly to a novel task. In some cases, where objects are placed in bins and boxes, it is not possible to position the objects in the same location each time.

To avoid designing expensive mechanical solutions, and to be able to pick objects from boxes and bins, a sensor is necessary to guide the robot. Today, primarily 2D vision systems are applied in industrial robotics, and these are inflexible and hard for production workers to program. Smart cameras, which are easier to re-configure and program to detect objects, do exist. However, computing the correct position such that a robot can move to it is still a challenge that requires calibration procedures. Moreover, making the solution robust enough to run 24/7 in production is demanding and requires the right skills. Basically, the vision part of a flexible automation solution is difficult for a production worker to manage, while the robot motion programming is easily handled with the new collaborative robots. This thesis deals with robot vision technologies and how these can be made easier for production workers to program, in order to get robots to recognize and compute the position of objects in industry.

This thesis investigates and discusses methods to encapsulate a 2D vision system in a framework in order to make changes in production tasks easier. The framework is presented in [Contribution B] and [Contribution C] and demonstrates how re-configuration of vision systems is made easier, but at the same time it reveals some of the fundamental problems that arise from observing a three-dimensional world through a two-dimensional vision system. Converting 2D to 3D requires a calibration procedure every time, which is still a cumbersome process for a production worker.
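The 2D-to-3D conversion mentioned above can be illustrated with a minimal sketch: with a calibrated pinhole camera (no lens distortion) and a known working plane, a pixel can be back-projected to a 3D point in the camera frame. The intrinsic matrix and plane height below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Hypothetical intrinsics for illustration (not from the thesis):
# focal lengths fx, fy in pixels and principal point (cx, cy).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject_to_plane(u, v, K, plane_z):
    """Back-project pixel (u, v) onto the plane Z = plane_z
    in the camera frame (pinhole model, no distortion)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, Z-component = 1
    scale = plane_z / ray[2]
    return ray * scale  # 3D point [X, Y, Z] in camera coordinates

# Example: a pixel observed on a table 0.5 m in front of the camera.
point = backproject_to_plane(400.0, 300.0, K, plane_z=0.5)
```

The sketch also shows why the procedure is cumbersome in practice: both the intrinsics and the plane position must be re-calibrated whenever the camera or the work surface moves.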

For this reason, the rest of the thesis investigates and discusses how 3D computer vision techniques can ease the problem of recognizing and computing the position of objects. In [Contribution D] a small, lightweight 3D sensor is presented. The sensor has a size that makes it suitable for tool mounting on a collaborative robot. It is based on structured-light principles and 3D estimation techniques, which enable fast and accurate acquisition of point clouds of low-textured and reflective industrial objects.
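The structured-light principle can be sketched in its simplest form: once a camera pixel has been matched to a projector column (e.g. by decoding a projected code pattern), depth follows from triangulation. The rectified geometry, focal length and baseline below are illustrative assumptions, not the sensor parameters from [Contribution D].

```python
import numpy as np

# Minimal structured-light triangulation sketch under rectified geometry,
# with the projector treated as an inverse camera. Numbers are illustrative.
f = 800.0        # shared focal length in pixels (camera and projector rectified)
baseline = 0.1   # camera-projector baseline in metres

def depth_from_correspondence(u_cam, u_proj):
    """Depth from the horizontal disparity between a camera pixel column
    and the decoded projector column for the same surface point."""
    disparity = u_cam - u_proj
    return f * baseline / disparity

# Example: 40 px disparity -> 800 * 0.1 / 40 = 2.0 m depth.
z = depth_from_correspondence(420.0, 380.0)
```

Because the correspondence comes from the projected pattern rather than from surface texture, this is why structured light works on the low-textured industrial objects mentioned above.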

In [Contribution E] a 3D vision system for easy learning of 3D models is presented. The system creates a 3D model of the object by scanning it from three views. The object then acts as a reference model in the system when new instances of the object have to be located in the scene. With this approach, fast re-configuration is possible. In [Contribution F] a new dataset for 3D object recognition and an evaluation of state-of-the-art local features for object recognition are presented. The contribution shows, as expected, that state-of-the-art 3D object recognition algorithms are not good enough to locate industrial objects with few local shape features on the surface.
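The final step of such a recognition pipeline, computing the object pose once model-to-scene correspondences have been found by matching local features, can be sketched with the standard Kabsch/Procrustes solution. The correspondences here are synthetic for illustration; in the thesis's setting they would come from matched local shape features, which is exactly what fails on feature-poor industrial objects.

```python
import numpy as np

def estimate_rigid_transform(model_pts, scene_pts):
    """Least-squares rotation R and translation t such that
    scene ≈ R @ model + t (Kabsch algorithm via SVD)."""
    mc, sc = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mc).T @ (scene_pts - sc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc
    return R, t

# Synthetic check: rotate a small model cloud 30 degrees about Z and shift it.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
model = np.random.default_rng(0).random((10, 3))
scene = model @ R_true.T + t_true
R, t = estimate_rigid_transform(model, scene)
```

With noiseless correspondences the estimate recovers the true pose exactly; the hard part, which [Contribution F] evaluates, is obtaining reliable correspondences in the first place.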
Original language: English
Place of publication: Kgs. Lyngby
Publisher: Technical University of Denmark
Number of pages: 191
Publication status: Published - 2017
Series: DTU Compute PHD-2016
Number: 424
ISSN: 0909-3192

Projects

Towards Plug-n-Play robot guidance: Advanced 3D sensors and pose estimation in Robotic applications

Sølund, T., Aanæs, H., Beck, A. B., Krüger, N., Carstensen, J. M., Gramkow, C. & Kämäräinen, J.

ErhvervsPhD-ordningen VTU

01/04/2012 – 12/12/2016

Project: PhD

Cite this

Sølund, T. (2017). Towards Plug-n-Play robot guidance: Advanced 3D estimation and pose estimation in Robotic applications. Technical University of Denmark. DTU Compute PHD-2016, No. 424