
Self-learning robots mean flexible manufacturing

Industrial robots with recognition capability would be an economic asset for industry and the national economy.

By patterning the solution on the human brain, LiU communications engineers are developing an innovative robotics system equipped with computer vision and capable of self-learning.

It may seem strange that a computer can calculate enormous sums in a fraction of a second, yet cannot distinguish between a dog and a cat. Binary coding, based on "yes" or "no", does not seem to be the best solution for machine vision tasks.

A case in point is industrial robots, whose forte to date has been brawn, not brains. Contemporary systems work well for specific recognition tasks, but a great deal of human brainpower is required to design the algorithms and define the data that make recognition possible. Moreover, the work must be redone from scratch each time new objects are introduced into the production process. A robot capable of learning new shapes on its own, one able to function by exploring its surroundings, would be a truly valuable tool.

"It would benefit the entire community if we could develop flexible, self-adapting systems," explains Erik Jonsson, doctoral student in image processing, who will soon present his thesis on channel coding—a method that mimics the process in which nerve cells in the human brain are activated by visual impressions.

To enable a computer to recognize an object, it is common to designate clearly defined properties such as color or direction of movement. The systems currently under development at LiU instead let the robot rotate an object, examine it from several angles, and form an impression. This view-based object recognition works roughly the same way a human child learns to interpret objects.

Channel-coded feature mapping does not describe an object through specific, predetermined viewing angles. Instead it maps a seamless transition from one perspective to another. Each channel registers, for instance, the presence of a color and the position where it occurs. Taken together, these measurements describe the scene that the robot "sees" with the help of its video camera, its machine vision.
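As a rough illustration of the idea, a single scalar feature (say, a hue or a position coordinate) can be channel-encoded with a bank of smooth, overlapping basis functions. The Python sketch below is a minimal, hypothetical example assuming the cos-squared kernels commonly used in channel representations; the channel spacing, feature range, and example values are illustrative choices, not the exact design from the thesis.

    import numpy as np

    def cos2_kernel(d):
        # cos^2 basis function: smooth and overlapping,
        # nonzero only within 1.5 channel spacings of a centre
        return np.where(np.abs(d) < 1.5, np.cos(np.pi * d / 3.0) ** 2, 0.0)

    def channel_encode(value, centers):
        # Soft activation of each channel, loosely analogous to
        # a population of nerve cells responding to a stimulus
        return cos2_kernel(value - centers)

    # Channels placed at unit spacing across an assumed feature range 0..10
    centers = np.arange(0.0, 11.0, 1.0)

    # Two nearby feature values (e.g. two slightly different viewing angles)
    # give overlapping activation patterns that shift smoothly, which is
    # what produces the seamless transition between perspectives.
    print(np.round(channel_encode(3.2, centers), 2))
    print(np.round(channel_encode(3.7, centers), 2))

A full system would encode many such features per image and compare the resulting channel vectors across stored views in order to recognize an object from an arbitrary angle.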

The basic research goal is to develop computer vision software that can be installed in an arbitrary robot; in other words, self-learning capability is a must.

All the methods presented in the thesis have been experimentally tested. The work is part of a broader research effort: it belongs to the EU project COSPAL, which is coordinated by Associate Professor Michael Felsberg at the Computer Vision Laboratory (CVL), Department of Electrical Engineering.


2008-03-20



