Robot able to mimic an activity after observing it just one time

A team of researchers at UC Berkeley has found a way to get a robot to mimic an activity it sees on a video screen just a single time. In a paper they have uploaded to the arXiv preprint server, the team describes the approach they used and how it works.

Robots that learn to do things simply by watching a human carry out an action a single time would be capable of learning many more new actions much more quickly than is now possible. Scientists have been working hard to figure out how to make it happen.

Three steps for our meta-learning algorithm. Credit: Tianhe Yu and Chelsea Finn  

Historically, though, robots have been programmed to perform actions such as picking up an object via code that expressly lays out what needs to be done and how. That is how most robots that assemble cars in a factory work. Such robots must still undergo a training process in which they are led through procedures multiple times until they can perform them without making mistakes. More recently, robots have been programmed to learn purely through observation, much as humans and other animals do. But such imitative learning typically requires thousands of observations. In this new effort, the researchers describe a technique they have developed that allows a robot to perform a desired action after watching a human being do it just a single time.

To accomplish this feat, the researchers combined imitation learning with a meta-learning algorithm known as model-agnostic meta-learning (MAML). Meta-learning, the researchers explain, is a process by which a robot learns by incorporating prior experience. If a robot is shown video of a human picking up a pear or a similar object, for example, and putting it into a cup, bowl or other container, it can get a "feel" for the objective. If in each instance it is taught to imitate the behavior in a certain way, it "learns" what to do when it observes other, similar behaviors. Thus, when it sees a video of a person picking up a plum and putting it into a bowl, it recognizes the behavior and can translate it into a similar behavior of its own, which it then carries out.
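The core idea of MAML can be illustrated with a toy example. The sketch below is not the researchers' vision-based system; it uses hypothetical 1-D regression "tasks" (fitting y = a*x for different slopes a) as stand-ins for the family of related demonstrations, and a first-order approximation of MAML for simplicity. Meta-training searches for an initialization from which a single gradient step, on one small batch of data from a new task, already works well, which is exactly the "one observation is enough" behavior described above.

```python
import numpy as np

# First-order MAML sketch on toy 1-D regression tasks (hypothetical
# stand-ins for the paper's video demonstrations).
rng = np.random.default_rng(0)

def sample_task():
    """Each task is fitting y = a*x for a random slope a."""
    return rng.uniform(0.5, 2.0)

def sample_data(a, n=10):
    x = rng.uniform(-1.0, 1.0, n)
    return x, a * x

def loss_and_grad(w, x, y):
    """Squared error and its gradient for the scalar model f(x) = w*x."""
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

w = 0.0                    # the meta-learned initialization
alpha, beta = 0.3, 0.05    # inner (adaptation) and outer (meta) step sizes

for step in range(2000):
    meta_grad = 0.0
    for _ in range(5):                  # batch of tasks per meta-update
        a = sample_task()
        xs, ys = sample_data(a)         # the "one demonstration" (support set)
        _, g = loss_and_grad(w, xs, ys)
        w_task = w - alpha * g          # single adaptation step
        xq, yq = sample_data(a)         # fresh data from the same task
        _, gq = loss_and_grad(w_task, xq, yq)
        meta_grad += gq                 # first-order MAML approximation
    w -= beta * meta_grad / 5

# One-shot adaptation to a brand-new task:
a_new = 1.8
xs, ys = sample_data(a_new)
_, g = loss_and_grad(w, xs, ys)
w_adapted = w - alpha * g               # a single gradient step

xq, yq = sample_data(a_new)             # held-out evaluation data
loss_before, _ = loss_and_grad(w, xq, yq)
loss_after, _ = loss_and_grad(w_adapted, xq, yq)
print(f"loss before adaptation: {loss_before:.4f}, after: {loss_after:.4f}")
```

After meta-training, one gradient step on a single small batch from an unseen task noticeably cuts the error; the robot analogue is one video of a new pick-and-place behavior being enough to adapt.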


By Bob Yirka / Freelance Journalist

Bob Yirka has always been fascinated by science and has spent large portions of his life with his nose buried in textbooks or magazines; he has a Bachelor of Science degree in Computer Science and a Master of Science in Information Systems Management. He has worked in a variety of positions in the telecommunications field ranging from help desk jockey to systems analyst to MIS manager. Recently, after nearly twenty years in the business, he decided to move to what he really loves doing: writing. In addition to writing for Science X, Bob has also sold several short stories and has written three novels.


(Source: techxplore.com; July 4, 2018; https://tinyurl.com/y8ovvrus)