
Giving robots a better feel for object manipulation



The model improves robots' ability to mold materials into target shapes and to interact with liquids and solid objects.



A new particle simulator developed by MIT researchers improves robots' ability to mold materials into simulated target shapes and to interact with solid objects and liquids. This could give robots a refined touch for industrial applications or for personal robotics, such as shaping clay or rolling sticky sushi rice.

Courtesy of the researchers

The new learning system developed by MIT researchers improves robots' ability to mold materials into target shapes and to predict their interactions with solid objects and liquids. The system, a learning-based particle simulator, could give industrial robots a more refined touch, and it may have fun applications in personal robotics, such as modeling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are "trained" using the models to predict the outcomes of their interactions with objects, such as pushing a solid box or deforming clay. But traditional learning-based simulators mainly focus on rigid objects and cannot handle liquids or softer materials. Some physics-based simulators can handle a variety of materials, but they rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small pieces of different materials ("particles") interact when they are poked and prodded. The model learns directly from data in cases where the underlying physics of the movements is uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of their touch. As the robot handles objects, the model also helps to further refine the robot's control.

In experiments, a two-fingered robotic hand called "RiceGrip" accurately shaped deformable foam into a desired configuration, such as a "T" shape, which serves as a proxy for sushi rice. In short, the researchers' model serves as a kind of "intuitive physics" that robots can use to reconstruct three-dimensional objects, somewhat similarly to how people do.

"People have an intuitive physical model in our heads, where we can imagine how an object will behave if we press it or squeeze it. Based on this intuitive model, people can achieve incredible manipulation tasks that are far beyond the current ones robots, "says the first author, Yunsu Lee, a graduate student at the Computer Science and Artificial Intelligence Laboratory (CSAIL). "We want to build this type of intuitive robot model to enable them to do what people can do."

"When the children are 5 months old, they already have different expectations for solids and fluids," adds co-author Jiua Wu, a graduate student at CSAIL. "It's something we know at an early age, so maybe that's something we should try to model it for robots."

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences; and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

Dynamic graphs

The key innovation behind the model, called "DPI-Nets," is the creation of dynamic interaction graphs, which consist of thousands of nodes and edges that can capture the complex behavior of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected with each other using directed edges, which represent the interaction passing from one particle to another. In the simulator, particles are hundreds of small spheres combined to make up a liquid or a deformable object.
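To make the graph structure concrete, the sketch below (a minimal Python illustration, not the researchers' code) connects each particle to the particles near it with directed edges; the particle count, search radius, and function name are assumptions chosen for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_interaction_graph(positions, radius=0.08):
    """Connect each particle (node) to neighbors within `radius`
    with directed edges, mimicking a dynamic interaction graph.

    positions: (N, 3) array of particle centers.
    Returns an (E, 2) array of (sender, receiver) index pairs.
    """
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=radius)   # undirected neighbor pairs (i < j)
    edges = []
    for i, j in pairs:
        edges.append((i, j))             # interaction passing from i to j
        edges.append((j, i))             # and in the reverse direction
    return np.array(edges, dtype=np.int64)

# Example: 1,000 particles sampled in a unit cube as a stand-in for a deformable object.
particles = np.random.rand(1000, 3)
edges = build_interaction_graph(particles, radius=0.08)
```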

The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model learns over time how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle, such as its mass and elasticity, to predict if and where the particle will move in the graph when perturbed.

The model then uses a "propagation" technique that instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material (rigid, deformable, and liquid) to shoot a signal that predicts particle positions at incremental time steps. At each step, it moves and reconnects particles, if needed.
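In graph-neural-network terms, that propagation amounts to passing messages along the directed edges and updating every node before decoding where each particle moves next. The following is a simplified, hypothetical PyTorch sketch of one such step; the layer sizes and module names are placeholders and do not reproduce the DPI-Nets architecture.

```python
import torch
import torch.nn as nn

class PropagationStep(nn.Module):
    """One simplified message-passing step over the particle graph."""
    def __init__(self, node_dim=32, edge_dim=32):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim, edge_dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU())
        self.decoder = nn.Linear(node_dim, 3)  # predicted displacement per particle

    def forward(self, node_feats, edges):
        senders, receivers = edges[:, 0], edges[:, 1]
        # Compute a message along each directed edge from sender to receiver.
        messages = self.edge_mlp(
            torch.cat([node_feats[senders], node_feats[receivers]], dim=-1))
        # Sum incoming messages at each receiving node.
        agg = node_feats.new_zeros(node_feats.shape[0], messages.shape[-1])
        agg.index_add_(0, receivers, messages)
        # Update node states and decode how far each particle moves this time step.
        updated = self.node_mlp(torch.cat([node_feats, agg], dim=-1))
        return self.decoder(updated)
```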

For example, if a solid box is pushed, the perturbed particles are moved forward. Because all the particles inside the box are rigidly connected with each other, every other particle in the object moves the same calculated distance, rotation, and any other dimension. Particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect is different. The perturbed particles move forward a lot, the surrounding particles move forward only slightly, and particles farther away do not move at all. With liquid sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all the affected particles move, which is computationally complex.
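A toy sketch, assuming hand-picked rules rather than the learned model, shows why these three cases differ: a rigid push can be written down directly, a deformable push needs some falloff with distance, and a liquid follows no simple rule at all, which is exactly where the learned simulator is needed.

```python
import numpy as np

def apply_push(positions, material, push_vector, poked_idx, radius=0.05):
    """Toy illustration of how a push propagates differently per material."""
    positions = positions.copy()
    if material == "rigid":
        positions += push_vector                     # every particle moves as one unit
    elif material == "deformable":
        dist = np.linalg.norm(positions - positions[poked_idx], axis=1)
        falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)[:, None]
        positions += falloff * push_vector           # nearby particles move, distant ones barely
    elif material == "liquid":
        # Particles may end up far from where they started, so no hand-written
        # falloff rule applies; this is what the learned model has to capture.
        raise NotImplementedError("handled by the learned simulator")
    return positions
```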

Shaping and adapting

In their paper, the researchers demonstrated the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the positions of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.
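As a rough illustration of that initialization step (again an assumption, not the authors' pipeline), one could sample random particle positions inside the perceived foam volume, approximated here by a bounding box from the depth camera, and then connect them with the neighborhood edges from the earlier sketch.

```python
import numpy as np

def initialize_particles(bbox_min, bbox_max, n_particles=500):
    """Randomly sample particle positions inside the perceived foam volume,
    approximated by an axis-aligned bounding box from the depth camera."""
    bbox_min, bbox_max = np.asarray(bbox_min), np.asarray(bbox_max)
    return bbox_min + np.random.rand(n_particles, 3) * (bbox_max - bbox_min)

# Hypothetical foam block, roughly 12 x 4 x 4 cm, in front of the gripper.
foam_particles = initialize_particles([0.0, 0.0, 0.0], [0.12, 0.04, 0.04])
foam_edges = build_interaction_graph(foam_particles, radius=0.015)  # from the earlier sketch
```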

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world positions of the particles to the targeted positions of the particles. Whenever the particles do not align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
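That feedback loop is similar in spirit to model-predictive control with online correction. The outline below is a hedged sketch of the control flow only; the simulator interface and the observe, execute, and choose_action callables are hypothetical placeholders supplied by the surrounding system.

```python
def shape_foam(simulator, observe, execute, choose_action, target, n_steps=50):
    """Iteratively pick a grip action, predict its outcome with the learned
    simulator, act, and correct the simulator from the observed error.

    Placeholder interfaces:
      simulator.predict / simulator.update  - the learned particle model
      observe()                             - particle positions from the depth camera
      execute(action)                       - run the gripper motion
      choose_action(...)                    - e.g. sample candidate grips and score them
    """
    particles = observe()
    for _ in range(n_steps):
        action = choose_action(simulator, particles, target)
        predicted = simulator.predict(particles, action)
        execute(action)
        observed = observe()
        error = ((observed - predicted) ** 2).mean()  # mismatch between model and reality
        simulator.update(error)                       # nudge the model toward real-world physics
        particles = observed
    return particles
```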

Next, the researchers aim to improve the model to help robots better predict interactions in partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module that operates directly on images. This will be a joint project with Dan Yamins' group; Yamins recently finished his postdoc at MIT and is now an assistant professor at Stanford University. "You deal with these cases all the time where there is only partial information," Wu says. "We are extending our model to learn the dynamics of all particles, while only seeing a small portion."


