Google introduces two new papers featuring our latest artificial intelligence (AI) advances in robot dexterity research: ALOHA Unleashed, which helps robots learn to perform complex and novel two-armed manipulation tasks, and DemoStart, which uses simulations to improve the real-world performance of a multi-fingered robotic hand.
By helping robots learn from human demonstrations and translate images into actions, these systems are paving the way for robots to take on a wide variety of helpful roles.
Google’s latest advances in robot dexterity
- ALOHA Unleashed
- DemoStart
Improving imitation learning with two robotic arms
Until now, most advanced AI robots have only been able to pick up and place objects using a single arm. Our new paper presents ALOHA Unleashed, which achieves a high level of dexterity in bi-arm manipulation. With this method, our robot learned to tie a shoelace, hang a shirt, repair another robot, insert a gear, and even clean a kitchen.
ALOHA Unleashed builds on our prior work with the ALOHA 2 system, which was itself based on ALOHA, a low-cost, open-source hardware platform for bimanual teleoperation from Stanford University.
ALOHA 2 is significantly more dexterous than earlier systems because it has two manipulators that can be easily teleoperated for training and data collection, and it allows robots to learn how to perform new tasks from relatively few demonstrations.
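Learning from teleoperated demonstrations is, at its core, imitation learning: fit a policy so that, given an observation, it predicts the action the human operator took. The sketch below illustrates that idea with a hypothetical linear policy trained on synthetic observation–action pairs; it is a minimal illustration of behavior cloning, not the actual ALOHA Unleashed model (which is far more sophisticated).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstration data: each row pairs an observation (e.g. flattened
# camera features) with the action a human teleoperator took in that state.
obs_dim, act_dim, n_demos = 8, 4, 500
true_map = rng.normal(size=(obs_dim, act_dim))   # hypothetical ground truth
observations = rng.normal(size=(n_demos, obs_dim))
actions = observations @ true_map + 0.01 * rng.normal(size=(n_demos, act_dim))

# Behavior cloning: minimize squared error between the policy's predicted
# actions and the demonstrated actions. Here the "policy" is one linear layer
# trained by plain gradient descent.
W = np.zeros((obs_dim, act_dim))
lr = 0.01
for _ in range(2000):
    pred = observations @ W
    grad = observations.T @ (pred - actions) / n_demos
    W -= lr * grad

mse = float(np.mean((observations @ W - actions) ** 2))
print(f"imitation loss after training: {mse:.4f}")
```

The key property this toy example shares with the real system is that no reward function is needed: the demonstrations alone define what "good" actions look like.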
Learning robotic behaviors from few simulated demonstrations
Controlling a dexterous robotic hand is a complex task, and the complexity grows with every additional finger, joint, and sensor. In another new paper, we present DemoStart, which uses a reinforcement learning algorithm to help robots acquire dexterous behaviors in simulation.
DemoStart first learns from easy states and, over time, progresses to harder states until it masters a task to the best of its ability. It requires 100x fewer simulated demonstrations to learn how to solve a task in simulation than are typically needed when learning from real-world examples for the same purpose.
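One way to picture "easy states first" is a reverse curriculum over a demonstration trajectory: episodes begin from states near the end of a demonstration (almost solved), and the start point moves earlier once the agent succeeds reliably. The toy task, function names, and thresholds below are illustrative assumptions for the sketch, not details from the DemoStart paper.

```python
import random

random.seed(0)

# A demonstration trajectory for a toy task: states 0..10, goal is state 10.
demo_states = list(range(11))
GOAL = 10

def attempt_task(start, skill):
    """Toy stand-in for a policy rollout: success probability shrinks with
    distance to the goal and grows with the agent's current skill level."""
    distance = GOAL - start
    return random.random() < skill / (1 + distance)

# Reverse curriculum: begin episodes from demo states near the goal (easy),
# and move the start point earlier once the agent succeeds reliably.
start_idx = len(demo_states) - 2   # second-to-last demonstration state
skill = 1.0
history = []
for episode in range(400):
    success = attempt_task(demo_states[start_idx], skill)
    history.append(success)
    if success:
        skill += 0.05              # stand-in for a policy update
    # If 16 of the last 20 episodes succeeded, make the task harder.
    if len(history) >= 20 and sum(history[-20:]) >= 16 and start_idx > 0:
        start_idx -= 1
        history.clear()

print("final curriculum start state:", demo_states[start_idx])
```

Because each demonstration state becomes a potential episode start, a single demonstration yields a whole ladder of training problems, which is part of why so few demonstrations are needed.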
We developed DemoStart using MuJoCo, our open-source physics simulator. After mastering a suite of tasks in simulation and applying standard techniques for reducing the sim-to-real gap, such as domain randomization, our approach was able to transfer to the physical world nearly zero-shot.
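Domain randomization means varying simulator parameters between episodes (masses, friction, latency, sensor noise) so the learned policy cannot overfit to one exact simulated world and instead becomes robust to the spread, which usually includes the real world. The parameter names and ranges below are illustrative assumptions for a sketch, not MuJoCo's actual API.

```python
import random

random.seed(42)

def randomized_sim_params():
    """Sample a fresh set of physics parameters for each training episode.
    The ranges here are illustrative; in practice they are tuned per task."""
    return {
        "object_mass_kg": random.uniform(0.05, 0.5),
        "finger_friction": random.uniform(0.4, 1.2),
        "motor_latency_ms": random.uniform(0.0, 30.0),
        "camera_noise_std": random.uniform(0.0, 0.02),
    }

# Each episode runs in a slightly different simulated world, so the policy
# must learn behaviors that work across the whole distribution of worlds.
episodes = [randomized_sim_params() for _ in range(1000)]

masses = [p["object_mass_kg"] for p in episodes]
print(f"mass range seen in training: {min(masses):.3f}-{max(masses):.3f} kg")
```

A policy that grasps objects reliably across this whole range of masses and frictions has a far better chance of working on the one real object it eventually meets.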
Robotic learning in simulation can be faster and cheaper than physical experimentation. Because DemoStart combines reinforcement learning with learning from a small number of demonstrations, it automatically generates its own curriculum, which eases sim-to-real transfer and reduces the need for physical testing.
To enable more advanced robot learning through intensive experimentation, we tested this new approach on a three-fingered robotic hand called DEX-EE, which was designed and built in collaboration with Shadow Robot.
The future of robot dexterity
Robot dexterity is one of the branches of AI research that best shows how well our systems work in practice. For example, a language model could tell you how to tighten a bolt or tie your shoelaces, but even if it were embodied in a robot, it could not perform those actions itself.
One day, AI robots may help people with all kinds of tasks at home, in the workplace, and beyond. The robot dexterity research described here, including the efficient and general learning approaches behind it, will help make that future a reality.