June 25, 2024
UC Berkeley’s transformer-based robot control system generalizes to unseen environments


Researchers at the University of California, Berkeley, have created a versatile control system for humanoid robots to adeptly navigate a variety of terrains and obstacles. Drawing inspiration from the deep learning frameworks that revolutionized large language models (LLMs), this AI system hinges on a simple principle: studying recent observations can help predict future states and actions.

The system was trained entirely in simulation but demonstrates robust performance in unpredictable real-world settings. By analyzing its past interactions, the AI dynamically refines its behavior to effectively tackle novel scenarios it never encountered during its training phase.

A robot for all terrains

Humanoid robots, designed in our image, hold the promise of one day becoming valuable assistants, capable of navigating the world and aiding in various physical and cognitive tasks. However, building versatile humanoid robots poses many challenges, including the need for a flexible control system.

Traditional control systems in robotics have been notoriously inflexible, often designed for specific tasks and unable to cope with the unpredictability of real-world terrains and visual conditions. This rigidity limits their utility, confining them to controlled environments. 

As a result, there has been growing interest in learning-based methods for robotic control. These control systems can dynamically adapt their behavior based on the data gleaned from simulations or direct interaction with the environment.

The new control system created by the scientists at UC Berkeley promises to steer humanoid robots through different situations with ease. The system, deployed on Digit, a full-sized, general-purpose humanoid robot, demonstrates remarkable outdoor walking capabilities, navigating reliably across everyday human environments such as walkways, sidewalks, running tracks and open fields. The robot’s adaptability extends to handling various terrains, including concrete, rubber, and grass, without falling.

“We found that our controller was able to walk over all of the tested terrains reliably and were comfortable deploying it without a safety gantry,” the researchers write. “Indeed, over the course of one week of full-day testing in outdoor environments, we did not observe any falls.”

Moreover, the robot’s resilience to disturbances has been thoroughly tested. It can successfully handle unexpected steps, random objects in its path and even objects hurled in its direction. The robot also withstands being pushed and pulled, maintaining its pose and stability in the face of such disruptions. 

Robot control with transformers

While several humanoid robots are capable of impressive feats, what sets this new system apart is how its AI model is trained and deployed.

The control model underwent training purely in simulation on thousands of domains and tens of billions of scenarios within Isaac Gym, a high-performance GPU-based physics simulation environment. This extensive simulated experience was then transferred to the real world without the need for further fine-tuning, a process known as sim-to-real transfer. Remarkably, the system demonstrated emergent abilities during real-world operation, handling complex scenarios such as navigating steps, which were not explicitly covered during its training.
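Training on "thousands of domains" is typically achieved through domain randomization: each simulated episode samples fresh physical parameters so the learned policy cannot overfit to any single environment and must generalize across the whole range. The paper's exact scheme isn't detailed here, so the following is a minimal sketch with parameter names and ranges that are purely illustrative assumptions:

```python
import random

# Illustrative domain randomization sketch (parameter names and ranges are
# assumptions, not the paper's actual values). Each episode draws a new
# random "domain" -- a configuration of the physics simulation -- so the
# policy trained across all of them learns to cope with variation.

def sample_domain(rng: random.Random) -> dict:
    """Draw one randomized simulation domain (episode configuration)."""
    return {
        "ground_friction": rng.uniform(0.4, 1.2),   # concrete vs. grass, etc.
        "payload_mass_kg": rng.uniform(0.0, 5.0),   # unmodeled extra weight
        "motor_strength":  rng.uniform(0.9, 1.1),   # actuator variation
        "push_force_n":    rng.uniform(0.0, 50.0),  # external disturbance
    }

def sample_training_domains(num_episodes: int, seed: int = 0) -> list:
    """Sample one domain per training episode, reproducibly."""
    rng = random.Random(seed)
    # In a real setup each domain would configure a physics simulator
    # (such as Isaac Gym) and the policy would collect experience in it.
    return [sample_domain(rng) for _ in range(num_episodes)]

for domain in sample_training_domains(3):
    print(domain)
```

Because the policy never sees the same physics twice, behaviors that rely on one exact friction coefficient or payload die out during training, which is what makes the sim-to-real transfer work without fine-tuning.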

At the heart of this system is a “causal transformer,” a deep learning model that processes the history of proprioceptive observations and actions. This transformer excels at discerning the relevance of specific information, such as gait patterns and contact states, to the robot’s observations. 

Transformers, known for their efficacy in large language models, possess an innate capability to predict subsequent elements in extensive data sequences. The causal transformer employed here is adept at learning from sequences of observations and actions, enabling it to predict the consequences of actions with high precision and modify its behavior to attain more favorable future states. This is how it can dynamically adjust its actions based on the landscape, even if it hasn’t encountered it before.
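The causal masking that makes this possible can be sketched in a few lines of NumPy. This toy single-head attention layer (the dimensions, random weights, and single-layer setup are illustrative assumptions, not the authors' architecture) shows how each position in an interleaved observation-action history can attend only to earlier positions:

```python
import numpy as np

# Toy single-head causal self-attention over a history of interleaved
# observation/action tokens. Illustrative only -- not the paper's model.

def causal_attention(tokens: np.ndarray, wq, wk, wv) -> np.ndarray:
    """tokens: (T, d) history; returns (T, d) features where position t
    attends only to positions <= t (enforced by the causal mask)."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])              # (T, T) similarities
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -np.inf                               # block future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
d = 8                        # toy embedding size
T = 6                        # e.g. 3 (observation, action) pairs, interleaved
tokens = rng.normal(size=(T, d))
wq, wk, wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = causal_attention(tokens, wq, wk, wv)
# The feature at the last position summarizes the entire history and would
# feed a small head that predicts the next action.
print(out.shape)
```

The mask is what makes the model "causal": perturbing the most recent token changes only the most recent output, so the model can be trained to predict each next action from the history up to that point, never from the future.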

“We hypothesize that the history of observations and actions implicitly encodes the information about the world that a powerful transformer model can use to adapt its behavior dynamically at test time,” the researchers write. 

This concept, which they refer to as “in-context adaptation,” mirrors how language models use the context of their interactions to learn new tasks on the fly and dynamically refine their outputs during inference.
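A toy analogy can make "in-context adaptation" concrete (this is an illustration of the general idea, not the paper's method): a frozen, attention-style predictor produces different outputs for the exact same query depending on the context it is shown, with no weight updates at all.

```python
import numpy as np

# In-context adaptation toy: attention-style kernel regression over context
# (x, y) pairs. The "model" has no trainable weights; changing the context
# alone changes its predictions -- analogous to how the robot's controller
# adapts from its recent history of observations at test time.

def predict_in_context(context_x, context_y, query_x, temp=0.1):
    """Attend over context (x, y) pairs to predict y for query_x."""
    dists = np.sum((context_x - query_x) ** 2, axis=-1)
    w = np.exp(-dists / temp)
    w /= w.sum()                       # softmax-style attention weights
    return w @ context_y               # weighted average of context targets

x = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
regime_a = 2.0 * x[:, 0]               # e.g. dynamics on firm ground
regime_b = -2.0 * x[:, 0]              # e.g. dynamics on slippery ground
query = np.array([0.5])

pred_a = predict_in_context(x, regime_a, query)
pred_b = predict_in_context(x, regime_b, query)
print(pred_a, pred_b)                  # same query, opposite predictions
```

The same query yields opposite predictions under the two contexts, which mirrors the hypothesis quoted above: the history itself carries the information needed to adapt behavior at test time.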

Transformers have proven to be superior learners compared to other sequential models such as temporal convolutional networks (TCN) and long short-term memory networks (LSTM). Their architecture allows for scaling with additional data and computational power, and they can be enhanced through the integration of extra input modalities.

The past year has seen transformers become a significant asset to the robotics community, with several models using their versatility to augment robots in various capacities. Benefits of transformers include improved encoding and mixing of different modalities, as well as translating high-level natural language instructions to specific planning steps for robots.

“Analogous to fields like vision and language, we believe that transformers may facilitate our future progress in scaling learning approaches for real-world humanoid locomotion,” the researchers conclude.
