July 19, 2024


Demand for industrial robots is rising in Europe, especially driven by the benefits of automation. However, there are ongoing challenges in ensuring seamless collaboration with humans while maintaining safety.

To address this issue, a consortium of European universities, technology accelerators, and private research labs is launching the RoboSAPIENs project.

The aim is to build the necessary safety mechanisms with a particular focus on adaptive industrial robots — a category of autonomous robots that can learn new behaviours without being reprogrammed and adapt to changes in their system structure or environment.

One eye-catching ambition targets a big problem for adaptive robots: handling unexpected changes. These changes range from software updates and hardware wear to unexpected obstacles and interaction with humans. When they arise, the researchers want the robots to automatically adjust their controllers and settings.

The missing safety step in adaptive industrial robots


Adaptive robots typically operate by continuously monitoring their environment. They collect data from their surroundings, analyse the data, and change their plans accordingly. In the process, they accumulate new knowledge.

“This is called a MAPE-K (Monitor-Analyze-Plan-Execute-Knowledge) control loop,” Ana Cavalcanti, a partner on RoboSAPIENS and computer science professor at the University of York, tells TNW. “What is missing here is the inclusion in the loop of a step that verifies whether certified guarantees, of safety or trustworthiness, are kept when plans change.”

RoboSAPIENS aims to add this missing step in robotic self-adaptation. The step will check if, after (re)planning, the rules set during certification are still valid. If they’re not, the system will automatically propose an update. A trustworthiness checker will then ensure that any changes can be safely implemented.
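The extended loop described above can be sketched in code. This is a minimal, illustrative outline only, assuming a simple callback-based design; all names and interfaces here are hypothetical, not the project's actual architecture.

```python
# Hypothetical sketch of a MAPE-K control loop extended with the
# trustworthiness-check step described above. Every name is illustrative.

def mape_k_with_safety_check(monitor, analyze, plan, execute,
                             knowledge, certified_rules, checker):
    """Run one iteration of the extended control loop."""
    data = monitor()                       # Monitor: sense the environment
    situation = analyze(data, knowledge)   # Analyze: interpret the data
    new_plan = plan(situation, knowledge)  # Plan: adapt behaviour

    # Added step: verify that the certified guarantees still hold
    # after (re)planning; if not, propose a safe update.
    if not all(rule(new_plan) for rule in certified_rules):
        new_plan = checker(new_plan)       # Trustworthiness checker

    execute(new_plan)                      # Execute: act on the environment
    knowledge.append(situation)            # Knowledge: accumulate experience
```

In this sketch, `certified_rules` stands in for the rules fixed during certification, and `checker` for the trustworthiness checker that vets any proposed change before it is implemented.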

Cavalcanti provides the example of a mobile robot that has a map of its environment. The opening or closing of a door could change the optimal path, requiring a map update. “This, however, should not be done if the robot is busy executing some time-critical operation: attending to the urgent needs of a user, for instance,” she says.

Tapping deep learning

To introduce the additional safety step in the control loop, the team will tap deep learning.

Using deep learning in adaptive robotics comes with a level of uncertainty, depending on the neural network’s ability to respond to its environment during the robot’s operation. The project will develop new technologies to quantify that uncertainty. If the measurement shows that the uncertainty is too high, the system may trigger relearning.

After relearning, the system will check that the uncertainty has been reduced and that the expected behaviours remain trustworthy.
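One common way to quantify a neural network's predictive uncertainty is the entropy of its output probabilities. The sketch below illustrates that idea in the spirit of the relearning trigger described above; it is a toy example, not RoboSAPIENS code, and the threshold value is an arbitrary assumption.

```python
# Illustrative sketch: trigger relearning when predictive uncertainty,
# measured as the Shannon entropy of the model's class probabilities,
# exceeds a (hypothetical) threshold.
import math

def predictive_entropy(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def should_relearn(probs, threshold=1.0):
    """True if the prediction is too uncertain to trust."""
    return predictive_entropy(probs) > threshold
```

A confident prediction such as `[0.97, 0.01, 0.01, 0.01]` has low entropy and passes, while a near-uniform one like `[0.25, 0.25, 0.25, 0.25]` (2 bits) would trigger relearning.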

RoboSAPIENS consortium team for industrial robots project