Designing a multi-robot team that shares situational awareness

The capabilities of collaborative robots, commonly referred to as “cobots,” could soon expand beyond direct human-robot interaction to include heterogeneous robots intelligently coordinating and interacting with each other.

New research from the University of Massachusetts Amherst shows that robots can be programmed to create their own teams and voluntarily wait for their teammates to execute a set of dependent tasks. This new form of multi-robot collaboration has the potential to improve manufacturing and warehouse productivity.

The research, funded by the U.S. Defense Advanced Research Projects Agency (DARPA) Director’s Fellowship and a U.S. National Science Foundation CAREER Award, is led by Hao Zhang, associate professor in the UMass Amherst Manning College of Information and Computer Sciences and director of the Human-Centered Robotics Lab. Zhang’s research on “autonomous group introspective learning and coopetition for cross-capability multi-robot adaptation” uses lessons from human social psychology to help teams of robots with different capabilities work together and adapt to complex situations.

Hao Zhang

When Zhang was awarded the DARPA prize last year to improve robot teamwork, he was already focusing on two main areas – group introspection and cooperative competition, dubbed “coopetition.”

Group introspection would allow robots in a team to be aware of all their other team members, so they have a shared situational awareness of the overall team’s capabilities. To accomplish that, Zhang is modeling robots in a team as a graph to enable team awareness and using conditional models that identify backup robots with similar capabilities to replace failed teammates.
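As a rough illustration of the backup-selection idea, the sketch below picks a replacement for a failed robot by comparing capability vectors. The robot names, capability dimensions, and cosine-similarity rule are assumptions made for illustration, not the conditional models used in the actual research.

```python
import numpy as np

# Hypothetical capability vectors: one row per robot, one entry per
# capability dimension (e.g., payload, reach, mobility), values in [0, 1].
capabilities = {
    "arm_1":      np.array([0.9, 0.7, 0.0]),
    "agv_1":      np.array([0.3, 0.1, 1.0]),
    "agv_2":      np.array([0.4, 0.1, 1.0]),
    "palletizer": np.array([1.0, 0.5, 0.2]),
}

def most_similar_backup(failed: str, healthy: list[str]) -> str:
    """Pick the healthy robot whose capability vector is closest
    (by cosine similarity) to the failed robot's."""
    target = capabilities[failed]

    def cosine(name: str) -> float:
        v = capabilities[name]
        return float(v @ target / (np.linalg.norm(v) * np.linalg.norm(target)))

    return max(healthy, key=cosine)

# If agv_1 fails mid-task, agv_2 is the closest substitute.
print(most_similar_backup("agv_1", ["arm_1", "agv_2", "palletizer"]))
```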

The team addresses the coopetition aspect by simultaneously modeling cooperation at the team level and competition at the individual level. Cooperation tackles tasks that are infeasible for individual robots to solve, while competition encourages each robot to perform better and adapt faster.

Robot mix and match 

In a warehouse or manufacturing setting, there may be many different types of robots with varying payload capacities: fixed-in-place robotic arms, mobile automated guided vehicles (AGVs), heavy-lifting palletizers, and so on. The challenge, however, is coordinating such a diverse set of robots toward a common purpose.

“There’s a long history of debate on whether we want to build a single, powerful humanoid robot that can do all the jobs, or we have a team of robots that can collaborate,” Zhang said in a statement. “Robots have big tasks, just like humans. For example, [if] they have a large box that cannot be carried by a single robot, the scenario will need multiple robots to collaboratively work on that.”

The other behavior is voluntary waiting. “We want the robot to be able to actively wait because, if they just choose a greedy solution to always perform smaller tasks that are immediately available, sometimes the bigger task will never be executed,” Zhang explains.

As a solution, Zhang created a learning-based approach for scheduling robots called learning for voluntary waiting and subteaming (LVWS), coupled with a graph attention transformer network (GATN) that computes rewards for assigning tasks to robots. The team is modeled as a graph whose nodes are robots and whose edges capture communication, relationships, or spatial positions.
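A minimal sketch of such a team graph, built with the networkx library, is shown below; the specific robots, node attributes, and edge labels are assumptions for illustration rather than the paper's actual model.

```python
import networkx as nx

# Nodes are robots (with capability attributes); edges carry the kinds of
# relationships the article mentions: communication links, task handoffs,
# or spatial proximity. All names and values are illustrative.
team = nx.Graph()
team.add_node("arm_1", payload_kg=5.0, mobile=False)
team.add_node("agv_1", payload_kg=2.0, mobile=True)
team.add_node("agv_2", payload_kg=2.0, mobile=True)
team.add_edge("arm_1", "agv_1", relation="handoff")
team.add_edge("agv_1", "agv_2", relation="comms", distance_m=4.5)

# Shared situational awareness: any robot can query the whole team graph.
print(team.nodes(data=True))
print(team.edges(data=True))
```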

According to the research, collaborative scheduling is formulated as a bipartite matching problem in which robots are assigned to tasks. The robot and task information is fed into the GATN, which integrates graph attention networks to encode the local graph structure and transformers to encode contextual information. The outputs are an embedding for each node and a global embedding for the graph, which together are used to compute a reward matrix for the bipartite matching.
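The sketch below shows only the final matching step, under the assumption that a reward matrix has already been produced; the numbers are made up, and SciPy's Hungarian-algorithm solver stands in for whatever matcher the researchers actually use.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Assume the GATN has already produced a reward matrix R, where R[i, j]
# is the predicted benefit of assigning robot i to task j
# (values below are illustrative only).
R = np.array([
    [0.9, 0.2, 0.4],   # robot 0
    [0.1, 0.8, 0.3],   # robot 1
    [0.5, 0.6, 0.7],   # robot 2
])

# Bipartite matching: linear_sum_assignment minimizes cost, so negate
# the rewards to maximize the total reward instead.
robots, tasks = linear_sum_assignment(-R)
for r, t in zip(robots, tasks):
    print(f"robot {r} -> task {t} (reward {R[r, t]:.1f})")
```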

To test their LVWS approach, the researchers gave six robots 18 tasks in a computer simulation and compared LVWS against four other methods. In this simulation, there is a known, perfect solution for completing the scenario in the shortest amount of time. They ran each method through the simulation and calculated how much worse it was than this perfect solution, a measure known as suboptimality.
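Suboptimality here is simply the percentage gap between a method's completion time and the known optimum, as in this small sketch (the makespan values are illustrative, not the paper's data):

```python
def suboptimality(method_makespan: float, optimal_makespan: float) -> float:
    """Percentage by which a scheduler's makespan exceeds the known optimum."""
    return 100.0 * (method_makespan - optimal_makespan) / optimal_makespan

# Illustrative numbers only: if the optimal schedule finishes in 100 s and a
# baseline takes 118 s, that baseline is 18% suboptimal.
print(suboptimality(118.0, 100.0))   # 18.0
print(suboptimality(100.8, 100.0))   # 0.8 -- in the ballpark of LVWS
```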

A demonstration of the LVWS method in a manufacturing assembly case study run in a Gazebo simulation.

The comparison methods ranged from 11.8% to 23% suboptimal. The new LVWS method was 0.8% suboptimal. “So, the solution is close to the best possible or theoretical solution,” said Williard Jose, an author on the paper and a doctoral student in computer science at the Human-Centered Robotics Lab, in a statement.

The team has also demonstrated this method running on real-world robots.

Worth the wait

A common question the research team gets is, “How does making a robot wait make the whole team faster?”

Jose responds by describing this scenario: There are three robots — two that can lift four pounds each and one that can lift 10 pounds. One of the small robots is busy with a different task and there is a seven-pound box that needs to be moved.

“Instead of that big robot performing that task, it would be more beneficial for the small robot to wait for the other small robot and then they do that big task together because that bigger robot’s resource is better suited to do a different large task,” Jose explained.
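A back-of-the-envelope version of that scenario makes the point: with assumed (not reported) task durations, letting the small robots team up on the box while the big robot handles the other large task finishes sooner than the greedy schedule.

```python
# Rough comparison of the two schedules in Jose's example.
# All task durations are assumptions made purely for illustration.

busy_small_free_at = 5    # small robot 1 finishes its current task at t=5
box_duration = 10         # moving the 7 lb box
heavy_duration = 20       # the other large task, which only the big robot can do

# Greedy: the big robot grabs the box immediately, then does the heavy task.
greedy_makespan = box_duration + heavy_duration             # 30

# Voluntary waiting: small robot 2 waits until t=5, then both small robots
# carry the box, while the big robot starts the heavy task at t=0 in parallel.
waiting_makespan = max(busy_small_free_at + box_duration,   # 15
                       heavy_duration)                      # 20

print(greedy_makespan, waiting_makespan)  # 30 20 -- waiting wins
```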

Zhang hopes this work will advance the development of teams of heterogeneous robots, particularly for scaling up to large industrial environments that require specialized tasks.
