Physical Contact—A Touching New Algorithm for Robots
Penn Engineers have created a new algorithm that allows robots to react to complex physical contact in real time, enabling autonomous robots to succeed at previously impossible tasks. Known as “consensus complementarity control” (C3), the algorithm could be an essential building block of future robots, translating directions from the output of AI tools, like large language models (LLMs), into action.
“Your large language model might say, ‘Go chop an onion,’” says Michael Posa, Assistant Professor in Mechanical Engineering and Applied Mechanics (MEAM) and a core faculty member of the General Robotics, Automation, Sensing and Perception (GRASP) Lab. “How do you move your arm to hold the onion in place, to hold the knife, to slice through it in the right way, to reorient it when necessary?”
Control, or the intelligent use of a robot’s actuators, is difficult but essential. Manipulating objects and moving from place to place, even around obstacles, are among the first skills humans learn. Robots, by contrast, work well until they start touching things. The challenge lies in choosing the sequence of contacts. As William Yang, a recent doctoral graduate of Posa’s Dynamic Autonomy and Intelligent Robotics (DAIR) Lab, explained, “Where do you put your hand on the environment? Where do you put your foot on the environment?”
Something as simple as picking up a cup is based on many different choices. Until now, no algorithm has allowed robots to assess all those choices and make an appropriate decision in real time. The research team devised a way to help robots “hallucinate” the different possibilities that might arise when contacting an object. “By imagining the benefits of touching things, you get gradients in your algorithm that correspond to that interaction,” says Posa. “And then you can apply some style of gradient-based algorithm and in the process of solving that problem, the physics gradually becomes more and more accurate over time to where you’re not just imagining, ‘What if I touch it?’ but you’re actually planning to go out and touch it.” Using C3, Yang demonstrated how a robotic arm can safely manipulate a tray, similar to one a server might use at a restaurant.
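Posa’s description, imagining contact to obtain gradients and then tightening the physics until the plan involves real contact, can be illustrated with a toy sketch. The example below is not the C3 algorithm itself; it is a hypothetical one-dimensional “pusher” that uses a smoothed (softplus) contact model, finite-difference gradients, and an annealed smoothing parameter so that the soft physics gradually approaches hard contact. All function names and constants here are illustrative.

```python
import math

def soft_contact_force(gap, smoothing):
    """Smoothed unilateral contact force: smoothing * softplus(-gap / smoothing).

    With a large `smoothing`, a hand that is merely *near* the box already
    "feels" a small force, so gradients exist before contact (the "imagined"
    benefit of touching). As `smoothing` shrinks, the model approaches hard
    contact: no force at a distance, stiff resistance to penetration.
    """
    x = -gap / smoothing
    softplus = x if x > 30 else math.log1p(math.exp(x))  # numerically stable
    return smoothing * softplus

def rollout(controls, smoothing, dt=0.1):
    """Roll out a 1-D pusher: the hand follows the commanded velocities,
    and the box slides quasi-statically under the smoothed contact force."""
    hand, box = 0.0, 1.0  # hand starts to the left of the box
    for u in controls:
        hand += u * dt
        box += soft_contact_force(box - hand, smoothing) * dt
    return box

def cost(controls, smoothing, target=2.0):
    """Task error (box-to-target distance) plus a small effort penalty."""
    err = rollout(controls, smoothing) - target
    return err * err + 1e-3 * sum(u * u for u in controls)

def plan(steps=10, iters=60, lr=2.0, eps=1e-5):
    """Gradient-based planning with annealed contact smoothing."""
    controls = [0.0] * steps
    for k in range(iters):
        # Anneal: the physics becomes "more and more accurate over time".
        smoothing = max(0.02, 0.95 ** k)
        base = cost(controls, smoothing)
        grad = []
        for i in range(steps):  # finite-difference gradient per control
            controls[i] += eps
            grad.append((cost(controls, smoothing) - base) / eps)
            controls[i] -= eps
        controls = [u - lr * g for u, g in zip(controls, grad)]
    return controls
```

Early iterations, where the smoothed force acts at a distance, give the optimizer a useful gradient toward contact; later iterations, under nearly hard physics, refine a plan that genuinely touches and pushes the box.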
“This is a building block that can go from a pretty simple specification — make this part go over there — and distill that down to the motor torque that the robot is going to need to achieve that,” says Posa. “Going from a very, very complicated, messy world down to the key sets of objects or features or dynamical properties that matter for any given task, that’s the open question we’re interested in.”
Videos of the algorithm and the robot in action are available in the original story.