Robotic leg learns to walk: a limb that can teach itself how to move

Most robots have to be programmed to perform specific, repetitive tasks, from making coffee and pizza to giving high fives. But what if robots could improve over time, like toddlers who turn unsteady steps into confident sprints?

That’s the goal of two USC researchers who built a 3-tendon, 2-joint robotic limb that can teach itself how to move through trial and error. “We want to reverse-engineer brains and bodies and create awesome robots,” says Dr. Francisco Valero-Cuevas, professor of Biomedical Engineering and professor of Biokinesiology & Physical Therapy.

He and USC Viterbi School of Engineering doctoral student Ali Marjaninejad just published a paper on their work, which made the cover of the March issue of Nature Machine Intelligence. We spoke to them ahead of its publication; here are edited and condensed excerpts of our conversation.

Can you explain how your new robotic limb ‘learns’ to walk?
[Ali Marjaninejad] We call this algorithm General-to-Particular, or G2P, because we begin by letting the system play at random to internalize the general properties of the leg, like children [learning to walk]. We then give it a reward every time it approaches good performance of a given task; in this case, moving the treadmill forward. This is called reinforcement learning, as it is similar to the way animals respond to positive reinforcement.
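
To make the two phases concrete, here is a minimal, hypothetical sketch of the G2P idea in Python. The real system uses a physical three-tendon, two-joint leg and a neural-network model; the toy simulated limb, the linear inverse map, and the simple hill-climbing reward loop below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "limb" the learner knows nothing about: 3 tendon activations -> 2-D foot motion.
TRUE_MAP = rng.normal(size=(2, 3))

def limb(activations):
    """Simulated limb: returns a noisy (forward, vertical) foot displacement."""
    return TRUE_MAP @ activations + 0.01 * rng.normal(size=2)

# Phase 1: motor babbling (the "general" part).
# Play at random and record which activations produce which movements.
babble_acts = rng.uniform(0.0, 1.0, size=(200, 3))
babble_moves = np.array([limb(a) for a in babble_acts])

# Fit a rough inverse map: desired movement -> tendon activations.
inverse_map, *_ = np.linalg.lstsq(babble_moves, babble_acts, rcond=None)

# Phase 2: reward-driven refinement (the "particular" part).
# Reward = forward displacement, standing in for "moving the treadmill forward".
target = np.array([1.0, 0.0])              # desired movement: forward, no lift
best_acts = target @ inverse_map           # first guess straight from babbling
best_reward = limb(best_acts)[0]

for attempt in range(50):
    trial = np.clip(best_acts + 0.05 * rng.normal(size=3), 0.0, 1.0)
    reward = limb(trial)[0]                # how far forward this attempt got
    if reward > best_reward:               # positive reinforcement: keep it
        best_acts, best_reward = trial, reward

print("learned activations:", np.round(best_acts, 3), "reward:", round(best_reward, 3))
```

The point of the sketch is the ordering: random babbling first gives a rough, general model of the leg, and only then does reward shaping push it toward one particular task.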

Talk us through the experiment, which uses something called motor babbling. Is it like newborn ponies, who ‘figure out’ how to run as soon as possible to avoid predators?
[AM] The process is two-step: first babble, then perform. Looked at in more detail, this has interesting consequences. First, it allows fast learning of good-enough solutions, like ponies who need to walk as soon as possible. At another level, the motor babbling is similar to how animals train lower parts of the nervous system, such as the spinal cord, which is what controls muscles directly. So the babbling creates a pre-tuned system that a “high-level” controller can then use, like how your brain uses your spinal cord to control your body. If you combine these two, the robot will learn to walk quickly, even if not very well. From there, the algorithm keeps refining how to exploit the complex dynamics of the system, improving its performance every time it does the task, just like you and I do.
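
The “improves every time it does the task” point can be sketched the same way: each attempt’s movement and activation data are folded back into the babbled data and the map is re-fit, so experience accumulates. This is a hypothetical continuation of the toy sketch above, not the published code, and the function name is my own.

```python
import numpy as np

def refine_inverse_map(moves, acts, new_move, new_acts):
    """Fold one attempt's (movement, activation) pair back into the data
    and re-fit the movement -> activation map, so each repetition of the
    task nudges the controller a little further."""
    moves = np.vstack([moves, new_move])
    acts = np.vstack([acts, new_acts])
    new_map, *_ = np.linalg.lstsq(moves, acts, rcond=None)
    return moves, acts, new_map

# Example: start from a handful of babbled samples, then add one new attempt.
rng = np.random.default_rng(1)
moves, acts = rng.normal(size=(5, 2)), rng.uniform(size=(5, 3))
moves, acts, inverse_map = refine_inverse_map(
    moves, acts, rng.normal(size=2), rng.uniform(size=3))
print(inverse_map.shape)   # (2, 3): maps a 2-D movement to 3 tendon activations
```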

And this is significantly different from current robotic controllers?
[AM] Yes. This is in contrast to how robots are controlled today, mostly relying on exact equations, sophisticated computer simulations, or thousands of repetitions to refine a task. Nature does not have this luxury of time; animals need to learn quickly to do things well enough to live another day.
