
Why DeepMind Is Sending AI Humanoids to Soccer Camp

by Shashank

“It didn’t really work,” says Nicolas Heess, a DeepMind researcher and one of the paper’s co-authors. Due to the complexity of the problem, the vast range of options available, and the lack of prior knowledge of the task, agents had no idea where to start.

Instead, Heess, Lever, and colleagues used neural probabilistic motor primitives (NPMP). NPMP is a training method that nudges AI models toward more human-like movement patterns, in the hope that this underlying knowledge will help them solve the problem of navigating the virtual soccer field. “It basically biases motor control toward realistic human behavior, realistic human movement,” says Lever. “And that’s learned from motion capture, in this case a human actor playing soccer.”

This “reconfigures the action space,” says Lever. The agent’s movements are already constrained by its human-like body and joints that can only bend in certain ways, and exposure to data from real humans constrains them further, which helps simplify the problem. “It makes it more likely that trial and error will discover something useful,” says Lever. In short, NPMP speeds up the learning process. There is a “fine balance” to strike between teaching the AI to do things the way humans do and giving it enough freedom to discover its own solutions to problems.
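
The article doesn’t show DeepMind’s implementation, but a minimal sketch can illustrate the idea of a policy exploring in the compact latent space of a pretrained motion prior instead of outputting raw joint torques. Everything below (the MotionPrior class, its dimensions, and the placeholder weights) is a hypothetical illustration, not code from the paper.

```python
import numpy as np

class MotionPrior:
    """Hypothetical stand-in for an NPMP-style low-level controller.

    A decoder, pretrained on motion-capture clips, maps a compact latent
    "intention" vector plus the current body state to joint torques, so
    the high-level policy never has to output raw torques itself.
    """

    def __init__(self, latent_dim=16, state_dim=56, torque_dim=56):
        rng = np.random.default_rng(0)
        # Placeholder weights; a real prior would be a trained neural network.
        self.w = rng.normal(scale=0.1, size=(latent_dim + state_dim, torque_dim))

    def decode(self, latent, body_state):
        x = np.concatenate([latent, body_state])
        return np.tanh(x @ self.w)  # bounded torques, human-scale actuation


def step_with_prior(policy, prior, observation, body_state):
    # The policy explores in the prior's small latent space, so even random
    # trial and error tends to produce coordinated, human-like movement.
    latent = policy(observation)          # e.g. a 16-D vector, not 56 raw torques
    return prior.decode(latent, body_state)


# Toy usage with a random "policy".
prior = MotionPrior()
random_policy = lambda obs: np.random.default_rng(1).normal(size=16)
torques = step_with_prior(random_policy, prior, observation=None, body_state=np.zeros(56))
```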

Basic training was followed by single-player drills (running, dribbling, kicking the ball), mimicking how humans learn a new sport before diving into full match situations. The reinforcement learning rewards were for successfully following a target without the ball, or for dribbling the ball near a target. This curriculum of skills was a natural way to build up to increasingly complex tasks, Lever says.
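
As a rough illustration of the kind of shaped rewards described here, the functions below score the “follow a target without the ball” and “dribble the ball toward a target” drills. The exact reward definitions are assumptions for the sketch, not the paper’s.

```python
import numpy as np

def follow_reward(agent_pos, target_pos, tol=0.5):
    """Dense reward for the 'follow a target without the ball' drill (assumed shaping)."""
    dist = np.linalg.norm(agent_pos - target_pos)
    return 1.0 if dist < tol else -dist   # bonus when close, growing penalty when far

def dribble_reward(ball_pos, prev_ball_pos, target_pos):
    """Reward progress of the ball toward a target while dribbling (assumed shaping)."""
    prev_dist = np.linalg.norm(prev_ball_pos - target_pos)
    new_dist = np.linalg.norm(ball_pos - target_pos)
    return prev_dist - new_dist            # positive when the dribble moves the ball closer

# Example: one step where the agent nudged the ball half a metre toward the target.
print(dribble_reward(np.array([3.0, 0.0]), np.array([3.5, 0.0]), np.array([0.0, 0.0])))
```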

The aim was to encourage the agents to reuse skills learned outside the soccer context within the soccer environment, so that the strategies learned in the drills could be generalized and flexibly switched between. Agents that had mastered these drills were then used as teachers. Just as the AI was encouraged to mimic what it had learned from human motion capture, it was also rewarded, at least initially, for not deviating too far from the strategies the teacher agents used in a given scenario. “This is really a parameter of the algorithm that is optimized during training,” says Lever. “Over time, they can in principle become less dependent on their teachers.”
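
One common way to implement “don’t deviate too far from the teacher” is to add a regularization term to the learning objective whose weight is itself tuned over the course of training. The sketch below assumes that formulation; the names regularized_loss, beta, and the decay schedule are illustrative, not DeepMind’s.

```python
import numpy as np

def regularized_loss(rl_loss, student_logp, teacher_logp, beta):
    """RL objective plus a penalty for straying from the teacher agent.

    `student_logp` and `teacher_logp` are the log-probabilities each policy
    assigns to the actions actually taken; their mean difference is a
    simple KL-style estimate of how far the student has drifted.
    """
    stay_close = np.mean(student_logp - teacher_logp)
    return rl_loss + beta * stay_close

def beta_schedule(step, beta0=1.0, decay=1e-5):
    # Treat the weight as a parameter adjusted during training: start
    # heavily regularized, then relax so the agent depends less on teachers.
    return beta0 * np.exp(-decay * step)
```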

With the virtual players trained, it was time for some competitive action. Matches started with 2v2 and 3v3 games, mimicking how real young players start with small-sided games, and maximizing the amount of experience each agent accumulates in every round of simulation. The highlights, which you can see here, have the chaotic energy of a dog chasing a ball in the park: when a goal is scored, it’s not through complex passing moves, but through hopeful upfield punts and foosball-like rebounds off the back wall.
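
A toy sketch of that small-sided self-play stage might look like the following; play_match is a hypothetical stand-in for the soccer simulator, and the stub usage at the end just shows the shape of the loop rather than any real training.

```python
import random

def small_sided_stage(agents, team_size=2, rounds=3, play_match=None):
    """Sketch of the small-sided self-play stage (2v2, then 3v3).

    With fewer players per match, each agent is involved in more of the
    play, so every simulated round yields more experience per agent.
    `play_match` is a hypothetical hook standing in for the simulator.
    """
    results = []
    for _ in range(rounds):
        squad = random.sample(agents, 2 * team_size)
        home, away = squad[:team_size], squad[team_size:]
        results.append(play_match(home, away))
    return results

# Toy usage with a stub "simulator" that just records the line-ups.
agents = [f"agent_{i}" for i in range(8)]
print(small_sided_stage(agents, team_size=2, play_match=lambda h, a: (h, a, "draw")))
```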
