• 2024-03-31

DeepMind uses an AI robot to play table tennis, defeating real human players

In robotics, motor skills have long been an important touchstone for a robot's capabilities.

Recently, Google's DeepMind research team made new progress in table tennis: they developed an artificial intelligence (AI) driven robotic arm that can play competitive table tennis against amateur human players.

The research paper on this achievement has been published as a preprint. The team stated, "This is the first robotic agent capable of engaging in a sport with humans at a human level, marking a milestone in the field of robot learning and control."

DeepMind's system combines an ABB IRB 1100 industrial robot arm with custom AI software.

The physical setup includes a 6-degree-of-freedom robotic arm mounted on two linear tracks, allowing it to move freely in the horizontal plane. High-speed cameras track the position of the ball, while a motion-capture system monitors the racket movements of the human opponent.

To create the "brain" that drives the robotic arm, DeepMind researchers developed a two-tiered approach, enabling the robot to execute specific table tennis techniques while adjusting its strategy in real time to adapt to each opponent's playing style.

This method allows the robot to compete against any amateur human player without the need for training against each specific player.

The architecture combines a low-level skill controller with a high-level strategic decision-maker, as sketched below. The former is a neural network policy trained to perform specific table tennis techniques, such as forehand strokes, backhand returns, or serve responses; the latter is a more complex AI system that analyzes the game state, adapts to the opponent's style, and selects which low-level skill policy to activate for each incoming ball.
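Here is a minimal Python sketch of this kind of two-tier control. The class names, toy skill policies, and selection heuristic are illustrative assumptions, not DeepMind's implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class BallState:
    position: tuple   # (x, y, z) in metres, from the high-speed cameras
    velocity: tuple   # estimated (vx, vy, vz) in m/s

# A low-level skill maps the incoming ball state to arm joint commands.
LowLevelSkill = Callable[[BallState], List[float]]

def forehand(ball: BallState) -> List[float]:
    return [0.3, -0.1, 0.8]   # placeholder joint targets

def backhand(ball: BallState) -> List[float]:
    return [-0.3, 0.2, 0.7]   # placeholder joint targets

class HighLevelController:
    """Analyzes the game state and picks which low-level skill to run."""

    def __init__(self, skills: Dict[str, LowLevelSkill]):
        self.skills = skills

    def choose(self, ball: BallState) -> str:
        # Toy stand-in for the learned decision-maker: pick a stroke
        # based on which side of the table the ball is approaching.
        return "forehand" if ball.position[0] >= 0 else "backhand"

    def act(self, ball: BallState) -> List[float]:
        return self.skills[self.choose(ball)](ball)

controller = HighLevelController({"forehand": forehand, "backhand": backhand})
ball = BallState(position=(0.4, 1.5, 0.2), velocity=(-0.5, -3.0, 0.5))
print(controller.act(ball))   # -> [0.3, -0.1, 0.8]
```

In the real system, both tiers are learned: the skills are trained policies and the selector reasons over a much richer game state, but the division of labor is the same.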

The researchers employed a hybrid approach to train the AI model: reinforcement learning in a simulated physics environment, with training data grounded in real-world examples. This technique allowed the robot to learn from approximately 17,500 real-world ball trajectories.

However, this is a relatively small dataset for such a complex task, so the research team used an iterative process to refine the robot's skills. They began with a small dataset of human-versus-human play, then had the AI compete against real opponents.

Each match generated new data on ball trajectories and human strategies, which the team fed back into the simulation for further training. This process was repeated for seven cycles, allowing the robot to adapt continuously to increasingly skilled opponents and more diverse playing styles.
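In outline, that train-collect-retrain loop looks roughly like the following sketch. Every helper here is a stand-in for illustration, since the article does not describe the pipeline's actual interfaces:

```python
import random

# Stand-in for RL training in a simulated physics environment.
def train_in_simulation(dataset):
    return {"trained_on": len(dataset)}  # placeholder "policy"

# Stand-in for deploying the policy against human opponents and
# recording the new ball trajectories each match produces.
def play_real_matches(policy, n_rallies=50):
    return [{"rally": i, "trajectory": [random.random() for _ in range(3)]}
            for i in range(n_rallies)]

def iterative_training(seed_dataset, num_cycles=7):
    dataset = list(seed_dataset)               # human-vs-human seed data
    policy = train_in_simulation(dataset)
    for _ in range(num_cycles):                # the team ran seven cycles
        dataset.extend(play_real_matches(policy))
        policy = train_in_simulation(dataset)  # retrain on the enlarged set
    return policy

policy = iterative_training(seed_dataset=[{}] * 100)
print(policy)  # {'trained_on': 450}
```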

By the final round, the AI had learned from over 14,000 ball exchanges and 3,000 serves, building up a body of table tennis knowledge that helped it bridge the gap between simulation and reality.

It is worth noting that Nvidia is also conducting similar experiments with simulated physical systems, such as the company's AI agent Eureka, which allows AI models to quickly learn to control robotic arms in a simulated space rather than the real world.

This approach could significantly reduce the time and resources required to train robots for complex interactions in the future.

In a study involving 29 participants, the AI-driven robot won 45% of its matches, demonstrating solid amateur-level skill. Notably, it achieved a 100% win rate against beginners and a 55% win rate against intermediate players, struggling only against experienced opponents.

Interestingly, even players who lost to the robot said they enjoyed the experience. The researchers pointed out: “Across all skill groups and win rates, players agreed that playing against the robot was fun and engaging.” This positive response suggests that AI has broad potential in sports training and entertainment.

The system is not without limitations. It struggles with very fast or high balls, has difficulty reading heavy spin, and is weak on backhand shots. Google DeepMind shared a video showing the robot losing a point because it could not handle a fast shot.

The researchers pointed out: “To address the latency constraints that hinder the robot's reaction time to fast balls, we suggest researching advanced control algorithms and hardware optimization. This may include exploring predictive models to anticipate the trajectory of the ball, or adopting faster communication protocols between the robot's sensors and actuators.” (A toy version of such a predictive model is sketched below.)

Beyond table tennis, the technology developed for this project can be applied to a wider range of robotic tasks that require rapid response and adaptation to unpredictable human behavior. The potential applications look extensive, from manufacturing to healthcare.
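To give a flavor of the "predictive models" idea, here is a minimal ballistic extrapolation: gravity only, ignoring spin and air drag, with an assumed 50 ms end-to-end latency. This is an illustration, not the paper's method:

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_position(pos, vel, dt):
    """Extrapolate the ball's position dt seconds ahead, gravity only."""
    x, y, z = pos
    vx, vy, vz = vel
    return (x + vx * dt,
            y + vy * dt,
            z + vz * dt - 0.5 * G * dt ** 2)

# Aim the arm where the ball will be after the system's end-to-end delay,
# rather than where the cameras last saw it.
LATENCY = 0.05  # assumed 50 ms sensing-to-actuation latency
target = predict_position(pos=(0.0, 1.2, 0.3), vel=(0.5, -3.0, 1.0), dt=LATENCY)
print(target)  # (0.025, 1.05, 0.3377...)
```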

The Google DeepMind team emphasized that with further refinement, they believe the system could eventually compete with experienced table tennis players.

DeepMind is no stranger to creating AI models capable of defeating human game players, including AlphaZero and AlphaGo. With this latest robotic agent, it appears that the research company is shifting from games to real sports.

Top players in Go, chess, and many video games have already been defeated by AI; perhaps table tennis will be next.
