Google Mastered a Game That Vexed Scientists - and Their Machines - for Decades

By Matt McFarland, The Washington Post | Updated: 1 February 2016 16:16 IST
Artificial intelligence took a historic step forward last week when a Google team announced that it taught a machine to master the ancient Chinese game Go, a feat researchers have chased for decades.

While computers learned to outclass humans at checkers and chess in the '90s, Go - a 2,500-year-old game - was still vexing computer scientists. Because the game offers players a nearly infinite number of moves - and is difficult to score in the middle of a match - it has proved to be the most difficult of classic games to teach computers to play.

But that all changed last week as Google's researchers brought a fresh approach and wealth of computing power to findings published in the scientific journal Nature.

"It's a real milestone and surprise for me how quickly things have happened," said Martin Muller, a professor at the University of Alberta and longtime researcher of Go. A decade ago, his work helped computers draw closer to the caliber of human players, which Google then used in its approach. The company's researchers "have these new ideas, and they showed they're very effective."

The Google team hopes that in the long term, the technology behind the breakthrough can be applied to society's most challenging problems, including making medical diagnoses and modeling climates.

Such efforts are years away, the researchers admit. In the near term, they're looking to integrate the work into smartphone assistants - think of iPhone's Siri or Google's voice assistant.

In Go, players place black and white stones on a grid to spread across open areas and surround their opponent's pieces. If you surround your foe's stone, it's removed from the board. The player with the most territory wins.
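The capture rule described above can be sketched in code. The following is a simplified illustration, not a full rules engine (no ko rule, no scoring): a stone, or a connected group of stones, is removed when it has no empty adjacent points, called "liberties."

```python
# Toy illustration of Go's capture rule: a stone (or connected group)
# with no empty adjacent points ("liberties") is removed from the board.
# The board is a dict mapping (row, col) -> 'B' or 'W'.

def group_and_liberties(board, size, start):
    """Flood-fill the connected group at `start`; return (group, liberties)."""
    color = board[start]
    group, liberties, stack = set(), set(), [start]
    while stack:
        pt = stack.pop()
        if pt in group:
            continue
        group.add(pt)
        r, c = pt
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                neighbor = board.get((nr, nc))
                if neighbor is None:
                    liberties.add((nr, nc))   # empty point: a liberty
                elif neighbor == color:
                    stack.append((nr, nc))    # same color: part of the group
    return group, liberties

def capture_dead_stones(board, size, color):
    """Remove every group of `color` that has no liberties."""
    removed = set()
    for pt in [p for p, c in board.items() if c == color]:
        if pt in removed or pt not in board:
            continue
        group, libs = group_and_liberties(board, size, pt)
        if not libs:
            removed |= group
            for g in group:
                del board[g]
    return removed

# A white stone at (1, 1) surrounded by black on all four sides is captured.
board = {(1, 1): 'W', (0, 1): 'B', (2, 1): 'B', (1, 0): 'B', (1, 2): 'B'}
captured = capture_dead_stones(board, 5, 'W')
```

On a real 19-by-19 board the same flood-fill logic applies; only the grid size changes.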

Google's system swept the European Go champion, Fan Hui, 5-0, in a match refereed by the British Go Association. It is the first time a computer has beaten a professional player in a game on a full-size board, without a handicap. (The game is sometimes played on a smaller grid with fewer intersections, which is easier for a machine to master.) Google's technology relied on the strength of more than 1,200 cloud computers in warehouses around the globe.

Google's system was trained on 30 million moves players made in actual games of Go. Then the system began to play games against itself, using trial and error, to recognize which moves work in a given situation and which don't. While a human may master Go with thousands of games of experience, the computer system relied on millions of matches.
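The trial-and-error self-play idea can be shown on a much smaller scale. The sketch below uses tabular Q-learning on a toy subtraction game (take 1 to 3 stones; whoever takes the last stone wins). This is only an illustration of the general technique; AlphaGo's actual method combined deep neural networks with Monte Carlo tree search.

```python
# A minimal self-play learner: the program plays the game against itself
# many times and updates value estimates from wins and losses, the same
# trial-and-error principle the article describes at toy scale.
import random

random.seed(0)
Q = {}  # (stones_left, move) -> estimated value for the player to move

def best_move(stones, eps):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:                 # explore a random move
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))  # exploit

def self_play_game(eps=0.2, alpha=0.5):
    stones = 21
    history = []  # (state, move) pairs, players alternating
    while stones > 0:
        m = best_move(stones, eps)
        history.append((stones, m))
        stones -= m
    # Whoever made the last move wins; propagate +1/-1 back through
    # the game, flipping sign because the players alternate.
    reward = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + alpha * (reward - old)
        reward = -reward

for _ in range(20000):
    self_play_game()
```

After thousands of self-played games the table learns, for instance, that taking all remaining stones when one to three are left is an immediate win; no rule for this was ever programmed in.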

Last week's feat has drawn comparisons to when IBM's Deep Blue computer beat chess champion Garry Kasparov in 1997. It also brings to mind IBM Watson's system, which has trumped humans at "Jeopardy."

Like Deep Blue, Google's system relies on its ability to process millions of scenarios. But Google's computers do more than just memorize every possible outcome. They learn through trial and error, just like humans do. That makes the innovation more applicable to a wide array of tasks. Google showed the power of this approach last year when one of its systems taught itself to be better at Atari games than humans.

"My dream is to use these types of general learning systems to help with science," said Demis Hassabis, who leads DeepMind, the London-based Google team behind the findings. "You can think of AI scientists or AI-assisted science working hand in hand with human expert scientists to help them in a complementary way to make faster breakthroughs in scientific endeavors."

While Deep Blue's defeat of Kasparov drew plenty of headlines, the science behind it has not had broad implications for humanity in the 19 years since.

"This feels like it could be different, because there's more generality in the methods," Muller said. "There's potential to have applicability to many other things." He also cautioned that just as Go was significantly tougher than mastering chess, making predictions in real-world situations will bring another challenge for Google's researchers.

In March, the Google system - called AlphaGo - will take on Lee Sedol, arguably the top Go player in the world, in a five-game match in Seoul. That could be its "Kasparov moment."

"I heard Google DeepMind's artificial intelligence is surprisingly strong and getting stronger," Sebol said in a statement. "But I am confident that I can win at least this time."

© 2016 The Washington Post

 
