Google Mastered a Game That Vexed Scientists - and Their Machines - for Decades

By Matt McFarland, The Washington Post | Updated: 1 February 2016 16:16 IST
Artificial intelligence took a historic step forward last week when a Google team announced that it taught a machine to master the ancient Chinese game Go, a feat researchers have chased for decades.

While computers learned to outclass humans at checkers and chess in the '90s, Go - a 2,500-year-old game - was still vexing computer scientists. Because the game offers players a nearly infinite number of moves - and is difficult to score in the middle of a match - it has proved to be the most difficult of classic games to teach computers to play.

But that all changed last week, as Google's researchers brought a fresh approach and a wealth of computing power to findings published in the scientific journal Nature.

"It's a real milestone and surprise for me how quickly things have happened," said Martin Muller, a professor at the University of Alberta and longtime researcher of Go. A decade ago, his work helped computers draw closer to the caliber of human players, which Google then used in its approach. The company's researchers "have these new ideas, and they showed they're very effective."

The Google team hopes that in the long term, the technology behind the breakthrough can be applied to society's most challenging problems, including making medical diagnoses and modeling climates.

Such efforts are years away, the researchers admit. In the near term, they're looking to integrate the work into smartphone assistants - think of iPhone's Siri or Google's voice assistant.

In Go, players place black and white stones on a grid, spreading across open areas and surrounding their opponent's pieces. A stone or group of stones that is completely surrounded is removed from the board. The player with the most territory wins.
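
For readers curious how that capture rule looks in concrete terms, here is a minimal Python sketch, assuming a simple list-of-lists board with 'B', 'W' and '.' for empty points. It is a toy illustration, not code from Google's system: a connected group with no empty neighboring point is lifted off the board.

    def capture_if_surrounded(board, row, col):
        # board is a list of lists holding 'B', 'W' or '.' (empty point)
        color = board[row][col]
        if color == '.':
            return
        group, liberties, stack = set(), set(), [(row, col)]
        while stack:
            r, c = stack.pop()
            if (r, c) in group:
                continue
            group.add((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < len(board) and 0 <= nc < len(board[0]):
                    if board[nr][nc] == '.':
                        liberties.add((nr, nc))    # an open point keeps the group alive
                    elif board[nr][nc] == color:
                        stack.append((nr, nc))     # same-colored neighbor joins the group
        if not liberties:                          # fully surrounded: remove the whole group
            for r, c in group:
                board[r][c] = '.'

    # Example: a lone black stone hemmed in by white on a tiny corner of the board.
    board = [list(".W."), list("WBW"), list(".W.")]
    capture_if_surrounded(board, 1, 1)
    print(board)   # the black stone in the center has been captured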

Google's system swept the European Go champion, Fan Hui, 5-0, in a match refereed by the British Go Association. It was the first time a computer had beaten a professional player in a game on a full-size board, without a handicap. (The game is sometimes played on a smaller board with a reduced grid, which is easier for a machine to master.) Google's technology relied on the strength of more than 1,200 cloud computers in warehouses around the globe.

Google's system was trained on 30 million moves players made in actual games of Go. Then the system began to play games against itself, using trial and error, to recognize which moves work in a given situation and which don't. While a human may master Go with thousands of games of experience, the computer system relied on millions of matches.
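
As a rough illustration of that two-stage recipe, the Python toy below first builds move preferences by imitating a handful of recorded expert choices, then keeps adjusting them through thousands of simulated games of trial and error. The game states, moves, expert data and win rates are all invented for the example; it is a caricature of the idea, not DeepMind's code or the neural-network approach described in the Nature paper.

    import random
    from collections import defaultdict

    # Phase 1: imitation. Count how often (hypothetical) experts chose each move
    # in each situation and use the counts as initial preferences.
    expert_moves = [("corner_fight", "extend"), ("corner_fight", "extend"),
                    ("corner_fight", "capture"), ("open_center", "approach")]
    preference = defaultdict(lambda: defaultdict(float))
    for state, move in expert_moves:
        preference[state][move] += 1.0

    def choose(state, moves, explore=0.1):
        # Trial and error: usually play the best-looking move, sometimes explore.
        if random.random() < explore or not preference[state]:
            return random.choice(moves)
        return max(moves, key=lambda m: preference[state][m])

    def simulated_result(move):
        # Stand-in for playing out a full game; "capture" wins more often here.
        return random.random() < (0.7 if move == "capture" else 0.4)

    # Phase 2: self-play refinement. Nudge preferences toward moves that led to wins.
    for _ in range(10000):
        state = random.choice(["corner_fight", "open_center"])
        move = choose(state, ["extend", "capture", "approach"])
        preference[state][move] += 1.0 if simulated_result(move) else -0.5

    print({s: dict(p) for s, p in preference.items()})

Google's actual system replaces these hand-counted preferences with deep neural networks and pairs the learned judgment with a search over possible future positions.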

Last week's feat has drawn comparisons to when IBM's Deep Blue computer beat chess champion Garry Kasparov in 1997. It also brings to mind IBM's Watson system, which trumped humans at "Jeopardy!"

Like Deep Blue, Google's system relies on its ability to process millions of scenarios. But Google's computers do more than just memorize every possible outcome. They learn through trial and error, just like humans do. That makes the innovation more applicable to a wide array of tasks. Google showed the power of this approach last year when one of its systems taught itself to be better at Atari games than humans.

"My dream is to use these types of general learning systems to help with science," said Demis Hassabis, who leads DeepMind, the London-based Google team behind the findings. "You can think of AI scientists or AI-assisted science working hand in hand with human expert scientists to help them in a complementary way to make faster breakthroughs in scientific endeavors."

While Deep Blue's defeat of Kasparov drew plenty of headlines, the science behind it has not had broad implications for humanity in the 19 years since.

"This feels like it could be different, because there's more generality in the methods," Muller said. "There's potential to have applicability to many other things." He also cautioned that just as Go was significantly tougher than mastering chess, making predictions in real-world situations will bring another challenge for Google's researchers.

In March, the Google system - called AlphaGo - will take on Lee Sedol, arguably the top Go player in the world, in a five-game match in Seoul. That could be its "Kasparov moment."

"I heard Google DeepMind's artificial intelligence is surprisingly strong and getting stronger," Sebol said in a statement. "But I am confident that I can win at least this time."

© 2016 The Washington Post