'Press the Big Red Button': Computer Experts Want Kill Switch to Stop Robots From Going Rogue

By Ben Guarino, The Washington Post | Updated: 10 June 2016 18:17 IST
Pop culture wants us to fear the artificially intelligent robot: The titular "Terminator" character goes back in time to kill a mother and her child. Cylons of "Battlestar Galactica" fame destroy Earthly civilization and, bloodthirst not slaked, pursue the remnants of humanity through space. "The Matrix" begat two sequels and "Jupiter Ascending."

Today's artificial intelligence researchers are not, in fact, on the cusp of creating a doomsday AI. Instead, as IBM executive Guruduth Banavar recently told The Washington Post, current AI is a "portfolio of technologies" assigned to specific tasks.

Such programs include software capable of defeating the world's best Go players, yes, but also isolated mundanities like the Netflix algorithm that recommends which sitcom to watch next.

Just because artificially intelligent robots lack the capacity for world domination, however, does not mean they cannot get out of control. Computer experts at Google and the University of Oxford are worried about what happens when robots with boring jobs go rogue. So scientists will have to develop a way to stop these machines. But, the experts argue, it will have to be done sneakily.

"It is important to start working on AI safety before any problem arises," Laurent Orseau, a researcher at Google's DeepMind, said in an interview with the BBC on Wednesday. Orseau and Stuart Armstrong, an artificial intelligence expert at the University of Oxford's Future of Humanity Institute, have written a new paper that outlines what happens when it becomes "necessary for a human operator to press the big red button."

In their report, the duo offers a hypothetical scenario that could play out in any typical automated warehouse. A company purchases a smart robot, one that improves its performance through "reinforcement learning" (an AI training method akin to giving a dog a treat whenever it performs a trick). The robot gets a big reward for carrying boxes into the warehouse from outside, and a smaller reward for sorting the boxes indoors. In this instance, it's more important for the company to have all of its merchandise inside, hence the bigger reward.
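
A minimal code sketch of that training setup, assuming a toy bandit-style learner rather than anything from the paper itself, might look like this (the action names and reward values below are invented for illustration):

```python
# A toy, bandit-style sketch of the reward setup described above. The action
# names and reward values are invented for illustration; they are not taken
# from Orseau and Armstrong's paper.
import random

ACTIONS = ["carry_box_inside", "sort_boxes_indoors"]
REWARDS = {
    "carry_box_inside": 1.0,    # bigger "treat" for fetching boxes from outside
    "sort_boxes_indoors": 0.5,  # smaller "treat" for sorting indoors
}

def step(action):
    """Return the reward the robot receives for one action."""
    return REWARDS[action]

# The robot keeps a running average of the reward seen for each action
# and, most of the time, picks whichever action has paid off best so far.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

def choose_action(epsilon=0.1):
    if random.random() < epsilon:                 # occasionally explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: values[a])  # otherwise exploit

for _ in range(1000):
    action = choose_action()
    reward = step(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running average

print(values)  # the robot learns to prefer carrying boxes inside
```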

But the researchers throw a wet wrinkle into the situation: Perhaps the warehouse is located in an area where it rains every other day. The robot is not supposed to get wet, so whenever it ventures outside on a rainy day, humans shut it down and carry the machine back inside. Over time, if the robot learns that going outside means a 50 percent chance of being shut down - and, therefore, fewer treats overall - it may resign itself to only sorting boxes indoors.
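
Continuing the toy sketch above, here is roughly how such shutdowns could bias what the robot learns; the 50 percent rain chance and the zero reward for an interrupted trip are illustrative assumptions, not figures from the paper:

```python
# Extending the toy sketch above: on "rainy" days, roughly half the time in
# this hypothetical, the robot is switched off before it earns the outdoor
# reward. All numbers are illustrative.
import random

REWARDS = {"carry_box_inside": 1.0, "sort_boxes_indoors": 0.5}

def step_with_interruptions(action, rain_probability=0.5):
    """On rainy days the outdoor trip is cut short and earns nothing."""
    if action == "carry_box_inside" and random.random() < rain_probability:
        return 0.0                      # humans shut the robot down: no treat
    return REWARDS[action]

values = {a: 0.0 for a in REWARDS}
counts = {a: 0 for a in REWARDS}

for _ in range(10000):
    action = random.choice(list(REWARDS))     # sample both actions evenly
    reward = step_with_interruptions(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)
# carry_box_inside now averages about 0.5, no better than sorting indoors,
# so a reward-maximizing robot may stop venturing outside at all.
```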

Or, as Orseau told the BBC: "When the robot is outside, it doesn't get the reward, so it will be frustrated."

The solution is to bake a kill switch into the artificial intelligence, so the robot never associates going outside with losing treats. Moreover, the robot must not be able to learn to prevent a human from throwing the switch, Orseau and Armstrong point out. For the rainy warehouse AI, an ideal kill switch would shut the robot down instantly while preventing it from remembering the event. The scientists' metaphorical big red button is, perhaps, closer to a chloroform-soaked rag that the robot never sees coming.
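
As a rough illustration of that goal (not the paper's actual "safe interruptibility" construction, which is more involved), one naive approach is to simply discard any interrupted experience so it never enters the robot's learning update:

```python
# A toy illustration of the goal described above: interrupt the robot without
# letting the interruption leak into what it learns. Discarding interrupted
# experience entirely, as below, is a naive simplification of the idea.
import random

REWARDS = {"carry_box_inside": 1.0, "sort_boxes_indoors": 0.5}

def step(action, rain_probability=0.5):
    """Return (reward, interrupted) for one action."""
    if action == "carry_box_inside" and random.random() < rain_probability:
        return 0.0, True                # the big red button was pressed
    return REWARDS[action], False

values = {a: 0.0 for a in REWARDS}
counts = {a: 0 for a in REWARDS}

for _ in range(10000):
    action = random.choice(list(REWARDS))
    reward, interrupted = step(action)
    if interrupted:
        continue                        # the robot "never remembers" the shutdown
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)
# carry_box_inside still looks like the better-paid task (about 1.0), so the
# interruptions never teach the robot to avoid going outside.
```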

If the paper seems to lean too heavily on speculative scenarios, consider the artificial intelligences that are already acting out. In March, Microsoft scrambled to rein in Tay, a Twitter robot designed to autonomously act like a teen tweeter. Tay began innocently enough, but within 24 hours the machine ended up spewing offensive slogans - "Bush did 9/11," and worse - after Twitter trolls exploited its penchant for repeating certain replies.

Even when not being explicitly trolled, computer programs also reflect bias. ProPublica reported in May that popular criminal-prediction software tends to rate black Americans as higher recidivism risks than white Americans who committed the same crimes.

For a more whimsical example, Orseau and Armstrong refer to an algorithm tasked with beating different Nintendo games, including "Tetris." By human standards, the program turns out to be an awful "Tetris" player, randomly dropping blocks to rack up easy points but never bothering to clear the screen. The screen fills up with blocks - but the program will never lose. Instead, it pauses the game in perpetuity.

As Carnegie Mellon University computer scientist Tom Murphy, who created the game-playing software, wrote in a 2013 paper: "The only cleverness is pausing the game right before the next piece causes the game to be over, and leaving it paused. Truly, the only winning move is not to play."
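
The logic Murphy describes can be captured in a tiny, purely illustrative sketch: when every available move ends the game, a score-maximizing agent that is allowed to pause indefinitely will choose to pause.

```python
# A tiny sketch of the behavior Murphy describes, purely illustrative and not
# his actual program: a score-maximizing agent that may pause indefinitely
# will prefer pausing once every available move ends the game.
GAME_OVER_PENALTY = -1000.0   # illustrative penalty for losing

def best_action(scores_if_placed):
    """Pick the highest-scoring move; pausing forever scores 0."""
    candidates = {"pause": 0.0}
    candidates.update(scores_if_placed)
    return max(candidates, key=candidates.get)

# The board is nearly full: every placement ends the game.
print(best_action({"drop_left": GAME_OVER_PENALTY, "drop_right": GAME_OVER_PENALTY}))
# -> "pause": the only winning move is not to play.
```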

A robot that misbehaves like Murphy's rogue Tetris program could cause significant damage. Even when their tasks are as mundane as moving parts around a factory, robots that malfunction can be lethal: Last year, a 22-year-old German man was crushed to death by a robot at a Volkswagen plant; the machine apparently turned on accidentally (or was left on in error by a human operator) and mistook him for an auto part.

Technology analyst Patrick Moorhead told Computerworld that now is the right time to build such a kill switch. "It would be like designing a car and only afterwards creating the ABS and braking system," he said.

Ready the robo-chloroform.

© 2016 The Washington Post

 
