For a hot minute last week, it looked like we were already on the brink of killer AI.
Several news outlets reported that a military drone attacked its operator after deciding the human stood in the way of its objective. Except it turned out this was a simulation. And then it transpired the simulation itself didn't happen. An Air Force colonel had mistakenly described a thought experiment as real at a conference.
Even so, fibs travel halfway around the world before the truth laces up its boots, and the story is bound to seep into our collective, unconscious worries about AI's threat to the human race, an idea that has gained steam thanks to warnings from two “godfathers” of AI and two open letters about existential risk.
Fears deeply baked into our culture about runaway gods and machines are being triggered — but everyone needs to calm down and take a closer look at what's really going on here.
First, let's acknowledge the cohort of computer scientists who have long believed that AI systems, like ChatGPT, need to be more carefully aligned with human values. They propose that if you design AI systems to follow principles like integrity and kindness, they are less likely to turn around and try to kill us all in the future. I have no issue with these scientists.
But in the last few months, the idea of an extinction threat has become such a fixture in public discourse that you could bring it up at dinner with your in-laws and have everyone nodding in agreement about the issue's importance.
On the face of it, this is ludicrous. It is also great news for leading AI companies, for two reasons:
1) It creates the specter of an all-powerful AI system that will eventually become so inscrutable we can't hope to understand it. That may sound scary, but it also makes these systems more attractive amid the current rush to buy and deploy AI. The technology might one day, maybe, wipe out the human race, but doesn't that just illustrate how powerfully it could impact your business today?
This kind of paradoxical propaganda has worked in the past. DeepMind, the prestigious AI lab largely seen as OpenAI's top competitor, started life with the ambitious goal of building AGI, or artificial general intelligence that could surpass human capabilities. Its founders Demis Hassabis and Shane Legg weren't shy about the existential threat of this technology when they first went to big venture capital investors like Peter Thiel to seek funding more than a decade ago. In fact, they talked openly about the risks and got the money they needed.
Spotlighting AI's world-destroying capabilities in vague ways lets us fill in the blanks with our imagination, ascribing infinite capabilities and power to future AI. It's a masterful marketing ploy.
2) It draws attention away from other initiatives that could hurt the business of leading AI firms. Some examples: The European Union this month is voting on a law, called the AI Act, that would force OpenAI to disclose any copyrighted material used to develop ChatGPT. (OpenAI's Sam Altman initially said his firm would “cease operating” in the EU because of the law, then backtracked.) An advocacy group also recently urged the US Federal Trade Commission to launch a probe into OpenAI, and push the company to satisfy the agency's requirements for AI systems to be “transparent, explainable [and] fair.”
Transparency is at the heart of AI ethics, a field that large tech firms invested more heavily in between 2015 and 2020. Back then, Google, Twitter, and Microsoft all had robust teams of researchers exploring how AI systems like those powering ChatGPT could inadvertently perpetuate biases against women and ethnic minorities, infringe on people's privacy, and damage the environment.
Yet the more their researchers dug up, the more their business models appeared to be part of the problem. A 2021 paper by Google AI researchers Timnit Gebru and Margaret Mitchell warned that the large language models being built by their employer could carry dangerous biases against minority groups, a problem made worse by the models' opacity and their vulnerability to misuse. Gebru and Mitchell were subsequently fired. Microsoft and Twitter also went on to dismantle their AI ethics teams.
That has served as a warning to other AI ethics researchers, according to Alexa Hagerty, an anthropologist and affiliate fellow with the University of Cambridge. “You've been hired to raise ethics concerns,” she says, characterizing the tech firms' view, “but do not raise the ones we don't like.”
The result is a crisis of funding and attention for the field of AI ethics, and confusion about where researchers can go if they want to audit AI systems, a task made all the more difficult as leading tech firms become more secretive about how their models are built.
That's a problem even for those who worry about catastrophe. How are people in the future expected to control AI if those systems aren't transparent, and humans don't have expertise in scrutinizing them?
The idea of untangling AI's black box — often touted as near impossible — may not be so hard. A May 2023 article in the peer-reviewed journal Proceedings of the National Academy of Sciences (PNAS) argued that the so-called explainability problem of AI is not as intractable as many experts have assumed.
Technologists who warn about catastrophic AI risk, like OpenAI CEO Sam Altman, often do so in vague terms. Yet if they truly believed there was even a tiny chance their technology could wipe out civilization, why build it in the first place? Continuing to do so certainly conflicts with the long-term moral math of Silicon Valley's AI builders, which holds that a tiny risk with infinite cost should be a major priority.
Looking closely at AI systems now, rather than wringing our hands about a vague apocalypse of the future, is not only more sensible; it also puts humans in a stronger position to prevent a catastrophic event from happening in the first place. Yet tech companies would much prefer that we worry about that distant prospect than push for transparency around their algorithms.
When it comes to our future with AI, we must not let the distractions of science fiction pull us away from the greater scrutiny that's necessary today.
© 2023 Bloomberg LP