Google's former chief executive Eric Schmidt recently compared artificial intelligence (AI) to nuclear weapons. Speaking at the Aspen Security Forum on Friday, Mr Schmidt discussed his role at Google and the developments that were happening 20 years ago.
The former Google CEO admitted that he himself had been "naive" about the power of information in the early days of Google. He then called for technology to be better aligned with the ethics and morals of the people it serves, and drew a comparison between AI and nuclear weapons.
Mr Schmidt imagined a near future where the United States and China needed to negotiate an AI agreement. "In the 50s and 60s, we eventually worked out a world where there was a 'no surprise' rule about nuclear tests and eventually they were banned," Mr Schmidt said.
"It's an example of a balance of trust, or lack of trust, it's a 'no surprises' rule. I'm very very concerned that the US view of China as corrupt or Communist or whatever, and the Chinese view of America as failing... will allow people to say 'Oh my god, they're up to something,' and then begin some kind of conundrum. Begin some kind of thing where, because you're arming or getting ready, you then trigger the other side. We don't have anyone working on that, and yet AI is that powerful," the former Google CEO explained.
In simpler terms, Mr Schmidt is concerned that there is no system in place governing how AI is developed or used. He fears that this absence of rules could lead to the very real possibility of an escalation in the harmful use of artificial intelligence.
Notably, concerns about the capabilities of AI have been raised numerous times over the years. According to the Independent, Tesla CEO Elon Musk has stated that AI is highly likely to become a threat to humans. More recently, Google fired a software engineer who claimed that its artificial intelligence had become self-aware and sentient.
However, experts have explained that the real issue with AI lies in what it is trained for and how it is used by humans: if the data used to train these systems is flawed, then the results will reflect those flaws.