Will AI Go Nuclear?
By Samuel Greengard | Posted 2014-08-14
Is artificial intelligence more dangerous than nukes? Could AI replace and even eradicate humans? Some experts say these possibilities are not that far-fetched.
A few days ago, billionaire inventor Elon Musk—PayPal pioneer and the man behind Tesla Motors—boldly stated that he is deeply afraid of artificial intelligence because it is "potentially more dangerous than nukes." He followed up the statement with a tweet that read: "Hope we're not just the biological boot loader for digital super-intelligence. Unfortunately, that is increasingly probable."
It's easy to dismiss Musk's dystopian comments as paranoia, but they actually make a lot of sense. The idea that machines could take over isn't novel; the theme runs through plenty of novels and sci-fi movies. Even the idea that machines could replace and eradicate humans isn't improbable, especially when you consider terrorism and purpose-built malware designed to spread from machine to machine.
This isn't a conspiracy theory, and it certainly isn't lunacy—though, at least for now, a machine takeover remains extremely far-fetched. Swedish philosopher Nick Bostrom has raised compelling questions about what happens when machine intelligence surpasses human intelligence in his book, Superintelligence: Paths, Dangers, Strategies.
Others, such as Ray Kurzweil, take a decidedly more optimistic view. Even so, Kurzweil has acknowledged that technology could wipe out humanity.
One thing is certain: After years of discussion, AI is finally beginning to take shape. Siri and Google Now are becoming smarter and more capable. IBM's Watson, with its remarkable ability to learn, is already redefining everything from weather forecasting to business analytics.
However, since a digital crystal ball does not yet exist, we'll have to wait for the future to unfold to receive our answers about artificial intelligence.
For now, business and IT leaders, software developers and others should carefully consider the implications and ramifications of AI technology. As we have seen time and time again, too many companies rush too many products to market without adequate consideration for security and outcomes.
Automobiles and medical devices are now targets. Utility grids and traffic systems have already been exploited. Meanwhile, the Internet of Things is opening a connected door to a level of hacking, attacking and cyber-theft that is almost incomprehensible.
Perhaps it's time for humans to begin thinking more intelligently—rather than artificially—about the future.