If You Could Read My Mind

By Samuel Greengard

Humans have risen to the top because we're the smartest creatures on this planet. But what happens when a silicon-based species is smarter and less fragile?

It's abundantly clear that we're careening into a new era of information technology. Artificial Intelligence (AI) is advancing in quantum leaps.

In late January, Google announced that AlphaGo, a program developed by its DeepMind unit, had annihilated a human champion at the ancient Chinese game of Go, which is far more complex than chess. Some experts say this advance in neural network technology represents an even bigger achievement than IBM's Deep Blue beating world chess champion Garry Kasparov in 1997.

Now there's word that robots will likely be able to read our minds by 2030. Nita Farahany, a professor of law and philosophy at Duke University, said at the 2016 World Economic Forum (WEF) that brain activity, captured by an EEG device (which detects and records the brain's electrical signals), could be used to unlock and operate computers and other electronic devices. For instance, a user might think of a song or an object, and the device would recognize the resulting brainwave pattern and unlock itself.

While this sounds somewhere between incredible and mind-bending (after all, it could open the door to unhackable authentication), it also stirs up a hornet's nest of concerns. Given today's attacks on government and corporate systems, and the growing volume of data accessible through the cloud, it's reasonable to wonder whether the technology would raise the stakes on data privacy, security and warfare to entirely new and unimaginable levels.

Of course, between now and 2030, machines could be running our businesses, unleashing wave after wave of unemployment. A WEF report, "The Future of Jobs," predicts that as many as 7.1 million jobs could vanish by 2020, while only about 2 million new jobs could be created, primarily in highly specialized areas such as computing, mathematics, architecture and engineering. It doesn't take a certified accountant to figure out that there's a fundamental problem here.

Where will all of this take us? Stephen Hawking, Elon Musk and others are now warning that AI could surpass human intelligence within the next few decades. At a certain point, the question becomes whether humans are even necessary, or have any value, in a world where machines can do just about everything better.

Humans have risen to the top of the evolutionary pyramid because we're the smartest creatures on this planet. But what happens when another species, a silicon-based one at that, is smarter and less fragile? Then all bets are off. Musk has even gone so far as to question whether humans will simply become a "biological boot loader for digital superintelligence."

Sometimes a series of incremental gains eventually leads to a net loss. Let's hope this isn't the case with AI.

This article was originally published on Feb. 19, 2016.
Samuel Greengard writes about business and technology for Baseline, CIO Insight and other publications. His most recent book is The Internet of Things (MIT Press, 2015).
eWeek
