As if we didn’t have enough real problems staring us in the face, a recent wave of concern about something that, assuming no undisclosed breakthroughs have been made, is not really an immediate worry has fascinated the media: artificial intelligence (AI) and the threat it poses to humanity’s long-term existence. Bill Gates, Elon Musk and Stephen Hawking have each expressed their fear that we are about to open a new Pandora’s box of evil if we are not cautious in our efforts to create computers capable of doing what humans do – truly thinking. And they’re not coming up with happy thoughts.
Now, I’m not sure if these gentlemen are all just going to the movies too often or are indeed privy to results of lab work on AI that the rest of us are not, but gosh guys, settle down. You’ve been the cheerleaders for science and technology for so long now that when you get scared, we can end up a little terrified. Besides, there are several problems I can see with your worries about computers and robots eliminating us from the face of the Earth.
First, there would seem to be the big question of whether we can even create true AI in computers. Yes, we can give them the ability to play one heck of a game of chess. Yes, provided with all the information on streets and weather and traffic, they can guide the most inept traveler to his destination. Yes, they can process computations at speeds no human can match. But these are all the actions of a well-trained servant, not a master.
Second, why do we assume that the ability to think at a faster, more informed level would be the key to great evil? Were that the case, we should be watching each of you very carefully for indications of desire to rule the world. It is an unfortunate reality that we have been taught to suspect those who have greater capacity for thought and to fear their actions.
Third, if we are so brilliant as to be able to create a true capacity for thought in our devices, and yet still fear their ability to not only serve but to adapt, create and dominate, it would seem a simple solution would be to build in a fail-safe requirement. A sort of Turing test. But in this case it would not be to tell whether a computer was the equivalent of a human in its responses. Rather, it would be a test of what the device could do with the knowledge it had processed. A “Hawking” test, perhaps. An assurance that, although the devices had reached a superior level of efficient thought, they were, without the aid of human activity, unable to act upon any untoward notions.
This, unfortunately, leads us to the one really valid portion of your fears. The involvement of the human element. We have throughout our history done an incredibly good job of trying to remove ourselves from the picture. We are great at war and everyday homicide. We have too much desire for us and too little sympathy for them. We are our own greatest enemy and, sadly, probably our only savior.
If you are so concerned about intelligence, might I encourage you to worry less about the “artificial” and more about inspiring the supposed original version. Failing that, we are, I fear, ultimately on the road to destruction long before the rise of the intelligent machine.