I was amazed to read this week that Stephen Hawking, Bill Gates, and Tesla founder Elon Musk are all afraid that artificial intelligence poses an existential threat to the human race. Musk’s warning was the most colorful:
With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.
Bill Gates chimes in:
I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.
They are worried that artificial intelligence (AI) will advance until AI machines are able to improve their own designs, and build even smarter, more-capable machines, which will be smart enough to build even better ones, and so on. Although biological evolution has taken billions of years to produce humans, the AI stage of evolution will happen very, very quickly.
When AI has transformed our culture so much that present-day people would not recognize it, we will have passed the Singularity.
Hollywood, Hawking, Gates and Musk notwithstanding, I am not afraid of the Singularity. I look forward to it.
Ray Kurzweil’s seminal book on the subject, The Singularity Is Near, is subtitled When Humans Transcend Biology. In his analysis, humans will not be replaced by AI so much as merge with it. At first, the non-biological portion of our intelligence might consist of a specialized module or two. Think of what you could do right now if Wikipedia and a few other reference sources were wired directly into your brain. At a minimum, you could win Jeopardy! and make a lot of money to fund your next project.
With our augmented intelligence, we will be able to design even more improvements. Progress will be exponential. Before the midpoint of this century, according to Kurzweil, the biological portion of our intelligence will be insignificant compared to our augmentations.
What we now call artificial life will not exterminate us. It will become a major part of us.
Now I ask, “Why is that so bad?” Why should we cling to the form of existence that has given us global warming along with the science-deniers who make sure it continues, the Islamic State, violence against LGBT people, lynching of African Americans, World Wars II and I, the subjugation of women, a Civil War fought in part to defend the institution of slavery, the burning of heretics in the name of the Prince of Peace, and other instances of insanity stretching as far back as history can see?
Even if a super-smart AI were to have no goal beyond its own survival, could it possibly do any worse?
Beyond that, isn’t there something aesthetically satisfying in letting intelligence bloom? We think we are the bloom, but maybe we’re just the seed.
Ray Kurzweil projects that intelligence will ultimately permeate the universe. Which would be smarter: to embrace that destiny, or to obstruct it? Which would bring more beauty to the cosmos?