The epistemological tools that al-Haytham advocated took us beyond the superstition, appeals to authority, and baseless speculation that had been the foundation of our so-called knowledge until that point.
Almost every scrap of knowledge we have that goes beyond the obvious has been gathered in the thousand years since al-Haytham lived. However, in less than a tenth of that time, the window of human knowledge acquisition will close.
How can I say this? Let’s begin by thinking about what it means to know something. A good working definition is “justified, true belief.” The proposition must be true, but you also must be able to lay out your reasons for it, and those reasons must be sound (justified).
For most of human history, we didn’t actually know much. It’s not that we weren’t right about anything; it’s that when we were right (apples fall to the ground), we often had the wrong reason (apples, being composed of the “earth” element, have an intrinsic goal of seeking the center of the Earth). Our beliefs may have been true, but we didn’t properly justify them.
Artificial intelligence (AI) will soon outstrip our intellectual capabilities in every field. AI is already displaying creativity and subtle understanding in areas that only a few years ago were considered the long-term domain of humans alone.
The real-time, universal language translators some of us saw on Star Trek are available now, in the form of an app for your smartphone. Their results are already within a whisker of human capability and improving fast.
Google’s AlphaGo has beaten Ke Jie, the world champion at Go, a game far more subtle than chess, with vastly more possible positions. Those who think AI is not creative might want to speak with the vanquished champion, who said that AlphaGo played “like a god.” Come to think of it, that’s also what Garry Kasparov said when Deep Blue beat him at chess back in 1997.
As an ominous aside, both Ke Jie and Garry Kasparov found the experience deeply unsettling. Ke called it “horrible” and said he would never repeat it. After game 2 of his match, Kasparov, whom many consider the best player who has ever lived, described going into a “fatalistic depression” once he realized what the computer was capable of.
It has long been predicted that the real sea change will arrive when AI can design the next generation of improved AI. That is the beginning of the Singularity: the inflection point when machine intelligence accelerates like an object being sucked into a black hole.
We are already there. Google’s AutoML is an AI that designs neural networks for machine learning, and it does so better than human engineers.
With artificial intelligence able to out-think us on every front, including scientific prediction and the design of Nobel-Prize-winning experiments, we will soon have to cede the knowledge-discovery business to our new robot overlords.
More humbling still, we will not be able to justify the knowledge they give us. AI algorithms produce excellent results, but the process is mysterious, usually relying on layers of artificial neurons that are trained to superhuman ability by millions of iterations of raw experience. AlphaGo did not encapsulate invincible Go algorithms designed by humans; we gave it only basic knowledge of the game and then let it play millions of games against itself to hone its skills.
As a result, it acquired an intuition that is far superior to human reasoning and can no more be explained, much less justified, than our intuitions can.
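To make the self-play idea concrete, here is a toy sketch in the spirit of that training process. It uses tabular Monte Carlo learning on the simple game “Nim-21” (players alternately take 1–3 stones; whoever takes the last stone wins). This is an illustrative assumption, not DeepMind’s actual method: no strategy is coded in, and whatever skill emerges comes purely from games the agent plays against itself.

```python
import random

N, MOVES = 21, (1, 2, 3)
Q = {}       # Q[(stones, move)] -> estimated value for the player to move
visits = {}  # visit counts, for incremental averaging

def best_move(stones, eps=0.0):
    """Greedy move from the learned table, with optional exploration."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda m: Q.get((stones, m), 0.0))

def self_play_episode(eps=0.3):
    """Play one game against itself, then update values from the outcome."""
    stones, history = N, []
    while stones > 0:
        m = best_move(stones, eps)
        history.append((stones, m))
        stones -= m
    reward = 1.0  # the player who took the last stone won
    for key in reversed(history):
        visits[key] = visits.get(key, 0) + 1
        Q[key] = Q.get(key, 0.0) + (reward - Q.get(key, 0.0)) / visits[key]
        reward = -reward  # flip perspective for the other player

random.seed(0)
for _ in range(20000):
    self_play_episode()

# After training, the greedy policy typically rediscovers the classic
# "leave your opponent a multiple of four" strategy on its own.
print(best_move(21), best_move(10), best_move(7))
```

Note what is missing: nowhere does the code state the winning strategy. The table of values, like AlphaGo’s network weights, encodes an “intuition” distilled from experience rather than a justification a human could inspect.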
What will happen when it becomes obvious that AIs, although unable to explain their reasoning, are better than we are at running an economy, at formulating medicines, at designing optimal policing strategies, or even at understanding human psychology?
If we are smart, we will admit our limitations.
We will revert to “knowing” new things based not on the scientific method, but on authority. This time, it will not be the authority of our elders or priests, but the authority of a superior, artificial intelligence.
The window of human knowledge will have closed.