In part four of the TED Radio Hour episode Future Consequences, philosopher and neuroscientist Sam Harris expounds upon the potential evolution of artificial intelligence. As a renowned member of the Four Horsemen of New Atheism, Harris helped dispel my fear that a secular world is one devoid of meaning or spirituality. His teachings are both inspired and practical, and at times even disconcerting.
In his most recent TED Talk, Harris begins by contrasting our terror of looming threats like global famine or pandemics with the excitement with which we anticipate the equally inevitable “death by science fiction.” It’s this difference in emotional response that makes the latter so much more perilous: we will continue to advance technology by any means necessary, even at our own expense.
When we eventually reach the point where the machines we build are capable of a kind and degree of intelligence inconceivable to us, the outcome can hardly be surmised, except to say that such a creation will regard humanity, however affectionately, as feckless. The sheer speed at which electronic circuits function compared with biochemical ones will give rise to something we cannot constrain. The end of human labor will ultimately lead to an extreme disparity between the incredibly wealthy and the starving.
Harris dismisses the time-horizon counterargument, the idea that superintelligence is too far off to worry about now, as irrelevant. He explains:
“If intelligence is just a matter of information processing and we continue to improve our machines, we will produce some form of super intelligence and we have no idea how long it will take to create the conditions to do that safely. Fifty years is not what it used to be. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.”
Harris goes on to say that he doesn’t have a solution to the AI problem, except to suggest that we all think about it more. That was, in a strong field, the most compelling aspect of the TED Talk. The potential benefits are undoubtedly dazzling, but the excitement we all share at the thought of replicating consciousness and intelligence makes us all too eager to overlook the countless negative effects. Harris continues:
“I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build, because I think we will inevitably do that, but to understand how to avoid an arms race and to build it in a way that’s aligned with our interest.”
Furthering the development of artificial intelligence risks turning our backs on theology. It means accepting that the peak of intelligence and cognition lies far beyond the reach of biological minds, and that we are already in the “process of building some kind of god.” Harris ends fittingly: “Now would be a good time to make sure it’s a god we can live with.”