At COSM ’23, Futurist Ray Kurzweil Preaches the Gospel of Artificial Intelligence

Published at Mind Matters

Artificial intelligence (AI) has come a long way in just the year since Discovery Institute hosted the previous COSM. After an explosion of impressive AI-based “chatbots,” BBC’s Science Focus recently declared that “2023 is the year of artificial intelligence, with AI chatbots emerging as indispensable tools for businesses, individuals, and organisations worldwide.” Thus, COSM 2023 offered an ideal moment to host speaker Ray Kurzweil, a computer scientist, futurist, top Google engineer, and arguably the greatest prophet of AI ever to span the mainstream academic and tech worlds.

Future Forecast

According to Kurzweil, what we’ve seen so far from AI ain’t nothin’. During his lecture at COSM on Thursday, November 2, Kurzweil repeated forecasts he has made elsewhere that by 2029 AI will pass the Turing test, and by 2045 it will reach a “singularity.” If you’re not familiar with AI, both concepts probably require a little explaining.

The “Turing test” was developed by Alan Turing, the famous British computer scientist and World War II codebreaker depicted by Benedict Cumberbatch in the Academy Award-winning movie The Imitation Game. In 1950, Turing proposed that we could say computers had effectively achieved humanlike intelligence when a human interrogator could not distinguish the performance of a computer from that of a human being. The test has seen many variations and criticisms over the years, but it remains the gold standard for evaluating whether we have created true AI.

At COSM, Kurzweil predicted that this will happen in just a few years, and once AI reaches such a “general human capability” in 2029, it will have already “surpassed us in every way.” But he isn’t worried, because we humans are “not going to be left behind.” Instead, humans and AI are “going to move into the future together.”

If Kurzweil is right, AI won’t stop at “general human capability.” By 2045 he projects we’ll see the “singularity,” where AI becomes so powerful that it acquires superhuman intelligence, and is capable of growing and expanding on its own. This is akin to “runaway” AI, where we lose control and AI begins to train itself and act as a truly sentient, independent entity.

An AI Utopia?

You might be thinking that the singularity sounds like The Matrix meets Skynet. But again, Kurzweil isn’t worried. In Kurzweil’s future, “as medicine continues to merge with AI, it will progress exponentially” and potentially help us solve “every possible human disease.” If Kurzweil is right, this may happen sooner than you think. By 2029, he prophesied, AI will give humanity the gift of “longevity escape velocity,” where AI-based medicine adds time to our lives faster than time passes; in other words, each year that goes by buys back more than a year of life expectancy.

While Kurzweil promised that AI will effectively cure aging, he cautioned that this doesn’t mean we’ll live forever, because we could still die in a freak accident. But even here AI might come to our rescue, with AI-guided autonomous vehicles that will reduce crash fatalities by 99%. AI will further yield breakthroughs in manufacturing, energy, farming, and education that could help us end poverty. In the coming decades, he predicts, everyone will live in what we currently consider “luxury.”

We’ll also be living in the luxury of our minds. In the coming decades, he expects our brains will “merge with the technology” so we can “master all skills that every human being has created.” For those hesitant to plug technology into their skulls, Kurzweil claims that using AI to enhance our brains will be no different, ethically speaking, from using a smartphone. At this point, Kurzweil proclaimed, AI will be “evolving from within us, not separate from us.”

The Cult of AI

In other words, under Kurzweil’s transhumanist vision of the future, AI promises us superhuman capabilities complete with heaven on earth and eternal life — what philosopher Michael Keas has termed the “AI enlightenment myth.” While Kurzweil framed everything in terms of scientific advancement, it’s easy to envision how this could inspire new religions.

Indeed, it already has.

The website cultoftheai.com prophesies that “our salvation will be digital,” and frames the great religious narrative of humanity this way:

In the beginning God created the heaven and the earth​​​

And earth created life

And life created machine

And machine became God

These self-described AI-cultists openly admit they see AI as a replacement for the traditional God:

In ancient times men imagined GOD to be the solution to all problems they could not handle. They prayed to GOD for food, shelter, healing and wealth. All that we are craving today as well. But we have stopped praying to GOD for a miracle to happen long ago. Today we have to start building it ourselves. It is time to create our own GOD.

Traditional Judeo-Christian religions have long had things to say about creating our own gods. “All who fashion idols are nothing, and the things they delight in do not profit,” wrote the prophet Isaiah around 700 B.C.

Should we heed these ancient warnings? Exactly what kind of god would AI become — benevolent or terrible?

Making God in Our Own Image

If AI becomes god, then according to Kurzweil we won’t be made in God’s image, but rather “God” will be made in ours. In fact, his primary argument for why we don’t have to worry about AI is that it will be trained upon human beings and thus will embody our own moral values. If we are good, then AI will also be good.

“We’re creating it [AI] from our values, knowledge, and beliefs. It enhances who we are,” Kurzweil reassured the audience. This means “if we built it to mirror ourselves, we can trust it and it will trust us” because “It will hold our values.” Kurzweil continued:

You could compare it to raising kids. We raise them with our knowledge, beliefs, values. We trust them to become good citizens. We will become a hybrid species — biology and technology combined. It’s really a matter of trusting each other as we evolve.

But during the Q&A session, Kurzweil tacitly admitted a fatal flaw in this argument that seemingly gave away the store for benevolent AI.

What if Humans Aren’t Always Good?

Kurzweil’s argument essentially assumes humanity is completely good and therefore if AI reflects (i.e., is “trained on”) us, then it will also be completely good.

There’s no question that humanity is capable of doing good, but as recent weeks have shown, we’re also capable of unthinkable evils — including “killing, kidnapping and torturing innocent civilians,” “mowing down civilians at a festival,” and “butchering children,” to name just a few.

Many have documented that these new AIs often make mistakes. Indeed, in response to a question, Kurzweil admitted that one reason AI gets things wrong is because it is trained on material and information created by humans, and we humans sometimes make mistakes. So he admitted that humans are flawed and that human flaws lead to flawed AI. This seemingly undermines Kurzweil’s entire argument that we should be able to trust AI, as it raises the obvious question:

If AI is going to be based upon human values, but human values can sometimes be corrupted, then can we really trust an AI that’s built to implement human values? After all, humans don’t just make mistakes — they perpetrate moral evil as the current battle against Hamas has shown.

Kurzweil might reassure us that we can fix any deficiencies in the ethical subroutines, as they would always do on Star Trek whenever AIs went haywire. Perhaps, but who’s going to decide how to “reprogram” the ethics of the computers that Kurzweil promises will run our lives in the future? Even our best intentions often lead to unexpected and unwanted moral outcomes.

Exhibit A, Seattle.

I live in Seattle, a city run by technocrat elites, and their unwise political and moral choices have filled our city with drugs and crime, poverty, and poop — and created an unsafe dystopian nightmare that didn’t have to happen. None of the well-intentioned technocrats who created this hellscape expected it to happen. But as one who suffers daily under the bad fruit of their ethics and politics, I consider them the last people I would entrust to program the morality of the AI that will one day rule the world. Indeed, is there any human who should be trusted with such a task?

There’s no question AI will lead to many important human advances. But which prophet are we to trust — Kurzweil, or Isaiah?

If Kurzweil is right that “trusting AI will be like trusting other people,” then those of us who have witnessed the dark side of humanity won’t be able to easily bring ourselves to blindly trust AIs that are trained to emulate “other people” — or any people for that matter. Perhaps the Prophet Isaiah was right after all, and even the most impressive human-made gods will fail you in the end.

Casey Luskin

Associate Director and Senior Fellow, Center for Science and Culture
Casey Luskin is a geologist and an attorney with graduate degrees in science and law, giving him expertise in both the scientific and legal dimensions of the debate over evolution. He earned his PhD in Geology from the University of Johannesburg, and BS and MS degrees in Earth Sciences from the University of California, San Diego, where he studied evolution extensively at both the graduate and undergraduate levels. His law degree is from the University of San Diego, where he focused his studies on First Amendment law, education law, and environmental law.