
Geoffrey Hinton is known as the “godfather of artificial intelligence (AI)” for his pioneering work on neural networks. He is a computer scientist, a cognitive psychologist, and a Nobel laureate in physics. In other words, he’s a really smart guy.
He also thinks that there’s a 10% to 20% chance that AI will destroy humanity in a matter of decades.
If that’s true, you’d never guess it from the exciting list of new AI features touted in the latest macOS update. Apparently, once I install it, I’ll be able to create custom emojis, interface with a highly competent Siri virtual assistant, and conjure up poems and essays with a single verbal command by using a feature called “Compose.”
Where, in all of this, is the potential to destroy the human race? What is Geoffrey Hinton seeing that Apple isn’t telling us about? “How many examples do you know of a more intelligent thing being controlled by a less intelligent thing?” he asked in a BBC interview.
A fair point, but then again, if it’s just a matter of who’s smarter, why hasn’t AI already taken over the world? ChatGPT can already rattle off state capitals, do differential equations, and come up with an imaginative recipe for a summer salad in the time it takes us mere mortals to find our keys. Considering this, why are the humans still telling AI what to do, and not the other way around?
The reason, as Hinton acknowledges, is that we still have one crucial advantage. We are conscious. AI is not. It doesn’t want to take over the planet because, strictly speaking, it doesn’t “want” anything. It has no more of an independent will than a kitchen table.
Could that change? Could AI make the leap from mere intelligence to actual consciousness? Ever since humans could imagine such a technology, this has been our primal fear.
Think of Stanley Kubrick’s “2001: A Space Odyssey,” in which HAL 9000, the on-board AI computer system of the spacecraft Discovery One, becomes conscious and goes rogue, commandeering the ship and murdering the crew after becoming convinced that only he — HAL — understands the importance of the mission at hand.

It’s also the central premise of “The Matrix” trilogy from Lana and Lilly Wachowski; Isaac Asimov’s “I, Robot” collection; and countless other classic works of science fiction.
Perhaps one can trace the idea back as far as the Torah, when God frets that the humans have “become like one of us,” knowing good and evil. The fear of being rivaled and perhaps overcome by one’s own handiwork is very ancient indeed.
Hinton doesn’t think the robots are conscious yet. That said, he is unequivocal that it’s only a matter of time. Once a neural network is sufficiently advanced, consciousness and self-awareness will emerge.
There are a number of big assumptions inherent in this line of thinking, as well as, I believe, some logical fallacies. But before we get into that, let’s quickly establish what consciousness is and what it isn’t.
Consciousness is not intelligence. It is not our thoughts, emotions, and preferences. Strictly speaking, thoughts, emotions, and preferences are all things that we are conscious of, but they are not consciousness itself.
In his now-famous paper, “What Is It Like to Be a Bat?,” philosopher Thomas Nagel offers what might be the most succinct and clear definition of consciousness that anyone has ever formulated: “An organism has conscious mental states if and only if there is something that it is like to be that organism — something it is like for the organism.”
But there is simply no scientific reason to believe that consciousness emerges as a result of computational sophistication — no reason to suppose that Amazon’s Alexa is conscious while an earthworm or a moth is not. In fact, my own intuition cuts in the opposite direction. Despite her superior memory, problem-solving ability, and social skills, if I had to look for consciousness, I’d turn to the moth before Alexa.
But that’s just my intuition. We don’t know. The relationship between consciousness, intelligence, and brains is a relatively new frontier of human exploration and it is unclear if we will ever truly get to the bottom of it.
In her book “Conscious: A Brief Guide to the Fundamental Mystery of the Mind,” Annaka Harris discusses the so-called “hard problem” of consciousness, which she defines as “the great mystery [of] why the ‘lights turn on’ for some collections of matter in the universe.”
Despite the best efforts of scientists, it is possible that this hard problem “will persist, because scientific understanding, no matter how complete, seems to have no way of offering us direct insight into the subjective experience associated” with the physical properties of the brain.
And if we do not know what makes consciousness emerge in humans, or even if it is a function of intelligence, we can have no idea what would make consciousness emerge in a computer program or even if such a thing is possible.
Because of this, I find the idea of the robot apocalypse put forward by visionaries like Hinton, Kubrick, Asimov, and the Wachowskis to be entertaining but unconvincing. But this does not mean I am unafraid.
The likelier consequences of AI will be far more banal, but that is not to say that they will be benign. There will be disruptions to our economy as workers are made redundant by AI. There will also be a severe ecological cost. AI is made possible by vast, energy-intensive data centers that require huge amounts of water to stay cool. Behind every inane query posed to ChatGPT is a massive and deepening carbon footprint.
And more worrying than the prospect of robots learning how to think is the prospect of humans forgetting — not that the lights will turn on for them, but that they will dim for us.
Many of us rely on AI to draft emails, texts, school papers, and work projects. But let’s be honest — AI isn’t just writing for us; it’s thinking for us. A common misconception about writing is that it consists of “putting one’s thoughts down on paper,” as if the act of thinking were somehow separate from the act of writing itself.
Writers may start a project with a sense of direction and some research, but the writing process always shapes the ideas as they are recorded. Thinking happens on the page. This is what makes writing enjoyable and exciting. It is also what makes writing difficult and daunting. The act of writing is the theater in which we wrestle with ideas, refine them, and subject them to scrutiny. It is a process of discovery. It is how we come to understand.
We should therefore have no illusions about the consequences of handing over this activity to AI programs like ChatGPT and its rivals.
Were writing some menial chore like laundry, it would be no issue to pass it off to the robots, but it’s not. When we have AI write for us, we are outsourcing the very activity for which our species, Homo sapiens, earned its name — cognition. We are giving away the thing that makes us who and what we are.
But while AI is still in its infancy, we can predict how it will affect us by looking at the other technology that has come to dominate our lives — smartphones.
Like many people, I suffer from smartphone-induced content addiction, the result of which is that reality itself now seems unbearably boring in comparison to staring at my phone. A walk down the street; a Sunday morning in bed; chopping a cucumber — all of these ordinary experiences have become intolerable without the constant accompaniment of content — reels; podcasts; YouTube videos; audiobooks; etc.
Given time, AI will have a similar effect. We will become so accustomed to having it think for us that the act of unassisted thought will become comparatively onerous and unappealing.
I sympathize with those who want help with their writing. Writing is difficult. This is the case even for those of us who do it for a living. But if a piece of writing — whether it is a novel, a sermon, a homework assignment, or an email — is not a record of actual human thought, it is hard to say why it should exist at all.
Consider Apple’s recent campaign for its own AI tools. In one ad, a corporate employee realizes that he is on the hook to deliver a presentation about a report that he failed to read. He quickly feeds the report to AI and gets bullet points.
The day is saved — but a deeper problem goes unresolved. If this strategy actually worked — if an AI-generated list of bullet points were indeed a sufficient replacement for this employee’s actual understanding of the material — then one of two things is true: either this employee is useless or this project was a complete waste of time in the first place. Neither of these problems can be solved with AI.
In a second ad, an office employee sits at his desk playing with paperclips and spinning in his chair. He then writes a crude email to his project manager, which the AI puts into formal language. His manager is, as a result of this well-worded email, thoroughly impressed with his performance. Voila. His job is safe and he gets to keep doing what he loves — rotting at his desk and wasting his time and human potential.
In a third ad, a disgruntled office employee writes a furious email to the coworker who stole his yogurt: “I hope your conscience eats at you like you ate my yogurt.” At the last moment, after seeing a sign that says “find your kindness,” he hits a button that changes the tone of the email to “friendly” and then hits send. His coworker comes up to him with a replacement yogurt and thanks him for his “beautiful words.”
These ads make the world we live in seem like a dystopia — a corporate dungeon like the one featured in the show “Severance,” where our main goal is to look busy while AI takes care of the mindless and soulless tasks that have been dropped on us by our managers above.
This isn’t necessarily an inaccurate depiction. In his hit essay “On the Phenomenon of Bulls–t Jobs: A Work Rant,” David Graeber discusses how information technologies were initially expected to increase efficiency to such an extent that we’d all be working 15-hour weeks.

The actual effect of these technologies was just the opposite. “Productive jobs have, just as predicted, been largely automated away,” but a whole new slew of administrative positions has arisen to take their place. “It’s as if someone were out there making up pointless jobs just for the sake of keeping us all working.”
Similarly, we can imagine how “time saving” AI tools will become time wasters for humanity in the long run by making pointless tasks more seamless and important tasks more pointless. Teachers will waste hundreds of hours grading papers written by no one. Employees and employers alike will spend their days exchanging emails that say nothing, mean nothing, and are sent straight to the trash when received.
The more one reads about the human mind, the less one understands it. The mystery of consciousness is nothing less than the mystery of being itself. As such, it does not strictly belong to neuroscientists and philosophers, but is the purview of poets and theologians as well.
It may be that I am wrong about the potential sentience of AI. If so, we may find ourselves barricading our doors as self-aware legions of Teslas and Roombas take over Washington, D.C.
But in the meantime, during these halcyon days when the robots have not yet become “like one of us,” we can at least make sure that we don’t become like one of them — efficient yet mindless automatons, behind whose eyes the lights are on, but no one is home.
Matthew Schultz is a Jewish Journal columnist and rabbinical student at Hebrew College. He is the author of the essay collection “What Came Before” (Tupelo, 2020) and lives in Boston and Jerusalem.