Shortly after I learned about Eliza, the program that asks people questions like a Rogerian psychotherapist, I learned that I could run it in my favorite text editor, Emacs. Eliza truly is a simple program, with hard-coded text and flow control, pattern matching, and simple templated responses keyed to therapeutic triggers—like how recently you mentioned your mother. Yet even though I knew how it worked, I felt a presence. I broke that uncanny feeling forever, though, when it occurred to me to just keep hitting return. The program cycled through four possible opening prompts, and the engagement was broken, like an actor in a film breaking the fourth wall by making eye contact with the camera.
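The mechanism described above really is that simple. A minimal sketch of the idea in Python (the patterns, responses, and word reflections here are invented for illustration, not ELIZA's actual script):

```python
import itertools
import re

# Flip pronouns so "my mother" can be echoed back as "your mother".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Hard-coded pattern/template pairs, keyed to therapy-style triggers.
RULES = [
    (re.compile(r"\bmy (mother|father)\b", re.I),
     "Tell me more about your {0}."),
    (re.compile(r"\bi am (.+)", re.I),
     "How long have you been {0}?"),
]

# When nothing matches, cycle through a few stock prompts -- the
# behavior that breaks the illusion if you just keep hitting return.
DEFAULTS = itertools.cycle([
    "Please go on.",
    "What does that suggest to you?",
    "I see.",
    "Can you elaborate on that?",
])

def reflect(text):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return next(DEFAULTS)
```

Feed it an empty line repeatedly and the stock prompts simply repeat in order: there is no one home, only a loop.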
For many people last week, engagement with Google’s LaMDA—and its alleged sentience—was broken by an Economist article by AI legend Douglas Hofstadter, in which he and his friend David Bender show how “mind-bogglingly hollow” the same technology sounds when asked a nonsense question like “How many pieces of sound are there in a typical cumulonimbus cloud?”
But I doubt we’ll have these obvious tells of inhumanity forever.
From here on out, the safe use of artificial intelligence requires demystifying the human condition. If we can’t recognize and understand how AI works—if even expert engineers can fool themselves into detecting agency in a “stochastic parrot”—then we have no means of protecting ourselves from negligent or malevolent products.
This is about finishing the Darwinian revolution, and more: understanding what it means that we are animals, and extending that cognitive revolution to understanding how algorithmic we are as well. All of us will have to get over the hurdle of thinking that some particular human skill—creativity, dexterity, empathy, whatever—is going to differentiate us from AI. Helping us accept who we really are, and how we work, without losing engagement with our lives, is an enormous extended project for humanity, and for the humanities.
Achieving this understanding without substantial numbers of us embracing polarizing, superstitious, or machine-inclusive identities that endanger our societies isn’t only a concern for the humanities, but also for the social sciences, and for some political leaders. For other political leaders, unfortunately, it may be an opportunity. One pathway to power may be to encourage and prey upon such insecurities and misconceptions, just as some presently use disinformation to disrupt democracies and regulation. The tech industry in particular needs to prove it is on the side of the transparency and understanding that underpins liberal democracy, not secrecy and autocratic control.
There are two things that AI really is not, however much I admire the people claiming otherwise: It is not a mirror, and it is not a parrot. Unlike a mirror, it does not just passively reflect to us the surface of who we are. Using AI, we can generate novel ideas, pictures, stories, sayings, music—and everyone detecting these growing capacities is right to be emotionally triggered. In other humans, such creativity is of enormous value, not only for recognizing social nearness and social investment, but also for deciding whose high-quality genes you might like to combine with your own.
AI is also not a parrot. Parrots perceive a lot of the same colors and sounds we do, in the ways we do, using much the same hardware, and therefore experiencing much the same phenomenology. Parrots are highly social. They imitate each other, probably to prove ingroup affiliation and mutual affection, just like us. This is very, very little like what Google or Amazon is doing when their devices “parrot” your culture and desires to you. But at least those organizations have animals (people) in them, and care about things like time. A parrot’s parroting is absolutely nothing like what an AI device is doing in those same moments, which is shifting some digital bits around in a way known to be likely to sell people products.
But does all this mean AI cannot be sentient? What even is this “sentience” some claim to detect? The Oxford English Dictionary says it is “having a perspective or a feeling.” I’ve heard philosophers say it’s “having a perspective.” Surveillance cameras have perspectives. Machines may “feel” (sense) anything we build sensors for—touch, taste, sound, light, time, gravity—but representing these things as large integers derived from electric signals means that any machine “feeling” is far more different from ours than even bumblebee vision or bat sonar.