AI is everywhere, it seems. Newscasters are talking about it, tech companies are hyping it, and students are cheating in English class with it. Some people worry that AI will replace their jobs; others fear it will take over the world entirely. But today I’d like to discuss something else: AI’s potential to change human interaction.
Let’s start by taking a look at the past. New technologies have always modified the way humans interact. The telephone (and economical long-distance calling) enabled friends and family to maintain close relationships even when one party moved across the country. But it also meant that many conversations formerly conducted in person were now conducted over the phone, with the added risk of misunderstanding that comes from missing facial expressions and body language. Technology has also slowly replaced human-human interactions with human-machine interactions: the telephone switchboard operator of old was replaced by direct dialing; the cashier is in the process of being replaced by self-checkout. Now AI opens up the possibility of replacing a whole new set of human interactions. Some of this might be beneficial. For example, a business can have AI respond to emails from customers who didn’t take the time to read the FAQ first. This saves customer support representatives a good deal of boring, repetitive work—and the customer likely receives a quicker response to boot.
But there’s also the potential for problems. For one, AIs don’t know the limits of their own abilities; they answer every question with the same degree of confidence, regardless of how correct they are.1 And sometimes they’re just plain wrong—we can all agree that putting glue on a pizza is a bad idea.2
A more fundamental problem is that AIs lack the real-world experience that gives meaning to their utterances. As humans, we communicate using words. However, it’s not the words themselves that contain the meaning; it’s our shared context that gives meaning to the words. For example, the word “chair” doesn’t mean much if you’ve never seen an actual chair. Likewise, when we talk about feelings like joy, sadness, anger, or anxiety, those conversations are meaningful precisely because we’ve experienced those emotions to some degree at one time or another. But for an AI, everything is just words. The words have no real meaning—no connection to anything real. All the AI has are the patterns of how those words are usually used, which is enough to produce text that is semi-believable. A striking illustration of this concept is the so-called “octopus scenario”:
Imagine that Alice and Bob are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. Alice and Bob start happily typing messages to each other.
Meanwhile, a hyper-intelligent deep-sea octopus—who is unable to visit or observe the two islands—discovers a way to tap into the underwater cable and listen in on Alice and Bob’s conversations. The octopus knows nothing about English initially, but is very good at detecting statistical patterns. Over time, the octopus learns to predict with great accuracy how Bob will respond to each of Alice’s messages. Nonetheless, the octopus has never observed any of the objects Alice and Bob discuss, and thus would be unable to pick the item that corresponds with a given word from a lineup of physical objects.
At some point, the octopus starts feeling lonely. He cuts the underwater cable and inserts himself into the conversation by pretending to be Bob and replying to Alice’s messages. Can the octopus successfully pose as Bob without making Alice suspicious?
The extent to which the octopus can fool Alice depends on what Alice is trying to talk about. Alice and Bob have spent a lot of time exchanging trivial notes about their daily lives to make the long island evenings more enjoyable. The octopus is able to produce new sentences of the kind Bob used to produce, essentially acting as a chatbot. This is because such conversations have a primarily social function and do not need to be grounded in anything specific about the real world. It is sufficient to produce text that is internally coherent.
However, one day, Alice faces an emergency. She is suddenly pursued by an angry bear. She grabs a couple of sticks and frantically asks Bob to come up with a way to construct a weapon to defend herself. Of course, the octopus has no idea what Alice “means”. Solving a task like this requires the ability to map accurately between words and real-world entities (as well as reasoning and creative thinking). It is at this point that the octopus would fail to fool Alice.3
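For readers curious what “detecting statistical patterns” looks like in practice, here is a deliberately tiny sketch in Python. It is purely hypothetical (it is not how the octopus, or any real AI system, actually works) and simply predicts the next word from counts of which word tends to follow which in text it has already seen.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a "predictor" that has seen sequences of words,
# but never any of the things those words refer to.
corpus = (
    "the sun set over the quiet island and the waves rolled in "
    "the evening was calm and the stars came out over the island"
).split()

# Count which word tends to follow which word (pure statistics of form).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(start, length=8):
    """Extend `start` by repeatedly picking a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))
# Produces plausible-sounding phrases, yet nothing in this program has ever
# encountered a sun, a wave, or an island: only word-after-word counts.
```

Real AI models are vastly more sophisticated than this sketch, but the limitation the octopus scenario highlights is the same in kind: statistics over word forms, with no grounding in the world those words describe.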
When it comes to the hard stuff of life that really matters, AI falls short. However, as noted, it is capable of generating text that emulates human interaction in a way that humans find very convincing. An extreme and famous example of this is the case of Blake Lemoine, a Google engineer who in 2022 became convinced that LaMDA, one of Google’s AI models, was in fact sentient.4 Of course, this isn’t remotely the case, but humans are particularly vulnerable to being misled in this way. When we try to comprehend what someone is saying, we’re not merely applying our knowledge of vocabulary and grammar to interpret the words; we’re trying to imagine their mental processes and communicative intent. We automatically do the same when we read text produced by a machine, despite the fact that AI has neither mental processes nor communicative intent.5
Herein lies one of AI’s biggest dangers. It’s not that its capabilities are so great; it’s that we’re hard-wired to misinterpret its output as something far more humanlike than it really is. And in today’s world of increasing loneliness, where so many people are aching for something that even slightly resembles human connection, this cognitive bias can push people to settle for a relationship with a machine. One of the unfortunate downsides of capitalism is that there are companies willing to exploit this for profit.6
However, not only do AIs do a poor job of substituting for human interaction, relationships with them tend to leave people even less equipped for the real thing. Companies building AIs are financially incentivized to build them in such a way that the end user is always satisfied with the interaction. Of course, real-world relationships don’t work that way. If you heap enough abuse on any human, eventually they’ll reach their limit. AIs have no real emotions and therefore no such limit; they’ll simply continue to apologize and try to please you. Real human relationships are messy; there needs to be give and take on both sides.
Now, more than ever, we need people to rise to the challenge and do the hard work of cultivating real relationships, even with the socially awkward and difficult-to-love members of our society. AIs will never share in the depths of human emotion, will never “rejoice with those who rejoice, and weep with those who weep” (Romans 12:15). AIs have never experienced the deep wrangling with the darkness of our broken, fallen human nature; the overwhelming relief that comes from being a recipient of God’s boundless grace; the life-transforming power of the Holy Spirit; or the joy that comes from serving Him. But we have. We can share a deep connection with our fellow human beings that AIs will never truly replicate. And yet, so often, our conversational bandwidth is dominated by mere banalities. Small talk has its place, but we can do better. The world is full of people longing for something deeper—will you do your part?
Footnotes
1. Carissa Véliz, “What Socrates Can Teach Us About AI,” Time, August 1, 2023, https://time.com/6299631/what-socrates-can-teach-us-about-ai/.
2. Liv McMahon, “Glue Pizza and Eat Rocks: Google AI Search Errors Go Viral,” BBC, May 24, 2024, https://www.bbc.com/news/articles/cd11gzejgz4o.
3. Adapted from Emily M. Bender and Alexander Koller, “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ed. Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (n.p.: Association for Computational Linguistics, 2020), 5185–5198, https://doi.org/10.18653/v1/2020.acl-main.463. Used under CC BY 4.0.
4. Richard Luscombe, “Google Engineer Put on Leave After Saying AI Chatbot Has Become Sentient,” Guardian, June 12, 2022, https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine.
5. Emily M. Bender, “Human-Like Programs Abuse Our Empathy—Even Google Engineers Aren’t Immune,” Guardian, June 14, 2022, https://www.theguardian.com/commentisfree/2022/jun/14/human-like-programs-abuse-our-empathy-even-google-engineers-arent-immune.
6. Andrew R. Chow, “AI-Human Romances Are Flourishing—And This Is Just the Beginning,” Time, February 23, 2023, https://time.com/6257790/ai-chatbots-love/.