Malice or ignorance?
On pets, chatbots, and what it means when someone asks you to stop wanting to be fully human
Out on the fringes of technology applications, it can be difficult to tell who’s acting out of ignorant stupidity and who’s acting out of real nefarious intent. While in life it is generally prudent to follow Hanlon’s Razor (“Never ascribe to malice that which can be equally explained by stupidity”), cutting-edge technological applications invite us to take a second look at malice. There tends to be such overwhelming belief in first-mover advantage that clever malice cloaks itself in the costume of stupidity, hoping to get away with it.
Take the rise of artificial-intelligence chatbots as purported substitutes for human friends. There are ads out in the wild, hoping to ensnare people in engagement with computer programs that are very good at predicting the next word in a sentence. And they’re doing it with the aid of language like, “Less needy than a dog but just as curious.”
That’s the phrase that triggers the question: Stupid or malicious? One of the central things we know about friendship is that it is deeply important to feel needed by others. It is evidence that you and your actions matter to the world!
That a dog needs you is a feature of the relationship, not a bug. To have a dog is to have another being depend on you. For a great many people, that is much of the point of having a pet: being needed is psychologically meaningful.
Perhaps the marketing writers for apps like “Friend” know their Viktor Frankl (“The more one forgets himself -- by giving himself to a cause to serve or another person to love -- the more human he is and the more he actualizes himself”), and perhaps they do not. But in framing “friendship” as something devoid of interdependence, they betray either an extremely alarming ignorance of psychology, or an even more alarming choice to actively undermine it.
These “AI friend” apps are selling nothing more than a chatty Tamagotchi, minus the psychic reward. There certainly are defensible use cases for automated response systems, as long as the humans involved understand that they are using tools, not making “friends” with a ghost inside the computer. But when a technology expressly invites people to let down their defenses and short-circuit the meaning of normal human interaction, it’s time to ask aloud whether something more sinister is afoot.


