Elmo touches a ticklish subject
On Sesame Street, mental wellness, and why people will bare their souls to a furry red monster but not to a computer
Someone at Sesame Workshop posted an innocuous comment on a social-media profile operated in the name and voice of the character Elmo: "Elmo is just checking in! How is everybody doing?" Within 36 hours, the post garnered 167 million views and 11,000 replies. Obviously, no adult of sound mind actually thinks they're talking to a furry red monster when interacting with the Twitter profile of a Sesame Street character. And yet the sheer volume of replies to a simple message, no small number of them evidently sincere, says something interesting about mental wellness.
■ The sincerity and the sadness of some of those replies appear to have prompted the Sesame Workshop team to post a follow-up message with a link to a page of mental health resources for children and adults. Perhaps that is not wholly unexpected; in any large population there are always individuals who need help they aren't getting. Others just need a friendly word of encouragement.
■ There are those who think artificial intelligence tools can stand in for human practitioners as mental-wellness providers. It's certainly possible that AI can be useful for tracking symptoms, and perhaps even as a diagnostic aid. But it's folly to think that the deeper job can be done by a machine.
■ Clearly, something deeper is going on in the "Elmo" episode: people aren't really responding to the Sesame Street character per se, but they are most certainly responding to the notion that a human being posted the message and is (presumably) reading the replies.
■ There is simply no way to make an AI program so good that any significant number of people would prefer to engage with it for mental-wellness support rather than with a human being, even a stranger. Are there some people so uncomfortable with interpersonal contact that they might voluntarily turn to a machine? Sure, just as there are those who prefer the company of dolls to real people. But those are extreme cases.
■ Even if it could pass a Turing test, a large language model is still just a computer model. And any ethical framework worth its salt owes patients and clients seeking mental-wellness care the transparency to disclose whether their provider is a machine or a living, breathing person. The human need for human interaction runs so deep, and lies so far beyond substitution, that it will stand as a defining hard limit on technology.