Digital Ouija
On "Three's Company", the Mandela effect, and the two very different paths that we could take towards digitally preserving and accessing the essence of a person after they die
When actor Suzanne Somers died in October 2023, the artificial-intelligence boom had not yet begun in earnest. ChatGPT was still quite new, and Google had not yet released Gemini. But her husband, Alan Hamel, says they had already discussed creating a digital twin for Somers after her earthly life was over.
Hamel seems to want the Somers chatbot to become just as famous as the real person, and possibly even harder-working. “Suzanne AI”, as he called it to People magazine, is supposed to be rolled out soon to her website, where Hamel says fans “can come and just hang out with her”.
It has been fairly obvious that we were going to land here someday: “talking” to the dead has been an enduring pastime (see: Ouija boards), and the idea of talking to a computer is at least as old as “Star Trek”. There are ways this can be done well, and ways it can be done harmfully.
A program could be trained on an individual’s written and recorded output and subsequently queried, just as one might enter a question into a search engine. For people with lots of writing or recordings to their credit, the resulting database could be quite useful.
If the machine furnishes a response that clearly puts the answer at arm’s length, then that’s probably a net good. Put another way, if the answer could be voiced by any radio or television news anchor, then it’s probably just fine. We could call such a tool a “personality engine”, in the sense that it acts like a search engine for an individual’s personality.
But if a user insists on hearing an answer to their prompt in a synthesized version of a loved one’s voice, or, further still, in virtual video form (not unlike Max Headroom), then there are grave risks of fundamentally scrambling the wiring of human memory. Memory is fragile at best and easily corrupted at worst, and introducing convincing synthetic remakes of deceased people sets a deeply fraught ethical trap, especially when AI hallucinations are already a known hazard.
Once human memory has been corrupted, there is no clear way to repair it. We already live with unreliable eyewitness accounts and the Mandela effect. Matters can only get worse if “Suzanne AI” becomes one of a cast of thousands, or even millions, filling our present with synthetic revivals of the past.



