Your private information
On iPhones, ChatGPT, and the turning point in AI development that might allow the privacy-minded to calm down a bit
Elon Musk is threatening to ban iPhones from his companies if Apple goes through with plans to integrate OpenAI's ChatGPT into its best-known devices. It is one of many artificial-intelligence headlines capturing a disproportionate share of public and news-media attention right now.
■ Musk points to privacy concerns as the root of his reaction, and his ventures do indeed depend heavily upon proprietary information and processes. Apple says that almost everything new it intends to enable on its devices will be computed on the device itself, essentially answering those privacy concerns from the very start. If little or nothing is submitted to or processed in the cloud, then the device might arguably be seen as little more than a private extension of the user's own mind.
■ But what the controversy cannot really address is a more fundamental question: What is the ultimate calling for these technologies? We call the whole basket of them "artificial intelligence tools", but to a considerable degree, large language models aren't really generating new ideas. In many cases, they are being used to draw useful connections -- to some extent, even to synthesize the questions that human users ought to be asking.
■ Ultimately, though, as with human creators, the lasting merit will be found in generating thoroughly novel ideas. We will know something new and different is happening when artificial intelligence can come up with an eighth basic narrative plot, one never previously explored. Until then, it will mostly function to reassemble the information that humans have already cast into the world -- which will be of little comfort to those who, like Musk, believe their information is too valuable to let leak.