Blind spots unnoticed
On the Y2K bug, medical research, and the big problem with using predictions to avoid trouble
Current projections indicate that four giant American tech firms will, altogether, spend $320 billion this year alone on computing resources related to artificial intelligence. China-based companies and others are set to spend tens of billions more. Perhaps not since the Y2K crisis has so much money been spent on a single computing goal.
■ There is good reason to be optimistic about the potential of neural networks and machine learning, but any time such an evident arms race is underway, it's worth taking a step back to make sure sound principles will prevail. One area that ought to raise real alarm is the basic framework being used to build these computing systems.
■ The promise of artificial intelligence, beyond performing impressive visual stunts, is that it could be used to enhance and improve upon human decision-making. That's why, for instance, it has lots of appeal in medical research. But if the premise of artificial intelligence is, in essence, that it excels at pattern recognition and prediction, then it is unlikely to give us especially good reasoning about avoiding worst cases.
■ It's one thing to be very good at seeing patterns in the data sets at hand. But life is often a matter of avoiding the worst possible outcomes and steering clear of unlikely but awful events. Doing that well requires a combination of moral imagination and a tolerance for improbability. Always making the best decision is less important than being sure to avoid colossal mistakes.
■ But as it becomes clear that some people are willing to trust artificial intelligence with extremely high-impact decisions, we need to think carefully about the consequences of overconfidence. It's one kind of problem if Google tells people to put glue on their pizzas.
■ It's a much bigger problem if AI is being asked to enact policies (or even write laws) that have enormous effects on people's lives without due procedural regard for the simple question, "What's the worst that could happen, and how are we insuring prudently against it?"
■ That just isn't the strong suit of a computing philosophy that assumes the only thing standing in the way of getting answers right is a shortage of data. Catastrophes averted rarely show up in the data. That creates an enormous systemic blind spot, one we should never expect artificial intelligence to see.