The ultimate kill switch is inside us
On nursing, founder feuds, and the need to figure out how our own brains work before AI breaks us
There is no obvious way to make it profitable for a private-sector firm, but among the most valuable investments society could make in light of the artificial intelligence boom (and the perils, both known and unknown, that go with it) is to commit to a Manhattan Project-like effort to advance the science of psychology. Much of AI is purportedly built on the concept of neural networks patterned on the human brain, yet we hardly know enough about how the brain functions even to describe how neural networking works.
■ Most arts and sciences seem to go through a similar series of developmental phases. It starts with the initial establishment of the discipline, usually under a founding theoretician or school (think Florence Nightingale and the modern practice of nursing). Then comes a juvenile stage, defined by the prominence of individual authors, often endorsing competing theories (see, for instance, the economic rivalry between the schools of Hayek and Keynes). Then comes an adolescence, in which the second or third generation of experts starts to harmonize or unify the proven aspects of those early theories as new supplementary ideas blossom. Ultimately, most sciences arrive at a stage of maturity in which a fairly broad consensus prevails on the fundamentals while disagreements persist over the frontiers of the science.
■ Meteorology? A mature science. Economics? Probably somewhere in adolescence. AI? Still just a baby -- one in which not only are most of the founders still alive, but some (like Elon Musk and Sam Altman) are still actively feuding with each other.
■ Psychology still seems to be in that juvenile phase -- there are still Jungians and Freudians and logotherapists and many other fragmented schools of thought. That fragmentation is on display in fields like business and education, which depend heavily on psychology but often still don't know, with much confidence, what to do with it.
■ The real hazard for us could well become evident if AI science (which is well-funded and full of ambitious researchers with loads of incentives) matures faster than psychology. We have magnificent brains that are the product of millions of years of evolution, but computers process so quickly that every human second is like years or even decades to a neural network. If we don't really know ourselves, how will we know where we stand in comparison with computers?
■ There's lots of talk about an AI "kill switch", as well there ought to be. But the ultimate kill switch is for us to know ourselves better than our tools know us. There's less monetary incentive driving toward that goal, and certainly less energy. It would do us well to close that gap.