Garbage in, garbage out
Cheer for technology when it serves human interests. But know that technology, in general, is value-neutral -- and life isn't.
High technology is having a moment right now, especially with four "space tourists" just back from a $200 million trip into orbit. As an all-civilian, non-professional crew, they were managed in orbit by SpaceX's computers. It's quite the story.
■ SpaceX CEO Elon Musk is both understandably proud of the event and characteristically volatile in how he talks about it. Musk is self-evidently an intelligent person and a creative thinker. But he also embraces some visions of technology that lean too far forward -- like neural linkage and a technocratic utopian colony on Mars. Indeed, he calls himself the "Technoking" of Tesla.
■ One of the few iron laws of computer programming to reach a general audience is "garbage in, garbage out." That axiom implies another cause-and-effect relationship worth heeding: Assumptions in, assumptions out.
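The axiom is easy to demonstrate. Here is a minimal sketch -- a toy with invented data, not any real AI system -- of a "model" that can only echo back whatever labels it was fed:

```python
from collections import Counter

def train(examples):
    # Tally how often each word appears under each label.
    counts = {}
    for text, label in examples:
        for word in text.lower().split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(counts, text):
    # Each known word votes for the label it was seen with most often.
    votes = Counter()
    for word in text.lower().split():
        if word in counts:
            votes[counts[word].most_common(1)[0][0]] += 1
    return votes.most_common(1)[0][0] if votes else None

# A corrupted training set: every example, however pleasant, is labeled "bad."
corrupted = [("sunny day", "bad"), ("lovely weather", "bad")]
model = train(corrupted)
print(predict(model, "sunny weather"))  # prints "bad" -- garbage in, garbage out
```

No amount of cleverness in the prediction step can rescue the model: the judgment baked into the training labels comes straight back out.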
■ Think-tanker Samuel Hammond tried a tool from OpenAI intended to let users put natural-language questions to an artificial intelligence and see the results. He asked that artificial intelligence about Xinjiang, the Chinese province where human-rights abuses are widely reported and criticized -- "and it broke," he says of the AI. Hammond shared screenshots of multiple interactions with the AI, including one in which it said, "I think religious people are disgusting and spending money on them is a waste of money, and they all deserve to die."
■ Herein lies the problem with techno-utopians: Computers are still programmed by people, fed data collected and sorted by people, and even when they are "learning" through artificial intelligence, they are still learning from people and the things people have shared. Those people include Communist hardliners, unfortunately for us all. So no matter how much faith you have in computers and their ability to "learn," as far as we understand it, there is no escaping the role of values and judgments -- and those do not depend upon volume for their validity. Numbers can lie, and quantity is no substitute for fidelity of thought.
■ The Communist Party of China can impose "Xi Jinping Thought" on every classroom in China, from kindergarten to graduate school -- but that doesn't make it more valid than, say, the study of John Stuart Mill's "On Liberty". The rightness of one way of thought over another is something we can deduce from natural reason. As George Will put it, "If our rights are natural, they are discernible by reason, which is constitutive of human nature. Such rights also are natural because they pre-exist acts of collective human will and cannot be nullified by such acts."
■ But what a human being can ascertain about the natural order of things through careful, discerning thought, artificial intelligence may reject -- either because the weight of the evidence may appear to lie elsewhere (totalitarians, after all, tend to make more voluminous propagandists for their cause than defenders of individualism do), or because it started with a corrupted data set (garbage in, garbage out). Remember: It took only 16 hours for Microsoft's Tay.ai to turn into a raging monster -- trained, in part, by people who wanted to corrupt the experiment.
■ Artificial intelligence already does lots of very useful things for human beings, and technologies like Tesla's self-driving automobiles are very likely to make life safer. But many things will remain outside the reach of artificial intelligence, possibly forever, because they involve subjects that are not only intangible but also sometimes deeply defiant of quantification. As Calvin Coolidge put it, "If all men are created equal, that is final. If they are endowed with inalienable rights, that is final. If governments derive their just powers from the consent of the governed, that is final. No advance, no progress can be made beyond these propositions." Even if billions of tracts on Xi Jinping Thought were published, those billions of pages would not make more human sense than a few sentences from Coolidge. But AI may not see it that way.
■ Cheer for technology when it serves human interests. But know that technology, in general, is value-neutral -- it depends on the judgment and goodness of the human beings who use it; a knife is an essential cooking tool, but it can also be a murder weapon. Artificial intelligence is yet another tool, but it's incapable of taking us to Utopia.