The Arrival of Artificial Intelligence

Chris Dixon opened a truly wonderful piece in the Atlantic entitled How Aristotle Created the Computer like this:

Dixon goes on to describe the creation of Boolean logic (which has only two variables: TRUE and FALSE, represented as 1 and 0 respectively), and the insight by Claude E. Shannon that those two variables could be represented by a circuit, which itself has only two states: open and closed. Dixon writes:

Dixon is being modest: the distinction may be obvious to computer scientists, but it is precisely the clear articulation of said distinction that undergirds Dixon’s remarkable essay; obviously “computers” as popularly conceptualized were not invented by Aristotle, but he created the means by which they would work (or, more accurately, set humanity down that path).

Moreover, you could characterize Shannon’s insight in the opposite direction: distinguishing the logical and the physical layers depends on the realization that they can be two pieces of a whole. That is, Shannon identified how the logical and the physical could be fused into what we now know as a computer.
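To see that fusion in miniature, here is a small Python sketch of my own (not from Dixon’s essay or Shannon’s work): Boolean logic written as functions over 1 and 0, then composed into a half adder, the simplest arrangement in which pure logic performs arithmetic, just as a pair of switches can.

```python
# A toy illustration: Boolean operations as functions over 1/0, composed
# into a half adder. This is Shannon's insight in miniature -- the same
# structure can be built out of physical switches.

def AND(a: int, b: int) -> int:
    """1 only if both inputs are 1 (both switches closed)."""
    return a & b

def XOR(a: int, b: int) -> int:
    """1 if exactly one input is 1."""
    return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit numbers, returning (sum, carry)."""
    return XOR(a, b), AND(a, b)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, carry = half_adder(a, b)
            print(f"{a} + {b} -> sum {s}, carry {carry}")
```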

To that end, the dramatic improvement in the physical design of circuits (first and foremost the invention of the transistor and the subsequent application of Moore’s Law) by definition meant a dramatic increase in the speed with which logic could be applied. Or, to put it in human terms, how quickly computers could think.

Earlier this week U.S. Treasury Secretary Steve Mnuchin, in the words of Dan Primack, “breezily dismissed the notion that AI and machine learning will soon replace wide swaths of workers, saying that ‘it’s not even on our radar screen’ because it’s an issue that is ’50 or 100 years’ away.”

Naturally most of the tech industry was aghast: doesn’t Mnuchin read the seemingly endless announcements of artificial intelligence initiatives and startups on TechCrunch?

Then again, maybe Mnuchin’s view makes more sense than you might think; just read this piece by Maureen Dowd in Vanity Fair entitled Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse:

The rest of the article is preoccupied with the question of what might happen if computers are smarter than humans; Dowd quotes Stuart Russell to explain why she is documenting the debate now:

50 years: that’s the same timeline as Mnuchin; perhaps he is worried about the same things as Elon Musk? And, frankly, should the Treasury Secretary concern himself with such things?

The problem is obvious: it’s not clear what “artificial intelligence” means.

Artificial intelligence is very difficult to define, for a few reasons. The first is that there are two types of artificial intelligence: the artificial intelligence described in that Vanity Fair article is Artificial General Intelligence, that is, a computer capable of doing anything a human can. That is in contrast to Artificial Narrow Intelligence, in which a computer does what a human can do, but only within narrow bounds. For example, one specialized AI can play chess, while a different specialized AI can play Go.

What is kind of amusing — and telling — is that, as John McCarthy, who coined the term “Artificial Intelligence”, noted, the definition of specialized AI is changing all of the time. Specifically, once a task formerly thought to characterize artificial intelligence becomes routine — like the aforementioned chess-playing, or Go, or a myriad of other taken-for-granted computer abilities — we no longer call it artificial intelligence.

That makes it especially hard to tell where computers end and artificial intelligence begins. After all, accounting used to be done by hand.

Within a decade, hand accounting was obsolete, replaced by an IBM mainframe. A computer was doing what a human could do, albeit within narrow bounds. Was it artificial intelligence?

In fact, we already have a better word for this kind of innovation: technology. Technology, to use Merriam-Webster’s definition, is “the practical application of knowledge especially in a particular area.” The story of technology is the story of humanity: the ability to control fire, the wheel, clubs for fighting — all are technology. All transformed the human race, thanks to our ability to learn and transmit knowledge; once one human could control fire, it was only a matter of time until all humans could.

It was technology that transformed Homo sapiens from hunter-gatherers into farmers, and it was technology that transformed farming such that an ever smaller percentage of the population could support the rest. Many millennia later, it was technology that led to the creation of tools like the flying shuttle, which doubled the output of weavers, driving up the demand for spinners, which in turn drove innovations of their own, like the water-powered roller spinning frame. For the first time humans were leveraging non-human and non-animal forms of energy to drive their technological inventions, setting off the industrial revolution.

You can see the parallels between the industrial revolution and the invention of the computer: the former brought external energy to bear in a systematic way on physical activities formerly done by humans; the latter brings external energy to bear in a systematic way on mental activities formerly done by humans. Recall the analogy made by Steve Jobs:

I remember reading an article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet Earth, how many kilocalories did they expend to get from point A to point B. And the condor came in at the top of the list, it surpassed everything else, and humans came in about a third of the way down the list, which was not such a great showing for the crown of creation. But somebody there had the imagination to test the efficiency of a human riding a bicycle. The human riding a bicycle blew away the condor, all the way off the top of the list, and it made a really big impression on me that we humans are tool builders, and we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes. And so for me, a computer has always been a bicycle of the mind.

In short, while Dixon traced the logic of computers back to Aristotle, the very idea of technology — of which, without question, computers are a part — goes back even further. Creating tools that do what we could do ourselves, but better and more efficiently, is what makes us human.

That definition, you’ll note, is remarkably similar to that of artificial intelligence; indeed, it’s tempting to argue that artificial intelligence, at least the narrow variety, is simply technology by a different name. Just as we designed the cotton gin, so we designed accounting software, and automated manufacturing. And, in fact, those are all related: all involved overt design, in which a human anticipated the functionality and built a machine that could execute that functionality on a repeatable basis.

That, though, is why today is different.

Recall that while logic was developed over thousands of years, it was only part way through the 20th century that said logic was fused with physical circuits. Once that happened the application of that logic progressed unbelievably quickly.

Technology, meanwhile, has been under development for even longer than logic has. However, just as the application of logic was long bound by the human mind, the development of technology was subject to the same limitation, and that includes the first half-century of the computer era. Accounting software is in the same genre as the spinning frame: deliberately designed by humans to solve a specific problem.

Machine learning is different. Now, instead of humans designing algorithms to be executed by a computer, the computer is designing the algorithms. It is still Artificial Narrow Intelligence — the computer is bound by the data and goal given to it by humans — but machine learning is, in my mind, meaningfully different from what has come before. Just as Shannon fused the physical with the logical to make the computer, machine learning fuses the development of tools with computers themselves to make (narrow) artificial intelligence.
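To make that difference concrete, here is a minimal Python sketch; the messages, labels, and the hand-written “spam” rule are all invented for illustration. The first function is technology in the traditional sense, a rule a human anticipated and wrote down; the second is given only examples and a goal, and derives its own rule, in the form of per-word weights, from the data.

```python
# Hand-designed rule vs. a rule learned from data (a toy perceptron).
# All data and rules here are invented for illustration.

def human_designed_filter(message: str) -> bool:
    """Spam rule a human anticipated and wrote down in advance."""
    text = message.lower()
    return "winner" in text or "free money" in text

def learn_filter(examples, epochs=20):
    """Derive a spam rule from labeled examples via perceptron-style updates."""
    weights, bias = {}, 0.0
    for _ in range(epochs):
        for text, is_spam in examples:
            words = text.lower().split()
            score = bias + sum(weights.get(w, 0.0) for w in words)
            if (score > 0) != is_spam:        # wrong guess: nudge the rule
                step = 1.0 if is_spam else -1.0
                bias += step
                for w in words:
                    weights[w] = weights.get(w, 0.0) + step

    def learned_filter(message: str) -> bool:
        words = message.lower().split()
        return bias + sum(weights.get(w, 0.0) for w in words) > 0

    return learned_filter

if __name__ == "__main__":
    training_data = [
        ("you are a winner claim your prize", True),
        ("free money waiting for you", True),
        ("meeting moved to thursday afternoon", False),
        ("quarterly accounting report attached", False),
    ]
    learned = learn_filter(training_data)
    message = "winner winner claim your free prize"
    print(human_designed_filter(message), learned(message))
```

Both filters end up doing the same job; the difference is that in the second case the human supplied only data and a goal, and the rule itself fell out of the training loop.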

This is not to overhype machine learning: the applications are still highly bound and often worse than human-designed systems, and we are far, far away from Artificial General Intelligence. It seems clear to me, though, that we are firmly in Artificial Narrow Intelligence territory: the truth is that humans have made machines to replace their own labor from the beginning of time; it is only now that the machines are creating themselves, at least to a degree.

The reason this matters is that pure technology is hard enough to manage: the price we pay for technological progress is all of the humans who are no longer necessary. To that end, I don’t think it was a coincidence that the industrial revolution was followed by three centuries of war.

What, then, are the implications of machine learning, that is, the (relatively speaking) fantastically fast creation of algorithms that can replace a huge number of jobs that generate data (data being the key ingredient in creating said algorithms)? To date automation has displaced blue-collar workers; are we prepared for machine learning to displace huge numbers of white-collar ones?

This is why Mnuchin’s comment was so disturbing; it is also why the obsession of so many technologists with Artificial General Intelligence is just as frustrating. I get the worry that computers far more intelligent than any human will kill us all; more people, though, should be concerned about the imminent creation of a world that makes huge swathes of people redundant. How many will care if artificial intelligence destroys life if it has already destroyed meaning?
