The Good and the Bad of Artificial Intelligence: 10 Need-to-Know Facts

Max Tegmark is a professor of physics at MIT and the co-founder of the Future of Life Institute. Tegmark has been featured in dozens of science documentaries. His most recent book is Life 3.0. Here, Tegmark explores the top 10 things we need to know about Artificial Intelligence as AI becomes a reality we will all need to deal with.

1. AI is coming for your job. To safeguard your career, go for jobs that machines are bad at – involving people, unpredictability and creativity. Avoid careers about to get automated away, involving repetitive or structured actions in a predictable setting: telemarketers, warehouse workers, cashiers, train operators, bakers and line cooks. Drivers of trucks, buses, taxis and Uber/Lyft cars are likely to follow soon. There are many more professions (including paralegals, credit analysts, loan officers, bookkeepers and tax accountants) that, although they aren’t on the endangered list for full extinction, are getting most of their tasks automated and therefore require far fewer humans.

2. Unemployment can be a lifelong vacation. AI progress can produce either a luxurious leisure society for all or unprecedented misery for an unemployable majority, depending on how the AI-produced wealth is taxed and shared.

3. Killer robots aren’t fiction: We’re on the verge of starting an out-of-control arms race in AI-controlled weapons, which can weaken today’s powerful nations by making cheap and convenient assassination machines available to everybody with a full wallet and an axe to grind, including terrorist groups. Leading AI researchers oppose this and want an international AI arms control treaty.

4. Machines don’t have an IQ. Intelligence is the ability to accomplish complex goals. It can’t be quantified by a single number such as an IQ, since different organisms and machines are good at different things. To see this, imagine how you’d react if someone made the absurd claim that the ability to accomplish Olympic-level athletic feats could be quantified by a single number called the “athletic quotient”, or “AQ” for short, so that the Olympian with the highest AQ would win the gold medals in all the sports.

5. AI is getting broader: Today’s AI has mainly narrow intelligence: the ability to accomplish a narrow set of goals such as playing chess or driving, sometimes better than humans. In contrast, humans have general intelligence: the ability to accomplish virtually any goal, including learning. The holy grail of AI research is to develop Artificial General Intelligence (AGI): the ability to accomplish any intellectual task at least as well as humans. Many leading AI researchers think we’re only decades away from AGI.

6. AI might leave us far behind: AGI might rapidly lead to superintelligence, as the British mathematician Irving J. Good explained in 1965: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

7. We’re nowhere near the limits of computation: Computing has gotten a whopping million million million times cheaper since my grandmothers were born. If everything got that much cheaper, then a hundredth of a cent would enable you to buy all goods and services produced on Earth this year. Moore’s Law governs how cheaply we can compute by moving electrons around on two-dimensional silicon wafers, and once this plateaus, there are many other hardware solutions we can try – for example, using three-dimensional circuits and using something other than electrons to do our bidding. We’re still a million billion billion billion times below the ultimate limits on computation from the laws of physics.
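
To see how that arithmetic works out, here is a rough back-of-the-envelope check. It assumes annual world economic output of roughly $100 trillion, a figure not taken from the article and used only for illustration:

```python
# Rough sanity check of the cost-reduction claim in point 7.
# Assumption (not from the article): annual world economic output ~ $100 trillion.

cost_reduction_factor = 10**18        # "a million million million times cheaper"
hundredth_of_a_cent = 0.01 / 100      # $0.0001

# If everything had gotten 10^18 times cheaper, what would a hundredth
# of a cent buy at today's prices?
buying_power_in_dollars = hundredth_of_a_cent * cost_reduction_factor

world_output_in_dollars = 100e12      # assumed ~$100 trillion per year

print(f"A hundredth of a cent would buy about ${buying_power_in_dollars:.1e}")
print(f"Assumed annual world output:          ${world_output_in_dollars:.1e}")
```

Both numbers come out to about $10^14, i.e. on the order of a hundred trillion dollars, which is why a hundredth of a cent would roughly cover a year of global output under that assumption.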

8. AI can help humanity flourish: Since everything we love about civilization is the product of intelligence, amplifying our own intelligence with AI has the potential to help life flourish like never before, solving our thorniest problems, from disease to climate, justice and poverty.

9. AI poses risks: The Hollywood-fueled fear of machines turning conscious and evil is a red herring. The real worry isn’t malevolence, but competence. Superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

10. We need AI safety research: To ensure that AI remains beneficial as its impact on society grows, more AI safety research is urgently needed. For example, how do we transform today’s buggy and hackable computers into robust AI systems that we can really trust? How can we make machines learn, adopt and retain our goals? These are challenging questions that may take decades to answer, so we should start a crash research effort now to ensure that we have the answers when we need them. But relative to the billions being spent on making AI more powerful, governments have so far earmarked close to no funding for AI safety research.
