Artificial Intelligence: American Attitudes and Trends

Baobao Zhang and Allan Dafoe
Center for the Governance of AI, Future of Humanity Institute, University of Oxford
January 2019
1 Executive summary
Advances in artificial intelligence (AI) could impact nearly all aspects of society: the labor market, transportation, healthcare, education, and national security. AI’s effects may be profoundly positive, but the technology entails risks and disruptions that warrant attention. While technologists and policymakers have begun to discuss AI and applications of machine learning more frequently, public opinion has so far shaped little of these conversations. In the U.S., public sentiment has shaped many policy debates, including those about immigration, free trade, international conflicts, and climate change mitigation. As in these other policy domains, we expect the public to become more influential over time. It is thus vital to better understand how the public thinks about AI and the governance of AI. Such understanding is essential to crafting informed policy and identifying opportunities to educate the public about AI’s character, benefits, and risks.
In this report, we present the results from an extensive look at the American public’s attitudes toward AI and AI governance. As the study of public opinion toward AI is relatively new, we aimed for breadth over depth, with our questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of U.S. public opinion regarding AI. However, our findings raise more questions than they answer; they are more suggestive than conclusive. Accordingly, we recommend caution in interpreting the results and confine ourselves primarily to reporting them. More work is needed to gain a deeper understanding of public opinion toward AI.
Supported by a grant from the Ethics and Governance of AI Fund, we intend to conduct more extensive and intensive surveys in the coming years, including of residents in Europe, China, and other countries. We welcome collaborators, especially experts on particular policy domains, on future surveys. Survey inquiries can be emailed to surveys@governance.ai.
This report is based on findings from a nationally representative survey conducted by the Center for the Governance of AI, housed at the Future of Humanity Institute, University of Oxford, using the survey firm YouGov. The survey was conducted between June 6 and 14, 2018, with a total of 2,000 American adults (18+) completing the survey. The analysis of this survey was pre-registered on the Open Science Framework. Appendix A provides further details regarding the data collection and analysis process.
1.1 Select results
Below we highlight some results from our survey:
Americans express mixed support for the development of AI. After reading a short explanation, a substantial minority (41%) somewhat or strongly support the development of AI, while a smaller minority (22%) somewhat or strongly oppose it.
Demographic characteristics account for substantial variation in support for developing AI. Substantially more support for developing AI is expressed by college graduates (57%) than by those with a high school education or less (29%); by those with larger reported household incomes, such as those earning over $100,000 annually (59%), than by those earning less than $30,000 (33%); by those with computer science or programming experience (58%) than by those without (31%); and by men (47%) than by women (35%). These differences are not easily explained away by other characteristics; they remain robust in our multiple regression analysis.
The overwhelming majority of Americans (82%) believe that robots and/or AI should be carefully managed. This figure is comparable with survey results from EU respondents.
Americans consider all of the thirteen AI governance challenges presented in the survey to be important for governments and technology companies to manage carefully. The governance challenges perceived as most likely to impact people around the world within the next decade, and rated highest in issue importance, were:
Preventing AI-assisted surveillance from violating privacy and civil liberties
Preventing AI from being used to spread fake and harmful content online
Preventing AI cyber attacks against governments, companies, organizations, and individuals
Protecting data privacy
We also asked the same question but focused on the likelihood of each governance challenge impacting Americans specifically (rather than people around the world). Americans perceive all of the governance challenges presented, except for protecting data privacy and ensuring that autonomous vehicles are safe, as slightly more likely to impact people around the world than to impact Americans within the next 10 years.
Americans have discernibly different levels of trust in various organizations to develop and manage AI in the best interests of the public. Broadly, the public puts the most trust in university researchers (50% reporting “a fair amount of confidence” or “a great deal of confidence”) and the U.S. military (49%); followed by scientific organizations, the Partnership on AI, technology companies (excluding Facebook), and intelligence organizations; followed by U.S. federal or state governments and the UN; followed by Facebook.
Americans express mixed support (1) for the U.S. investing more in AI military capabilities and (2) for cooperating with China to avoid the dangers of an AI arms race. Providing respondents with information about the risks of a U.S.-China AI arms race slightly decreases support for the U.S. investing more in AI military capabilities. Neither a pro-nationalist message nor a message about AI’s threat to humanity affected Americans’ policy preferences.
The median respondent predicts that there is a 54% chance that high-level machine intelligence will be developed by 2028. We define high-level machine intelligence as when machines are able to perform almost all tasks that are economically relevant today better than the median human (today) at each task. See Appendix B for a detailed definition.