Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including taking action against what they perceive to be unethical uses of AI (Belfield, 2020; Van Noorden, 2020). Nevertheless, this influential group's attitudes are not well understood, which undermines our ability to discern consensus or disagreement among AI/ML researchers. To examine these researchers' views, we conducted a survey of those who published in the top AI/ML conferences (N = 524). We compare these results with those from a 2016 survey of AI/ML researchers (Grace, Salvatier, Dafoe, Zhang, & Evans, 2018) and a 2018 survey of the US public (Zhang & Dafoe, 2020). We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations to shape the development and use of AI in the public interest; moderate trust in most Western tech companies; and low trust in national militaries, Chinese tech companies, and Facebook. While respondents were overwhelmingly opposed to AI/ML researchers working on lethal autonomous weapons, they were less opposed to researchers working on other military applications of AI, particularly logistics algorithms. A strong majority of respondents think that AI safety research should be prioritized and that ML institutions should conduct pre-publication review to assess potential harms. Being closer to the technology itself, AI/ML researchers are well placed to highlight new risks and develop technical solutions, so this novel attempt to measure their attitudes has broad relevance. The findings should help to improve how researchers, private sector executives, and policymakers think about regulations, governance frameworks, guiding principles, and national and international governance strategies for AI.