Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI

Authors

C. Zoe Cremer and J. Whittlestone
DOI:

https://doi.org/10.9781/ijimai.2021.02.011

Keywords:

AI Governance, Forecasting, Anticipatory Governance, Participatory Technology

Acknowledgements

We thank the reviewers for their particularly detailed comments and engagement with this paper, the scholars at the Leverhulme Centre for the Future of Intelligence for fruitful discussions after our presentation, and the attendees of the workshop 'Evaluating Progress in AI' at the European Conference on AI (Aug. 2020) for recognizing the potential of this work. We particularly thank Carolyn Ashurst and Luke Kemp for their efforts and commentary on our drafts.

Abstract

We propose a method for identifying early warning signs of transformative progress in artificial intelligence (AI), and discuss how these can support the anticipatory and democratic governance of AI. We call these early warning signs 'canaries', after the canaries once used to give miners early warning of dangerous gases in coal mines. Our method combines expert elicitation with collaborative causal graphs to identify key milestones and map the relationships between them. We present two illustrations of how this method could be used: to identify early warnings of the harmful impacts of language models, and of progress towards high-level machine intelligence. Identifying early warning signs of transformative applications can support more efficient monitoring and timely regulation of progress in AI: as AI advances, its impacts on society may become too great to be governed retrospectively. It is essential that those impacted by AI have a say in how it is governed. Early warnings can give the public the time and focus to influence emerging technologies through democratic, participatory technology assessments. We discuss the challenges of identifying early warning signs and propose directions for future work.
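
To make the causal-graph step concrete, here is a minimal sketch of how expert-elicited milestones might be encoded and screened for canaries. It assumes one simple heuristic: a canary is a milestone with no modelled prerequisites, ranked by how many downstream milestones its completion would feed into. The milestone names, the edge list, and the use of the networkx library are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: hypothetical milestones and edges, with one
# simple heuristic for flagging canaries. Not the paper's implementation.
import networkx as nx

# A directed edge (a, b) records an expert judgment that progress on
# milestone a feeds causally into milestone b.
edges = [
    ("few-shot language modelling", "automated persuasive text"),
    ("few-shot language modelling", "cheap synthetic news"),
    ("automated persuasive text", "large-scale disinformation"),
    ("cheap synthetic news", "large-scale disinformation"),
]
graph = nx.DiGraph(edges)

# Heuristic: canaries are milestones with no modelled prerequisites
# (in-degree zero), ranked by the number of downstream milestones
# their completion would propagate to.
canaries = sorted(
    (node for node in graph if graph.in_degree(node) == 0),
    key=lambda node: len(nx.descendants(graph, node)),
    reverse=True,
)

for node in canaries:
    print(f"{node}: {len(nx.descendants(graph, node))} downstream milestones")
```

Any such ranking is only a screening aid for expert and public deliberation, not a substitute for it; other graph measures could serve equally well as the heuristic.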



Published

2021-03-01

How to Cite

Cremer, C. Z. and Whittlestone, J. (2021). Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI. International Journal of Interactive Multimedia and Artificial Intelligence, 6(5), 100–109. https://doi.org/10.9781/ijimai.2021.02.011