Attesting Digital Discrimination Using Norms.
DOI: https://doi.org/10.9781/ijimai.2021.02.008

Keywords: Discrimination, Attestation, Norms, Machine Learning

Abstract
More and more decisions are being delegated to Machine Learning (ML) and automatic decision systems. Despite initial misconceptions that these systems are unbiased and fair, recent cases, such as racist algorithms being used to inform parole decisions in the US, low-income neighborhoods being targeted with high-interest loans and low credit scores, and women being undervalued by online marketing, have fueled public distrust in machine learning. This poses a significant challenge to the adoption of ML by companies and public sector organisations, despite ML's potential to deliver significant cost reductions and more efficient decisions, and it is motivating research in the area of algorithmic fairness and fair ML. Much of that research provides detailed statistics, metrics and algorithms that are difficult to interpret and use by someone without technical skills. This paper aims to bridge the gap between lay users and fairness metrics by using simpler notions and concepts to represent and reason about digital discrimination. In particular, we use norms as an abstraction to communicate situations that may lead to algorithms committing discrimination: we formalise non-discrimination norms in the context of ML systems and propose an algorithm to attest whether ML systems violate these norms.
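To give a flavour of what attesting a non-discrimination norm against a model's outputs can look like in practice, the sketch below checks a simple group-parity condition (the "80% rule" commonly associated with disparate impact). It is only an illustration under assumed names and toy data: the function names (disparate_impact_ratio, violates_parity_norm), the threshold, and the check itself are not the paper's formalisation or attestation algorithm.

```python
# Illustrative sketch only: a minimal group-parity check over a classifier's
# binary outcomes, not the norm formalisation or attestation algorithm from
# the paper. All names and data here are assumptions for illustration.
from typing import Sequence


def disparate_impact_ratio(predictions: Sequence[int],
                           protected: Sequence[bool]) -> float:
    """Ratio of favourable-outcome rates: protected group vs. the rest."""
    fav_protected = [p for p, g in zip(predictions, protected) if g]
    fav_other = [p for p, g in zip(predictions, protected) if not g]
    rate_protected = sum(fav_protected) / len(fav_protected)
    rate_other = sum(fav_other) / len(fav_other)
    return rate_protected / rate_other


def violates_parity_norm(predictions: Sequence[int],
                         protected: Sequence[bool],
                         threshold: float = 0.8) -> bool:
    """Flag a violation of a (hypothetical) parity norm when the
    disparate-impact ratio falls below the chosen threshold; 0.8
    echoes the '80% rule' often used for disparate impact."""
    return disparate_impact_ratio(predictions, protected) < threshold


# Toy usage: 1 = favourable decision, True = member of the protected group.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
prot = [True, True, True, False, False, False, False, False]
print(violates_parity_norm(preds, prot))  # False for this toy data
```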