A Review of Bias and Fairness in Artificial Intelligence

DOI: https://doi.org/10.9781/ijimai.2023.11.001

Keywords: Bias, Fairness, Responsible Artificial Intelligence

Abstract
The automation of decision systems has introduced hidden biases into the use of artificial intelligence (AI). Consequently, explaining these decisions and identifying who is responsible for them has become a challenge, and a new field of research on algorithmic fairness has emerged. In this area, detecting biases and mitigating them is essential to ensure fair, discrimination-free decisions. This paper makes three contributions: (1) a categorization of biases and of how they are associated with the different phases of an AI model's development (including the data-generation phase); (2) a review of the fairness metrics used to audit data and the AI models trained on them (considering model-agnostic approaches when focusing on fairness); and (3) a novel taxonomy of the procedures for mitigating biases in the different phases of an AI model's development (pre-processing, training, and post-processing), complemented by transversal actions that help to produce fairer models.
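As a minimal illustration of the kind of fairness metrics used to audit a model's decisions (not code from the paper; the function names, the toy data, and the choice of metrics are this sketch's own assumptions), the following computes two common group-fairness measures from binary predictions and a binary protected attribute: the demographic parity difference and the disparate impact ratio.

```python
# Hypothetical sketch of two group-fairness audit metrics.
# preds: model decisions (1 = favourable outcome)
# group: protected attribute (0 = unprivileged, 1 = privileged)

def selection_rate(preds, group, value):
    """Fraction of favourable decisions within one protected group."""
    members = [p for p, g in zip(preds, group) if g == value]
    return sum(members) / len(members)

def demographic_parity_difference(preds, group):
    """|P(yhat=1 | unprivileged) - P(yhat=1 | privileged)|; 0 is parity."""
    return abs(selection_rate(preds, group, 0) - selection_rate(preds, group, 1))

def disparate_impact_ratio(preds, group):
    """Unprivileged selection rate over privileged selection rate;
    the common '80% rule' flags ratios below 0.8."""
    return selection_rate(preds, group, 0) / selection_rate(preds, group, 1)

# Toy audit data: the unprivileged group receives far fewer favourable decisions.
preds = [0, 1, 0, 0, 1, 0, 1, 1]
group = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(preds, group))  # 0.5
print(disparate_impact_ratio(preds, group))         # ~0.33, below the 0.8 rule
```

Metrics of this kind are model-agnostic: they need only the decisions and the protected attribute, not access to the model's internals, which is why they can be applied both to the training data and to any trained model's outputs.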