How AI developers can assure algorithmic fairness

Khensani Xivuri, Hosanna Twinomurinzi

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

Artificial intelligence (AI) has rapidly become one of the technologies used for competitive advantage. However, there are growing concerns about bias in AI models, as AI developers risk introducing bias both unintentionally and intentionally. This study, using a qualitative approach, investigated how AI developers can contribute to the development of fair AI models. The key findings reveal that the risk of bias stems mainly from a lack of gender and social diversity in AI development teams and from the haste of AI managers to deliver much-anticipated results. The integrity of AI developers is also critical, as they may conceal bias from management and other AI stakeholders. The testing phase before model deployment also risks bias because it is rarely representative of the diverse societal groups that may be affected. The study makes practical recommendations in four main areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate action to ensure that their AI developers follow fair processes when developing AI; AI developers must prioritise ethical considerations and consider the impact their models may have on society; partnerships should be established between AI developers, AI stakeholders, and the society that might be affected by AI models; and AI developers need to prioritise transparency and explainability in their models while ensuring adequate testing for bias and corrective measures before deployment. Emotional intelligence training should also be provided to AI developers to help them engage in productive conversations with individuals outside the development team.

Original language: English
Article number: 27
Journal: Discover Artificial Intelligence
Volume: 3
Issue number: 1
DOIs
Publication status: Published - Dec 2023

Keywords

  • AI developers
  • Algorithms
  • Artificial intelligence (AI)
  • Domination-free development environment
  • Fairness
  • Jurgen Habermas
  • Process model

ASJC Scopus subject areas

  • Artificial Intelligence
  • Information Systems
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
