Is AI Bias Just a Bug?

By Jack Harrington


Imagine a world where technology not only mirrors but magnifies our imperfections, where the digital decisions that shape our lives are tinted by the same biases that cloud human judgment. This is not a dystopia, but our reality. Artificial intelligence (AI) takes on roles from hiring to healthcare with algorithms that amplify human bias, leaving us with a question: is AI’s fairness a technical challenge or a moral dilemma?


Artificial intelligence’s complexities are hidden in black-box algorithms that prevent an open understanding of how the models work. Riadh Habash, a professor of engineering at the University of Ottawa, defines black-box models as “models without significant parameters” that are “developed based on statistical models by quantifying historical data parameters to find an optimal pattern.” In simpler terms, they analyze patterns in data without giving users insight into their internal workings. From this opacity alone, it becomes evident that there is no simple programming fix for the inherent bias.


The lack of significant parameters means that humans cannot easily interpret the algorithm’s behavior, which makes it difficult to pinpoint the sources of bias the AI draws on. By contrast, a deterministic, white-box algorithm follows explicit rules and produces traceable results for any given input, creating a layer of transparency that developers can use as a window to verify the algorithm’s correctness.
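To make the contrast concrete, here is a minimal sketch, assuming scikit-learn is available; the dataset is synthetic and the feature names are invented. A small decision tree plays the white-box role, since its fitted if/then rules can be printed and audited, while a small neural network plays the black-box role, since its learned weights offer no readable account of any individual decision.

```python
# Minimal sketch: white-box vs. black-box models (synthetic data,
# invented feature names; assumes scikit-learn is installed).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# White box: the fitted tree exposes human-readable if/then rules
# that an auditor can inspect line by line.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))

# Black box: the fitted network is only matrices of weights; nothing
# here explains why any single input was classified the way it was.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                    random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])  # e.g. [(4, 32), (32, 32), (32, 1)]
```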


AI is further complicated by the datasets used to train it. According to Communications of the ACM, a publication covering developments in AI and machine learning, there are numerous sources of error in data collection, ranging from improper sampling and measurement inaccuracies to mislabeling and framing-effect biases. On top of these errors, most of the data derives from human input, meaning the algorithms naturally absorb human bias and carry an amalgamation of patterned biases into every problem they are tasked with solving.
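As a toy illustration of how labeling bias propagates, here is a minimal sketch using only NumPy, with all numbers invented: two groups are given identical underlying skill, but the historical labels under-select one group, and anything fit to those labels inherits the gap.

```python
# Toy sketch (pure NumPy, invented numbers): biased historical labels
# produce a biased "model" even when underlying skill is identical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)    # two arbitrary groups, 0 and 1
skill = rng.normal(0.0, 1.0, n)  # true skill, same distribution for both

# Historical labels: equally skilled members of group 1 were hired less
# often, so the bias lives in the labels, not in the skill.
hired = (skill + rng.normal(0.0, 0.5, n) - 0.8 * group) > 0

# The simplest possible "trained model": per-group hire rates.
for g in (0, 1):
    rate = hired[group == g].mean()
    print(f"group {g}: historical hire rate = {rate:.2f}")
# Prints roughly 0.50 for group 0 and 0.24 for group 1; any model fit
# to these labels learns to reproduce that gap.
```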


The lack of explainability and the numerous sources of error in data collection raise serious concerns, especially when the stakes are high. Resume screening is a recent application of artificial intelligence that touches nearly all of us: for many internships and jobs you apply to, the preliminary screening is performed by AI, which meticulously parses your resume for patterns indicative of the “perfect candidate” and reports on them. These “perfect candidate” patterns are largely unknown to applicants and are defined by the black-box algorithm inside the screening tool. This obscurity risks overlooking qualified candidates and perpetuating biases. Ignacio Fernandez Cruz, an assistant professor of communication at Northwestern University, has highlighted that “there is no industry standard or mandated guidance on the design and deployment of algorithmic hiring tools in recruiting.”
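As a hypothetical illustration (every keyword, weight, and resume below is invented), even a fully transparent keyword scorer shows how a “perfect candidate” pattern learned from past hires can reward proxies for demographics rather than skill; a black-box screener can encode the same proxies with far less visibility.

```python
# Hypothetical toy screener (all keywords, weights, and resumes invented).
# The pattern was "learned" from past hires, so it rewards a social proxy
# ("lacrosse club") alongside genuine skills.
PATTERN = {"python": 3, "sql": 2, "lacrosse club": 2}

def score(resume: str) -> int:
    """Sum the weights of every pattern keyword found in the resume."""
    text = resume.lower()
    return sum(w for keyword, w in PATTERN.items() if keyword in text)

a = "Python and SQL developer with five years of data work"
b = "Python and SQL developer with five years of data work; lacrosse club captain"
print(score(a), score(b))  # 5 vs. 7: identical skills, different rankings
```

Here the proxy is at least visible in the weights; in a black-box screening tool, the same effect would be buried in parameters no applicant can inspect.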


Regulations were only enacted in New York City last year with the Automated Employment Decision Tool Law. It states that employers must inform candidates when AI screening tools are used in hiring or risk a fine of up to $1,500 and an audit of the system; however, this is likely an ineffective check on these tools: third-party audits of black-box systems are difficult because the systems are highly complex, and the fines are small.


Legislation is a crucial step toward governing these algorithms in the future, but a greater focus must be placed on transparency and on the data sources themselves. To control the bias, the data sources and the algorithms must be exposed to the people they are used on. This would help alleviate some of the moral and ethical concerns in the applications of artificial intelligence.



Works Cited


Cruz, Ignacio Fernandez. “How Process Experts Enable and Constrain Fairness in AI-Driven Hiring.” International Journal of Communication, vol. 18, 2024. 

Ryan-Mosley, Tate. “Why Everyone Is Mad about New York’s AI Hiring Law.” MIT Technology Review, 7 July 2023, www.technologyreview.com/2023/07/10/1076013/new-york-ai-hiring-law/.

Srinivasan, Ramya, and Ajay Chander. “Biases in AI systems.” Communications of the ACM, vol. 64, no. 8, 26 July 2021, pp. 44–49, https://doi.org/10.1145/3464903. 

Habash, Riadh W. Y. Sustainability and Health in Intelligent Buildings. Woodhead Publishing, 2022.

