Wheels of Morality: AI and Biases
By: Megha Jain
Every few years, one of my teachers poses the same moral dilemma: a speeding driver (suppose it is me) faces two choices. I can either continue straight and potentially harm a helpless baby crawling in my path, or swerve left and endanger a group of elderly grandmothers peacefully walking together. The fundamental question remains: which life should the driver prioritize? My answers have varied over the years, shaped by different lines of reasoning that are usually colored by my own biases. For instance, a baby has a longer future ahead of it, but five lives weigh more than one. That my answers fluctuate is, I believe, perfectly human, since these hypothetical decisions carry no real consequences. But what happens when individuals' biases find their way into artificial intelligence and affect real-life situations?
Well, there is no better way to answer this question than to think about self-driving cars. Over the past few years, driverless cars have been stealing headlines as the things of the future. These vehicles promise to reduce human error and enhance safety. However, a 2022 report by the insurer AXA revealed a surprising statistic: autonomous cars were responsible for 50% more accidents than human-driven vehicles, and those accidents were more prevalent in the USA. In my view, the reason behind this trend is rooted in the programmers who develop the software driving these vehicles: their biases shape how the machine behaves.
In MIT's Moral Machine experiment, which posed a moral dilemma much like the one above and was reported by MIT Technology Review, researchers collected some 40 million decisions. They discovered that preferences varied across countries according to cultural values and economic status. For instance, respondents in Japan and China were less likely to spare the young over the elderly, reflecting a profound respect for older generations. So what about the babies? Other countries weighed the dilemma differently, as the accompanying image shows. The reason this dilemma keeps coming up is a widespread misconception about self-driving cars: many believe that artificial intelligence possesses independent decision-making abilities. It does not. The AI doing the so-called driving is just software carrying out a task, and the people who create that software inadvertently put their own biases into the system. Training driverless cars on such subjective biases is not fair, because moral and ethical questions are deeply complicated even for humans. Having AI rely on "its own moral and ethical grounds" is unthinkable for now.
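To make this concrete, here is a toy sketch, not drawn from any real vehicle's code, of how a swerve-or-stay choice could come down to a hand-written priority table. The `HARM` weights and function names are entirely invented for illustration; changing the weights changes the "moral" outcome without changing the AI at all.

```python
# Toy illustration only: all names and weights below are invented, not any
# real autonomous-driving system's logic. The point is that the outcome is
# decided by the programmer's hard-coded values, not by "AI judgment".

def choose_action(straight_ahead, swerve_path):
    """Return the path ("straight" or "swerve") whose occupants incur
    the lower total harm under a hard-coded weighting."""
    # Invented weights encoding one programmer's values:
    # hitting a child is treated as ten times worse than hitting an elder.
    HARM = {"child": 10, "adult": 2, "elderly": 1}

    def cost(group):
        return sum(HARM[person] for person in group)

    # The software simply minimizes the hand-coded cost; no ethics of its own.
    return "straight" if cost(straight_ahead) < cost(swerve_path) else "swerve"

# One child straight ahead vs. five elderly pedestrians on the swerve path.
print(choose_action(["child"], ["elderly"] * 5))  # prints "swerve"
```

A programmer who shared the preferences reported for Japan or China might weight the elderly more heavily, and the very same software would then make the opposite choice.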
Furthermore, something else I find fascinating in this discussion is the widespread belief that AI will save our world. To my mind, saving the world means eliminating existing societal problems and creating a level playing field for everyone. Yet AI, having no mind of its own, struggles just as humans do to navigate ethical and moral questions. Research from the Georgia Institute of Technology found that state-of-the-art detection systems, such as the sensors and cameras used in self-driving cars, detect people with lighter skin tones more reliably. The authors note that this makes the cars less likely to spot Black pedestrians and stop before crashing into them. This study raises a critical question: if AI cannot perceive people equally, should such technology roam freely on our streets? For now, AI does not have the potential to save the world; instead, it compounds our existing problems, underscoring the challenge of integrating technology ethically into our everyday lives.
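The detection gap the Georgia Tech researchers describe is typically quantified as a per-group detection rate: the fraction of pedestrians in each group that the system actually found. This short sketch uses invented numbers, not their data, to show how such a gap would be measured:

```python
# Sketch of computing per-group detection rates. The data below is
# hypothetical and for illustration only, not the Georgia Tech results.

def detection_rates(results):
    """results: list of (group, detected) pairs -> per-group detection rate."""
    found, total = {}, {}
    for group, detected in results:
        total[group] = total.get(group, 0) + 1
        found[group] = found.get(group, 0) + int(detected)
    return {g: found[g] / total[g] for g in total}

# Hypothetical outcomes for 10 pedestrians in each skin-tone group:
# 9 of 10 lighter-skinned pedestrians detected, 7 of 10 darker-skinned.
results = ([("lighter", True)] * 9 + [("lighter", False)]
           + [("darker", True)] * 7 + [("darker", False)] * 3)
print(detection_rates(results))  # {'lighter': 0.9, 'darker': 0.7}
```

Any consistent gap between the two rates means the system is systematically less safe for one group of pedestrians than the other.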
I think a solution to this problem is easier said than done. However, I believe we should not only train the algorithms more carefully but also educate the individuals who build them about their own biases and how those biases will shape their technology.