Ridding Vehicles of AI Bias
March 17, 2021—Artificial intelligence is slowly making its way into today’s vehicles, opening a plethora of new autonomous opportunities but also highlighting the dangers of biases held by the systems’ creators.
A recent webinar from Partners for Automated Vehicle Education tackled this exact topic and the precedent it could set if not addressed.
“These systems carry a lot of responsibility and the prospect that these systems could be biased is scary,” said Ed Niedermeyer, director of communications for PAVE and the moderator of the live event.
Bias in AVs
Nandita Mangal, platform owner of the HMI Vehicle Experience at Aptiv, said AI bias in AVs can be dangerous because an autonomous vehicle’s AI system is meant to recognize facial features, and even emotions, in order to alert drivers when they become distracted.
“Facial recognition is more than a system recognizing if the driver’s eyes are open or not, it goes deeper to understand the state of the driver,” she said.
Mangal said facial recognition needs to be designed to account for everyone, not just the developers. Just as race varies, so do cultural gestures, facial expressions, and more.
Where does bias come from?
Mangal said there are certain biases that society has deemed “acceptable.”
For example, teenage drivers often pay higher insurance rates because of their inexperience, she said. But someone who lives in a high-crime area does not pay higher rates because of that risk, she explained.
AI bias is already present in some ways, she continued, citing AI-populated job recommendations as an example. When people search for new jobs, Mangal said, auto-populated suggestions more often show women entry-level positions, while men see openings for manager, CEO, and other high-ranking titles.
“At the end of the day, it’s not the machine-learning AI that is biased,” she said. “It’s the developers who designed it a certain way and gave it certain annotations or labels for the data. Sometimes they don’t realize that bias existed, but it percolates into the system.”
Leslie Nooteboom, co-founder and chief product officer at Humanising Autonomy, a company that aims to put human behavior at the center of mobility ecosystems, said the first way to avoid bias is by having a diverse team.
“How can you have a global understanding of people, if you don’t have those people in your team?” he asked.
Mangal said a diverse team is important not only demographically but also in its range of opinions. To fully address biases, she said, teams need to include social scientists, researchers, and others who can actively discuss the issues.
The other primary factor in countering bias is sharing data, Nooteboom said.
Data sharing is important because it provides perspective from across the globe, not just one’s own backyard, Nooteboom said. He also emphasized that stakeholders need to trust the data rather than their own preconceived biases.
“Once you have all of these organizations, not just companies but governments sharing data, then you will have a far better understanding of how this technology could be rolled out,” he said.