
Coded Bias: Is Artificial Intelligence Reliable?

Artificial Intelligence (AI) has been a hot topic in the technology world for quite a while. We see AI used to describe everything from video conferencing camera tracking to electronic systems automation, and from facial recognition deployed with digital signage content to predictive search engine results and social media posts.

With AI being deployed in nearly every aspect of our lives, an important question arises.

Is Artificial Intelligence Bias-Free and Reliable?

This is the very question that the documentary Coded Bias endeavors to tackle, and its conclusions may or may not surprise you.

First off, let me say that the film is worth 90 minutes of your time, if only for the book and organization titles it references. I mean, where else will you learn about Artificial Unintelligence, Weapons of Math Destruction, Twitter and Tear Gas, Big Brother Watch, and the Algorithmic Justice League?

There are a lot of themes in the documentary revolving around social justice, surveillance culture, privacy law, authoritarianism, and corporate greed, all of which relate to the way today’s AI is being deployed. I’ll leave those for you to explore as you watch Coded Bias for yourself; however, I want to tackle the technology itself, to see how good (or bad) it actually is.

You may have guessed already that the film’s conclusion is that AI is not all it is said to be, or all it should be. It exposes four major reasons why artificial intelligence just isn’t that intelligent after all.

Access

A good standard for the value of a technology is accessibility. Is the technology deployed equally across demographics?

Dr. Joy Buolamwini, formerly of MIT’s Media Lab, quickly found while working on an extended reality (XR) mirror project that AI, specifically facial recognition, would not recognize her melanin-rich, female face.


Dr. Joy Buolamwini wore a white porcelain mask to detect bias in facial recognition.

As an experiment, she covered her own face with a white porcelain theatre mask, and voilà, the software started tracking her. After some investigation, she found that almost every single facial recognition platform was biased toward white, male faces, and that the training sets used to teach the programs featured those faces disproportionately.
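To make the training-set problem concrete, here’s a toy sketch in Python. The numbers and the “detector” are entirely invented for illustration, not anything from the film: the model only recognizes a face if it closely resembles something it saw in training, and the training set is 95% group A.

```python
import random

random.seed(0)

def face(group):
    # Toy four-number "feature vector"; each group clusters in its own region.
    centre = 0.0 if group == "A" else 5.0
    return [centre + random.gauss(0.0, 1.0) for _ in range(4)]

# Skewed training set: 95 faces from group A, only 5 from group B.
train = [face("A") for _ in range(95)] + [face("B") for _ in range(5)]

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def detected(x, threshold=4.0):
    # "Detected" means close enough to at least one training example.
    return min(sq_dist(t, x) for t in train) < threshold

for group in ("A", "B"):
    tests = [face(group) for _ in range(1000)]
    rate = sum(detected(x) for x in tests) / len(tests)
    print(f"group {group}: detected {rate:.0%} of the time")
```

Nothing in that code is “prejudiced”; the detection gap comes entirely from what the model was (and wasn’t) shown. That is the mask experiment in miniature.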

The discovery inspired her to dig deeper into bias in AI, to speak publicly on the subject, and eventually to found the Algorithmic Justice League.

Her concern is that gains made in the equal rights movement could be erased in the future under the perceived “neutrality” of math.

Bias in Artificial Intelligence

The lack of accessibility is really the result of a greater problem in AI: bias.

Buolamwini says that with AI, “you can’t separate the social from the technical,” and asserts that “AI is based on data, and data is a reflection of our history, so the past dwells within our algorithms.”

Cathy O’Neil, Ph.D., author of Weapons of Math Destruction, makes a similar assessment. She says that algorithms are predictions about the future based on data from the past and that machine learning is really just a scoring system that predicts what you’re about to do.

It only makes sense that if you’re using past data to make predictions about the future, the future will look a lot like the past. Models like this have a hard time with the sharp deviations that moments of change produce.
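Here’s a rough sketch of that idea, using invented hiring records rather than anything from the film. A “scoring system” fit to past outcomes simply hands the past back to you:

```python
# Invented historical records: past hiring skewed 80/20 toward group A.
past_hires = (
    [{"group": "A", "hired": True}] * 80 +
    [{"group": "A", "hired": False}] * 20 +
    [{"group": "B", "hired": True}] * 20 +
    [{"group": "B", "hired": False}] * 80
)

def hire_score(group):
    # "Training" here just measures the historical hire rate per group;
    # real models are fancier, but they too fit themselves to old outcomes.
    records = [r for r in past_hires if r["group"] == group]
    return sum(r["hired"] for r in records) / len(records)

for group in ("A", "B"):
    print(f"group {group}: predicted hire score {hire_score(group):.2f}")
# -> group A scores 0.80, group B scores 0.20: the future, scored to
#    look exactly like the past.
```

To change that outcome, you would have to override the data itself, which is exactly the aspirational layer the next paragraph says is so hard to code.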

It’s hard to code into algorithms the ethos, zeitgeist, and aspirational change that would be necessary to create desired outcomes rather than recreate historic ones. As O’Neil put it, companies own the code, the code makes the decisions, and there is no appeal process.

Black Box

Even if you could eliminate all the bias initially coded into AI, there is a deeper problem. As the software continues to analyze data and make decisions, at some point, the decision-making process becomes a mystery. AI then operates in a black box.

According to Zeynep Tufekci, Ph.D., author of Twitter and Tear Gas, “We don’t understand how it works, it has errors that we don’t understand, and the scary part is that because it’s machine learning, it’s a black box to even the programmers.”
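To get a feel for why even the programmers are in the dark, here’s a deliberately tiny neural network, hand-rolled in plain Python and trained on the XOR function. It’s my own toy illustration, not code from any system in the film. Every number in it is open for inspection, and yet what it “learned” is just a pile of weights:

```python
import math
import random

random.seed(1)

# The four XOR examples: output 1 only when the two inputs differ.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

# Plain stochastic gradient descent on squared error.
for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= 0.5 * dy * h[j]
            b1[j] -= 0.5 * dh
            for i in range(2):
                w1[j][i] -= 0.5 * dh * x[i]
        b2 -= 0.5 * dy

for x, t in data:
    print(f"{x} -> {forward(x)[1]:.2f} (target {t})")
print("learned weights:", [round(v, 2) for row in w1 for v in row])
```

The outputs typically land close to the XOR targets, but the printed weights explain nothing a human can read as a rule. Scale that from a handful of weights and four examples to millions of weights and millions of examples, and you have the black box Tufekci describes.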

Widespread machine learning in AI is still relatively new; in the past, there simply wasn’t enough data being collected to create meaningful associations. Now, with smartphones and social media, that is no longer the case.

Accuracy in Artificial Intelligence

Let’s say that all the problems above were solved: AI was bias-free, equally accessible to everyone, and the process by which it learned and evolved was completely transparent. We’d still have a major issue. AI just isn’t that accurate yet.

Silkie Carlo is with Big Brother Watch UK, an organization dedicated to “Watching the Watchers.” In London, facial recognition cameras are being used in the public square to assess people who walk by and determine if they may be on a watchlist of some sort.

The cameras collect the facial geometry of anonymous passersby, which Carlo argues is a violation of due process. In one scene, a man sees the cameras and covers his face, only to be approached by police moments later.

Regardless of the legality, the bigger issue is accuracy: Big Brother Watch has found that 98% of the system’s matches against the watchlist are false.
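That 98% figure sounds implausible until you work through the base-rate arithmetic. Here’s a quick back-of-the-envelope calculation with invented numbers (mine, not Big Brother Watch’s): when genuinely wanted faces are a tiny fraction of the crowd, even a matcher that is rarely wrong about any individual face produces almost nothing but false alarms.

```python
# Invented numbers, chosen only to show how a ~98% error rate can arise.
crowd        = 100_000  # faces scanned by the cameras
on_watchlist = 20       # genuinely wanted people in that crowd
hit_rate     = 0.90     # chance a wanted face triggers an alert
false_match  = 0.01     # chance an innocent face triggers an alert

true_alerts  = on_watchlist * hit_rate                # 18 real hits
false_alerts = (crowd - on_watchlist) * false_match   # ~1,000 false alarms

print(f"alerts that are false: {false_alerts / (true_alerts + false_alerts):.0%}")
# -> about 98%, even though the matcher is "99% accurate" per innocent face
```

The rarer the thing you’re scanning for, the more your alerts are dominated by false positives. A crowd is about the worst possible place to hunt for a handful of faces.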

I think Meredith Broussard, author of Artificial Unintelligence, sums up the situation rather well. She says that what we have today is narrow AI, and “narrow AI is just math.”

People embed their own biases into technology, and the machines replicate the world as it exists. “They aren’t making ethical decisions, they’re making mathematical decisions,” she stated.

True AI would need a layer of self-awareness and discernment that computers just don’t have yet. People have biases, but we are also self-aware. We can reflect. We can look at outcomes as desirable or undesirable and change our processes. By knowing our biases, we can examine our decisions and try to correct for them. This is all a layer of intelligence that machines lack.


All images courtesy of Coded Bias.

If AI were just being deployed for Netflix recommendations and Instagram explore pages, the problems above would be, at most, annoying.

However, the stakes are much higher. AI is used for hiring decisions, for criminal sentencing and parole recommendations, for creditworthiness, for housing, for insurance rates, for access to medical care, and more.

Artificial intelligence as a field dates back to Dartmouth in the mid-1950s. More than six decades on, it still has quite a way to go.

AI and facial recognition aren’t innately bad, and in fact, they hold a lot of promise for creating a better future. Coded Bias shows us the current flaws and issues that need to be addressed to fulfill that promise. I encourage you to hit play and watch it for yourself.
