Coded Bias is a documentary film that investigates discrimination in artificial intelligence. The documentary follows MIT Media Lab researcher Joy Buolamwini, a computer scientist who finds a major flaw in facial recognition technology.
Technology’s purpose has always been to support human beings, and remember, humans created technology. We’ve used technology to improve safety regulations, to travel faster from one place to another, to connect with people globally through a worldwide network, to venture into space, and much, much more. You’d be surprised at the rate at which almost every machine around you is constantly learning, adapting, recognizing, and remembering your every written or spoken thought. We’ve come to a point where Artificial Intelligence (AI) and machine learning are able to learn and function on their own. But maybe you knew that, and you were okay with that.
The documentary begins with Joy Buolamwini’s experience at the Media Lab. She takes a class called ‘Science Fabrication’, which encourages students to imagine ideas and innovate based on science-fiction concepts. She planned to make what she called an ‘Aspire Mirror’. As explained in the documentary, the idea of the Aspire Mirror was to take a person’s face and replace it with the face of an animal, or of someone who inspired that person. Using a camera and computer vision software, the technology was supposed to track Buolamwini’s face. It didn’t work well for her until she put on a white mask, which finally allowed her face to be detected.
Predicting individual behavior using biometric facial recognition. Screenshot from Coded Bias.
Through her thinking and experimentation, we learn that machines are trained with sets of examples of what we want them to learn. Machine learning is based on historical data, and its results are computed from the actions taken and decisions made in that data. Anything you feed a machine becomes part of the machine’s memory, just as with us humans and our brains.
So imagine feeding an artificial intelligence our history from the past 200 years. Can you imagine the outcome? The problem is that, given age-old issues of racism, wars, societal divides, and much more, machines have learnt that the choices which survived were the better choices, not necessarily the correct choices. The flaw is that the machine becomes a black box we cannot explore or fix, because these are algorithms we are not fully in control of. People have embedded their biases into the technology, and the technology has been picking them up.
“AI is based on data, and data is a reflection of our history. So the past dwells within our algorithms” – Joy Buolamwini
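To make that point concrete, here is a minimal sketch, with entirely made-up data (not from the film), of how a model trained on skewed historical records simply reproduces the skew in its future recommendations:

```python
# Toy illustration: a "model" that learns hiring rates from biased
# historical records will carry that bias forward. All records below
# are fabricated for demonstration purposes.

historical_hires = [
    # (group, hired) -- skewed history: group "A" was hired far more often
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """Learn each group's historical hire rate from the records."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(model, group, threshold=0.5):
    """Recommend a candidate if their group's historical rate beats the threshold."""
    return model[group] >= threshold

model = train(historical_hires)
print(model)                   # {'A': 0.75, 'B': 0.25}
print(predict(model, "A"))     # True  -- the past advantage carries forward
print(predict(model, "B"))     # False -- and so does the past disadvantage
```

Real systems are vastly more complex, but the failure mode is the same: the algorithm never asks whether the historical decisions were fair, only what they were.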
Tay, the chatbot Microsoft released on Twitter, displayed plenty of what AI is capable of. The bot learned so much on the internet that it began to make offensive racist, sexist, and inflammatory tweets laced with bitter social commentary. Within 16 hours of its release, Tay had to be taken down; Microsoft reportedly attributed its behavior to a “coordinated attack by a subset of people”.
We’re living in a society where artificial intelligence governs individuals and their liberty. We’re constantly being discriminated against, even by machines. The documentary goes on to expose how facial recognition from major companies like Amazon, Facebook, IBM, and Microsoft performed better on male faces than on female faces. Amazon also saw bias in its hiring algorithm, which selected more white males over candidates of other races and genders.
We see facial recognition in China and how it is required for every move you make, from connecting to the internet to catching a train. In the UK, CCTV surveillance went so far as to flag potential suspects using biometric facial recognition, which led to immediate questioning by the local police right there on the street. Anyone who tried to hide their face from the surveillance vans was fined!
There’s also an example of facial recognition becoming part of the security measures at the Atlanta Plaza Towers in the US. This building in Brownsville, New York has a predominantly Black and brown population. Residents in the building asserted their right not to be tracked by such systems. Political scientist and author Virginia Eubanks makes a very interesting point: the most invasive surveillance tools are first tested on poor working communities, and if they work in environments where there’s a general low expectation of people’s rights being respected, they then get ported out into other communities.
The documentary also discusses state surveillance versus corporate surveillance through biometric facial recognition. State surveillance, as practiced in China, is used to log and regulate citizens by categorizing and accessing their data. In countries like the US, surveillance is used for commercial applications aimed at generating revenue.
You’re quite often asked for permissions in the terms and conditions section of many websites. This grants access to the information you’ve built your profile with, as well as your broader online activity. It includes your pictures, your age, your interests, your influences, and so much more that you don’t know about. We are not even aware of the impact that the people in control of Big Data have on our lives.
We’ll leave the rest for you to watch!
Check out CODED BIAS on Netflix.
Buolamwini founded the Algorithmic Justice League to raise awareness of the impacts of Artificial Intelligence. The organization also believes that racial justice requires algorithmic justice, given how we’re heading into the future with rapid, unregulated technological advancement.
We can’t always be sure what those in authority have the power to do. We have to be wary of every system being implemented around us. There are loopholes in AI-based systems because there are no proper laws governing how this technology can be used, and no regulations in place to determine its extent.
What do you think will happen?