IDEAS

How I accidentally became a fierce critic of AI

Once I was enamored with the promise of artificial intelligence. But then I noticed the huge holes in its view of the world.

Joy Buolamwini is founder of the Algorithmic Justice League and the author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." (Naima Green)

It was 2015, and despite Cambridge’s enticing fall weather, I’d spent most of my time that semester working on the final project for “Science Fabrication,” one of my first-year graduate courses at the MIT Media Lab. The class description grabbed me right from the start: Read science fiction and let the literature inspire you to create something entirely new, something you’ve always wanted to exist, even if it seemed impractical. Just make sure you can build it in six weeks. Classes like this were exactly what I loved most about the Media Lab — also known as the “Future Factory.” I saw it as a place of escape, a cocoon, for dreamers like me to slip into fantasy and just build cool technology. The real world and its messiness felt far away.

For this class project, beyond the science fiction we read that semester, I had other sources of inspiration that were closer to home. I’d always wanted to shape-shift my body like Ananse the spider, the clever trickster who appeared in stories my Ghanaian father and mother told me while I was growing up. But how could I quickly change my body into any shape I desired without making major breakthroughs in physics? Instead of changing my physical form, I decided I would try to change the reflection of it in a mirror.

I hacked together a prototype at my desk. With a mirror-like material called half-silvered glass placed over my laptop screen, I tapped on my well-worn keyboard, projecting different images onto a black background. I pulled up an image of Serena Williams, my favorite athlete. When I saw her eyes line up with mine in the mirror, it felt like wizardry. Serena’s lips and nose became mine. It was spellbinding.

After some experimentation, I had a proof of concept — evidence that my project was feasible — for what I called the Aspire Mirror. But to heighten the illusion, I wanted to get the image to follow my face when I moved.

I had been lost in my work for hours, energized by the progress I’d made, when I suddenly realized I was running late for a night out that my friend Cindy had managed to persuade me to join with several of our friends. Phase two of the Aspire Mirror would have to wait.

Apparently, a night out designed by MIT women was broken down into phases. The first phase was snacks and beautification. The second phase was partying in downtown Boston. As I rushed over to Cindy’s apartment, I tried to recall if the invite had asked guests to bring anything. I remembered the call to bring party clothes, and there was also something about masks. That made sense, I’d thought: It was Halloween, after all. I’d settled on my outfit for the night: a hot pink blazer, black dress pants, and a white costume mask I bought for the occasion.

When I got to her apartment, Cindy came to the door and gave me a warm hug.

“So glad you made it! Why are you carrying a mask?”

“I thought the invitation told us to bring Halloween masks?”

She broke out into a grin. “I meant beauty masks. But don’t worry, I have enough for everyone. I grabbed so many from my last trip to Korea.”

Chuckling at my mistake, I joined the other ladies in the makeshift relaxation space. Soft pillows, manicure sets, and ambient lighting accented my fellow revelers, who were reclining with soft beauty masks seeping into their faces. The masks didn’t fit my facial features, but at least I was out of the lab.

The next day, rejuvenated from my night with the girls, I bounded back to my office and switched on the fluorescent lights. This was one of the best parts of being a coder — and an artist: the thrill of being in the middle of creating something delightful. It’s like the anticipation of eating freshly baked bread after its aroma has filled the room. I sat at my desk and started phase two of the Aspire Mirror project: adding interactivity and movement tracking.

Because I wanted the digital filter to follow my face, I needed to set up a webcam and face-tracking software so that the mirror could “see” me. The webcam was easy. The face-tracking software was a struggle. Like many coders, I do not build everything from scratch — I rely on preexisting code, called software libraries, to create new systems. Think of it like a home improvement project. If I want to build a fence, I don’t need to personally chop down trees for my posts. I can go to the hardware store and buy precut planks of wood. Software libraries are lines of code written by other coders, like prefabricated building blocks, and they can be downloaded online by almost anyone.

For my Aspire Mirror, I tracked down an open-source face-tracking library and integrated it into my code.

But even when I was looking straight into the camera, the system could not detect my face. That’s OK, I thought. Failure was part of the process. The next question to ask was, Could the system detect any face? I tested this by drawing on the palm of my hand two horizontal lines for eyes, an L for a nose, and a wide U for a smile. I held my hand in front of the camera. The software detected my elementary markings as a face!

At this point anything was up for grabs. I looked around my office and saw the white mask that I’d brought to Cindy’s the previous night. As I held it over my face, a box appeared on the laptop screen. The box signaled that my masked face was detected. I took the mask off, and as my dark-skinned human face came into view, the detection box disappeared. The software did not “see” me. A bit unsettled, I put the mask back over my face to finish testing the code.
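
For readers who want to see what such a test looks like in code, here is a minimal sketch of the kind of webcam detection loop described above. The library I actually used goes unnamed in this essay, so take OpenCV and its stock Haar-cascade detector as illustrative stand-ins, a common open-source choice, not a record of my exact code.

```python
# A minimal sketch of a webcam face-detection loop, assuming OpenCV's
# bundled Haar-cascade detector (an illustrative stand-in; the essay
# does not name the library the Aspire Mirror actually used).
import cv2

# Load the pretrained frontal-face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

camera = cv2.VideoCapture(0)  # open the default webcam
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Returns a (possibly empty) list of bounding boxes for detected faces.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Draw a box around each detection -- the box that appeared for the
    # white mask and the hand-drawn face, but not for my own face.
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Detection test", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

camera.release()
cv2.destroyAllWindows()
```

Whether a face is "seen" at all comes down to whatever patterns the pretrained model learned from its training data, which is exactly where the coded gaze enters.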

Coding in whiteface was the last thing I expected to do when I came to MIT, but — for better or for worse — I had encountered what I now call the “coded gaze.”

Resistance to AI is acceptable

You may have heard of the male gaze, a concept developed by media scholars to describe how, in a patriarchal society, art, media, and other forms of representation are created with a male viewer in mind. The male gaze decides which subjects are desirable and worthy of attention, and it determines how they are to be judged. You may also be familiar with the white gaze, which similarly privileges the representation and stories of white Europeans and their descendants. My use of “coded gaze” is inspired by those terms. It describes the ways in which the priorities, preferences, and prejudices of those who have the power to shape technology can propagate harm, such as discrimination and erasure. We can encode prejudice into technology even if it is not intentional.

I couldn’t help but think of Frantz Fanon’s “Black Skin, White Masks.” The book, written more than half a century before my experience, interrogates the complexities of conforming oneself — putting on a mask to fit the norms or expectations of a dominant culture. After striving for years to gain entrance to this epicenter of innovation, MIT, I was reminded that I was still an outsider. I left my office feeling invisible.

In the years since I first encountered the coded gaze, the promise of AI has only become grander: It will overcome human limitations, AI developers tell us, and generate great wealth.

But the deeper I have gone into my research, the more I have come to understand how profound and sweeping the coded gaze’s impact is. It encompasses the myriad ways technology can manifest harmful discrimination that extends beyond racism and sexism, including ableism, ageism, colorism, and more.

None of us can escape the impact of the coded gaze. Decisions that impact your daily life are increasingly being shaped by advancing technology that sits under the wide — often opaque — umbrella of artificial intelligence. I hope to show a path to urgent and growing conversations about the future of technology that need your voice, the voice of everyday people with lived experiences of what it means to be excluded — indeed, excoded — from systems not designed with you in mind. We need the voices of people like Robert Williams, who was wrongfully arrested in front of his children due to a false facial recognition match.

We need the voices of students struggling with e-proctoring software that flags them as cheaters.

We need the voices of migrants from Haiti and Africa who were caught in limbo when applying for asylum because the US government required the use of a mobile app that failed to verify their faces. We also need the voices of the unseen faces that do the ghost work, the data cleaning, the human translation that supports AI products. We need the voices of the parents whose children had intimate moments recorded by listening devices meant to provide hands-free convenience. We need to remember a Belgian man who committed suicide after interacting with a chatbot. According to his widow, he would still be here had the chatbot not encouraged him to end his life.

Most important, we need to be able to recognize that not building a tool or not collecting intrusive data is an option, and one that should be the first consideration. Do we need this AI system or this data in the first place? Or are we being encouraged to direct money at inadequate technical Band-Aids without addressing much larger systemic societal issues?

I critique AI from a place of having been enamored with its promise, as an engineer more eager to work with machines than with people at times, as an aspiring academic turned into an accidental advocate, and also as an artist awakened to the power of the personal when addressing the seemingly technical. The option to say no, the option to halt a project, the option to admit to the creation of dangerous and harmful though well-intentioned tools must always be on the table.

Joy Buolamwini, founder of the Algorithmic Justice League, is the author of the forthcoming book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” from which this essay has been adapted with permission from Penguin Random House.