
A Conversation with Dr. Timnit Gebru

I reached out to renowned AI ethicist Dr. Timnit Gebru and she responded!

Gebru was named one of Time magazine's 100 most influential people of 2022. She immigrated to the United States from Ethiopia when she was fifteen years old. She earned her PhD in computer vision from Stanford and worked at Google from 2018 to 2020, where she co-led the Ethical AI team. In 2020 she was fired from Google after co-authoring a paper on the dangers and biases of large language models. After leaving Google, Gebru founded her own non-profit, the Distributed AI Research Institute (DAIR), whose interdisciplinary approach brings underrepresented voices into the conversation around AI. Her work has gained worldwide recognition from scholars in academia and from major publications such as Nature, Fortune, The Washington Post, and Time.

The following is from a conversation I had with Gebru, edited for conciseness and clarity. My questions are marked with "Q:", and the text that follows each question is her answer.

Q: What is the importance of ethical AI?

This term is being thrown around for a lot of different things. Everybody's excited about it, everybody's talking about it, and it seems like everybody needs to do it. A lot of the ways in which people are working in this space have a lot of problems, and many of them are not necessarily specific to AI: they might just be related to exploiting workers, and this might just be another avenue for exploiting workers through surveillance. So a lot of the issues that we see are not necessarily new; this is just another way in which they can be perpetuated by this technology.

Initially, I got into technology because I just liked building things. That was really the only thing I was concerned about. But then I started to see what I was participating in, and I asked myself, how could I just be building stuff for [exploiting others]? That's how I started thinking about trying to steer this field in a bit of a different direction, or at least figure out what I want to build and not build. I think that in any field or technology, not just AI, it's really important to figure out what's okay and what's not okay. Many other disciplines have done that, and many times it happens after the fact: the discipline proliferates, horrible stuff happens, and then at some point, by force, some sort of laws are instituted. So, to me, that's the importance of ethical AI.

Q: What is your approach to ethical AI?

I see it as a very broad field. There are people working on different aspects of these things. Some people are more interested in analyzing the error rates of a particular model as it impacts different groups of people. For example, people might use some sort of model to determine whether you should get a loan or not, and there are groups of people who analyze whether the algorithm used for that is fair or not fair. Whereas for me, I have to go to the very beginning and ask: should this tool even exist? We shouldn't assume that a particular tool has to exist first and then analyze whether it's fair enough. So I like to take a broader view of what it means to build this kind of stuff and how we analyze the harms and the benefits.

Q: How does your organization DAIR address ethical concerns that aren't considered by tech companies like Google, Microsoft, and OpenAI?

I started by thinking about a pretty broad view of what the harms and benefits are.

DAIR stands for the Distributed AI Research Institute. We're a small, small institute. We don't have the ten billion dollars that Microsoft announced it would give OpenAI; we don't even have a fraction of that. Our goal is to see if we can use this technology to actually help people in marginalized groups fight back: give them more resources, more data, more tools. Once again, why would large tech companies do this? And then the third thing is to figure out what alternative we could imagine in terms of the future. Do we have to have a future where a couple of multinational corporations have these huge models, trained on huge datasets taken from everybody, stealing work from artists and making money for themselves but not for other people? Is this the world we want? Not me! If that's not the world we want, then how do we work towards the world we want rather than just complaining about what is happening right now?

That's very hard for us, because when you see an issue and you don't think people are really seeing it, you want to make them see it and yell about it over and over again, but that doesn't create space for actually building your alternative version of the future. What happens is you end up always being stuck in cleanup mode, and you're not really investing in people who are doing something different. We don't have billions of dollars, but we can do some things grassroots. We can try to do what we can with what we have. I hope that there will be many other smaller organizations like that.

That's the other issue in Silicon Valley: people want a monopoly on everything. That's the venture capitalist model. You have to grow, you have to have a monopoly. I'm thinking, what if there were a federation of startups that don't want to grow so much and don't want a monopoly? Each of them might serve a specific community or a specific kind of need, and then they work together. Why does it have to be one business that controls everybody everywhere? So these are some of the ways we are trying to counter what we see as harmful in this space.

Q: Who are you trying to bring together with DAIR? How does this allow more voices to be heard in the conversation around AI?

We are an interdisciplinary team, which means we have people doing this work from different perspectives: not just engineers, computer scientists, sociologists, and AI researchers, but also people who are not researchers and who have a good understanding of the harms they face.

One of our fellows, Adrienne, was a delivery driver at Amazon and was helping organize workers there. She has a lot of information about, and understanding of, how Amazon treats workers like machines, and she is working on a project to quantify the level of wage theft, per worker, that is perpetrated through some of these automated systems. A lot of people don't think about surveillance. A lot of times, when people talk to you about surveillance systems, they say, "Oh, you know, we need it for security." And what is security for? It's to prevent crimes. They don't really think about the industrial-scale crimes being perpetrated by these huge multinational corporations. She was estimating that the wage theft is somewhere between $6,000 and $18,000 per worker.