Dr Rumman Chowdhury is a visionary at the heart of ethical AI, shaping the future where technology and humanity meet. As CEO and co-founder of Humane Intelligence, she champions fairness, transparency, and accountability in AI, ensuring technology serves people – not the other way around. She is also a pioneer in the field of applied algorithmic ethics, creating cutting-edge socio-technical solutions for ethical, explainable and transparent AI.
Her journey is as inspiring as her work. With a PhD in Political Science from UC San Diego and degrees from MIT and Columbia, she has built a career dedicated to challenging biases in AI. At Accenture, she pioneered the Fairness Tool, setting new industry standards for responsible AI. Previously, Rumman was the Director of the META (ML Ethics, Transparency, and Accountability) team at Twitter, leading a team of applied researchers and engineers to identify and mitigate algorithmic harms on the platform.
Recognised as one of TIME’s 100 most influential people in AI, she continues to push boundaries. In 2024, she appeared in What’s Next? The Future with Bill Gates on Netflix, sharing her insights on how AI must evolve with ethical purpose. Rumman is not just shaping AI – she’s ensuring it remains a force for good in an increasingly complex world. We’re super proud to have her as one of our keynote speakers at the upcoming League of Leading Ladies Conference 2025 in Interlaken.
She says most AI follows a “fundamentally broken model” that never gives anything back to the people who created the data in the first place. “This data has been stolen from us!” If you ever wanted to learn how the AI business runs behind the scenes, this is the interview you need to read – and share with everyone you know!
Her call to action is crystal clear: “Sadly, we are moving more and more towards this world of supporting oligarchs than we are of supporting a diverse and healthy ecosystem of businesses. And that is what is needed.”
Ladies Drive: You were a member of an advisory board at the U.S. Department of Homeland Security. How did you end up on this AI safety and security board?
Rumman Chowdhury: One of the most interesting things about the Biden administration was its focus on AI and social impact – not just building the technology but understanding what the technology meant for Americans. And the Department of Homeland Security reached out because they were building an advisory group on critical infrastructure. That was incredibly important. Well, it was actually disbanded the day after Trump took over – this is not an administration that is interested in, or prioritises, impact on society, civil rights, social impact. We actually got an email about it on 21st January 2025. But while it lasted, it was quite an impactful board full of some of the biggest names: Sam Altman, Dario Amodei, Lisa Su, Jensen Huang, the CEO of Nvidia, Alex Givens, Nicole Turner-Lee, Maya Wiley, myself. There was a lot of civil society represented, as well as government organisations. We had Wes Moore, the governor of Maryland, as well as the mayor of the city of Seattle. So it was an interesting mix of people.
What was it like to work with all these people?
It was interesting. The funny thing is – I don’t know if you’ve ever done this before – when you work in corporate environments, sometimes people measure how expensive a meeting is by adding up the attendees’ hourly rates. So I did that, and I think I came up with a number in, like, the millions, if you think about how much these people are worth and what an hour of their time is worth in terms of their wealth. I think it was literally the most expensive meeting I’ve ever been to in my life.
You seem worried about the Trump administration …
It’s concerning. We are entering a particularly unhappy time, not just for U.S. politics and U.S. government, but also for the development of the field of AI. I think the next four years are going to feel like 10 steps back. And frankly, the last four years felt like, you know, all these steps forward. And it’s unfortunate to see these attempts to roll it all back. In October 2023, President Biden signed an executive order on artificial intelligence that was focused on measuring and identifying the impacts of these AI systems. One of Trump’s first acts in office was to repeal that executive order. And that rolled back a lot of initiatives to engage civil society, to create safeguards and guardrails on AI’s impact. And really, the focus is just on, you know, channelling money into these already big corporations. Look at Stargate, which is a $500 billion investment in AI that is basically going to two companies, SoftBank and OpenAI. And that’s it. So they have a lot of money that’s just going to them, instead of being distributed across a wide range of organisations. What’s worrisome is that this is going to reduce competitiveness in the market. You know, it’s going to reduce the number of perspectives in the room. And it’s going to stifle the ability of organisations like mine to be meaningfully engaged, as we had been, in understanding and addressing the impact of AI.
We will see, and we’ll closely observe what is happening in the US. But let’s focus for now on your career. In 2024 you were invited to be featured in Bill Gates’s Netflix show “What’s Next?”. How did that happen?
You know, what’s funny is I had actually forgotten that I had filmed that. It’s usually about a year from filming to something launching. And honestly, you know, I travel so much. I do so many panels and engagements. I had actually forgotten about it. And then all of a sudden my email, my phone blows up with all of these friends saying like, I just saw you on the Netflix special. I’m like, what Netflix special? And then they’re like, oh, the Bill Gates one.
(Laughing)
Actually, there’s a whole segment we filmed that didn’t end up getting included. But the nice thing was, I got a nice handwritten thank you letter from Bill Gates.
Can you just share in a nutshell for the readers what you shared with Bill Gates in that Netflix episode?
I was talking a lot about how AI systems are built and whether or not the people building them have really thought through the broader implications. We are now in a world in which these AI systems are built into so many different things, seen and unseen. I think people don’t realise how much AI is built into impactful systems that we don’t interact with every day. And yet nobody has asked us whether we wanted them there. Nobody has asked us for our perspectives and our opinions. And we are expected to trust CEOs of companies to make the right decisions for hundreds of millions of people. And the thing is, it’s not just speculation. The reality is, we know that there are massive failures of these systems. And unsurprisingly, where these systems fail is with people and communities that traditionally tend not to be represented. I’ll give you an example: in the U.S., there were medical algorithms that were systematically denying Black patients kidney transplants because of racism built into their models!
And in a couple of cases in Europe, algorithms used to identify welfare fraud were targeting immigrants, flagging them as fraudulent – again, because of historically discriminatory practices against particular immigrant communities. Or hiring algorithms that discriminate against women. And when we say discrimination, it’s not that the algorithm is being discriminatory on purpose. It comes from the data. The data of the world reflects all of our issues and problems. So if we just blindly put that into a machine, the machine will tell us back, you know, maybe a story we don’t want to hear.
So, it’s like you have kids, and they are mirroring you, and you see yourself reflected in that kid?
It’s exactly that, exactly that.
But speaking of trust, we know that trust is like – some call it the new currency of this world because you only buy from people you trust, right? You only vote for someone you trust. So when we speak about trust, who do you trust in this field of AI?
Oh. I don’t know if I trust specific individuals. That’s just not my style. There’s this really great article by one of my favourite feminist writers, Rebecca Solnit, called “When the Hero is the Problem”. It’s a very short journalistic article – you can find it online. It’s not a very famous piece or anything, but it means a lot to me, because she writes about hero culture and how people are misled to think that change is driven by individuals when actually change is driven by the collective. And she gives story after story after story, but those stories don’t make it into movies, right? We want to think Greta Thunberg is the person who’s going to save the planet. It’s actually not true. We are the ones who are going to save the planet. And it’s the same with AI. There is no person – frankly, there shouldn’t be a person, I shouldn’t even be that person – that people look to and say, “Oh, she’s going to fix the problem.”
No, actually, it’s a collective action. And all of this change is made collectively! For better or for worse. I believe that change is driven collectively. So tapping into that collective consciousness and driving it in a positive way – that is my belief system.
But when you say it’s a collective thing, that means that every individual – like you and me – needs to take responsibility.
I think part of what we are owed as a society is agency. I agree with you. People deserve the right to ownership over the models that impact their lives. I gave a TED Talk last year on the right to repair, and that’s kind of where that idea comes from. If these things are in your life, if they’re impacting you, then you deserve to have a say in how they work. And we don’t have that today. We don’t have that feedback loop, because it has been denied to us in tech.
The right to repair – can you share some stories about that with us?
Sure. For farmers, tractors are part of their livelihood. The tractor company John Deere introduced computerised tractors, and they had full control over whether or not the system worked. So if your tractor broke down, you couldn’t fix it yourself. You had to call the company. But for a farmer – first of all, a lot of them are very self-sufficient. They’re used to fixing their own equipment. And second, they have no time to waste. So they learned to do it themselves. They actually became hackers. Technically, that’s illegal. You are violating your terms of service. So the right to repair would give you the right to do that. What is very interesting is that in the last few months there was a big lawsuit about it, and the FTC, the United States Federal Trade Commission, actually ruled in favour of these farmers – affirming the idea that if technological systems are imposed on your life and impact you, you should have some sort of right to fix them, own them, tweak them, do something to them. You are not meant to be just the subject.
If we are going to say that there’s a collective responsibility to solve a problem, then we need to collectively give people the tools and education to solve that problem.
Because it’s so easy to say, this guy or that guy will fix it for me.
Exactly! And I think, I hope, that that’s what Humane Intelligence, my nonprofit, is teaching people. We do these evaluation exercises called red teaming. What we do is give people access to these models, and we demonstrate that their lived experience, their expertise, is valuable in identifying and fixing issues in models.
The first step is actually making people feel like they have something to say. We have just been taught and told that all these Silicon Valley people are smarter than us. And well, these AI systems are even smarter than everybody else. So who are you to say anything? And that’s what we dare to do. We dare to say, actually you are somebody who should say something. So if we do want a positive technological future, then the answer is to give people more ownership and agency and create this feedback loop with companies.
But I think a lot of people are kind of overwhelmed by the world, by its complexity, by AI in general. So how can we make sure people take this agency over their own life?
It’s just not a reflex we have. While I was at Twitter, we were tasked with this idea that Jack Dorsey had, called “algorithmic choice”. He had this idea that people could pick their own algorithm. And what my team was tasked to do was actually to understand whether or not people even understood what algorithmic choice meant. If I were to give you algorithms, would you go pick one? How would you make that decision? The answer is, people didn’t know. They had no idea. And we realised that it’s not just education. It actually is re-imagining our relationship with technology. Social media is such a perfect example. We have become passive consumers. We sit there and literally consume whatever an algorithm feeds us, again and again. And we’ve never thought, oh, I should be in charge. The average person simply consumes what they’re fed, right? And we don’t do that with anything else! We don’t just eat what’s put in front of us. We don’t just wear whatever clothes we see at the front of the store. In all other sectors of our life we like to have a choice. So I think a big part of this is, first of all, giving people the realisation that you can and should have a choice.
And the second part is building the infrastructure for them to have the choice. And actually, most AI models are just trained on what’s publicly available on the internet. And that’s actually leading to a lot of biases. What I want people to know is: we don’t have to sit and wait for someone to fix these biases for us. We can do something about it.
Speaking about biases – I think one of the big issues of our time is that you will find an expert who says this is true, and you’ll find another one who will say, that’s false.
You’re touching on something that’s very important to me, actually. One is that there has been an assault on science – truly, an assault on scientific thinking and reasoning – for over a decade. And the thing is, this isn’t just the Trump administration. From my perspective, it started in the late 2000s. Or maybe that’s just when I noticed it. It’s a mistrust of science, or a lack of understanding of science, hand in hand with a lack of trust in institutions. We saw it a lot during COVID, for example, where people said: I don’t trust the FDA – a governmental institution. And this is actually fairly new language. I don’t think we’ve ever had such high mistrust in these institutions before. And I’m not here to say we should blindly trust all big institutions. It’s just remarkable that so many people question these big institutions, some rightly, some wrongly. Science means a very specific thing. It’s a common shared language in which we can actually demonstrate the correctness of our findings based on the quality of how we designed the study.
But in science, it’s all about falsification.
Exactly. People think science is just about proving a point. Actually, science is about these very boring terms called robustness and validity. Robustness means how well something performs across multiple different settings. Validity is literally how valid the design of what you’ve built is. So this ability to interrogate is actually a skill that we should all have. It doesn’t mean we all have to have PhDs in the field. It means that we should be able to understand and ask questions. But we also deserve the right to have those questions answered. People don’t know how to ask the questions. But also, in some cases, we are denied that information. And AI is such a perfect example.
All these companies keep talking about how AI is smarter and better. Based on what? If you look at their metrics, look at their measurements, and interrogate what they’re measuring and how they’re measuring it, you’ll see that it’s very, very narrow.
So they say things like, AI is going to replace doctors. Well, doctors do more than just answer rote questions about medical conditions. Just because an AI system performs better than a human on medical entrance exams does not mean AI can actually replace a doctor! They use these very non-holistic ways of measuring.
Can AI be more holistic than a human being?
No, absolutely not. It cannot. You are now touching on another one of my favourite topics, which is the concept of intelligence. And I am actually going to be teaching a seminar at Harvard on the concept of intelligence, the social, economic, and political structure of intelligence, because intelligence is not an objective measurement. The idea of intelligence and how intelligence manifests in the world is very much intertwined and tied up with social biases, political biases, racism, sexism. It is a value that we have made up as a society.
And unsurprisingly, this idea of even measuring intelligence came about in the first Industrial Revolution. People call this AI revolution the fourth Industrial Revolution. The first Industrial Revolution was actually when we decided that we needed to measure intelligence.
I’m not surprised to see that whenever we have a new technology, we start to think about what our value as humans is, what our intelligence is, what we bring to society, and how this technology might threaten it. But it’s interesting to think through how we have used intelligence, or this idea of measuring intelligence, to deny women access to higher education, for example. Some of the early eugenicists would take a man’s skull and a woman’s skull, put little beads in them, and count how many fit inside. And they would say, oh, well, the woman’s skull holds fewer beads; therefore women’s brains are smaller than men’s, and therefore women are not as smart as men. But it is not scientific, because the size of your brain has literally nothing to do with your intelligence. Otherwise the elephant would be the smartest being in the world. Yet they would use it to deny women access to college. So this idea of intelligence is very fraught. Another example: OpenAI has defined intelligence as “the automation of all tasks of economic value”. So they are explicitly saying, you are only intelligent if you are contributing economically, if what you have built is a commercially viable product!
In other words, if you are disabled, if you are ill, if you are taking care of a young person, if you are a homemaker, if you just do something for fun and not profit – it doesn’t count. No value to that.
And that, to me, is very, very problematic. To take it a step further: over the holidays there was this article, and it was kind of hidden because it came out around Christmas and New Year’s. OpenAI and Microsoft have decided that artificial general intelligence – so-called AGI, which to many people means AI with human-level capability – is achieved once they have made $100 billion. So they have literally tied it to a financial value. There is no measurement. How silly is that? It’s actually the stupidest thing. And it is only constructed to make them wealthy and powerful.
What can we do to change that? And how can we educate ourselves and our kids and the next generation to always dare to ask questions?
I think one of the hardest things in times like this is to live your values. It’s very, very hard. Without getting into detail, there have definitely been moments in my life where I almost regretted not doing certain things just because of my value system. I could be a lot wealthier. I could be a lot more powerful. There are many things I could have that I do not have now. But I have never regretted it because it never works out in a way that I would be happy, that I would be OK in my life. So I think number one is just understand and live your values.
Number two is ask lots of questions. These CEOs are no different from any other CEO. They are not better or smarter, and they don’t care more about the world. They certainly don’t. So who are the people in these organisations? And what are the incentives that motivate them? If you run a for-profit company, in general, your motivation is to make profit. We learned that with Twitter. We learned this term fiduciary responsibility, a fiduciary duty, which specifically means that a company is there to serve its shareholders. So even though Twitter had this absolutely abysmal person wanting to take over the company, and everybody knew the impact it would have on society, the leadership decided that that could not be a consideration, because he was offering more money than the company was worth. Therefore, they had to say yes. Fundamentally, that was the decision to sell to Elon Musk: he offered the most money. But that is what a for-profit is for. So when a for-profit says it is there for social value – I think there are some companies that have navigated it quite well. But there is a phrase that goes around, that there are no ethical billionaires. And I tend to believe that.
So you don’t believe that for-profit and kindness go well together?
No, actually, I think it does! I think one can build a healthy and successful business that is for-profit, that is beneficial and kind. But once you are at the stage where you are dealing with billions of dollars, you did not get there without exploitation. You did not get there without cutting some corners – it is too much wealth and power. I also just have to ask a very basic question: why would anybody need that much money? There is something pathological about people who need to acquire wealth upon wealth. I think people have a hard time understanding what a billion dollars is, right? It’s a physical impossibility to spend the wealth these men have. And yet, they seek more.
Do you think they should share it more?
I mean, I think that a lot of these individuals need to be taxed very aggressively, absolutely. Because again, they don’t need this money. This money just sits. One of my favourite books about the current economic situation – it’s a little bit older – is Piketty’s Capital. If we think about John D. Rockefeller and Andrew Carnegie, their wealth acquisition was actually quite different, because they would build physical factories. They’d make products. In some sense, the old form of industry had some sort of trickle-down effect, because you had to have a factory, and a worker, and a product. But Piketty’s book is about how most modern wealthy people accumulate capital. They buy land. They buy buildings. They buy things that they just hold onto, because these naturally accumulate value. Stocks are a good example, right? But when you do that, you are not producing anything that goes back to society – not even income paid to somebody to do something. A lot of very wealthy people just buy land and sit on it, watching it appreciate in value. You’ve contributed nothing back. You just sat there on a piece of land. And that’s fine on a small scale. But at the scale at which these people are operating, wealth begets wealth. And it gets worse, and worse, and worse, because they are actually taking things away.
So how does this relate to technology and AI?
One of my other favourite books is Shoshana Zuboff’s “The Age of Surveillance Capitalism”. It’s about the economic construct of tech, which is the same thing: acquiring and never giving back. All of your data, all of your information, everything that is available about you is soaked up, used, and built into an AI model. And that AI model is sold to somebody else. But you, the person who created the data, never receive anything back. If anything, you’re the one being sold to. And all of these generative AI models work the same way. Literally, they took all the data off the internet – data none of us ever expected would be used this way, and they never asked our permission. They build these fancy models, and they say, look how great we are. Look how rich we’re going to be, because you’re going to pay me now. You’re going to pay me for the right to use the data, to use the information from the data that I stole! There is no reciprocity here. They’ve not given anything back. I think they like to think they do. But it’s a fundamentally broken model. They give you crumbs, and they get the whole cake. It’s not a linear relationship anymore between how hard you work and how much money you make. And tech is a perfect example of that.
Sadly, we are moving more and more towards this world of supporting oligarchs than we are of supporting a diverse and healthy ecosystem of businesses. And that is what is needed.
So what happens when OpenAI owns everything?
Then there is no market competition. If we want startups to thrive, there needs to be a path ahead other than acquisition. Very few startups made today, if any, have the goal of going IPO and becoming the next big whatever. Most of them just want to get acquired, because you are told that is the smart business decision. Which, yes, from a financial perspective, is great. But think about what that means: these three to four companies get more and more and more powerful, acquire more and more and more, and again, stifle market competition. I’ll also add that artificial intelligence, and technology in general, naturally wants to be a monopoly if you think about it from an economics perspective. The reason it wants to be a monopoly – especially artificial intelligence – is, number one, that it has a pretty big barrier to entry. It is incredibly expensive and difficult to make your own AI model, your own large language model. So most people are just building on top of some AI model that exists and fine-tuning it. The second thing is that most technology works better when everything is in the same system – think about Apple products. You see how that’s naturally a monopoly. I’m just thinking it through as an economist. Markets that are natural monopolies have been decentralised in the past to promote more competition and diversity of actors, but that has to be an intentional act, and none of that has happened in AI.
So is there a way out – a way to create this for-profit kindness economy that you and I imagine we could or should have, for a more prosperous future for many of us?
Well, that’s actually been coming up quite a bit lately, because people are noticing more and more that AI is leading to this centralisation, where a couple of companies run everything. One answer – just to get a bit technical about it – is open protocols. Wi-Fi is a great example: it is an open protocol. The reason I can take my laptop and it will work in Switzerland is this magical feat of different companies and governments agreeing to have a shared open protocol. We don’t think about who our Wi-Fi router company is, because there are a million of them. And the reason there are a million of them, and we don’t have to buy one Wi-Fi protocol the way we do with Apple products, is that it is an open system anybody can build on – that encouraged market competition. So what would it mean to make the bare bones of these AI systems free and accessible to all, so that anybody can build? That would go far beyond anything we’ve had before: any kind of phone could connect to my iPad, and I could buy any kind of home system and manage it from my Apple phone or whatever. These are things to think about.
The second is investment: investing in companies that are viable competitors, and ensuring that they are kept on the path of growth rather than the path of acquisition. What I think is very important there is a founder mindset of remaining independent.
And third is, I think, getting away from this idea of hyper-growth. We have forgotten that it is totally fine to live a happy, balanced life, do good work, and build good things without having to be one of the 1% of the 1%.
There is this weird dream so many people have of being the next super gazillionaire, when actually, I think, most of us can and should aim for creating something really good and valuable for the world that also benefits us financially. One can have a really good life. There’s this idea that you are denying yourself something – that if you do things that are socially impactful, you can’t make money. All of that is completely untrue. If it were fundamentally true, it would mean the world is broken – we would have much deeper problems. But it’s not true. I think it’s just our perspective. That said, I’m seeing a shift in values in a direction that I’m not overly thrilled about, frankly.
Thank you so much for giving us these thoughtful and deep insights. In this spring issue we share wisdom about the effect of emotional language and how important it is to become conscious of our emotions. What do you think about that in the context of AI?
What I think about a lot in my field is: it is helpful to think about where things are not working and what could be improved. But we also have to have what I call a “positive imaginary” – a way of describing the future we want. If we cannot describe the future we want, then we cannot get anywhere. And that, I think, has been a really big problem in the responsible tech and ethical tech space: it has become very ingrained in pointing out problems but not offering solutions. And the first step in offering a solution is asking: how is this getting me towards the vision that I have?