Dr. Rumman Chowdhury of Humane Intelligence | Telescope LENS Q&A


In this Telescope LENS Q&A, we talk with Dr. Rumman Chowdhury, CEO and founder of Humane Intelligence Public Benefit Corporation and co-founder of the nonprofit Humane Intelligence. From her perspective as both a social scientist and AI ethics pioneer, Rumman shares insights on bridging the gap between technology development and what people actually want to see in the world—and how everyday voices can shape the AI systems being built around us.

Tell me a little bit about what your day to day is like and how that interacts with the work that Telescope does.

I help figure out how to get regular people's voices heard in how technology is being built. There's this massive disconnect between the technology being built and the things people want to see in the world, and I'm trying to bridge that gap.

Is there an idea you've come across recently that made you think: this could really change things for the better?

The most good is happening in healthcare. We're already living it—technologies like CRISPR, disease detection, preventative medicine. It's an era of unprecedented scientific advancement, and artificial intelligence is part of that.

But the next frontier, where we're at a hard fork, is education. There's enormous potential for AI in education, but I don't think the first iteration of machine learning in education was done the right way. I hope we learn and make it better.

How does this perspective on technology help make the world more secure and prosperous?

That's complicated, because prosperity and security transcend technology. There's a lot of insecurity right now, and technology is actually a big part of it. Young people feel economically insecure about the job market. We're insecure about prices, about safety, about global instability. And AI is linked to many of these things—just about every frontier model company has pivoted to make and sell weapons.

But chaos breeds opportunity. It's an amazing time to be an entrepreneur, to stake a claim of your own. Nothing is guaranteed and nothing is safe right now—but if the world is chaos, you also have infinite opportunities. That's one way to look at it.

Is there someone whose work has inspired you recently in advancing security and prosperity?

I'm going to go outside tech. It's been inspirational to see how New York City has responded to ICE raids. New York is the most diverse place in the entire world, and New Yorkers have this way of pushing back that shows individual action still matters.

There's this woman known as the blue polka dot dress woman—she's a hero to all of us. She's clearly on her way to work: sensible shoes, blue polka dot dress, bag, on Canal Street. ICE is there, and she stops everything. She's in these people's faces, yelling at them, pointing her finger. She had emails to write, meetings to attend, but she stopped to make sure the right thing happened.

Nobody really knows who she is. She didn't do media interviews. And I heard she donated the dress to be auctioned for charity to help people detained by ICE. That's the most New York thing ever. It's about what actions you take to make the world better, even if it costs you something. That's inspirational to me.

What makes New York's sense of community unique?

There's a very strong civic infrastructure in New York—libraries, museums, community centers built by the robber barons but now part of the civic fabric. There's also a strong history of grassroots movements that are cultivated, funded, and resourced.

In a city of immigrants, you rely on your community. New York has so many third spaces—places you can go that don't require money or resources. Public transit plays a huge part. It's the great equalizer. It doesn't matter who you are or how rich you are—whether you get a seat depends only on when you got on the train. It forces people to see and face each other in ways you lose when you're just going from your bubble at home to your bubble at work.

You get a slice of humanity—all the warts and flaws. Somebody having a mental health crisis, somebody who smells bad, but also somebody in a ridiculous Halloween costume or carrying the cutest dog you've ever seen. You get used to living amongst people with all their complexity.

When you hear the phrase "a future that works for us," what does that mean to you?

I gave a TED talk building on the concept of the right to repair—the idea that if we have a product, we should have the right to repair and fix it. I want to extend that to technology.

What we want in life should not be dictated by a company optimizing algorithms for profitability. It should be owned by us. We need more agency over the technology being used on us and by us. It's frustrating when you buy software and can't customize it to get what you want because everything is about force-feeding you what other people think is good for you while protecting their profit margins.

The way I've entered this space is by pioneering community-based red teaming—a method of testing and evaluating AI models where anybody can participate to ensure models are built more safely and securely. That's a radical notion when frontier model companies think they're the only ones smart enough to handle this.

What is red teaming?

Red teaming comes from the military and cybersecurity. You get external subject matter experts to break into your system to identify fundamental flaws. For AI, I replicate normal situations where everyday people can have bad outcomes.

I bring in people who are experts in the topic at hand—linguistic experts, cultural experts, teachers, students, scientists. Notice I don't say AI engineers. There's enough technological testing done by technologists. The purpose is to bring in impacted communities to do the testing.

The best thing about ChatGPT's popularization is that everybody now understands how AI works—they've interacted with it. I take that ease of interaction and create a testing mechanism out of it. It's about understanding how regular people, particularly those from underrepresented communities, might have negative outcomes when interacting with AI.

Is there a question about technology's role in society that you wish more people were asking?

What does "good" mean? When a new model comes out and media breathlessly reports how good it is—good based on what? These are measurements, and we have to ask who made them and what they're measuring.

Most benchmarks are around coding and programming, so unsurprisingly that's what these models are very good at now. Some are abstract intellectual exercises rather than pragmatic and useful things. When claims are being made about technology, we should ask: What are these claims based on? Who's doing this measurement? What's this test?

Is there a particular problem we could solve in the next 5-10 years thanks to emerging tech?

If you follow healthcare research, there are wild things like the ability to regrow teeth, reverse nearsightedness, genetically stop diseases. In our lifetimes, we'll see cures for many cancers. 

But we're going to quickly enter an uncomfortable era where we have to have conversations about eugenics. There are already startups built around optimizing IVF, picking the "right" babies. That's going to quickly go into creating the baby you want.

We need serious conversations about what that means. It traces back to a lack of community, a lack of appreciation and respect for people who are different, a lack of empathy for others in other situations.

Is there a particular partnership that has really helped accelerate your work?

We did an amazing project with the Singaporean government on multilingual and multicultural biases. They recruited linguists and sociologists from nine different countries across the ASEAN region. We crafted an evaluation exercise, worked with companies to test models, and tested for both cultural biases and linguistic safety.

What I loved about that partnership is it was hundreds of people engaging in this work. It was my ideal project—done in this perfectly inclusive, robust way. I wish more of that work could happen.

What about that partnership could be carried over to other contexts?

There's this belief that it's impossible or really hard to get public perspectives on things. I don't think it is. The kind of red teaming I do builds public feedback directly into the model. It demonstrates that regular people who aren't AI experts can give meaningful, useful feedback to model builders.

Companies pour hundreds of billions of dollars into crafting AI models. If we put a fraction of that into building evaluation and feedback mechanisms, we'd see astronomically better models. We'd see people wanting to adopt AI rather than being force-fed it. We'd see better outcomes.

We wrote a Red Teaming for Social Good playbook with UNESCO—specifically for civil society organizations who want to start adopting these practices without needing tons of technological infrastructure or resources.

Tell us about the startup you're building.

For the past few years, I've led the nonprofit Humane Intelligence doing testing and evaluation work. The biggest gap is that we need to do this work at the same quality but faster—and if we want companies and governments to adopt this practice, it can't be as expensive or time-consuming as it is today.

The for-profit I'm building—also called Humane Intelligence—is a public benefit corporation. The goal is to build infrastructure for evaluation. It's conceptually easy to do the kind of testing I do, but our testing infrastructure isn't built around that. I'm creating more inclusive evaluation mechanisms as a core part of how technology is built.
