
Telescope recently brought together experts in New York City for a candid discussion about artificial intelligence's impact on finance and the workforce. The conversation revealed nuanced perspectives on the tremendous opportunities and serious challenges ahead, painting a picture of a transformation already underway that demands both optimism and urgent action.
The evening began with a striking observation from moderator Steve Clemons about the prevailing mood in New York: "I'm so impressed with how much optimism I've heard." However, Clemons suggested this surface-level positivity masks deeper concerns. "If you get beneath the surface of New York Climate Week or U.N. Climate Week and you get out into the country, you talk to the screen actors, you talk to people who work in the auto industry... there's anxiety," he noted. "They don't know how they're going to fit in."
This disconnect between elite optimism and grassroots anxiety set the stage for AI's most pressing challenge: ensuring that its transformative power benefits everyone, rather than coming at the cost of widespread displacement and social disruption.
Dr. Rumman Chowdhury, CEO of Humane Intelligence and former director of Machine Learning Ethics at Twitter, identified what may be the most fundamental problem in how we discuss AI: the systematic removal of human agency from the conversation. She calls this "moral outsourcing."
"If you look at any media article, it says, 'AI is smarter than a doctor, it will replace you.' The structure of that sentence, you notice how human beings are completely disempowered. It's acting as if this AI is autonomously making decisions. And it's not."
This framing problem has real consequences. When we talk about AI as an autonomous force rather than a tool shaped by human decisions, we abdicate responsibility for its outcomes. Chowdhury's solution involves fundamentally changing the development process: "At Humane Intelligence, this is what we work on. We test and evaluate AI systems by involving the people who will be impacted by them. And in doing so, we actually shape them more positively."
The agency problem extends beyond development to regulation. Chowdhury pushed back against the common refrain that regulation stifles innovation: "There's this phrase I use where I say brakes help you drive faster. When I work with companies, including Fortune 100 companies wanting to utilize AI, one of the biggest blockers to at-scale adoption is not actually knowing what good or bad even means."
Perhaps the most sobering aspect of the AI transformation is its disproportionate impact on young people. Chowdhury painted a stark picture: "Young people are petrified. They're petrified because they feel like they have no clear path on what they should do and how they should be doing it, the way a lot of us were when we were younger, you know, to be an engineer, to be a lawyer and doctor, that job will always exist."
This generational anxiety points to a deeper structural problem that Nathan Sheets, Chief Economist at Citi, identified as the "hollowing out" effect. While AI may complement the work of experienced professionals, it threatens to eliminate the entry-level positions where careers traditionally begin. "If AI is hollowing out all the jobs... but complementing the work of seniors, where is the next generation of seniors going to come from?" Sheets asked.
The implications are profound. "Sometimes it's a lost generation, and I think we have a risk of a lost generation," Sheets observed. This isn't just about retraining programs or new educational curricula. It's about preserving the pathways through which expertise and institutional knowledge are passed from one generation to the next.
The solution, according to Chowdhury, requires rethinking education itself: "What is the purpose of education? It's actually not to feed you nuggets of information that you digest and spit back. The purpose of education is not to teach you what to think. It's to teach you how to think."
Despite these challenges, the economic potential of AI remains enormous. Sheets outlined a compelling case based on recent Citi research, projecting that "before five years, we can enter into a decade where US growth is as much as a percentage point higher" due to AI productivity gains. This echoes the internet boom of the late 90s and early 2000s: "We saw rapid growth, wonderful wage gains, low inflation, and tax revenues surged into the U.S. Treasury."
Within financial services, adoption is following a predictable hierarchy. "You've got some very flexible firms with hedge funds... then you've got the asset management industry... adopting AI and thinking hard about how I use AI to manage money better... And then you've got regulated institutions that are more graduated," Sheets observed.
This cautious approach by regulated institutions isn't necessarily problematic. The regulatory environment, rather than stifling innovation, may actually enable more thoughtful and sustainable adoption by forcing companies to articulate clear standards for success and risk management.
The discussion drew illuminating parallels to the Gilded Age, though with crucial differences. Chowdhury noted her frustration with today's wealth accumulation: "One of my frustrations with the current generation of massive wealth accumulators is that very few of them have been spending their money."
In contrast, she observed that the robber barons, however problematic their methods, created lasting institutions through their philanthropy: "Andrew Carnegie took his money and sent it to people in Fiji to make a library that still stands today." The implication is that today's AI-driven wealth creation needs to be coupled with more immediate and systematic investment in public goods and social infrastructure.
Eric Braverman of Telescope proposed an innovative approach to workforce disruption: creating a "sovereign talent fund" that treats human talent as a national resource comparable to oil. This concept would provide a revolving fund to help people adapt and transition as AI transforms the job market, moving beyond traditional retraining programs to create a more systematic approach to human capital development.
This systemic thinking is essential because, as Chowdhury pointed out, the transformation is already happening gradually: "There's never going to be the moment you're going to wake up one day and suddenly realize the world is wildly different. Can we remember the day we woke up and said, 'Oh my God, the world is full of iPhones?'"
The challenge is maintaining agency over this gradual process. As Telescope focuses on finding innovative young founders, developing market mechanisms to address workforce disruption, and investing in companies that exemplify responsible innovation, the broader question remains: can we develop the institutional wisdom and social conscience to guide AI's development faster than we did with previous technological revolutions?
The conversation revealed that while AI's economic potential is enormous, realizing that potential in a way that benefits society broadly will require deliberate action and systematic thinking. Wealth creation is inevitable. The question is whether we can ensure it serves human flourishing rather than simply concentrating power in fewer hands. The answer may well determine whether the AI transformation becomes a story of shared prosperity or deepening inequality.

