Lessons from the Convergent Leader launch in Paris
6 min read & 43 min watch
Executive summary
In this guide, we share insights and videos from the launch of Catalyst and Coqual’s Convergent Leader research, hosted by AXA in Paris. At the event, researchers and practitioners discussed why AI transformation is fundamentally a leadership challenge and how the Convergent Leader model demonstrates that organisations that lead with trust, inclusion, and shared ownership of change consistently outperform those that rely on technical capability alone.
- Video 1 explores why leadership must evolve in the age of AI and introduces the Convergent Leadership model.
- Video 2 brings the model to life through practitioner perspectives — what leaders need to consider, what gets in the way, and what actually helps.
The guide concludes with five recommended actions that create the conditions for responsible, people‑centred AI, enabling organisations to move fast on AI without leaving their people behind.
How to cite: Smith, E. (2026). Lessons from the Convergent Leader launch in Paris. Catalyst.
Why leadership needs to change — and what “convergent” means
AI is moving faster than most organisations’ leadership models. While investment has focused on capability and tools, far less attention has been paid to how leaders create trust, inclusion, and ownership alongside AI.
In this opening conversation, Emilia Yu, Executive Director, Coqual’s Global Lab, and Ellie Smith, PhD, Director of Research for EMEA, Catalyst, outline:
- The promise and peril of AI.
- Challenges organisations are running into.
- Why existing models are no longer sufficient.
- What it means to be a Convergent Leader.
Emilia spotlights a question many organisations are grappling with: “How do we make the most of this AI investment?” She reflects on how the vastness of that question has left many leaders feeling paralysed in their decision-making, yet unable to pause their rollout. “Unfortunately, organisations cannot wait and see, and employees cannot simply hang in limbo forever.”
The Convergent Leader model reframes AI as a human leadership challenge. As Ellie explains, “organisations led by convergent leaders, guided by their decisions and the practices they put in place, consistently outperformed non-convergent leaders.” The data make clear that organisational leaders cannot rely on technical capability alone — they must simultaneously navigate disruption, inclusion, and long‑term capability building.
Ellie describes the environment Convergent Leaders create as one rooted in “trust, inclusivity, and — perhaps most importantly — shared ownership of change,” an environment that translates directly into stronger business and people outcomes.
This sets up a critical question: What does this look like in practice, inside real organisations?
Bringing convergent leadership to life
The practitioner panel picks up where the model leaves off: grounding Convergent Leadership in lived organisational experience.
Panellists reflect on:
- Where AI strategies succeed or stall.
- How fear, trust, and inclusion show up in real teams.
- What leaders often underestimate when implementing AI.
A consistent theme emerges: organisations are moving quickly on AI but unevenly on leadership strategies. Where leaders fail to bring people with them, adoption slows, inequities widen, and long-term risks grow.
Panellists emphasise that, for European organisations in particular, AI leadership cannot be separated from the regulatory and cultural context. As Birgit Neu, Fellow in Coqual’s Global Lab, notes, regulatory fluency “can’t just sit with the legal and compliance teams.” Instead, effective leaders are “much closer to the detail — not only understanding the issues that regulation presents, but also really understanding the opportunities it creates for their businesses and their AI strategies.” In practice, this means leadership teams need to engage deeply with regulation as a strategic input, not a constraint managed elsewhere.
That challenge is compounded by Europe’s diversity. Birgit highlights that leadership across the region requires “a very different level of cultural dexterity,” shaped by an understanding of political, economic, and historical differences, as well as varying expectations around authority and consultation. While the underlying technology may be consistent, she stresses that “the way that you lead people through change has to have local relevance.” In other words, Convergent Leadership demands a balance between global AI ambition and locally resonant leadership behaviours.
The panel also raises critical questions about how organisations define and measure success in an AI-enabled future. Ruksana Bhaijee, Global Head of DEI at the Financial Times, reflects on growing concern about how value is created and assessed, asking: “How is success measured? What does success look like?” Beyond traditional financial metrics, she challenges leaders to consider people performance and broader measures of impact, particularly in a tough climate where responsible AI is increasingly under scrutiny.
Without inclusive, context-aware leadership — grounded in trust, cultural understanding, and clear measures of value — organisations risk leaving people behind as they accelerate technology adoption.
The five recommended actions for organisations
The practitioner discussion reinforces the five actions recommended in the Convergent Leader organisational playbook for leaders who want to create the conditions for successful AI transformation.
- Create clarity through ongoing, candid communication: Leaders must keep employees in the loop, helping them understand how AI connects to organisational strategy, what is changing, and why. Just as importantly, clarity requires candour — being open when there is uncertainty rather than leaving people to fill the gaps themselves.
- Put practical protections in place for everyday AI use: This goes beyond a responsible AI policy that lives on an intranet. It means clear, practical guardrails that employees understand and can apply in their day‑to‑day use of AI — creating confidence about what is acceptable, expected, and safe.
- Design AI transformation as a partnership with employees: This frames employees not as passive users of AI tools, but as valued co‑creators of the transformation. Leaders who treat people as disposable in the face of automation erode trust; those who invite partnership build shared ownership of change.
- Redesign early-career roles to sustain the future talent pipeline: This emerged as a particularly urgent concern. As Melinda Marshall, Fellow in Coqual’s Global Lab, notes, “We heard this in every interview we conducted.” With AI taking on more tasks typically executed by junior employees, organisations are questioning how to justify hiring early-career talent — yet the pipeline depends on it. This requires rethinking the purpose of entry-level roles and intentionally redesigning learning and progression pathways, rather than quietly hollowing them out.
- Build feedback, learning, and measurement systems to track impact and equity: These remain underdeveloped in many organisations. The mechanisms to track how AI is reshaping work (and its impact on people) are often not yet in place. Without data, leaders lack visibility into where outcomes may be uneven, inequitable, or unintentionally harmful.
Taken together, these five actions are not a checklist, but a leadership system. They are the practical foundations that enable organisations to move fast on AI without leaving trust, inclusion, and long‑term capability behind. As Melinda put it: “These five ways are really what create the conditions where people feel safe enough to learn, experiment, and take risks.”