Who should own AI?

Executive summary
Many organisations frame complex issues such as culture or ethics as “everyone’s responsibility”. However, without deliberate accountability mechanisms, responsibility can become diffuse rather than effective.
As artificial intelligence (AI) increasingly shapes how organisations hire, evaluate performance, allocate work, and make decisions, senior leaders face a growing governance challenge: who is accountable when AI-enabled decisions affect people’s outcomes? Evidence from how organisations already govern inclusive culture offers a clear answer — and a practical template.
Drawing on a survey of 2,891 European leaders,1 our research shows that accountability for complex, people‑shaping outcomes is already distributed across leadership levels and functions and reinforced through performance systems. However, distributed responsibility without deliberate design creates risk. The lesson for AI is clear: effective governance depends less on naming a single owner and more on designing accountability into decision rights, performance systems, and managerial authority — the places where everyday decisions are made.
This research is written for CHROs, Chief Inclusion Officers, CTOs, Chief Data and AI leaders, and the executive teams responsible for designing the systems that govern how AI is used across the organisation.
How to cite: Smith, E. (2026). Who should own AI? Catalyst.