What is ‘Responsible AI’? Panelists Weigh In (Blog Post)
Modern AI reflects human biases—we can build it differently.
“With AI comes tremendous power and potential but also… associated risks,” said Yara Elias, AI Risk Leader, EY.
Elias sat on a panel at last month’s Catalyst Honours Conference & Dinner in a discussion moderated by Rubiena Duarte, Vice President of Global Diversity and Inclusion, Procore. The session, “Advancing Representation and Inclusion Through Responsible AI,” brought together the perspectives of tech, AI ethics, and diversity, equity, and inclusion (DEI) professionals.
Panelists Karlyn Percil, Chief Equity Officer, KDPM Equity Institute, and Anna Jahn, Director of Public Policy and Learning, Mila (Quebec AI Institute), discussed the human biases embedded in AI and how we can build the technology differently.
“I wanted to address the elephant in the AI room: current AI is built on the white racial frame. We have to talk about the dominant culture and going beyond DEI to look at human equity,” said Percil.
Jahn added: “This is not a technology that fell from the sky… We built it. We have the agency to build it differently.”
To disrupt patterns of systemic prejudice, Percil recommended that organizations focus on inclusive AI strategies and bring in a wide range of voices from within and outside their companies. “If we take the time to redirect our attention, organizations can start building responsible AI by starting where you are… What works within your company? Does your DEI strategy include AI?” she said.
“Claim your spot in the conversation,” Jahn encouraged the audience—coders and programmers should not be the only people deciding how AI algorithms function or which datasets they use.
Elias agreed with Jahn, saying, “We need people with backgrounds in ethics, legal and compliance, philosophy, security, and technology to join us at the table. It takes a village to build AI systems.”
Key takeaways from the session include:
- Substantial barriers exist for women and underrepresented groups when entering and succeeding in technology and AI fields. Companies must be proactive about dismantling obstacles through updated hiring practices, mentorship programs, inclusive team cultures, and more.
- Modern AI systems often reflect and amplify existing human biases, leading to unfair and unethical results. We have the responsibility and power to build these systems differently in fair and inclusive ways from the ground up.
- To create responsible AI, organizations need to take a systemic approach that considers ethics, compliance, security, and philosophy alongside technical expertise. Representatives from all backgrounds and identities should have a seat at the table.
- Rather than solely focusing on performance benchmarks, responsible AI also requires us to prioritize bias mitigation and fairness across gender, race, and other identity factors. Some loss of efficiency may be a necessary tradeoff.
- As the global workplace begins to embrace AI technology, now is the time to redirect your attention to rebuilding AI systems in a way that reflects your company’s values and a vision that prioritizes human equity and social good.
As the AI tool used to help write this summary blog wrote, waxing poetic: “The future remains unwritten—it is up to us to write it inclusively.”