The Trust Dividend in AI: Building Consumer Confidence
Centre for Trustworthy Technology
As Artificial Intelligence (AI) and other emerging technologies become increasingly integrated into our shared reality, they have the dual potential to either empower individuals at every level of society or exacerbate inequality by concentrating power and resources. Shaping technology development to align with a human-centered future requires the diligent practice of innovative and ethical approaches to trust. This episode of Trustworthy Tech Dialogues invites Professor Renée Cummings to examine how inclusive data structures, local governance approaches, and multi-stakeholder accountability models can create a more equitable technological future.
In Conversation:
Professor Cummings currently serves as the first Data Activist in Residence at the University of Virginia (UVA) School of Data Science and co-directs the Public Interest Technology University Network at UVA. She serves as an ethics counsel and AI expert to multiple intergovernmental alliances, academic institutes, and think tanks. Through her research and advocacy, she brings a nuanced social understanding of AI systems and a vision of a trustworthy technology future made possible through ethical resilience.
The Data Dilemma
This episode begins with a crucial discourse on the foundational role of inclusive data practices in building, imagining, and scaling an inclusive future of AI. While data has emerged as one of the most powerful drivers of societal and economic transformation, it also remains deeply vulnerable in terms of safety and security. This duality of power and vulnerability is the ‘data dilemma’ that underscores the urgency of pursuing data equity.
Professor Cummings emphasizes that “data equity is good economics,” underscoring its importance in creating inclusive systems that harness data’s potential for innovation and opportunity. Drawing on her work with the World Economic Forum’s Data Equity Council, she highlights the need for a nuanced understanding of data provenance and governance to critically think about the trajectory of data, how data is being manipulated, and innovative approaches to solutions. The World Economic Forum’s Data Equity Council Report incorporates Indigenous approaches to equity and sovereignty to build a framework of inquiry around data equity as the “shared responsibility for fair data practices that respect and promote human rights, opportunity, and dignity.” Inclusive data practices foster dynamic, comprehensive solutions, urging a sophisticated approach to the political, social, and economic contexts surrounding data.
In light of landmark global AI governance initiatives such as the 2024 Global Digital Compact, Professor Cummings asserts that effective governance must be grounded in local realities. Engaging city councils, universities, and communities ensures that governance systems reflect the lived experiences of those most affected by AI. These bottom-up approaches foster equitable decision-making, build critical AI and data literacy, and lead to outcomes that are both reflective and just.
Professor Cummings celebrates the rise of communities of practice that unite ethicists, technologists, and policymakers. Looking ahead, she stresses the importance of ensuring these practices extend from “Main Street to the C-suite.” As she poignantly asks:
“How do we lift up communities and members of the public? How do we provide the right kind of public information and education? And how do we include these voices in public oversight, which is now even more critical when we’re thinking about the advances being made with data and AI?”
Ethical Resilience and Meaningful Participation
Ethical resilience is a proactive practice. Professionals across disciplines—data scientists, policymakers, and technologists—are tasked with cultivating ethical resilience in how they think about AI and data, and with the active practice of anticipating and addressing emerging risks. While risk management is a top priority for many stakeholders today, the systems and skills needed for scenario mapping, red teaming, and forward-looking risk management are still in development.
Professor Cummings also highlights the profound power of algorithms to create or deny legacies. This creation or denial of opportunity, resources, and access marks the decisive divide between empowerment and disenfranchisement. Creating and ensuring open access to AI and data calls for interdisciplinary participation through accountable and explainable methods, including approaches such as impact assessments and auditability within the AI space.
Call to Action: Preparing for 2025 and Beyond
The future of trustworthy AI depends on the commitments we make today to embed ethical resilience into every stage of technology design, development, and deployment. Professor Cummings calls on all stakeholders to embrace this transformative era of AI innovation by taking collective action—ensuring we maximize AI’s potential for societal good while addressing its inherent risks.
As we approach 2025, the imperative is clear: we must prioritize the creation of ethically resilient and inclusive pathways for participation. From diverse data frameworks to localized, participatory governance models, these systems can serve as the bedrock of a future where technology is not only advanced but also trustworthy and equitable.