Centre for Trustworthy Technology

Ideas Mined from Trustworthy Tech Dialogues

Trust and AI: Reflections on 2024 and the Road Ahead

As we stand on the cusp of 2025, the intersection of trust and Artificial Intelligence (AI) has never been more critical. The final episode of Trustworthy Tech Dialogues for the year brings together two visionaries, Glen Weyl and Audrey Tang, to reflect on the transformative advancements of 2024 and explore actionable strategies for the year ahead.

With unwavering dedication to advancing technology for the collective good, Weyl and Tang delve into pivotal themes shaping the future of AI and society. These include the evolution of alignment assemblies, the narrowing divide between open-source and proprietary AI models, the rise of Sovereign AI and Global Governance Frameworks, the pressing challenges of deepfakes and information integrity, the acceleration of AI adoption across industries, and their vision for 2025.

Their insights present a compelling framework for a future where technology is deeply aligned with societal values, fostering trust, inclusivity, and collaboration across a diverse array of stakeholders.

In Conversation:

Glen Weyl, Founder of RadicalxChange and Research Lead at Microsoft’s Plural Technology Collaboratory, has long championed the use of technology to catalyze social progress. As Chair of the Plurality Institute and Senior Advisor to the GETTING-Plurality research network, his work consistently sits at the intersection of innovation and inclusion.

Audrey Tang is a globally respected technologist known for her contributions to free software development and her advocacy for participatory governance. Recognized by TIME magazine as one of the 100 Most Influential People in AI, she continues to advance collaborative and open approaches to technology.

In 2024, Glen Weyl and Audrey Tang co-authored Plurality: The Future of Collaborative Technology and Democracy, a landmark publication that explores the power of technology to enhance democratic values, foster mutual recognition, and bridge cultural divides.

Together, they bring their collective expertise to reflect on the major technological and societal milestones of 2024, offering a vision for a collaborative and inclusive digital future.

Alignment Assemblies: A Democratic Framework for AI

Alignment assemblies were designed to bridge the gap between emerging AI technologies and collective societal values. These assemblies invite diverse participants—both online and in person—to engage in guided discussions on their needs, preferences, and fears concerning AI adoption.

This innovative approach, launched in 2023 at the White House Summit for Democracy, has since evolved into a powerful mechanism for societal consensus on AI’s boundary conditions across various industries and regions. Audrey reflects on her experience organizing a citizens’ assembly that tackled challenges such as deepfakes and crypto scams. Using a randomized, stratified sampling method, participants were selected to represent society in microcosm. AI-supported tools facilitated the assembly, enabling efficient discussions and real-time consensus building through virtual chat rooms and transcription services.
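The selection method Audrey describes, randomized stratified sampling, can be illustrated with a short sketch. This is not the assembly's actual software; the population schema, field names, and strata below are hypothetical, chosen only to show how proportional seats per stratum keep the sample a microcosm of the whole.

```python
import random
from collections import defaultdict

def stratified_sample(population, strata_key, total_size, seed=None):
    """Draw a random sample whose strata proportions mirror the population's.

    population: list of dicts describing candidates (hypothetical schema)
    strata_key: the field to stratify on, e.g. a region or age band
    total_size: number of assembly seats to fill
    """
    rng = random.Random(seed)

    # Group candidates by stratum.
    strata = defaultdict(list)
    for person in population:
        strata[person[strata_key]].append(person)

    # Allocate seats to each stratum in proportion to its population share,
    # then draw members at random within the stratum.
    sample = []
    for members in strata.values():
        seats = round(total_size * len(members) / len(population))
        sample.extend(rng.sample(members, min(seats, len(members))))
    return sample

# Toy population: 70% urban, 30% rural. A 10-seat assembly mirrors that split.
population = [{"id": i, "region": "urban" if i < 70 else "rural"}
              for i in range(100)]
assembly = stratified_sample(population, "region", 10, seed=42)
```

In practice, assemblies stratify on several attributes at once (age, gender, region, education), and rounding remainders are reconciled so the seat total is exact; the single-attribute version above just makes the core idea visible.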

“This is a very quick way to show that democracy can evolve as quickly as emerging technologies,” Tang observes. She emphasizes that AI can be aligned with the public interest through actionable feedback loops, a dynamic process that strengthens trust and collective action. Crucially, the goal isn’t to speed up or stop AI development, but to ensure society collectively shapes its course. As Audrey aptly put it, “It’s not just about pressing the gas or the brakes—it’s about steering. When everyone has their hands on the steering wheel, society isn’t forcibly aligned by AI; instead, we align together.”

The Narrowing Gap: Open-Source and Proprietary AI Models

One of the most significant trends of 2024 was the narrowing performance gap between proprietary and open-source AI models. Several reports indicate that the capabilities of the best proprietary models and their open-source counterparts are now only a year apart. Glen and Audrey delved into the implications of this trend, highlighting the importance of shifting the narrative from competition between models to exploring the transformative potential of convergence and collaboration.

In 2024, open-source models like Tongyi Qianwen (Qwen) made remarkable strides in both capability and accessibility, driven by platforms like Hugging Face. These advancements empower communities to align AI tools with local values and narratives by remixing smaller, specialized models tailored to their unique cultural contexts. As a result, these models not only reflect diverse cultural norms but also champion decentralized development, enhance transparency, and support pluralistic value systems—recognizing that multiple, equally valid values can coexist, even in conflict, within a shared framework of respect and inclusion.

Unrestricted access to open-source models offers opportunities but also raises concerns about misuse, such as cyber warfare and disinformation. Audrey Tang and Glen Weyl caution that abandoning open-source initiatives risks monopolistic or authoritarian control of AI. Open-source models provide essential checks on proprietary systems, ensuring they prioritize safety, inclusivity, and societal values. Rather than competing, Tang and Weyl envision open-source and proprietary models converging on shared principles. They advocate for a balanced approach where open-source models enhance transparency and complement proprietary systems, safeguarding public trust in AI.

Proactive Strategies Against AI-Driven Misinformation

As 2024, widely described as ‘the year of global elections,’ unfolded with several countries voting for new leadership, fears of AI-generated misinformation and deepfakes loomed large. Yet, as the year closes, experts report that AI-driven disinformation had a far smaller impact on election outcomes than anticipated.

Audrey emphasizes the power of prebunking—a proactive strategy that counters disinformation before it spreads. “Unlike debunking, which can be accusatory and polarizing, prebunking fosters democratic resilience by encouraging collective action and critical thinking,” Tang explains. By uniting communities against shared challenges, the approach inoculates the public against false narratives rather than contesting them after the fact.

Paving the Path: Sovereign AI, Global Frameworks, and Industry Transformation

Sovereign AI: Enabling Cultural and Technological Independence
Sovereign AI represents a nation’s pursuit of AI development rooted in its own infrastructure, data, workforce, and business networks—cultivating both technological self-reliance and cultural identity. More than an economic asset, Sovereign AI serves as a ‘translational layer’ between global foundational models and localized applications, enabling seamless adaptation to cultural norms and community-specific needs. As Glen puts it, “By translating global norms into local practices, Sovereign AI can foster mutual understanding and adaptability while addressing the unique needs of communities.” The rapid growth of Sovereign AI initiatives across regions has created new avenues for inclusive innovation, participatory design, and the enrichment of foundational AI systems with diverse cultural and linguistic perspectives.

Global Frameworks: Setting the Stage for Alignment

In 2024, as Sovereign AI efforts gained momentum, multilateral institutions took bold steps to establish global AI alignment frameworks, including the adoption of the Global Digital Compact. These frameworks aim to guide AI development under universally agreed principles. However, the challenge lies in reconciling these top-down standards with the grassroots dynamism of Sovereign AI. Audrey and Glen argue that Sovereign AI can play a pivotal role in this balancing act, acting as a bridge that interprets global norms while fostering localized adaptation, innovation, and resilience.

Industry Applications: Transforming Systems for Trust and Growth

In the industrial landscape, the imperative has shifted from merely adopting AI to strategically leveraging it for transformative impact across operations, value chains, and decision-making processes. Organizations are no longer asking if AI should be integrated but rather how it can drive innovation, efficiency, and trust simultaneously. Glen emphasizes the critical role of decentralized governance models in addressing these trust imperatives, providing a framework for implementing AI responsibly and ethically.

Drawing on the transformative adoption of electricity, Glen highlights a fundamental insight: technological breakthroughs alone are insufficient. True success demands systemic reorganization—rethinking processes, workflows, and governance structures to create an environment where innovation can thrive. Glen argues that overcoming socio-technical barriers, such as resolving inefficiencies in data-sharing mechanisms, is not just a technical necessity but a cornerstone for fostering trust, accelerating AI integration, and fully realizing its transformative potential across industries.

Guiding Principles for 2025: Voice, Choice, and Stake

As we look toward 2025, Audrey proposes three guiding principles for redefining the social contract in emerging technology and trust: voice, choice, and stake.

  • Voice ensures that individuals have meaningful input into how technology interacts with society, fostering transparency and public engagement.
  • Choice empowers users by aligning operators with pro-social values and providing real alternatives, enabling people to make decisions that reflect their preferences and ethics.
  • Stake emphasizes equitable participation, ensuring fair compensation, proper attribution, and shared influence over the trajectory of technological development, thereby creating a collaborative and inclusive ecosystem.

Glen underscores the importance of these principles, calling on governments and corporations to prioritize inclusive participation and invest in collaborative infrastructures that promote global interoperability and consensus. “We must move beyond defensive stances and litigation to recognize and reward those who contribute to AI’s success,” he asserts. This shift from confrontation to collaboration, Glen argues, is essential to unlocking AI’s full potential while maintaining trust and accountability.

A Vision for Trustworthy AI in 2025

As we close the chapter on 2024, a year of profound advancements and challenges in AI, the reflections of Audrey Tang and Glen Weyl offer a compelling roadmap for the future. Their insights emphasize a paradigm where technology is not only a tool for innovation but a force that aligns with societal values, fosters trust, and empowers communities to actively shape their digital destinies.

Their vision for 2025 calls for prioritizing collaboration over confrontation, urging stakeholders—governments, corporations, technologists, and citizens alike—to move beyond defensive strategies. Instead, they advocate for investment in inclusive structures that foster trust, enable global interoperability, and drive equitable progress.

The experiences of 2024 underscore an essential truth: the future of AI is neither predetermined nor singular. It is a collective endeavor, shaped by the choices we make today. By anchoring technological development in shared values, amplifying diverse voices, and embracing collaboration across boundaries, we can guide AI toward a future that inspires trust, enriches lives, and strengthens the fabric of society.
