Centre for Trustworthy Technology
The exponential rise of generative Artificial Intelligence (AI) over the last few years is radically reshaping how information is created and consumed, calling for an urgent and honest reckoning with its authenticity and reliability. As AI becomes even more dominant in shaping our future, the threat of disinformation looms larger—among the most pressing of these challenges is its unprecedented ability to generate and spread falsehoods at scale. Ranked as the second highest global risk in the World Economic Forum’s 2024 Global Risks Report, misinformation has the potential to unravel the core pillars of democracy, public health, and societal cohesion.
In response to this critical challenge, the United Nations (UN) Global Digital Compact (GDC) offers a blueprint to protect the integrity of information. It states that “access to relevant, reliable and accurate information and knowledge is essential for an inclusive, open, safe and secure digital space.” The GDC outlines a robust framework to counter the destabilizing forces of disinformation, rebuilding trust and ensuring that the digital world of tomorrow is more secure, transparent, and resilient.
Conceptualizing Information Integrity
Information integrity refers to the accuracy, credibility, and authenticity of the content we encounter in digital spaces. The rise of emerging technologies, particularly generative AI, has made it increasingly difficult to distinguish authentic content from misleading narratives. Deepfakes, in particular, risk being used to maliciously sway public opinion and erode trust in digital ecosystems.
In the past three months, the OECD has reported several deepfake incidents, ranging from manipulated videos to fabricated text and speech falsely attributed to political candidates around the globe. In response, leading technology companies have joined forces with media organizations to form the Coalition for Content Provenance and Authenticity (C2PA). The C2PA is developing technical standards that enable both content creators and consumers to trace the origins of media, enhancing transparency and trust in the digital information ecosystem. This collaborative effort marks a significant step forward in the global fight against misinformation and the protection of content integrity.
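To make the provenance idea concrete, the sketch below shows, in highly simplified form, how a publisher might bind origin claims to a piece of media and how a consumer could later verify them. It is illustrative only: real C2PA manifests are embedded in the asset and signed with certificate-backed signatures rather than the shared key used here for simplicity, and the function names, key, and “Example Newsroom” claims are invented for this example.

```python
import hashlib
import hmac
import json

# Illustrative only: C2PA binds claims with certificate-based signatures,
# not a shared secret. This key exists purely for the sketch.
SIGNING_KEY = b"publisher-secret-key"

def create_manifest(media_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims (who made the asset, with what tool) to a hash of the media."""
    payload = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claims": claims,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the media is unaltered and the claims were signed by the publisher."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    serialized = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, manifest["signature"])
    hash_matches = manifest["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return untampered and hash_matches

# A news photo published together with its provenance record
photo = b"...raw image bytes..."
record = create_manifest(photo, {"creator": "Example Newsroom", "tool": "camera-capture"})
print(verify_manifest(photo, record))            # True: origin traceable, content intact
print(verify_manifest(photo + b"edit", record))  # False: content altered after signing
```

The design point the sketch captures is the one C2PA standardizes: provenance claims are only trustworthy if they are cryptographically bound to the exact bytes of the media, so any later manipulation breaks verification.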
A point-of-view paper by the Centre for Trustworthy Technology examines the critical need to balance innovation with information integrity in today’s rapidly evolving media landscape. The report underscores that credibility must be paramount in an era defined by the swift generation and dissemination of content. To cultivate trust and bolster the reputation of technology platforms, it advocates for robust mechanisms that allow audiences to authenticate both the origin and integrity of the content they encounter.
Furthermore, the report offers actionable recommendations to enhance information integrity, from advanced software and hardware interventions to the potential role of regulatory frameworks. It also highlights the importance of standardized contracts between model developers and data providers to ensure fairness and equity. Most importantly, the publication calls for empowering and educating both the workforce and the public on strategies to embed trustworthiness into the information ecosystem, laying the foundation for a more transparent and reliable digital future.
Information Integrity and the Sustainable Development Goals (SDGs)
Information integrity is critical to building and retaining societal trust. For instance, SDG 3 (Good Health and Well-being) relies on accurate health information to combat public health crises, SDG 9 (Industry, Innovation, and Infrastructure) depends on transparent industry reporting to drive innovation, and SDG 16 (Peace, Justice, and Strong Institutions) is directly impacted by the spread of disinformation that weakens democratic processes and institutions.
In response to these challenges, the UN’s GDC outlines several commitments to support information integrity and the SDGs by 2030.
The Industry Imperative for Information Integrity
The GDC notably calls for digital platforms and tech companies to promote information integrity by enhancing accountability and transparency in content moderation and algorithmic decisions. Additionally, the GDC highlights the need to empower users to make informed choices through clear terms of service and data usage policies, both of which are crucial building blocks for trust in the digital ecosystem.
Furthermore, the GDC advocates for social media platforms to strengthen user privacy while providing researchers with access to data. Measures such as labeling, watermarking, and authenticity certification for AI-generated content hold significant potential to mitigate the risks of disinformation. It also urges companies and developers to actively pursue and implement solutions that address the potential harms of AI-enabled content, including hate speech and discrimination. Finally, it calls for greater transparency, encouraging organizations not only to develop these safeguards but also to communicate their actions and strategies openly to the public, ensuring accountability and fostering trust in the responsible use of AI technologies.
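As a simple illustration of what machine-readable labeling of AI-generated content could look like in practice, the sketch below attaches a disclosure record to a piece of generated text and shows how a platform might surface it to users. The schema, field names, model name, and helper functions are hypothetical; production systems would follow an emerging standard such as C2PA assertions or a platform’s own labeling format.

```python
import json
from datetime import datetime, timezone

# Hypothetical disclosure-label schema, for illustration only.
def label_ai_content(text: str, model_name: str) -> dict:
    """Attach a machine-readable provenance label to AI-generated text."""
    return {
        "content": text,
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was produced by an AI system.",
    }

def surface_disclosure(record: dict) -> str:
    """What a platform might display alongside labeled content."""
    if record.get("ai_generated"):
        return f'{record["content"]}\n[{record["disclosure"]} Model: {record["generator"]}]'
    return record["content"]

labeled = label_ai_content("Draft summary of the policy report...", "example-llm-v1")
print(json.dumps(labeled, indent=2))   # machine-readable label for downstream systems
print(surface_disclosure(labeled))     # human-readable disclosure shown to users
```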
The future of a secure and resilient digital ecosystem depends on a unified commitment to information integrity. Only coordinated efforts—uniting governments, industry leaders, and civil society—can build the trust essential for a fair and equitable digital world. Transparency, accountability, and collaboration must guide technological advancements, social initiatives, and regulatory frameworks. By doing so, the global community can lay the foundation for a more inclusive, ethical, and reliable digital future—one anchored in a strong and trustworthy information ecosystem.