Centre for Trustworthy Technology

CTT Newsletter

CTT View

Description: CTT’s analytical lens of trust

A Resurgence of Trust in Technocrats and Cross-Sector Collaboration in the Age of AI

The widespread deployment of artificial intelligence (AI) has created a renewed imperative for industry and its leaders, or “technocrats”, to focus on enhancing trust in AI’s varied applications. AI’s rapid rise to prominence has rejuvenated reliance on experts to navigate the technology’s complexities and harness its potential for social good. It is increasingly crucial that developers, users, and public institutions, both civil society and government, join forces to ensure the trustworthy design, development, and deployment of AI applications. Only through this inclusive approach can we build AI that truly benefits everyone.

Currently, AI research remains predominantly industry-driven, with companies playing the leading role in shaping the technology’s trajectory. Industry pioneers command essential inputs such as access to vast datasets, top-tier talent, large-scale models, and computing power. Industry accounts for 96% of the largest AI models in any given year, with model size measured in parameters across supervised, unsupervised, and reinforcement learning models. In a telling comparison, industry produced 51 notable machine learning models in 2023, while academia produced only 15; the count covers models selected by the research institute Epoch AI based on their historical significance, citation counts, and state-of-the-art advancement. Given this reality, industry stewardship is indispensable for building a trustworthy AI ecosystem in the near and medium term.

Policymakers and regulators are increasingly collaborating with industry to understand and steer AI design-development-deployment cycles and ensure safety and trustworthiness. These initiatives span the multilateral, national, and sector-specific levels, embodying a collective desire to establish guardrails.

In terms of multilateral efforts and coordination, this dynamic was evident at the inaugural AI Safety Summit at Bletchley Park and its successor in Seoul. The conscientious collaboration between government and industry has led to the interim International Scientific Report on the Safety of Advanced AI, spearheaded by Turing Award winner Prof. Yoshua Bengio. The report underscores the necessity of international collaboration to enhance understanding and implement safety measures for general-purpose AI, stating that “amid rapid advancements, research on general-purpose AI is currently in a time of scientific discovery and is not yet settled science.” Additionally, the UN released its first report on the subject, Governing AI for Humanity, addressing the need for global collaboration, transparency, and ethical guidelines to ensure globally shared benefits, mitigate risks, and promote inclusive development. Over 60 countries are utilizing UNESCO’s Readiness Assessment Methodology, which helps states identify their level of AI technological readiness and implement policies accordingly. The OECD, too, has updated its principles on AI to better safeguard information integrity against the potential adverse effects of generative AI (Read more on our blog). Working to implement the OECD principles, the Global Partnership on Artificial Intelligence (GPAI) has established four working groups on data governance, the future of work, innovation and commercialization, and responsible AI. It is extremely encouraging to witness such proactive collaboration on defining the scope and modalities of AI governance at the multilateral level; this is essential for harmonizing regulatory frameworks and addressing the complex ethical and societal implications of AI technologies on a global scale.

At the national level, the OECD AI Policy Observatory documents 69 countries with AI laws, a significant increase over the past few years that, according to research by Deloitte, reflects a strategic evolution in AI governance. AI regulation has expanded in three phases: understanding AI, fostering its growth, and actively shaping its development. Nation-states are establishing national AI safety institutes, as seen in the UK, the US, and Singapore. In the US, the Senate has hosted several AI “listening sessions” with representatives from Big Tech, underscoring the re-emergent central role of technocrats in fostering critical discussions. In the EU, partnerships with AI companies focus on projects like the AI4Cities initiative, which aims to use AI to reduce carbon emissions in urban areas. In Japan, collaborations with AI firms address the aging-population crisis through developments such as AI-powered robots that assist with elder care and smart healthcare solutions that improve medical services. National-level AI collaborations are pivotal in addressing country-specific challenges, fostering a conducive innovation ecosystem, and ensuring that the benefits of AI advancements are equitably distributed across sectors.

Initiatives to ensure the safe and ethical advancement of AI also emerge from industry itself. A whistleblower letter garnered support across multiple companies, and an open letter documented industry-wide concerns about AI’s potential threats. At the same time, leading AI firms are publishing principles and best practices to ensure AI safety: Microsoft has offered insight into its red-teaming practices, and Anthropic is publishing its policy vulnerability tests. Partnerships and coalitions have also sprung up to face the novel challenges of AI. Adobe established a coalition for content provenance, and the Partnership on AI – a consortium of industry, government, and civil society organizations – seeks to ensure the authenticity and transparency of digital content, combat misinformation, and promote ethical AI practices through shared standards for content attribution and provenance. Meta also posted the results of a Community Forum survey conducted in collaboration with Stanford, which revealed a strong public demand for increased transparency in AI development, ethical guidelines to prevent misuse, and greater accountability for AI systems. Tech companies are increasingly collaborating with each other and with academia to diversify their engagement with civil groups, fostering inclusive dialogue and ensuring AI development aligns with societal needs and values.

Despite the growing collaboration across multiple levels, the risks and potential harms of AI continue to rise. The OECD’s AI Incidents Monitor has tracked a growing number of incidents and harms over the past year. AI tools have been misused to spread misinformation, for example during the EU elections, when an AI-generated video impersonating an official claimed to have rigged the elections. Other forms of AI misuse are also well documented, including scams and fraud, harassment, and opinion manipulation. Additionally, inadequate infrastructure and deep inequalities limit the potential of AI as a force for good: the International Telecommunication Union estimates that 2.6 billion people are still offline, unable to reap the benefits of AI. Continuously evolving AI technologies and applications have thus far outpaced efforts to harness their benefits safely, responsibly, and equitably.

The global commons has taken cognizance of the ascent of AI, seeking both to mitigate its risks and to distribute its benefits. However, the full realization of this aspiration relies on a trustworthy ecosystem that promotes ethics, diversity of perspective, sustainability, equity, accountability, and transparency. Industry plays a critical role here, but civil society, governments, and the public should be elevated to equal stature. Together, they must become stakeholders in setting a trustworthy foundation for a technological disruption with the potential to alleviate or exacerbate prevalent suffering.

On the Pulse: Trust in the Headlines

Description: Analyzing news headlines in 3-4 sentences with CTT’s “trust perspective”, highlighting the importance of recent developments and the roles of different players (NGOs, industry, govt)

Employees from some of the leading AI companies called for increased transparency and public discussion surrounding the development of artificial intelligence, as well as greater protection for whistleblowers. This concern from industry insiders serves as a stark reminder of the growing responsibility of civil society and the public sector to represent the public interest and join the discussion about AI safety.

The need for large quantities of high-quality, accessible data to improve AI models contrasts with growing worry about people’s personal information being used when they prompt AI systems. Apple recently promised that its integrated AI technologies allow users to utilize its tools without feeding private information back into the system.

Privacy principles are playing an increasingly central role in conceptualizing regulatory frameworks for data-intensive technologies. The OECD has brought forth six crucial policy considerations to address the convergence of opportunities and challenges in AI and privacy. These insights call for multi-stakeholder understanding and collaboration to harmonize AI developments with privacy principles.

Data centers are cornerstones of AI infrastructure with significant environmental impact due to their high energy, carbon, and water intensity; the International Energy Agency estimates that data centers, cryptocurrencies, and AI will consume as much electricity as Japan by 2026. In Chile, locals have pushed back against plans to build Big Tech data centers in their communities, fearing for their local water supply in the face of an ongoing drought that is expected to last until 2040. They highlight that the centers extract natural resources like energy and water without providing much local benefit.

In a year of pivotal elections across the globe, deepfakes – hyper-realistic AI-generated images and videos – threaten democracy as candidates and voters alike learn to navigate this novel vehicle for (mis)information without robust regulation beyond platform policies. However, initial impressions of the role of deepfakes in India’s election indicate that they were more often used to campaign than to deceive. The US and the UK, among other countries, will face similar challenges during their elections later this year.

Emerging Research

Description: Analyzing recent publications in 3-4 sentences with CTT’s “trust perspective”, highlighting the importance of discoveries and the ramifications for trust & tech

A third of the world’s population remains disconnected from the internet, as affordability continues to be a major obstacle. Simultaneously, AI poses novel questions for broadband policy while raising the stakes of broadband access.

The report on the Global Index on Responsible AI highlights several shortcomings in current practices for adopting AI responsibly, including persistent gender inequality, the disregard of workers, insufficient sensitivity to different cultures and languages, and a general failure to translate AI governance into responsible adoption of the technology.

Researchers at Anthropic mapped the mind of a large language model to improve AI interpretability, uncovering interpretable representations of features inside the model. These representations grant insight into processes relevant to both AI safety and transparency, improving the ability to steer the model’s behavior precisely.
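
Conceptually, this strand of interpretability research trains a sparse autoencoder (a dictionary learner) on a model’s internal activations so that individual learned features align with human-recognizable concepts. Below is a minimal sketch of that idea in PyTorch; the layer sizes, sparsity penalty, and random stand-in activations are illustrative assumptions on our part, not Anthropic’s actual setup.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy dictionary learner over a model's hidden activations.

    Illustrative only: sizes and hyperparameters are placeholders.
    """
    def __init__(self, d_model=512, n_features=4096):
        super().__init__()
        # Expand into an overcomplete feature basis, then reconstruct.
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, activations):
        # ReLU keeps feature activations sparse and non-negative.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features

sae = SparseAutoencoder()
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_weight = 1e-3  # trades reconstruction fidelity against sparsity

# Stand-in for activations captured from a language model's hidden layer.
batch = torch.randn(64, 512)
reconstruction, features = sae(batch)
loss = ((reconstruction - batch) ** 2).mean() + l1_weight * features.abs().mean()
loss.backward()
optimizer.step()
```

The L1 penalty is the key design choice: it forces most features to stay silent on any given input, so the few features that do fire become candidate handles for auditing and steering the model.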

Daron Acemoglu finds that AI will cause a modest increase in productivity for lower-skilled workers, but it will nevertheless deepen inequality by widening the disparity between capital and labor income. Additionally, he quantifies in economic terms the novel social harms that accompany new applications of AI, such as manipulative algorithms.
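
The modest headline numbers follow from task-based productivity accounting in the spirit of Hulten’s theorem; the following is our simplified illustration of that logic, with hypothetical figures rather than Acemoglu’s exact estimates:

\[
\Delta \mathrm{TFP} \;\approx\; s_{\text{exposed}} \times a \times \bar{\gamma},
\]

where \(s_{\text{exposed}}\) is the share of economic tasks exposed to AI, \(a\) is the fraction of those tasks actually automated or augmented within the forecast window, and \(\bar{\gamma}\) is the average cost saving per affected task. Plugging in illustrative values of \(s_{\text{exposed}} = 0.2\), \(a = 0.25\), and \(\bar{\gamma} = 0.15\) gives \(0.2 \times 0.25 \times 0.15 \approx 0.75\%\) in total productivity gains over the adoption horizon, which conveys why such estimates come out modest.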

As part of its No Language Left Behind initiative, Meta developed an LLM that can translate 204 different languages, including 150 so-called “low-resource” languages from areas and countries with limited internet access and correspondingly sparse online data for models to train on. The capacity to effectively navigate and translate languages not only helps break down barriers across cultures but also broadens the accessibility of AI tools.

Artificial intelligence enhances the capabilities of persuasive technologies through data-driven personalization of responses. This paper suggests four strategies to mitigate the risk of misuse: protecting privacy, fostering pluralistic competition among persuaders, ensuring accountability, and promoting digital literacy.

AI programs lack intentions yet pose risks of harm to people; this places them in the legal realm of risky agents without intentions. As a result, liability lies with the developers, sellers, and users of AI systems. This requires establishing objective standards of behavior for users, along with objective standards of conduct that designate liability to developers and sellers.

The rising influence of a few tech giants in the AI industry has garnered the attention of competition authorities around the world. This paper argues that three successive steps can help keep the AI market healthy: ensuring fair competition without harm to competitors or consumers, promoting responsible innovation, and upholding market integrity.

Trust Unicorns

Description: Explain how “trust unicorns” in corporate, academia, laws & regulation, startups, new institutions, conferences are advancing trust in tech in unique ways

The new London Institute for Healthcare Engineering offers a site of cross-sector collaboration, spanning academia, start-ups, and pharmaceutical giants, to channel resources and research into novel products and technologies for the benefit of patients.

Singapore is the third country, after the UK and the US, to establish an AI Safety Institute, housed in its Digital Trust Centre. The Institute is charged with testing and evaluating AI systems, developing policies, and fostering international conversation about the responsible design, development, and deployment of AI.

CTT Updates

Blogs

Last month, 28 countries and several industry stakeholders convened at the second iteration of the AI Safety Summit in Seoul...

Metaphors

“Metaphorical thinking — our instinct not just for describing but for comprehending one thing in terms of another, for equating I with an other...

In a landmark move, the Organization for Economic Co-operation and Development (OECD) is revising the inaugural intergovernmental standard...

This month, the UN Conference on Trade and Development released its report on the Digital Economy. The report highlights trends and policies...

IdeasMined

“We were inspired by the Internet. The Internet is not a platform. It is a network. Similarly our telecommunication system is a network...

In 2019, Patrick Collison (Co-founder and CEO of Stripe) and Tyler Cowen (Professor of Economics at George Mason University) called for a...

Publications

An open network is a system where nodes or entities interlink freely, operating without the confines of centralized control.

Media has long served as a vibrant showcase of human creativity, acting as a dynamic platform for storytelling.

As the scientific community deepens its understanding of neurological function, impressive strides in artificial intelligence (AI) enable...

Podcasts

Bringing together interdisciplinary experts to focus on participatory infrastructure throughout the life of data-reliant tools.

Shaping a future where technology amplifies societal trust.

CTT Book Club
Co-Intelligence: Living and Working with AI – Ethan Mollick

Ethan Mollick brings a career in innovation and entrepreneurial strategy to offer four guiding rules for developing co-intelligence through human-AI collaboration. The book suggests a way forward in navigating the “jagged frontier of AI”, its tumultuous range of strengths and weaknesses, to discover new use cases. Mollick advises readers to “be the human in the loop” and suggests collaborative approaches to prompt engineering. Co-Intelligence provides tangible advice for leveraging generative AI in life and work.

Plurality – Audrey Tang, E. Glen Weyl, and Collaborators

Plurality provides an in-depth analysis of how Taiwan’s renowned digital democracy has successfully cultivated inclusive, technology-driven growth for its society. The book documents how digital tools have been designed and deployed to fortify social cohesion and foster a climate of trust. It elucidates how conscious choices about emerging technology hold the potential to transform sectors across the world.

AI Needs You – Verity Harding

AI Needs You by Verity Harding draws powerful lessons from three landmark technological revolutions of the twentieth century—the space race, in vitro fertilization, and the internet—to inspire and empower individuals to engage actively in discussions about AI and its future. The narrative delves into historical technological disruptions, highlighting how we can learn from the past to navigate the complexities of the AI ecosystem. Harding’s insights encourage a proactive approach to shaping AI, ensuring it aligns with democratically determined values and serves society with trust and purpose.

The Singularity Is Nearer: When We Merge with AI – Ray Kurzweil

Ray Kurzweil’s sequel to his 2005 bestseller The Singularity Is Near renews his trend-setting predictions about artificial intelligence and its convergence with human intelligence. He contextualizes the explosive growth of LLM chatbots within his broader framework of technological development as an exponential, rather than linear, progression. Kurzweil anticipates profound impacts in medicine through the development of “AI-driven bio simulations” and warns against anti-AI sentiment and ignorance going too far. This visionary follow-up provides new data and guidance on how humans can embrace AI.

CTT Trust Polls



Stay Updated with CTT Tech News

Subscribe to our newsletter