Centre for Trustworthy Technology

Measuring AI Progress: A Path to Trust – Insights from Global Reports

In 2019, Patrick Collison (Founder and CEO of Stripe) and Tyler Cowen (Professor of Economics at George Mason University) called for a “new science of progress”. They defined progress as “the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries” and cited the dearth of a broad-based intellectual movement focused on understanding the dynamics of progress, or how we can collectively accelerate it.

While the term AI was coined by John McCarthy at the Dartmouth Conference in 1956, significant gains in computational power, large data-set availability, and advancements in deep learning models have made generative artificial intelligence a key area of modern progress worthy of investigation, regulatory significance and societal interest.

Measuring progress in the AI development cycle involves assessing both technological advancements and societal engagement. By mapping this progress against efforts to distribute benefits across businesses, countries, and communities, and initiatives to mitigate harm and alleviate fears, a pathway can be laid to ensure AI aligns with the social good. This approach can enable the development of secure, reliable, and trustworthy AI that fosters societal trust.

There are multifarious benefits to this exercise:

  • Ethical Development and Public Trust
    – Tracking AI progress is crucial for prioritizing ethical development and mitigating risks such as bias and misuse. Transparent and ethical practices build public trust in AI, fostering widespread acceptance and safe integration into society.

  • Informed Policy and Global Standards
    – A clear assessment framework informs policymakers, helping balance innovation with societal values through effective regulations. Examples of such frameworks include the OECD AI Principles, the European Commission’s Ethics Guidelines for Trustworthy AI, and the PRISM Impact Assessment Framework for AI by the World Economic Forum. Global coordination, emphasized by the UN AI Advisory Body, addresses cross-border challenges and promotes shared ethical standards.

  • Accountability and Transparency
    – Regular assessments enhance transparency and accountability in AI systems, ensuring developers and organizations are held responsible for their technologies. Examples of these assessment frameworks include the UK Responsible Technology Adoption Unit (formerly Centre for Data Ethics and Innovation) guidelines, the AI Transparency Institute Framework, and the NIST AI Risk Management Framework. These frameworks foster trust and allow stakeholders to understand AI decision-making processes.

Even so, according to the 2024 Edelman Trust Barometer (an analysis of global trust levels across various sectors highlighting the critical factors influencing public trust in technology and innovation), only 30% of respondents globally are in favor of AI, whereas 35% are opposed to it. Pew Research Center’s 2023 Report on AI shows that a growing majority of Americans (52%) now express concern about the increasing use of AI in daily life, up significantly from 38% in December 2022.

Herein, by integrating findings from recent global reports such as the 2024 Global Index on Responsible AI, WIPO’s Generative AI Patent Landscape Report, the Shorenstein Centre’s AI Transparency recommendations, and the UN AI Advisory Body Interim Report, an analytical lens for assessing AI progress can be created.

Insights from the UN AI Advisory Body Interim Report

The UN AI Advisory Body Interim Report underscores several critical areas for AI governance and ethical considerations. It stresses the importance of international cooperation and coordination in AI governance to address cross-border challenges and ensure cohesive global standards. At a multilateral level, aligning AI development with human rights principles is crucial for ensuring that AI technologies do not infringe on fundamental rights and freedoms encoded in the UN Charter, International Human Rights Law, and the Sustainable Development Goals. Examples of rights and freedoms relevant to AI include the right to privacy and freedom from discrimination. The report also advocates for using AI to support the United Nations Sustainable Development Goals (SDGs), promoting AI applications that contribute to social good. Additionally, it calls for the development and implementation of ethical frameworks that guide AI development with a focus on fairness, accountability, and transparency.

Key Findings from the 2024 Global Index on Responsible AI

Governments play a critical role in the stewardship of AI ecosystems to ensure that AI technologies are developed and deployed in ways that are trustworthy, ethical, and aligned with the public interest. By establishing comprehensive regulatory frameworks, promoting transparency, and ensuring accountability, governments can mitigate risks associated with AI while fostering innovation. Tracking the progress of these initiatives is essential for creating effective templates and best practices that can guide other countries in developing their AI policy ecosystems, ultimately leading to a more cohesive and responsible global approach to AI governance.

Leading the Way: Top Performing Countries

Countries such as Canada and the UK are at the forefront of responsible AI. The UK’s AI Sector Deal, as part of its industrial strategy, promotes ethical AI development and deployment through the Responsible Technology Adoption Unit, which advises on transparency, fairness, and accountability in AI systems. Canada’s Directive on Automated Decision-Making mandates that federal AI systems meet standards for transparency and accountability, requiring algorithmic impact assessments and public reporting on AI use to ensure fairness and responsible AI implementation. Increasingly, nation-states are establishing policies that promote the ethical development and use of AI technologies. They advance governance, data privacy, and inclusivity, setting high standards for others to follow.

In Europe, nations like Germany and France have developed strong regulatory environments and actively participate in international AI ethics discussions. South Korea stands out in Asia, while other countries, including China, face challenges in aligning with global ethical standards. In Africa, countries such as Kenya and Nigeria are making significant strides in responsible AI, focusing on inclusivity and ethical considerations.

Recommendations from the AI Transparency Report

AI transparency frameworks are essential for enhancing accountability and trust by making AI operations clear and understandable to all stakeholders. They help detect and reduce biases, ensuring fairness and equity in AI applications. Furthermore, these frameworks foster public trust and enable informed decision-making, which is vital for the ethical integration of AI across various sectors.

The Shorenstein Centre’s transparency framework offers crucial guidance for assessing AI progress in the realms of transparency and accountability. It defines transparency as the provision of clear and accessible documentation of AI systems, ensuring that users and stakeholders understand how AI decisions are made. Transparency efforts should be tailored to specific contexts, addressing the needs of different stakeholders, from developers to end-users. Integrating ethical considerations into transparency practices is highlighted as vital for building trust and accountability.

Highlights from the Generative AI Patent Landscape Report

The World Intellectual Property Organization (WIPO) views patents as key indicators of technological and scientific progress, given the unique and often exclusive technical information they contain. Its recent report unveils several notable trends in generative AI patent filings, particularly in foundational models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and transformer models. Other key generative AI patent areas include natural language processing, image and video generation, music and audio synthesis, and specific applications in healthcare and manufacturing. There has been a significant surge in patent filings related to generative AI, indicating rapid technological advancement. China has shown remarkable activity in the field, with a substantial number of patent filings. However, simply leading in the number of patents may not be sufficient. The true measure of progress lies in how these patented technologies are adopted, scaled, and used to advance societal well-being. Countries should focus not only on increasing patent numbers but also on ensuring that these innovations are effectively integrated into the economy and society to drive meaningful progress in quality of life, sustainable development, and economic growth.
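To illustrate how patent activity can be tracked as a progress indicator, the sketch below tallies filings by technology area and jurisdiction. The records and categories are made-up placeholders for demonstration, not figures from the WIPO report.

```python
from collections import Counter

# Hypothetical patent records: (year, jurisdiction, technology area).
# Illustrative placeholders only, not data from the WIPO report.
filings = [
    (2021, "CN", "transformer models"),
    (2021, "US", "image and video generation"),
    (2022, "CN", "natural language processing"),
    (2022, "CN", "transformer models"),
    (2023, "US", "transformer models"),
    (2023, "KR", "music and audio synthesis"),
]

# Count filings per technology area and per filing jurisdiction.
by_area = Counter(area for _, _, area in filings)
by_jurisdiction = Counter(j for _, j, _ in filings)

print(by_area.most_common(1))          # most active technology area
print(by_jurisdiction.most_common(1))  # most active filing jurisdiction
```

A real analysis would draw on a patent database rather than a hard-coded list, and, as the report cautions, would pair raw counts with measures of adoption and societal impact.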

Developing a Framework for Assessing AI Progress

  1. Alignment with Human Rights and Sustainable Development: Ensuring AI technologies align with human rights principles and contribute to sustainable development goals is crucial. Evaluating the impact of AI on fundamental rights and freedoms supports this alignment.

  2. Inclusivity and Societal Impact: Efforts to ensure AI technologies benefit diverse societal groups, including marginalized communities, are crucial. This also involves considering the representation of different demographic groups in AI development.

  3. Governance and Regulatory Frameworks: Evaluating the presence and effectiveness of policies and regulations governing AI is essential. Additionally, assessing international collaboration and adherence to global AI ethics standards helps ensure cohesive progress.

  4. Ethical Considerations: Addressing ethical issues such as bias, fairness, and transparency in AI applications is essential. Implementing ethical AI guidelines and standards further supports responsible AI development.

  5. Data Privacy and Security: The strength of data protection laws and practices must be assessed. Evaluating the measures taken to ensure the security and privacy of AI-generated content is vital.

  6. Transparency and Accountability: Measuring the openness of AI systems and the availability of information about AI decision-making processes is important. It is also key to assess the mechanisms in place to hold AI developers and companies accountable for their technologies, and to evaluate efforts to build and maintain public trust through transparency and ethical practices.

  7. Innovation and Technological Advancements: Tracking patent activity and the rate of technological innovation in AI helps identify key players and emerging trends in AI development.
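For illustration, the seven dimensions above could be operationalized as a simple weighted scorecard. The weights and the example ratings below are assumptions chosen for demonstration, not values from any of the cited reports.

```python
# Hypothetical weights over the seven assessment dimensions (summing to 1.0);
# in practice these would be set by stakeholders, not fixed in code.
DIMENSIONS = {
    "human_rights_alignment": 0.20,
    "inclusivity": 0.15,
    "governance": 0.15,
    "ethics": 0.15,
    "data_privacy": 0.15,
    "transparency": 0.10,
    "innovation": 0.10,
}

def progress_score(ratings: dict[str, float]) -> float:
    """Combine per-dimension ratings (0 to 1) into a weighted overall score."""
    missing = set(DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return sum(DIMENSIONS[d] * ratings[d] for d in DIMENSIONS)

# Example: a hypothetical country profile (ratings are invented).
example = {
    "human_rights_alignment": 0.8,
    "inclusivity": 0.6,
    "governance": 0.7,
    "ethics": 0.7,
    "data_privacy": 0.9,
    "transparency": 0.5,
    "innovation": 0.8,
}
print(round(progress_score(example), 3))  # prints 0.725
```

A scorecard like this makes trade-offs explicit: raising the weight on innovation while lowering it on transparency, for instance, would reward patent-heavy ecosystems over well-documented ones.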

Trust is essential to ensure AI technologies are developed and used in ways that reflect societal values and priorities. Harnessing trust requires defining, monitoring, and evaluating the contours of progress in AI. Recent global reports have provided fascinating insights and metrics to gauge progress and build a path towards trustworthy AI.
