Center for Trustworthy Technology

OECD Updates Recommendation on Artificial Intelligence to Reflect Technological Advancements and Policy Developments

In a landmark move, the Organisation for Economic Co-operation and Development (OECD) has revised its Recommendation on Artificial Intelligence, the first intergovernmental standard on AI, originally adopted in May 2019. The update, adopted at the 2024 Meeting of the Council at Ministerial Level, aims to keep the recommendation relevant by addressing the industry's most recent developments.

The revisions primarily focus on the growing significance of addressing misinformation and disinformation, particularly in the context of generative AI. The recommendation calls for safeguarding information integrity in an age of increasingly realistic synthetic content, protecting against both deliberate misuse and unintended consequences. It also emphasizes the importance of respecting freedom of expression and other fundamental rights.

One key aspect of the revision is the clarification of the information that AI actors should provide regarding AI development. Transparency and responsible disclosure are critical for building trust and ensuring that stakeholders are aware of their interactions with these systems. Therefore, the updated recommendation calls for AI developers and other stakeholders to provide transparent and meaningful information about the capabilities, limitations, and factors that influence the output of AI systems. 

Safety concerns have also been addressed in the revision of the recommendation. It underscores the importance of having mechanisms in place to override, repair, and decommission AI systems safely if they risk causing undue harm or exhibit undesired behavior. This update reflects the growing recognition that AI systems must be designed with safety in mind, allowing for human intervention when necessary.

The revised principles also strongly emphasize responsible business conduct throughout the AI development and implementation lifecycle. They encourage cooperation among AI developers, researchers, industry leaders, system users, and other stakeholders to address risks and promote responsible practices. This collaborative approach recognizes that the development and deployment of trustworthy AI requires the involvement and commitment of all relevant parties.

Furthermore, the updated recommendation introduces an explicit reference to environmental sustainability. AI has the potential to contribute to sustainable development and address environmental challenges, but it must be developed and deployed in an environmentally responsible manner. 

As AI policy initiatives proliferate worldwide, the OECD stresses the need for jurisdictions to work together to promote interoperable governance and policy environments. The revised recommendation calls for governments to actively cooperate within and across jurisdictions to foster a harmonized approach to AI governance. This is crucial for preventing fragmentation and ensuring that future AI systems can operate seamlessly across borders while adhering to shared principles and standards. 

To support the implementation of the revised recommendation, the OECD’s Digital Policy Committee, through its Working Party on AI Governance (AIGO), will continue to develop practical guidance and provide a forum for exchanging information on AI policies and activities. The OECD.AI Policy Observatory and the OECD.AI Network of Experts will play crucial roles in fostering multi-stakeholder dialogue and sharing best practices.

As AI continues its rapid advance, the OECD’s updated recommendation serves as a vital guide for policymakers, businesses, and other stakeholders. By addressing emerging challenges, promoting transparency, and emphasizing responsible business conduct, the revised guidelines set the stage for the development and deployment of trustworthy AI.

The OECD’s commitment to regularly reviewing and updating its guidelines demonstrates the organization’s proactive approach to shaping the future of AI governance. At the Center for Trustworthy Technology, we share these values. As we navigate the complex landscape of AI research, the new guidelines should offer a solid foundation for international cooperation and the promotion of human-centric, trustworthy AI that enhances well-being, fosters innovation, and respects fundamental rights and values.
