
Due Diligence for Responsible AI

Pour Demain outlines the foundations of a risk management framework at the request of the OECD.




Key Recommendations


Pour Demain welcomes the OECD's initiative to draft a Due Diligence Guidance for Responsible AI report. Our main recommendations on the draft are as follows:


  • To reduce the compliance costs of smaller AI providers while ensuring that the most far-reaching AI systems are appropriately managed, the guidance should place a specific focus on general-purpose AI (GPAI).  

  • To ensure risk assessment as diligent as in established industries, such as aviation or car manufacturing, evaluations of GPAI models by independent experts are strongly recommended.  


  • To increase accountability, GPAI providers should appoint a compliance officer and establish safety policies.



Building Responsible AI: Strengthening the OECD's Due Diligence Framework

The Due Diligence Guidance for Responsible AI is a valuable step towards ensuring proper risk management in the AI industry. Established risk management practices in other sectors, such as nuclear, aviation, and pharmaceuticals, can serve as good examples. The OECD, spearheaded by its trailblazing AI Policy Observatory, has proven to be a role model with initiatives such as the AI Principles, the AI Incidents Monitor, and the AI Accountability Report. As the OECD has demonstrated with its updated AI Principles, policy recommendations have to be revised regularly to stay relevant, given the rapid pace of AI development.


General-Purpose AI

Risk management is crucial for general-purpose AI (GPAI), as defined in the EU AI Act. This form of AI usually sits at the top of digital supply chains, can perform a wide range of tasks, and can pose various risks, especially to fundamental rights, user safety, and electoral processes. A focus on general-purpose AI would therefore keep compliance costs low for the overwhelming majority of AI providers, who do not have the resources to train GPAI models.


Model Evaluations as an Integral Part of Risk Management

Pour Demain recommends rigorous model evaluations for general-purpose AI as an essential part of risk assessment. A substantial part of these evaluations needs to be conducted by independent experts before the deployment of the AI system, while protecting the intellectual property of AI providers.


Access for independent experts is best practice in mature industries such as aviation or motor vehicle manufacturing. By following these examples, the AI industry can demonstrate the safety of its products through established means and thereby increase trust.


From Pour Demain's point of view, measures proportionate to the risks are appropriate under a standard risk management framework. Our work with the OECD exemplifies our focus area of international AI governance. We urge all stakeholders, especially academics, to join the OECD.AI Network of Experts and contribute their expertise.



Contact

Jacob Schaal

