Codes of Practice for the AI Act


The EU’s AI Office has created guidelines for the use of artificial intelligence (AI) across the Union. These guidelines cover how to keep AI information up to date, how to describe the data used to train AI models, and how to identify and manage potential risks. As a practical instrument, the Codes of Practice serve as an AI Act tool to bridge the interim period between the start of obligations for General Purpose AI (GPAI) model providers and the eventual adoption of harmonised European GPAI model standards. This blog provides a wider picture of what the Codes of Practice are and what purpose this type of document can serve.

What are Codes of Practice? 

The Codes of Practice are a transitional compliance mechanism intended to fill the gap between the point at which the obligations of GPAI model providers begin to apply under the Act (12 months after entry into force) and the adoption of harmonised standards, which may take about three years or more. Adherence to the Codes of Practice is intended to give GPAI model providers a means of demonstrating conformity with their obligations under Articles 53 and 55 until the harmonised standards enter into force.

Meanwhile, the parallel process of creating European standards to operationalise the Act’s requirements at the same level of compliance has already been initiated. Although the Commission recently issued an official standardisation request to CEN-CENELEC concerning AI system standards, no similar standardisation request has yet been made for GPAI model standards. Whether and when such a request is issued will depend largely on how well the Codes of Practice fulfil the related obligations set out in the AI Act.

Why do we need Codes of Practice? 

Codes of Practice under the AI Act matter because they put in place measures that give direction and help establish uniform standards for developing, deploying and using AI across industries. Such codes help ensure that AI systems are safe, reliable and trustworthy, especially in use cases that affect public safety, health or welfare. They also address ethical concerns such as fairness, transparency and accountability, so that AI technologies uphold human rights and avoid discrimination. In addition, Codes of Practice provide a basis for regulatory compliance, enabling organisations to prevent misuse and manage the risks associated with AI. They encourage innovation by lowering the level of risk and showing organisations how to design and deploy AI ethically. These codes also support international coordination by aligning standards across jurisdictions and encouraging cooperation among countries. The more widely such codes are adopted and adhered to, the more confidence the public will have in artificial intelligence technologies, which in turn encourages their uptake.

Future Directions

As the Codes of Practice have yet to be published, the public can expect more information later in the drafting process about the roles that various stakeholders will play and the likely timeline. The AI Office will be responsible for reviewing and updating the Codes of Practice. Since the standardisation process will clearly take longer than the period provided for by the AI Act, Codes of Practice for GPAI model providers will be an effective instrument for the consistent application of the regulation.

More information regarding these Codes will be released in due course, but it is already clear that they will be central to shaping the direction of AI regulation in Europe.