AGORA DSI - La Minute Légale - 13 December 2023
The use of AI has become widespread in both the private and professional spheres, in the absence of any regulatory framework.
These AIs work by processing large quantities of data, both at the model training stage and in production. This data may include content protected by intellectual property rights, personal data, confidential information or data corrupted by malicious actors. The use of an AI system therefore raises a number of risks that need to be managed: the quality, security and legality of the data that feeds the AI; the intellectual property of content generated by an AI; and the confidentiality of company data processed by an AI in the environment of a SaaS provider.
Europe is the first jurisdiction in the world to have taken up this issue.
The draft European regulation on AI, or "AI Act", which has been under discussion for two and a half years now, was the subject of a political agreement between the European institutions on 8 December. The European Parliament defended a vision centred on the protection of fundamental freedoms, while the Council and certain Member States, including France, defended an approach concerned with preserving the competitiveness of European businesses.
This regulation will be similar to the GDPR in that its application will be extraterritorial and it will set an initial standard for non-European players in the sector. However, the AI Act adopts a differentiated, risk-based approach, with four levels of AI systems: prohibited, high-risk, limited-risk and low-risk.
AI systems used for social scoring of citizens, such as those deployed in China, for predictive policing, or for recognising emotions in the workplace will be banned. High-risk AI systems include those used for access to education, essential services and recruitment.
The obligations will depend on the level of risk and on the size of the players involved. Companies specialising in AI will be affected, as will companies that use AI, which will have to carry out impact assessments for high-risk systems.
One of the sticking points in the negotiations concerned "foundation models", large-scale models trained on vast quantities of data that underpin generative AI. The compromise will provide a specific framework for these models.
The final version of the AI Act is expected before the European elections, and it will come into force gradually by 2025.
In the meantime, the risks mentioned in the introduction need to be addressed without delay. The first steps are to map the uses of AI systems within the company, to inform and train employees, and to formalise the rules by revising the IT charter.