AI: initial setbacks, the AI Act and CNIL recommendations

The market is structured around several types of player: those who assemble training databases; those who develop and train AI models (the emerging "model-as-a-service" providers); and those who integrate these models into business solutions. A typical example is the integration of a large language model (LLM) into a chatbot application, one of the main use cases currently being deployed. The final application is then interfaced, via an API, with the model provider.
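To make that last step concrete, here is a minimal sketch of a chatbot back end forwarding a user message to an external "model-as-a-service" provider over HTTP. The endpoint URL, model name and response format are hypothetical placeholders, not the API of any particular provider.

```python
import os
import requests

# Hypothetical model-as-a-service endpoint and model name, for illustration only.
API_URL = "https://api.model-provider.example/v1/chat/completions"
API_KEY = os.environ["MODEL_PROVIDER_API_KEY"]

def ask_chatbot(user_message: str) -> str:
    """Forward a user message to the external model provider and return its reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-llm-1",  # placeholder model identifier
            "messages": [
                {"role": "system", "content": "You are a customer-support assistant."},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape, modelled loosely on common chat-completion APIs.
    return response.json()["choices"][0]["message"]["content"]
```

The point of the sketch is the dependency it makes visible: every customer message, and whatever context the application attaches to it, leaves the company and transits through the model provider's infrastructure.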

But while the use of these systems is spreading even though the technologies are not yet mature, some of the companies using them have experienced their first setbacks. The press has reported a chatbot telling a customer that he would be better served by a human being! Air Canada was ordered to honour discounts mistakenly granted by its chatbot. And in the field of human resources, in response to the prompt "give me the management pay slips", the AI duly provided them.

With the integration of Copilot, based on OpenAI models, into Microsoft 365 to analyse Excel data, design PowerPoint presentations and summarise Teams meetings, adoption of the tool is likely to accelerate without the necessary safeguards being in place. Even if this type of application is sold as secure a priori, there is a risk that the model provider could, via its APIs, infer a whole range of commercial and industrial data about its client companies. Hence the importance of regulating these new uses.
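One practical safeguard, pending regulation, is to redact recognisably sensitive values from prompts before they leave the company's perimeter. The sketch below assumes a few illustrative patterns (email addresses, IBANs, monetary amounts); it is a minimal example, not a complete data-loss-prevention policy.

```python
import re

# Illustrative patterns only; a real deployment would need a proper
# inventory of what counts as sensitive for the business.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "AMOUNT": re.compile(r"\b\d[\d\s.,]*\s?(?:EUR|€|USD)"),
}

def redact(text: str) -> str:
    """Replace sensitive values with placeholder tags before the text
    is sent to a third-party model API."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise: invoice of 12,400 EUR sent to j.dupont@client.example, IBAN FR7630006000011234567890189."
print(redact(prompt))
# Summarise: invoice of [AMOUNT] sent to [EMAIL], IBAN [IBAN].
```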

In the absence of federal regulation in the United States, AI is overseen by existing government agencies and the courts. In January 2024, the Federal Trade Commission ruled that models developed using illegally obtained data must be destroyed. A number of class actions are also under way for privacy violations, with requests for injunctions requiring the introduction of human supervision and ethical protocols.

In Europe, the European Parliament has just approved the regulation on AI, which will be formally adopted in April and will then become fully applicable within 6 to 36 months, depending on the risk level of the systems concerned. In France, enforcement could rely on the existing supervisory authorities, each within its area of competence: financial markets, data protection, fraud control, and so on.

The CNIL, however, did not wait for the AI Act to take an interest in regulating AI: it published an initial series of recommendations in 2022, followed last December by practical fact sheets on building the databases used to train AI where this involves the processing of personal data. Any company using an AI system will have to carry out an impact assessment for each AI system deployed. The CNIL provides recommendations on how to conduct these assessments and on the risk factors to take into account, in particular the maturity and predictability of the models. Companies can also draw on the assessments carried out upstream by AI system vendors, as well as on the technical documentation required by the AI Act.

Our recommendation: don't let AI work its way into your company's practices without giving those practices a proper framework.
