07 - Artificial intelligence
Does the law of privilege or professional secrecy protect inputs by lawyers into generative AI tools and the resulting outputs?

The use of generative AI tools by lawyers in France must be examined in light of both the national rules governing professional secrecy and the emerging European regulatory framework applicable to AI systems. French lawyers are bound by the Règlement Intérieur National (RIN) de la profession d'avocat, which imposes a strict duty of professional secrecy and independence. These obligations apply to all information and media formats, including digital and intangible tools, and extend to any activity performed in the context of legal advice or defense. In parallel, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act) introduces a binding framework across the EU, classifying AI systems, including certain systems used in legal services, according to the level of risk they present. Together, these national and European rules form the legal foundation for assessing the admissibility of generative AI use in legal practice.

When a lawyer inputs information into a generative AI tool, professional secrecy is preserved only if the platform used meets all of the following criteria:

  • The platform preserves full confidentiality of the data (no reuse, training or third-party access).
  • The platform is operated exclusively under the control of the lawyer or the law firm.
  • The platform is used for a legal purpose falling within the defense or advisory mission.

Most publicly accessible generative AI tools, particularly those hosted in the cloud or governed by general terms of use, do not meet these criteria. They often store, analyze or reuse the content submitted to them. As a result, data entered by the lawyer is regarded as having left the privileged framework, and use of the tool may constitute a breach of secrecy.

Even anonymized or abstracted input may present risks if the underlying platform cannot guarantee that no metadata or prompt content is stored or shared.

Moreover, the lawyer must remain in control of their legal reasoning. Where the content is produced by a third-party tool, based on unknown or untraceable parameters, it is not regarded as the product of the lawyer's intellectual activity. In that case, the protection of professional secrecy is compromised.

Therefore, under the current legal framework in France, professional secrecy does not extend to the use of generative AI tools — either for inputs or outputs — unless the following strict conditions are met:

  • The tool is exclusively controlled by the lawyer or firm.
  • Confidentiality is both legally and technically preserved.
  • The output reflects the lawyer's own reasoning and not the autonomous processing of the AI system.

In the absence of these guarantees, the use of such tools falls outside the protective perimeter of professional secrecy. As a result, lawyers are expected to exercise extreme caution and, in most practical cases, avoid using public or external AI platforms for any task involving client-related content.

In July 2024, the National Council of French Bars (Conseil national des barreaux, CNB) issued its first practical guide on generative AI, outlining ethical and secure use of these tools by lawyers. This was followed by a more comprehensive two-volume update in September 2024: one on AI fundamentals and legal use cases, and another on evaluating AI vendors based on confidentiality, compliance and security. In June 2025, the CNB released a dedicated "AI Tools Selection Guide," reflecting ongoing professional and institutional feedback and reaffirming its commitment to responsible AI adoption in legal practice.