The use of AI and generative AI tools does not change the English legal position on confidentiality, privilege or professional obligations. It does, however, affect how those principles apply and raises novel issues, such as how to manage the increased risk of losing confidentiality and/or privilege given the significant increase in the volume (and speed of creation) of written material. Public or open-source generative AI tools carry the greater risk: any input to such a tool will lose confidentiality, which is a cornerstone of legal privilege. A closed or proprietary generative AI tool limits access to the tool (and its inputs and outputs) to the organization deploying it, which reduces confidentiality concerns but does not remove the risk of a loss of privilege.
As a result, inputs by lawyers into generative AI tools will generally not attract privilege under English law, because a prompt entered into a tool is not a communication between lawyer and client. There are some potential exceptions, including the protection afforded to lawyers' working papers, but the law in this area is not well settled.
The analysis of outputs from generative AI tools leads to the same conclusion: a generative AI tool is not a lawyer, and work product generated by such a tool therefore cannot itself be privileged.