● Citing the example of the independently developed tool CNIL GPT, he discusses the limits of AI when it comes to understanding the law and legal contexts.
● He points out that ChatGPT is by no means able to provide relevant answers to legal questions, but that it is useful for proofreading computer code.
There have also been many articles in the media about the impact of AI and its potential to transform, or even put an end to, certain legal professions. What do you think of these reports?
Pierre Desmarais. The legal professions are not quite the same in France as they are in the United States, notably with regard to assistants and paralegals. In the United States, procedures can be used to oblige parties to present all the evidence relevant to the resolution of a legal dispute. This is often the case in intellectual property proceedings, where lawyers are faced with mountains of documents that need to be analyzed. This is where artificial intelligence tools can play an important role. In France, legal formalists are more likely to deal with administrative formalities, calls for tender, etc. So yes, AI will have a transformative impact on the legal professions, but I believe the pace of change will be gradual. In any case, it will not put an end to the need for basic competency: the new tools cannot do everything.
What are the limits of AI when it is applied to the law?
In France, independent developers have created large models like CNIL GPT [now renamed DPO GPT, editor’s note], which enables users to submit questions to the database. This tool has several limits. To start with, it only takes into account information that is actually recorded in the database, and not soft law, such as the guidance published on the CNIL’s blog. At the same time, it does not distinguish between the law and its interpretation. If, for example, I memorise an entire list of contact details, CNIL GPT will say that I am processing data, because the relevant law refers to “any operation on data”; however, the authorities are not going to require that I submit a declaration of compliance for my brain. Finally, AI is based on existing data, and this is not always sufficient in a field like health law, because in the real world we often have to contend with changes to jurisprudence that cannot be foreseen by AI. In other words, in the human sciences, AI also functions as a brake on innovation.
What do you think of AI legal tools?
I haven’t tested the French AI legal tools because I don’t want to put my files on the Internet when there is no guaranteed data recovery time in the event of a bug. From what I know of the offers currently available from publishers such as LexisNexis, they are modules for analysing the validity of written pleadings or case law. They make queries based on jurisprudential and textual references to see how the law is evolving and to provide clarifications. This is automated querying, but not generative AI.
And what about ChatGPT?
I’ve tried doing legal tasks on ChatGPT, but it doesn’t work, because the tool has been trained on data from different legal systems that isn’t exclusively French. For example, if you ask ChatGPT whether you need an ID card to buy a domain name in Spain, it says that you do. If you ask where it obtained this information, it is unable to cite any sources. Professionally, I use ChatGPT to proofread my computer code when I’m developing a programme, and I use image generation tools like Midjourney and Stable Diffusion to illustrate the legal posts I publish on LinkedIn.
A formal pre-trial procedure under American law that allows parties to request access to all relevant information (facts, acts, documents, etc.) that may be used as evidence in a lawsuit.
Commission nationale de l’informatique et des libertés (French National Commission on Informatics and Liberty).