AI should form an integral part of a firm’s risk policy, the Law Society’s risk and compliance conference heard. Data security and precautions are increasingly scrutinised when professional indemnity insurance (PII) is renewed.
Reports of lawyers citing AI-generated cases and putting client details into open generative AI systems have drawn the attention of the regulator and courts, solicitors heard this week. But insurers read such reports too – and as law firms renew their professional indemnity insurance, the repercussions are impossible to ignore.
‘I couldn’t believe the number of questions on the renewal form this year about AI use,’ one firm owner told the Gazette. ‘They wanted to know policies, risk plans and what staff were doing with it. I’m not sure I had all the right answers.’
Insurers and compliance professionals agree that AI is now an important component of assessing a firm’s risk profile. But has the profession quite grasped that this is something which has to be managed and supervised?
Last year, High Court judge Mr Justice Ritchie said solicitors and barristers involved in deploying fake case citations had demonstrated ‘appalling professional misbehaviour’. Last month, two immigration solicitors were referred to the Solicitors Regulation Authority for apparently using generative AI to create irrelevant or false cases. One also admitted to putting emails from the Home Office with client details into ChatGPT to try to explain them.
At the Law Society’s risk and compliance conference this week – an event attracting an audience likely to be switched on to regulatory perils – 14% of delegates in one poll agreed that AI was ‘allowed but largely unmanaged’. In another session, discussing who was accountable for managing AI use, almost half said this was the responsibility of individual fee-earners, with just 24% believing this was the role of the supervising or managing partner.
Arjun Rohilla, senior vice president of broker Paragon, said the results would be ‘frightening’ for PII insurers.
‘Generative AI should be seen as a tool to support professional judgement, not a replacement for it’
Olivier Roth, Solicitors Regulation Authority
So what are law firms being required to say about their AI policy on renewal forms?
Marc Rowson, a partner with insurance broker Lockton, stressed that underwriters are not trying to catch law firms out or scare them by asking more questions. ‘One of the biggest myths is that insurers are concerned about firms using AI,’ he said. ‘The vast majority are excited by it and want firms to embrace it.’
However, insurers will want to know about three things, said Rowson: the accuracy of the work being done, data security and precautions. ‘They want to see where is the human element of verification and security. The insurance market is still in its fact-finding stage. If you look at the market in general, the real question is: do you have a risk policy? It doesn’t seem to be going into the granular detail of what that looks like.’
Rohilla told the conference that insurers are looking for clear answers about AI use rather than vague terms such as ‘experimenting’ with the technology.
The SRA is expected to publish new guidance on safe and compliant use of AI in the coming weeks. This should help to clarify the rules relating to generative tools, but will also make clear that client confidentiality and privilege, as well as client consent, remain non-negotiables. The use of AI does not transfer solicitors’ responsibility for maintaining these; and firms remain fully accountable for the quality, accuracy and appropriateness of the services they provide. Generative AI ‘should be seen as a tool to support professional judgement, not a replacement for it’, Olivier Roth, SRA policy manager specialising in AI and tech, said in a recent webinar.
For now, experts say the key to satisfying insurers is getting the fundamentals right and being clear about how risks are managed.
Eloise Butterworth, head of risk and compliance at consultancy HiveRisk, said that some firms can get carried away with innovation without first putting an effective and robust risk framework in place.
‘The basics the insurers are going to want to know: do you have an AI policy and has the COLP had input on this from a regulatory perspective? AI use shouldn’t sit solely with IT and the innovation teams – it should very much be on the risk team’s agenda. More important than having a policy – is that AI policy effective? So, what does it say and what controls are in place to prevent usage which is not allowed? I think insurers will be more concerned about firms who say they’ve banned it or aren’t using it – because in reality that just means their staff are going rogue and using it without any guardrails.’