In an era when artificial intelligence is reshaping legal practice at speed, AI literacy has become essential. Law firms are adopting AI faster than ever: predictive analytics tools assess litigation risk, contract review platforms powered by large language models replace hours of associate time, and regulatory compliance is increasingly automated.

However, this rapid adoption has brought problems of its own. In professional contexts, there have been complaints of AI ‘workslop’: careless or uncritical use of AI-generated content, often produced by trainees, that creates more work for supervisors. More seriously, lawyers and judges have suffered reputational damage by citing AI-hallucinated cases that do not exist. The technology is powerful, but only in the hands of someone who understands its limitations.
Yet most law graduates entering these firms cannot use AI efficiently or critically, and cannot hold a meaningful conversation with the developers building the tools they rely on. They can draft an impeccable contract but cannot explain what an algorithm is. They can apply GDPR principles to a fact pattern but cannot assess whether a supplier’s ‘appropriate technical measures’ are adequate. They are highly trained in law but lack technical skills.
At Nottingham Trent University, we designed a new Master’s-level module in cyber law and computer science for lawyers with this gap in mind. The module integrates substantive cyber law (including data protection, cloud computing, electronic contracts, quantum computing, cybersecurity and infrastructure protection) with foundational Python programming instruction. Critically, it does not aim to produce competent programmers: students receive only six contact hours of coding instruction across a 10-week module. What it aims to produce instead is what might be called ‘explanatory competence’: the ability to understand and communicate about technical systems, even if you could not build them yourself.
What is different now is that AI has dramatically lowered the barrier to coding. An AI assistant, Lumi Tutor, trained on the module’s Python coding content, gives students one-to-one support: they can ask as many questions as they want and receive clear explanations and worked examples at their own pace. A law student who could not previously have written a line of Python without months of dedicated study can now, with AI assistance, understand and explain how code accomplishes different tasks, and engage meaningfully with technical systems. Crucially, in doing so, they also gain genuine AI literacy skills: they learn to use AI as a tool, interrogate its outputs critically and understand where it can mislead. This is precisely the antidote to the ‘workslop’ problem. The knowledge gap has not disappeared, but the time and effort required to bridge it have fallen sharply.
This matters for legal education. The Solicitors Regulation Authority has acknowledged ‘technology and innovation’ skills as a strategic priority, and the Legal Education and Training Review identified technology skills as a gap in traditional provision more than a decade ago. What has been lacking is a viable model for addressing it: one that works within the constraints of legal education without turning law schools into computer science departments. The barriers have long been time and difficulty: there are only so many hours in a course, and teaching students to code requires the kind of individual support that AI can now offer. With that support, it becomes possible to develop genuine technical literacy, the ability to understand, explain and communicate about code, in a fraction of the time it would previously have required.
The module’s coding assessments test explanation, not implementation, at least for students without prior coding experience: students are asked to explain how tasks can be automated using Python. This approach recognises that in professional contexts lawyers will rarely be asked to write code; there are professionals who can do that, and AI can now assist effectively with routine coding tasks. Rather, the focus is on closing the communication gap described above, and on developing the critical AI literacy the profession urgently needs.
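To make the idea concrete, here is a hypothetical sketch (not taken from the module’s materials) of the sort of routine automation a student might be asked to explain rather than write: a few lines of Python that pull contractual dates out of a clause using a regular expression.

```python
import re

# Hypothetical illustration: extract dates written as "DD Month YYYY"
# from a passage of contract text -- a routine task a lawyer might
# otherwise perform by hand.
clause = (
    "The Supplier shall deliver the goods by 14 March 2025, "
    "and payment falls due no later than 30 April 2025."
)

months = ("January|February|March|April|May|June|"
          "July|August|September|October|November|December")
date_pattern = rf"\b\d{{1,2}} (?:{months}) \d{{4}}\b"

dates = re.findall(date_pattern, clause)
print(dates)  # ['14 March 2025', '30 April 2025']
```

A student with explanatory competence would not need to write this from scratch, but could describe what the pattern matches, where it would fail (dates written as ‘14/03/2025’, for instance) and whether its output can be trusted without checking.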
Teaching lawyers to understand code does not require them to become programmers; it equips them to work effectively with those who are, and to develop the AI literacy that is now vital. This is achievable, professionally valuable and overdue. The next generation of lawyers does not need to choose between understanding law and understanding technology. With the right curriculum, and with AI as an ally, they can do both.
Professor Rebecca Parry is director of the Centre for Law, Emerging Technologies and Business at Nottingham Law School