The controversy surrounding Elon Musk’s chatbot Grok has sparked renewed debate about the legal challenges posed by artificial intelligence (AI). It has exposed a gap in UK criminal law, highlighted government hesitation and demonstrated the consequences of delayed legislative implementation in an area where technology has rapidly outpaced regulatory control. 

Henrietta Ronson and Manvir Kaur Grewal

In December, the infamous ‘put her in a bikini’ trend spread rapidly across X. Users instructed Grok to generate images in which clothing was removed from existing images of women and children. What was framed by some as novelty or humour amounted to the non-consensual sexualisation of identifiable individuals. While ‘nudification’ technology is not new, X’s scale and accessibility mainstreamed this behaviour.

As early as 2019, the ‘DeepNude’ software demonstrated AI’s capacity to generate sexually explicit images without consent. Despite its withdrawal following a backlash, similar services emerged, many using the DeepNude source code made publicly available by the original developers. 

In 2020, the website ‘DeepSukebe’ was launched. It advertised itself as an ‘AI-leveraged nudifier’ to ‘make all men’s dreams come true’. This prompted Maria Miller MP to call for a parliamentary debate on whether the images should be banned.

By 2023, advertising for nudification services had increased by over 2,400% on major platforms. A recent report by the Tech Transparency Project found 55 nudify apps on Google Play and 47 on the Apple App Store, generating substantial revenue. The scale of the harm was neither new nor unexpected.

Last June, the government enacted section 138 of the Data (Use and Access) Act 2025, criminalising the intentional creation or requesting of ‘purported intimate images’ (often referred to as deepfake pornography) of an adult without their consent. However, the provision had no operative legal effect until it was finally brought into force on 6 February 2026.

The delay is an uncomfortable truth for which the government has offered no meaningful justification. The most plausible explanation is political hesitation. UK AI policy has repeatedly emphasised the importance of maintaining a pro-innovation regulatory environment which prioritises economic growth and promotes global competitiveness. 

The Data (Use and Access) Act 2025 confronts a longstanding deficiency in the law, which had previously focused on the sharing of intimate images rather than their fabrication. This legislation recognises the harm that can be caused at the point of generation, even if it is never shared.

The offence operates alongside the Online Safety Act 2023, which establishes a regulatory framework requiring platforms to assess risk, mitigate harms and remove illegal content. These regimes create a dual system: individual criminal liability reinforced by structural obligations placed on platforms that enable such conduct.

Some commentators have framed the regulation of sexual deepfakes as a threat to free speech. Article 10 of the European Convention on Human Rights protects the right to freedom of expression, including the freedom to hold opinions and to receive and impart information and ideas. However, this right is expressly qualified, permitting restrictions where they are prescribed by law and necessary in a democratic society for legitimate aims such as the prevention of crime and the protection of the rights of others. The jurisprudence of the European Court of Human Rights has consistently emphasised proportionality. Therefore, any interference with expression must pursue a legitimate aim and go no further than is necessary to achieve it. 


Within this framework, criminalising the creation of non-consensual sexual deepfakes is unlikely to raise significant difficulty. It does not restrict satire, political speech or artistic expression. Invoked in this context, the free speech argument appears less a protection of democratic expression and more a rhetorical shield against accountability for harmful conduct.

Despite the strengthened framework, enforcement challenges remain.

Jurisdiction: many AI systems used by UK residents are developed and operated overseas. Although UK criminal law may apply where there is a domestic connection, identifying offenders, particularly where content is generated privately, will present difficulties. An offence is only meaningful if it can be investigated and prosecuted with realistic prospects of success.

Platform accountability: the Online Safety Act’s enforcement powers are limited. The framework remains heavily weighted toward reaction rather than prevention. While Ofcom may require risk mitigation and content removal, the regime does not mandate the withdrawal of technological functionalities capable of generating non-consensual sexual deepfakes, so long as platforms demonstrate adequate moderation after the fact. For example, Grok is unlikely to be banned outright as it provides many services, including the provision of general information and entertainment.

Evidential barriers: establishing who prompted an AI system to generate a particular image will require access to platform-held data, including user identifiers, prompt histories and IP addresses. If content is generated privately and not shared, detection will be exceptionally difficult, if not impossible. Without clear disclosure obligations and timely cooperation from platforms, particularly those based overseas, investigations cannot be pursued effectively.

The UK response to sexually explicit deepfakes illustrates the growing gap between technological capabilities and legal protection. The new criminal offence attempts to close a significant lacuna by recognising the seriousness of non-consensual AI-generated sexual imagery.

However, the Online Safety Act is ill-suited to the task because of limitations in its technical feasibility and scope. Effective regulation requires both criminal liability and preventative obligations that restrict harmful functionality at source. In their absence, enforcement remains fragmented and deterrence limited.


Henrietta Ronson is a partner and Manvir Kaur Grewal an associate at Corker Binning