The upcoming Getty Images v Stability AI trial, which is scheduled to begin during the week commencing 9 June 2025, is expected to be a key UK case at the intersection of AI and copyright, and will undoubtedly draw significant attention from both creative and technology businesses.
The case could determine, amongst other things, whether and how generative AI operators can use copyright materials to train their AI models within the UK. In addition, the case will consider other key legal issues which may shape the future interplay between AI and copyright. It comes against the backdrop of the UKIPO and the UK’s legislature actively consulting with industry on how best to shape UK law so as to balance ensuring the UK is perceived as an attractive place for companies to invest in developing AI against the need to protect intellectual property rights.
This article is intended to ‘set the scene’ ahead of the impending trial, summarising the issues we expect to be examined by the Court, and exploring some of the possible consequences of the Court’s judgment.
Background
Various entities in the Getty Images group (“Getty”) brought a claim against Stability AI, a UK-based AI company best known for its text-to-image model ‘Stable Diffusion’ (“SD”), alleging that Stability AI systematically scraped millions of images and related data from Getty’s websites without permission in order to train SD. The claims include allegations of copyright, database right, and trade mark infringement, as well as passing off. Stability AI denies all allegations, including any unauthorised use of Getty’s content or misuse of Getty’s trade marks.
This article focuses, in particular, on the key issues related to Getty’s copyright claims.
Key Issues for Trial
The Court will be tasked with considering various key legal and factual issues – many of which are specific to the dispute. As well as determining the issue of Getty’s standing to bring certain of its claims, the Court will be asked to reach a determination on the following key points:
(i) Supposed ‘holes’ in Getty’s pleaded claims
Stability AI alleges that there are many ‘gaps’ in Getty’s infringement allegations, including a lack of specific evidence of infringement in its pleadings. The Court will be tasked with closely examining the specific alleged acts of infringement put forward and deciding who was responsible for each act (for example, Stability AI has pleaded that certain third parties were involved in the training of SD).
(ii) Jurisdictional issues
A central question is whether, as Getty asserts, the alleged acts of copying by Stability AI took place in the UK. This includes the downloading and use of Getty's content on UK-based servers, as well as the targeting and availability of the SD model to UK consumers, which Getty contends are sufficient to bring the dispute within the jurisdiction of the UK courts. Stability AI disputes this, maintaining that the development and training of SD occurred outside the UK, using datasets stored and processed abroad. The Court will therefore need to undertake a technical review of how Stability AI accessed any Getty works, and the physical location of any servers used to facilitate this training process.
Where the act of copying took place outside the UK, we would expect the Court to find itself precluded from assessing liability in respect of such copying. However, in the context of FRAND licensing, the High Court in the Unwired Planet case took an expansive view of its jurisdiction – so it will be interesting to see how the Court deals with the jurisdictional issues at play in this case.
In addition, Getty asserts that SD itself is an infringing copy/article under the UK’s secondary copyright infringement regime and that, as a result, Stability AI has infringed copyright by, for example, importing SD into the UK. Stability AI was unsuccessful in striking out Getty’s secondary infringement claims in an earlier application (see our article on this here), so it remains for the Court to determine if the secondary infringement regime can stretch to cover dealings in intangible ‘things’.
(iii) Liability for output and facilitation
Another key issue is whether Stability AI is liable for copyright infringement in the outputs generated by SD users. Getty argues that Stability AI is directly responsible, not only for training the model on Getty’s works, but also for authorising further infringement by making SD available to users who may generate images reproducing Getty’s works. On this basis, Getty asserts that Stability AI cannot avoid liability simply because users generate the outputs. By contrast, Stability AI contends that any liability rests with the user who generates the output, not with Stability AI itself. It maintains that it has taken reasonable steps to prevent infringement, including prohibiting unlawful activity in its terms of service.
Previous UK case law, such as CBS Songs Ltd v Amstrad, suggests that merely providing technology capable of both lawful and unlawful uses does not amount to authorising infringement. In line with this principle, Stability AI argues that it is not liable because its role is limited to offering access to SD and simply providing the means for users to create imagery (which may or may not reproduce Getty’s work, depending on what ‘prompts’ the user inputs to SD).
(iv) Whether Stability AI can rely on defences: infinitesimal use, pastiche, caching, and hosting
The Court will be required to assess the viability of several defences advanced by Stability AI, including the principal defences set out below. These defences are central to the question of whether Stability AI is liable for the alleged unauthorised use of Getty’s works in both the training and output of SD.
- Infinitesimal use
Stability AI contends that, considering the vast amounts of non-Getty-owned materials used to train its model, any reproduction of Getty’s works in SD’s output would relate to an “infinitesimally small part of the expression of that [c]opyright [w]ork”. Stability AI argues that such reproduction cannot constitute the intellectual creation of any particular author or a substantial part of any original work and, as a result, cannot amount to copyright infringement. Stability AI also maintains that the outputs are generated from random noise images, and any resemblance to Getty’s works is coincidental.
Getty, however, disputes this, arguing that the outputs sometimes closely resemble or even replicate substantial parts of its protected works – including distinctive features such as watermarks and trade marks. Getty maintains that the scale and nature of the copying – both in the training process and in the generation of outputs – mean that the outputs are not merely trivial or insubstantial, but rather amount to the reproduction of substantial parts of its works. The Court will have to determine on ‘which side of the line’ any such copying falls, and whether any liability attaches to such use.
- Pastiche
Stability AI also relies on the statutory “pastiche” fair dealing defence under UK copyright law, submitting that, even if any output from Stable Diffusion resembles Getty’s works, such output should be protected as a pastiche (which was described in the Only Fools case as either imitating the style of another work or being a medley or combination of a number of different works). Stability AI asserts that any resemblance to the original work in the outputs is rare, unintentional, and not a substitute for the original work, and that such use constitutes fair dealing as it does not harm the market for the original work.
Getty, by contrast, challenges the applicability of the pastiche defence, arguing that the scale and nature of the copying goes far beyond what is permitted under this defence. Getty contends that the outputs are not genuine pastiches but unauthorised reproductions that compete directly with the original works, and therefore fall outside the scope of the statutory defence.
- ‘Caching’ and ‘hosting’ safe harbours
Stability AI further seeks to rely on the ‘caching’ and ‘hosting’ safe harbours provided under the UK’s e-commerce regulations. Stability AI argues that it acts as a neutral intermediary for user-generated content on, for example, the DreamStudio platform, and that it should not be held liable for infringing acts committed by users of its hosted services. Stability AI maintains that the injunctions sought by Getty would require it to monitor all user activity – something it says would be impossible and/or disproportionate to comply with, and contrary to established legal principles prohibiting general monitoring duties for service providers.
Getty disputes the availability of these safe harbours, asserting that Stability AI’s conduct goes beyond the passive hosting or caching of content. Getty argues that Stability AI is actively involved in the creation of users’ outputs, since those outputs are generated using SD – a model which Stability AI itself develops, deploys, and commercialises – and, as such, Stability AI should not benefit from the protections afforded to neutral intermediaries.
Final Thoughts
While many of the key issues that the Court will be asked to determine in the upcoming trial are very fact-specific – and may be subject to a possible appeal – the judgment may give an indication as to the future landscape for the use of copyright works in the UK for the purpose of training AI models, including whether such use requires a licence going forwards. There is no doubt that the UK government will be taking a keen interest in the case as it evaluates the multitude of responses to its consultation on the future of UK copyright law as it relates to AI. Of course, it remains open to the UK government to change the position under UK copyright law, including by introducing a copyright exception specifically in respect of such use.
Getty is also pursuing similar parallel proceedings against Stability AI in the US, although the US litigation is at a less advanced stage. As a result, the Court’s judgment in relation to the upcoming UK trial may have an impact on the US proceedings.
The Court’s decision as to the applicability of the various defences put forward by Stability AI will also be of interest to many other concerned parties. A finding that one or more of Stability AI’s pleaded defences shields it from liability will likely have ramifications for those whose business models are based on the licensing of rights in copyright-protected content – and, depending on which defences succeed, may result in such businesses having to accept that their works may lawfully be used in the UK for the purpose of training AI models.