Wednesday, November 5, 2025

Getty Loses Legal Case Over Generative AI Copyright Infringement


With generative AI tools becoming more and more commonplace, it's worth also considering the legal implications of using them, and what restrictions might apply to AI replicas and re-creations.

And right now at least, those restrictions are pretty loose, with every high-profile legal case brought against AI companies thus far failing to establish legal precedent around copyright violations relating to intellectual property being replicated by these tools.

Yet, at the same time, you also can't copyright your own generative AI content either, so anyone can technically take your generated content and reuse it as they please.

The latest major legal case on this front comes from the U.K., where Getty Images has this week lost a lawsuit against Stability AI, over the Stability team reportedly scraping Getty’s content for use in its AI model.

As reported by Reuters, stock image provider Getty had accused Stability AI of using its images to train its Stable Diffusion system, which generates images from text inputs.

The case itself is actually technically straightforward: Getty claimed that URLs of Getty images were included within the LAION dataset, which was used to train Stability AI's "Stable Diffusion" v1 and v2 image generation models. Getty further claimed that some re-creations made through Stable Diffusion even included a variation of the Getty watermark, further evidence that its copyright-protected works had been used to build the model.

Yet, the judge found that the examples presented were too limited to support a blanket ruling on infringement, essentially noting that without specific instances of copyright infringement, and direct harm caused by them, there's no real case to answer on most counts.

Which is very similar to what a U.S. federal court ruled in another AI copyright case back in June, in which Meta and OpenAI were pursued along similar lines.

Back in 2023, a group of authors, including high-profile comedian Sarah Silverman, launched legal action against both Meta and OpenAI over the use of their copyrighted works to train their respective AI systems. The authors claimed that they were able to demonstrate how these AI models were capable of reproducing their work in highly accurate form, which, in their view, demonstrated that both Meta and OpenAI had used their legally protected material without consent. The lawsuit also alleged that both Meta and OpenAI removed the copyright information from their books to hide this infringement.

In its assessment, the federal court ruled that, for Meta's part at least, the company's use of these works was for a "transformative" purpose, and that Meta's tools are not designed to re-create competing works.

As per the judgment:

"The purpose of Meta's copying was to train its LLMs, which are innovative tools that can be used to generate diverse text and perform a wide range of functions. Users can ask Llama to edit an email they have written, translate an excerpt from or into a foreign language, write a skit based on a hypothetical scenario, or do any number of other tasks. The purpose of the plaintiffs' books, by contrast, is to be read for entertainment or education."

So the argument, then, is that this case sought to prosecute the tool, not the purpose. As a basic comparison, knives can kill people, but you can't take legal action against a knife maker for providing a harmful tool in the case of a murder, as the knife has been used in a way it was not intended for.

As such, the judge ruled that because the re-use of the works was not intended to create a competing market for those works, "fair use" applies in this case.

This is essentially the same conclusion that the U.K. court has come to: while AI tools could be found to produce work that violates copyright, and that could form the basis of a case against their developers, the cases presented thus far have not been able to establish this as an intended outcome of such projects. As such, there's no definitive legal basis for a broad-reaching ruling on violations.

So again, there may be cases where somebody's IP is essentially stolen through such a process, but that would only be enforceable on a case-by-case basis, where the individual creator could demonstrate direct harm to their business as a result.

Which means that AI tools, at least right now, are able to create similar-looking content to what's available elsewhere, without legal recourse. Though at the same time, there are logical limitations on this. If you were to use a prompt like "Mickey Mouse steals a car," then used the output in a commercial, Disney would likely be able to sue you for direct infringement, as it's a blatant copy that would impact the value of its IP: a direct, singular example in which copyright-protected work has been used, and has caused harm to the business.

As an individual case, that's something to answer for, but as an overarching rule, that's not the intended design of generative AI tools.

So while it's not a free-for-all, as such, the legalities of generative AI use are still pretty loose, which means that most applications, aside from direct misrepresentations of known characters/people/entities, are likely fine in a legal sense.

This is not me giving you permission to go crazy, as legal consequences could follow, again, in specific use cases. But avoiding known IP in your generations should ensure that you remain legally safe, based on these findings at least.

Though again, as noted, you also can't file copyright claims for your own generative AI work, no matter how much of a "prompt artist" you might consider yourself to be.

Back in March, an AI poetry author sought damages for the reuse of his work without permission, claiming that his generated content had been essentially stolen for commercial reuse.

But because current copyright laws require a human to be the creator of a work, the claimant lost his case, underlining a key gap in the current law.

As the judge explained at the time:

"Because many of the Copyright Act's provisions make sense only if an author is a human being, the best reading of the Copyright Act is that human authorship is required for registration."

A three-judge panel also noted that machines cannot be granted copyright because they "do not have lives", meaning that the term of any copyright cannot be measured.

That finding has been underlined in various AI cases: according to the Copyright Office, a human must be at least the primary creator in any case where copyright can be granted. Now, you could argue about the degree to which writing a prompt counts as a significant contribution in this respect, but the bottom line is that copyright law is currently not designed to offer protection to non-human entities, and until a case presents a solid argument as to why this should change, you don't own the rights to any of your AI-generated works.

So AI content is essentially sitting on the fringes of legality in this respect, and while a specific case may yet be presented that highlights commercial loss, which could establish an alternate legal interpretation, right now there are no definitive protections for creators on either side of the debate.

Which means you can use AI tools to generate content free from consequence, for the most part, but you also don't own the rights to anything that these tools generate.

*Disclaimer: This is not legal advice, and you should seek independent counsel for specific queries.
