A federal judge in San Francisco ruled on Monday that Anthropic’s use of copyrighted books to train its artificial intelligence system qualifies as “fair use” under US copyright law.
US District Judge William Alsup sided with technology companies on a crucial question for the AI sector, finding that Anthropic’s use of works by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson to train its Claude large language model constituted fair use.
Judge Alsup found, however, that Anthropic’s copying and storage of more than seven million pirated books in a central repository infringed the authors’ copyrights and was not fair use. He scheduled a trial for December to determine how much Anthropic owes the authors in damages.
Under US copyright law, willful infringement can carry statutory damages of up to $150,000 (approximately Rs. 1.28 crore) per work.
An Anthropic spokesperson said the company was pleased the court recognized its AI training as “transformative” and consistent with copyright’s aim of supporting creativity and encouraging scientific progress.
The authors filed the class action lawsuit against Anthropic last year, alleging that the company, which is backed by Amazon and Alphabet, used pirated copies of their books, without permission or compensation, to train Claude to respond to human users.
The case is part of a broader wave of lawsuits brought by authors, media organizations, and other copyright holders against major AI players, including OpenAI, Microsoft, and Meta Platforms, over how they train their systems.
The fair use doctrine permits certain uses of copyrighted material without the owner’s consent under specific conditions.
Fair use is a key legal defense for tech companies, and Alsup’s decision is the first court ruling to address the doctrine in the context of generative AI.
AI industry proponents argue that their systems make fair use of copyrighted material to create new, transformative content, and warn that being required to pay for copyrighted works could stifle the growth of the burgeoning industry.
In court, Anthropic argued that its use of the books complied with US copyright law, which it says not only permits but encourages AI training because it fosters human creativity. The company said it copied the books to study the authors’ writing, extract non-copyrightable information, and use those insights to develop advanced technology.
Authors counter that AI companies are unlawfully copying their work to generate competing content that threatens their livelihoods.
Judge Alsup agreed with Anthropic that its training was “exceedingly transformative.” “Like any reader aspiring to be a writer,” he wrote, “Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”
However, Alsup found that Anthropic violated the authors’ rights by saving pirated copies of their books in a “central library of all the books in the world,” since those copies were not intended solely for AI training.
Anthropic and other leading AI companies, including OpenAI and Meta Platforms, have been accused of downloading millions of pirated digital books to train their systems.
In an earlier court filing, Anthropic argued that the source of its books was irrelevant to fair use. Judge Alsup said on Monday, however, that he doubted any defendant could ever justify obtaining copies from pirate sites, when lawful alternatives were available, as necessary to any subsequent fair use.
© Thomson Reuters 2025