A recent ruling by a federal judge in San Francisco has sparked debate within the AI industry. The judge held that Anthropic’s use of books without permission to train its artificial intelligence system was legal under US copyright law, a decision with significant implications for tech companies developing AI.
The judge, William Alsup, sided with Anthropic on the grounds that the company made “fair use” of books by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson to train its Claude large language model. The ruling is seen as a win for tech companies because it lends judicial support to their argument that training AI models on copyrighted material can qualify as fair use.
However, the judge also found that Anthropic’s copying and storage of over 7 million pirated books in a “central library” constituted copyright infringement and was not fair use. As a result, a trial has been scheduled for December to determine the amount Anthropic owes for the infringement.
US copyright law allows statutory damages of up to $150,000 per work in cases of willful copyright infringement. With more than 7 million works at issue, Anthropic’s theoretical maximum exposure could exceed $1 trillion, though courts rarely award the statutory maximum and the actual liability will be determined at trial.
In response to the ruling, an Anthropic spokesperson expressed satisfaction that the court recognized the transformative nature of the company’s AI training. Anthropic maintains that its use of copyrighted material aligns with copyright law’s purpose of fostering creativity and scientific progress.
The lawsuit against Anthropic was filed by the authors last year, alleging that the company used pirated versions of their books without permission or compensation to train Claude to respond to human prompts. This case is part of a larger trend of authors, news outlets, and copyright owners taking legal action against tech companies over AI training practices.
The concept of fair use is a key legal defense for tech companies in these cases. The ruling by Judge Alsup is the first to address fair use in the context of generative AI. AI companies argue that their systems make fair use of copyrighted material to create new, transformative content, and that requiring payment to copyright holders could hinder innovation in the industry.
Despite the ruling in favor of fair use, concerns remain about the impact of AI companies downloading pirated copies of books to train their systems. Copyright owners argue that this practice poses a threat to their livelihoods by generating competing content.
The Anthropic ruling highlights the complex legal and ethical questions surrounding AI training and copyright law. As the industry evolves, courts, companies, and copyright holders will have to strike a balance between enabling innovation and respecting intellectual property rights.