In a striking verdict that echoes through both tech corridors and music studios, a Munich court has ruled that when an AI system coughs up copyrighted song lyrics, the liability sits squarely with the machine’s maker—not the person who typed a simple prompt.
The case revolved around nine well-known German songs, among them works by Herbert Grönemeyer, Kristina Bach and Rolf Zuckowski, whose lyrics found their way from protected catalogues into the digital memory of a large language model. A music rights organisation challenged this quiet "memorisation," arguing it was no different from copying and storing the works without permission.
The court agreed.
It held that the model's internal storage of these lyrics, however abstract or numerical the encoding, still amounted to reproduction. Training data, it noted, does not vanish into a vacuum; if an AI can essentially recite a work back to the world, then the work exists within it. That, the court said, crossed the boundary of the Text and Data Mining exception, which permits analysis, not the retention of entire songs for later output.
And output was the second problem.
When the chatbot produced lyrics on command, even in response to prompts as simple as asking about a song's length, the court deemed it a public communication of copyrighted content. The platform, not the user, was held responsible. Basic queries, it ruled, cannot magically shift liability from the operator to the end user.
The order was clear: stop the infringing activity, reveal the full extent of the copying, and pay damages of €4,620.70.
The judgment also noted that the company had long been aware of the risk of its models “regurgitating” copyrighted material, rejecting requests for a grace period and dismissing arguments that compliance would be disproportionate.
A final claim, concerning alleged distortion of the authors' moral rights, was not upheld, but the core ruling stands as a landmark:
If an AI sings someone else’s song, the law expects the maker to answer for the performance.