Andreessen Horowitz’s thoughts on AI show what the music industry will be up against in 2024

Andreessen Horowitz co-founder Marc Andreessen. Image credit: JD Lasica/Creative Commons
MBW Reacts is a series of analytical commentaries from Music Business Worldwide written in response to major recent entertainment events or news stories. MBW Reacts is supported by JKBX, a technology platform that offers consumers access to music royalties as an asset class.

If the music industry gets its way on the issue of copyright and AI, it could cause the US to lose its dominance in artificial intelligence to China, posing a national security threat while holding back innovation that could improve everyone’s quality of life.

That’s one key takeaway from a submission to the US Copyright Office (USCO) by venture capital firm Andreessen Horowitz (a16z), a major investor in music technology among other sectors, with $35 billion in assets under management.

The company takes an unequivocal stance on an issue that has fired up passions across the music industry – whether or not the unauthorized training of AI algorithms on copyrighted music amounts to infringement.

In their submissions to the USCO last year, both Universal Music Group (UMG) and the National Music Publishers Association (NMPA) argued that the scraping of vast amounts of copyrighted material far exceeds the limits of fair use.

“The wholesale appropriation of UMG’s enormous catalog of copyright-protected sound recordings and musical compositions to build multibillion-dollar commercial enterprises is anything but fair use,” UMG wrote in its submission.

“We can think of no precedent for finding this kind of wholesale, commercial taking that competes directly with the copyrighted works appropriated to be fair use.”

Andreessen Horowitz’s submission takes a completely different view, arguing that training AI on copyrighted materials is indeed fair use, and doesn’t amount to theft of intellectual property. It implicitly rejects the argument that AI “competes directly” with copyrighted works.

“AI models are not vast warehouses of copyrighted material, and any suggestion to this effect is a plain misunderstanding of the technology.”

Andreessen Horowitz

“The overwhelming majority of the time, the output of a generative AI service is not ‘substantially similar’ in the copyright sense to any particular copyrighted work that was used to train the model,” states the submission, filed October 30, which can be read in full here. “Even researchers employing sophisticated attacks on AI models have shown extremely small rates of memorization.”

It goes on to argue that “AI models are not vast warehouses of copyrighted material, and any suggestion to this effect is a plain misunderstanding of the technology…

“When an AI model is trained on copyrighted works, the purpose is not to store any of the potentially copyrightable content (that is, the protectable expression) of any work on which it is trained. Rather, training algorithms are designed to use training data to extract facts and statistical patterns across a broad body of examples of content – i.e., information that is not copyrightable.”

All the same, it does seem that in some instances AI-driven products are capable of replicating the material they were trained on — as UMG alleges in a recently filed lawsuit against AI firm Anthropic, which asserts that Anthropic’s chatbot/content creator Claude can spit out copyright-protected song lyrics.

Here are some other noteworthy observations about Andreessen Horowitz’s submission…


Requiring licensing for AI training would unfairly advantage big tech firms over startups

The Andreessen Horowitz submission suggests there may be no practical way to train the large language models (LLMs) that underpin many AI products without ingesting copyrighted materials. And if those materials have to be licensed, it argues, that would give what it sees as an unfair advantage to large corporations at the expense of “more agile” startups.

“Imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development.”

Andreessen Horowitz

“The only practical way generative AI models can exist is if they can be trained on an almost unimaginably massive amount of content, much of which (because of the ease with which copyright protection can be obtained) will be subject to copyright,” the submission states.

It continues: “Imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development. And, significantly, treating AI model training as an infringement of copyright would inure to the benefit of the largest tech companies – those with the deepest pockets and the greatest incentive to keep AI models closed off to competition.”


Limiting AI’s development could cause the US to lose its technological edge to China

The submission also makes an argument that falls far outside the scope of what the music industry considers when looking at AI – that hobbling the technology’s development through copyright law could prove to be a national security risk for the United States.

“AI is not just being developed here in the United States; for example, it is also being developed in China, which views AI not as a tool for the betterment of humanity, but as a weapon for greater authoritarian control and influence,” the submission states.

“As China aggressively integrates AI into its military strategies, surveillance apparatus, and economic planning, ensuring US leadership in AI is increasingly about safeguarding our national security – we cannot afford to be outpaced in areas like cybersecurity, intelligence operations, and modern warfare, all of which are being transformed by this frontier technology.”

The submission continues: “There is a very real risk that the overzealous enforcement of copyright when it comes to AI training – or the ad hoc limitation of the fair use doctrine that properly protects AI training – could cost the United States the battle for global AI dominance…

“The result will be far less competition, far less innovation, and very likely the loss of the United States’ position as the leader in global AI development,” it concludes.

“There is a very real risk that the overzealous enforcement of copyright when it comes to AI training – or the ad hoc limitation of the fair use doctrine that properly protects AI training – could cost the United States the battle for global AI dominance.”

Andreessen Horowitz

That view places Andreessen Horowitz in the same camp as the tech industry, whose representatives have argued, both in their USCO submissions and in court, for AI developers to be given a free hand to use digital media as they see fit when training AI algorithms.

And it places the company squarely on the opposite side of the issue from much of the music industry – though Andreessen Horowitz has become a major investor in music startups and technology.

The company was one of the investors in the $50-million Series B funding round for artist distribution platform UnitedMasters; it’s one of the backers of AI-driven voice cloning and licensing platform Kits.ai; it led the Series B funding round for musician and producer Justin Blau (3Lau)’s blockchain-based music investment platform Royal; and it runs the Cultural Leadership Fund, created to address gaps in access and equity for Black communities in technology, and in which artists The Weeknd and Pharrell Williams are investors.


A ‘techno-optimistic’ view of artificial intelligence

Yet Andreessen Horowitz’s temperament is clearly closer to that of the tech industry than the traditional music industry.

The NMPA’s submission to the Copyright Office describes AI as potentially “the greatest risk to the human creative class that has ever existed,” despite the possibility of “great benefits… from generative AI systems that can assist human creators.”

In contrast, Andreessen Horowitz describes AI as possibly “the most important technology our civilization has ever created, at least equal to electricity and microchips.”

“AI will increase productivity throughout the economy, driving economic growth, creation of new industries, creation of new jobs, and wage growth, allowing the world to reach new heights of material prosperity.”

Andreessen Horowitz

It asserts that “AI will increase productivity throughout the economy, driving economic growth, creation of new industries, creation of new jobs, and wage growth, allowing the world to reach new heights of material prosperity.”

That’s a strikingly different view from the one held by many in the creative class, who see AI as a potential threat to their employment.

Andreessen Horowitz does not.

“The critical thing to understand about AI is that it is not a replacement of human intelligence but a profound augmentation of it. It has the potential to make everyone smarter and more capable,” it states in its submission.


Andreessen Horowitz is a firm believer in technological optimism, the idea that technology can solve humanity’s problems – even those caused by technology.

Just two weeks before it submitted its thoughts to the USCO, Andreessen Horowitz co-founder Marc Andreessen published ‘The Techno-Optimist Manifesto’, in which he argues that technology is not only a good thing, but is at the heart of all material progress.

“We believe that there is no material problem – whether created by nature or by technology – that cannot be solved with more technology,” the manifesto declares.

It goes on to extol the virtues of free markets – they are “an inherently individualistic way to achieve superior collective outcomes” – and asserts that AI is “our alchemy, our Philosopher’s Stone – we are literally making sand think.”

(That’s a reference to the fact that silicon, the key material in microchips, is derived from sand.)

“We believe that there is no material problem – whether created by nature or by technology – that cannot be solved with more technology.”

Marc Andreessen, ‘The Techno-Optimist Manifesto’

And it goes even further than that, asserting that AI has the potential to save lives.

“Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly fire,” the manifesto declares.

“We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”

Do you hear that, music industry?! You’re literally risking killing people… by insisting on copyright licensing for the training of generative AI.

From all this, we can conclude one thing: The fight over copyright and the development of AI – with 2024 as its key battleground – is going to be a rough one.


JKBX (pronounced "Jukebox") unlocks shared value from things people love by offering consumers access to music as an asset class — it calls them Royalty Shares. In short: JKBX makes it possible for you to invest in music the same way you invest in stocks and other securities.

Music Business Worldwide
