Believe has developed software that detects AI-made music with 98% accuracy. What might this mean for the future?

Believe CEO Denis Ladegaillerie
MBW Explains is a series of analytical features in which we explore the context behind major music industry talking points – and suggest what might happen next. MBW Explains is supported by JKBX, a technology platform that offers consumers access to music royalties as an asset class.
What’s happened?

During Believe’s Q1 earnings call this past April, CEO Denis Ladegaillerie stated plainly: “We do not wish to distribute and we are not to distribute any content that is 100% created by AI, whether that is Believe or at [Believe subsidiary] TuneCore.”

Ladegaillerie then declared that Paris-headquartered Believe was just six months or less away from launching tech that could detect whether a track had been entirely made by AI.

“You have technologies out there in the market today that can detect an AI-generated track with 99.9% accuracy, versus a human-created track,” Ladegaillerie said, adding that such tech “needs to be deployed everywhere”.

Fast-forward two quarters, and Believe has confirmed the launch of a proprietary “AI detection algorithm” via a tool it has dubbed AI Radar.

So far, at least, AI Radar doesn’t quite have the 99.9% success rate that Ladegaillerie is aiming for. According to Believe, it can detect AI-made master recordings with a 98% success rate, and “deep fakes” with about 93% accuracy – what the company describes as an “excellent detection rate.”
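Believe hasn’t published AI Radar’s methodology, nor its false-positive rate, so it’s worth unpacking what such percentages could mean at industry scale. Below is a minimal back-of-the-envelope sketch in Python, in which every input is assumed purely for illustration:

```python
# Back-of-the-envelope illustration of what a "98% detection rate" implies
# at scale. Every figure here is an assumption for illustration only;
# Believe has not published AI Radar's methodology or false-positive rate.

DAILY_UPLOADS = 120_000     # rough industry-wide figure cited later in this piece
AI_SHARE = 0.10             # assumption: 10% of uploads are fully AI-made
DETECTION_RATE = 0.98       # Believe's stated rate for AI-made masters
FALSE_POSITIVE_RATE = 0.01  # assumption: share of human tracks misflagged

ai_tracks = DAILY_UPLOADS * AI_SHARE
human_tracks = DAILY_UPLOADS - ai_tracks

caught = ai_tracks * DETECTION_RATE              # AI tracks correctly flagged
missed = ai_tracks - caught                      # AI tracks slipping through
misflagged = human_tracks * FALSE_POSITIVE_RATE  # human tracks wrongly flagged

print(f"AI tracks caught per day:  {caught:,.0f}")      # 11,760
print(f"AI tracks missed per day:  {missed:,.0f}")      # 240
print(f"Human tracks misflagged:   {misflagged:,.0f}")  # 1,080
```

At that volume, even a 2% miss rate would leave hundreds of AI tracks slipping through every day – which helps explain why Ladegaillerie keeps pointing to 99.9% as the benchmark.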

The technology is “in production right now and functioning,” Ladegaillerie said on the company’s Q3 earnings call on October 24.

“This was developed internally by our own teams that have expertise in AI and audio recognition,” Ladegaillerie said, adding that the technology has enabled Believe “to provide control of the content and protection to our artists as generative AI becomes more widely adopted.”


What’s the wider context?

While many in the industry see great potential in AI technology (both as a way to drive revenue and as a potentially easier, more efficient way of creating music), much of the attention in the music world has focused on the technology’s more troubling aspects – namely, its ability to rip off copyrighted material and to generate music that mimics real-life artists’ vocals.

In just the latest instance, Latin trap phenomenon Bad Bunny took to social media to criticize a “shitty” viral track that mimicked his vocals.

“If you like that shitty song that’s viral on TikTok, get out of this group right now. You don’t deserve to be my friends and that’s why I made the new album, to get rid of people like that,” he wrote on a private WhatsApp channel, in a Spanish-language post that was translated by Business Insider.

The track, titled NostalgIA (with the “IA” being a reference to the Spanish acronym for AI), mimics the vocals of Justin Bieber as well as Bad Bunny, and was uploaded by a user named FlowGPT.

NostalgIA’s viral success – it has racked up more than 20 million plays on TikTok – and the fury with which Bad Bunny responded echo an earlier incident this year in which Drake and The Weeknd were mimicked on a track called Heart On My Sleeve.

Drake called that track “the final straw” after it went viral – and before platforms began to pull it down. (NostalgIA still appears to be live on TikTok, though it no longer seems to be available on Spotify.)

Incidents like these have triggered an aggressive response from the music business, as seen in the speed with which platforms like YouTube and Spotify have taken down songs identified as AI fakes.

Earlier this year, Universal Music Group (UMG) asked streaming platforms including Spotify and Apple Music to block access to their content for any AI platform looking to “scrape” their copyrighted music in order to train AI algorithms.

“We will not hesitate to take steps to protect our rights and those of our artists,” UMG wrote to DSPs in March, as quoted by the Financial Times.

“This was developed internally by our own teams that have expertise in AI and audio recognition. This tool is now able to detect master recordings with a 98% detection rate and deepfakes with a 93% detection rate, allowing us to provide control of content and protection to our artists as gen-AI becomes more widely adopted.”

Denis Ladegaillerie, Believe

DSPs seem to be taking that warning seriously. In one sign of a broader crackdown on AI-related abuses, Spotify earlier this year temporarily blocked content from AI music-making and distribution platform Boomy over concerns about potential artificial streaming of Boomy-made tracks.

The dispute over AI companies using copyrighted music to train their technology – presumably the reason why generative AI can fake a Bad Bunny or Drake vocal line – has now spilled into the courts.

In September, Universal Music Group, the world’s largest music rights holder, joined with Concord Music Group and ABKCO to sue San Francisco-headquartered AI company Anthropic, alleging “systematic and widespread infringement of their copyrighted song lyrics”.

Other rightsholders outside the music business have also taken AI companies to court; for instance, two separate groups of writers are suing OpenAI over its alleged scraping of their books to train AI algorithms.


So why is Believe doing this?

Pressure from major rightsholders could be one reason for Believe’s new AI music detector.

Believe owns TuneCore, one of the most prominent DIY music distributors, and likely doesn’t want to find itself in Boomy’s position – blocked by a streaming service over suspicions that its users’ tracks were being artificially streamed.

By positioning itself as playing a leading “responsible” role in the development of AI in music, Believe/TuneCore may be able to pre-empt being cast as a conduit for infringing content.

If so, it’s not an unreasonable concern; rightsholders have grown noticeably alarmed over the flood of new music hitting the streaming services. Thanks to AI and other digital music creation tools, around 120,000 new tracks are now uploaded to streaming services every day – many of them poor-quality clickbait designed to do little more than earn a dollar or two from the pro-rata payment system employed by most music streaming services.

Hence the push among major recording companies towards an “artist-centric” payment model – like the one Deezer has adopted, which UMG and Warner Music Group (WMG) have signed up for, and towards which Spotify now appears to be shifting.

Notably, UMG Chairman and CEO Sir Lucian Grainge recently took a rhetorical hammer to critics of the artist-centric model, asserting that those who would criticize the shift are “merchants of garbage.”

Among those critics is Believe’s own CEO, Denis Ladegaillerie, who has suggested that the ‘artist-centric’ model discourages new artists by not paying royalties for tracks below a certain threshold of streams.
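Deezer’s actual artist-centric formula is more involved, but the threshold mechanic at the heart of Ladegaillerie’s objection is easy to illustrate. The Python sketch below is a toy comparison, not any DSP’s real formula – the pool size, threshold and stream counts are all invented:

```python
# Toy comparison of pro-rata vs a threshold-based "artist-centric" payout.
# All numbers are invented for illustration; Deezer's real model is more
# involved than the simple stream-count threshold applied here.

ROYALTY_POOL = 1_000.00  # hypothetical pot of money to distribute ($)
THRESHOLD = 500          # hypothetical minimum streams to qualify for payment

streams = {
    "established_artist": 90_000,
    "emerging_artist": 9_700,
    "clickbait_upload": 200,  # low-stream functional/noise content
}

# Pro-rata: every stream earns an equal slice of the pool.
total = sum(streams.values())
pro_rata = {t: ROYALTY_POOL * s / total for t, s in streams.items()}

# Artist-centric (simplified): tracks under the threshold earn nothing,
# and the whole pool is shared among qualifying streams only.
qualifying = {t: s for t, s in streams.items() if s >= THRESHOLD}
q_total = sum(qualifying.values())
artist_centric = {t: ROYALTY_POOL * qualifying.get(t, 0) / q_total
                  for t in streams}

for track in streams:
    print(f"{track:>20}: pro-rata ${pro_rata[track]:6.2f} | "
          f"artist-centric ${artist_centric[track]:6.2f}")
```

Under pro-rata, the clickbait upload still skims a couple of dollars from the pool; under the threshold model it earns nothing, and its share is redistributed to qualifying tracks – precisely the outcome the majors want and, per Ladegaillerie, the one that penalizes genuinely emerging artists.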


What happens next?

Even as the battle against unlicensed AI-generated content gets underway, signs are emerging that the music industry would much rather make money from AI than fight it endlessly in the courts and the marketplace.

To tip that balance (towards money-making), music rightsholders will first have to get a handle on the unauthorized AI-generated music now being created.

During Believe’s April earnings call, Ladegaillerie said his company was “doing some tests” with large AI players to develop tech that can identify underlying copyrighted material used in an AI-generated ripoff.

Ladegaillerie likened this theoretical technology to YouTube’s Content ID system, which the Google-owned video service developed a decade and a half ago to identify copyrighted tracks uploaded to its platform without permission.

Content ID gives rightsholders the option to take down user-generated videos that use their music – or to monetize them via ads. The result of its launch: the adversarial relationship between YouTube uploaders and music rightsholders suddenly became comparatively symbiotic.
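Content ID’s internals are proprietary, but the general fingerprint-and-lookup idea behind systems of this kind can be sketched in a few lines. In the illustrative Python below, the “features” are stand-in integers rather than real audio descriptors, and every name is hypothetical:

```python
# Minimal sketch of the fingerprint-lookup idea behind systems like Content ID:
# reduce audio to compact hashes, index the catalogue, then check uploads
# against that index. The "features" here are stand-in integers; a real
# system would derive them from the audio itself.

from collections import defaultdict

def fingerprint(features: list[int], window: int = 3) -> set[int]:
    """Hash overlapping windows of features into compact fingerprints."""
    return {hash(tuple(features[i:i + window]))
            for i in range(len(features) - window + 1)}

# Build an index from the rightsholder's catalogue.
catalogue = {
    "track_a": [4, 8, 15, 16, 23, 42, 4, 8],
    "track_b": [7, 1, 9, 9, 6, 3, 5, 2],
}
index: dict[int, set[str]] = defaultdict(set)
for track_id, feats in catalogue.items():
    for fp in fingerprint(feats):
        index[fp].add(track_id)

def match_upload(features: list[int], min_hits: int = 2) -> set[str]:
    """Return catalogue tracks sharing enough fingerprints with an upload."""
    hits: dict[str, int] = defaultdict(int)
    for fp in fingerprint(features):
        for track_id in index.get(fp, ()):
            hits[track_id] += 1
    return {t for t, n in hits.items() if n >= min_hits}

# An upload containing a recognisable chunk of track_a triggers a match,
# which a rightsholder could then choose to take down or monetize.
print(match_upload([0, 4, 8, 15, 16, 23, 42, 11]))  # -> {'track_a'}
```

A production system would derive those features from the audio itself (spectrogram peaks are a common choice), which is what lets a match survive re-encoding and minor edits.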

“We made a very important decision, which was to… build a fingerprinting software that allowed us to track the copyright on our platform, and then have commercial relationship[s] with copyright holders to send them the money,” Warner Music Group CEO Robert Kyncl – formerly YouTube’s Chief Business Officer – told the audience at the Code Conference in September.

Like Ladegaillerie, Kyncl sees the potential for “an incredible new revenue stream” coming out of AI-generated music, if it can be harnessed to work for rights holders. He, too, likens this system to YouTube’s Content ID.

“AI is that with new super tools,” he said.

“[At YouTube] we made a very important decision, which was to… build a fingerprinting software that allowed us to track the copyright on our platform, and then have commercial relationship[s] with copyright holders to send them the money.”

Robert Kyncl, Warner Music Group

Importantly, Kyncl also stresses the need for the artist’s consent when it comes to AI-generated mimicry. “We need to approach [AI] with the same thoughtfulness [as YouTube did with Content ID], and we have to make sure that artists have a choice,” Kyncl said.

In the meantime, however, enforcement of copyright may be the only way the industry can grapple with the proliferation of AI piracy.

Other music companies are teaming up with tech companies to both stem the proliferation of AI music and harness AI for their artists. This past August, UMG announced just such a deal with YouTube, which will involve developing AI tools that will both “include appropriate protections and unlock opportunities for music partners”.

UMG and WMG have both reportedly engaged in talks with Google to develop licensing of artists’ vocals and melodies for AI-generated music.

If this is the future of music – and it looks increasingly likely – then every major streamer will need some method of detecting AI-generated music so that rightsholders can monetize it. Expect streaming services beyond YouTube to develop and announce their own AI detectors in the future.


A final thought…

If Believe were so inclined, it could get into the good graces of major music rights holders by sharing its AI Radar technology with other streaming services.

No doubt some DSPs – namely those that are owned by large tech companies, like Apple Music and YouTube – wouldn’t be interested, as they are likely to inherit their own AI detectors from their parent companies.

Yet other DSPs may not have deep enough pockets, or the depth of technical expertise, needed to develop such a tool.

Handing out AI Radar – or, at the very least, licensing it at a relatively low price – to DSPs could be a way for Believe to burnish its credentials as a responsible player in the streaming ecosystem.

This idea isn’t without precedent. In 1959, after Volvo engineer Nils Bohlin developed the three-point seatbelt – the same type used to this day – the Swedish automaker freely gave away the design for other carmakers to use. The move helped to save hundreds of thousands of lives.

Believe is now in something of a similar position.

Though it’s hard to argue that sharing AI Radar with DSPs would save lives like Volvo’s move did, it could certainly save the music industry a lot of grief.

JKBX (pronounced “Jukebox”) unlocks shared value from things people love by offering consumers access to music as an asset class – it calls them Royalty Shares. In short: JKBX makes it possible for you to invest in music the same way you invest in stocks and other securities.

Music Business Worldwide
