Lawyers for the RIAA are aiming to shut down a popular Discord server centered on artificial intelligence and voice models, the latest effort by music companies to rein in the disruptive new technology.
In an action filed last week in D.C. federal court, attorneys for the RIAA obtained a subpoena demanding that Discord reveal the identities of users on “AI Hub,” a message board with 145,000 members that calls itself “a community dedicated to making AI voices and songs.”
In a letter to Discord presenting the company with the subpoena, the RIAA said those users had “infringed … copyrighted sound recordings” and that the tech company was required to hand over names, physical addresses, payment info, IP addresses and other identifying details.
The group’s lawyers also sent Digital Millennium Copyright Act takedown notices to Discord, first in late May and then again last week. The group demanded that Discord disable access to the server, remove or disable the infringing material, and inform the server’s users “of the illegality of their conduct.”
“This server [is] dedicated to infringing our members’ copyrighted sound recordings by offering, selling, linking to, hosting, streaming, and/or distributing files containing our members’ sound recordings without authorization,” the RIAA’s lawyers wrote in their June letter to Discord, which was obtained by Billboard. “We are asking for your immediate assistance in stopping this unauthorized activity.”
The subpoena against Discord was obtained under the DMCA’s Section 512(h), which enables rights holders like the RIAA’s members to unmask the identities of anonymous online infringers in certain circumstances.
Discord can fight back by seeking to “quash” the subpoena; Twitter won such a challenge last year, when a federal judge ruled that the First Amendment rights of a user trumped the need for an unmasking order. It could also refuse to honor the takedown, but that would put the site itself at risk of litigation.
As of Thursday evening (June 22), the main AI Hub server remained up on Discord; it was unclear whether individual content or sub-channels had been removed. A Discord spokesperson did not return a request for comment.
In a statement to Billboard, an RIAA spokesperson confirmed that the group had taken the action against AI Hub. “When those who seek to profit from AI train their systems on unauthorized content, it undermines the entire music ecosystem – harming creators, fans, and responsible developers alike. This action seeks to help ensure that lawless systems that exploit the life’s work of artists without consent cannot and do not become the future of AI.”
The RIAA’s actions are just the latest sign that the explosive growth of AI technologies over the past year has sparked serious concerns in the music industry.
One big fear is that copyrighted songs are being used en masse to “train” AI models, all without any compensation going to the songwriters or artists who created them. In April, Universal Music Group demanded that Spotify and other streaming services prevent AI companies from doing so on their platforms, warning that it “will not hesitate to take steps to protect our rights.”
Another fear is the proliferation of so-called deepfake versions of popular music, like the AI-generated fake Drake and The Weeknd track that went viral in April. That song was quickly pulled down, but its uncanny vocals and mass popularity sparked concerns about future celebrity rip-offs.
For the RIAA, AI Hub likely triggered both of those worries. The server features numerous “voice models” that mimic the voices of specific real singers, including Michael Jackson and Frank Sinatra. And in the wake of the RIAA’s actions, users on the Discord server speculated Thursday that the takedowns were filed because users had disclosed that some of the models had been trained on copyrighted songs.
“We have had certain threats from record labels to takedown models, mainly because some posters decided to share datasets full of copyrighted music publicly,” one AI Hub admin wrote. “If you want to avoid unnecessary takedowns[,] most importantly, do NOT share the full dataset if you have copyrighted material in the dataset. The voice model itself is fine, but don’t share the dataset.”