When a young Evan Bogart tried his hand at writing a few pop songs for a girl group he managed, he had no idea he would score one of the biggest Billboard hits of 2006.

After the act disbanded, Bogart decided to pitch the songs to labels. One of them landed with a then-fledgling pop artist named Rihanna, who was signed to Def Jam Recordings. Bogart’s song, “S.O.S.,” not only broke Rihanna (it jumped 33 spots to No. 1 on the Billboard Hot 100 in a single week) but also minted his songwriting career.

Multiple hits later, Bogart runs his own publishing company and label, Seeker Music, where he encourages his songwriters to create “pitch records” — demos sung by a hired vocalist or the writer — that are shopped to artists. It’s a common practice that increasingly employs a new — albeit controversial — hack: artificial intelligence voice synthesis, which mimics the voice of the artist being pitched.

Bogart says the technology helps his roster better tailor pitches to talent and enables the artists to envision themselves on the track. At a time when acts are demanding a weightier role in the song creation process, AI voice generation offers a creative way to get their attention.

“Producers and writers have always tried to mimic the artists’ voice on these demos anyway,” says attorney Jason Berger, whose producer and songwriter clients are beginning to experiment with AI vocals for their pitches. “I feel like this technology is very impactful because now you can skip that step with AI.”

Traditionally, songwriters will either sing through the track themselves for a demo recording or employ a demo singer. In cases when writers have a specific artist in mind, a soundalike demo singer may be employed to mimic the artist’s voice for about $250 to $500 per cut. (One songwriter manager said there are a few in particular who make good money imitating Maroon 5’s Adam Levine, Justin Bieber and other top-tier acts. In general, however, nearly all demo singers hold other jobs in music, like background singing, writing, producing or engineering.)

The emerging technology doesn’t generate a melody and vocal from scratch but instead maps the AI-generated tone of the artist’s voice atop a prerecorded vocal. Popular platforms include CoversAI, Uberduck, KitsAI and Grimes’ own voice model, which she made available for public use in May. Still, these models yield mixed results.

Some artists’ voices might be easier for AI to imitate because they employ Auto-Tune or other voice-processing technology when they record, normalizing the voice and giving it an already computerized feel. A large catalog of recordings also helps because it offers more training material.

“Certain voices sound really good, but others are not so good,” Bogart says, though he adds that he actually “likes that it sounds a little different from a real voice. I’m not trying to pretend the artist is truly on the song. I’m just sending people a robotic version of the artist to help them hear if the song is a good fit.”

Training is one of the most contentious areas of generative AI because the algorithms are often fed copyrighted material, like sound recordings, without owners’ knowledge or compensation. The legality of this is still being determined in the United States and other countries, but any restrictions that arise probably won’t apply to pitch records because they aren’t released commercially.

“I really haven’t had any negative reactions,” Bogart says of his efforts. “No one’s said, ‘Did you just pitch your song with my artist’s voice on it to me?’”

Stefán Heinrich, founder and CEO of CoversAI creator mayk.it, says voice re-creation tools could even democratize the songwriting profession altogether, allowing talented unknown writers a chance at getting noticed. “Until now, you had to have the right connections to pitch your songs to artists,” he says. “Now an unknown songwriter can use the power of the technology and the reach of TikTok to show your skills to others and get invited into those rooms.”

While Nick Jarjour — founder/CEO of JarjourCo, advisor to mayk.it and former global head of song management at Hipgnosis — supports the ethical use of this technology, he believes the industry should take a different approach to applying AI voices to pitches. “The solution is letting the artist who is receiving the demos decide to put their AI voice onto it themselves,” he says, as opposed to publishers and writers sending over demos with the AI treatment already applied. To do this, artists can create their own personal voice models that are more accurate and tailored to their needs, much as Grimes has already done, and then apply those to the pitches they receive.

Still, as Berger says, “this is evolving by the day.” Most publishers haven’t put this technology into everyday practice yet, but more are now discussing the idea publicly. At the Association of Independent Music Publishers (AIMP) annual conference in New York City last month, Katie Fagan, head of A&R for Prescription Songs Nashville, said that she recently saw AI vocals on a pitch record for the first time. One of her writers had tested AI to add the voice of Cardi B to the demo. “It could be an interesting pitch tool in the future,” she said, noting that the technology could be used even more simply to change the gender of the demo singer when pitching the same demo to a mix of male and female artists.

“I really don’t see why you wouldn’t pitch a song with a voice that sounds as close as possible to the artist, given the goal is helping the artist hear themselves on the track,” says Berger. “My guess is that people will get used to this pretty quick. I think in six months we are going to have even more to talk about.”

In the more distant future, Bogart wonders what might happen if, as the technology advances, pitch records become the final step in the creative process. “What would be really scary is if someone asks the artist, ‘Hey, do you want to cut this?’ And they reply, ‘I don’t have to, that’s me.’”
