
AI Music Revolution: Detection, Control, and Controversy


In 2023, the music industry faced a significant disruption that echoed the sound of Drake.

The viral track, “Heart on My Sleeve,” a convincingly crafted duet featuring the vocal stylings of Drake and The Weeknd, garnered millions of streams almost instantly, leaving many puzzled about its origins. This phenomenon not only captured widespread attention but also shattered the perception of control within the music landscape.

In response, the industry is witnessing the emergence of a new framework focused on making generative music traceable. Various detection systems are being integrated throughout the music ecosystem, encompassing the tools for training AI models, platforms for song uploads, rights licensing databases, and algorithms for content discovery. The aim is to proactively identify synthetic music, attach relevant metadata, and regulate its distribution.

Matt Adell, co-founder of Musical AI, highlighted the necessity of embedding these systems into the infrastructure. “The industry needs a scalable solution that addresses challenges from model training through distribution instead of just reacting to new content,” he stated.

The goal isn’t takedowns, but licensing and control

New startups are emerging with the goal of integrating detection capabilities into licensing processes. Established platforms, such as YouTube and Deezer, are creating internal systems to identify synthetic audio upon upload, thereby influencing how these tracks are presented in searches and recommendations. Various music companies, including Audible Magic, Pex, Rightsify, and SoundCloud, are also enhancing their detection, moderation, and attribution functionalities across the music production and distribution process.

This shift has led to the development of a diverse and rapidly expanding ecosystem that prioritizes the detection of AI-generated content as a fundamental infrastructure rather than simply an enforcement mechanism.

Instead of merely tracking AI-generated music post-release, some companies are innovating tools to identify it from the moment it is created. Vermillio and Musical AI are working on technologies that scan completed tracks for synthetic characteristics and automatically annotate them within their metadata.

Vermillio’s TraceID framework further advances the initiative by dissecting songs into stems—such as vocals, melodies, and lyrics—and signaling which specific parts were generated by AI. This allows rights holders to pinpoint any imitation at the stem level, even if a new release only incorporates fragments of the original work.

Vermillio emphasizes that its objective is not to remove content but to facilitate proactive licensing and verified releases. The TraceID system aims to replace existing mechanisms like YouTube’s Content ID, which can often overlook subtle imitations. Vermillio projects that the market for authenticated licensing through tools like TraceID could rise dramatically, from $75 million in 2023 to $10 billion by 2025. In practice, this means rights holders or platforms could use TraceID to evaluate finished tracks for protected elements and secure proper licensing before release.


Some firms are exploring the origins of the training data with a view to understanding how generated tracks borrow from specific artists or songs. This analysis could pave the way for more accurate licensing agreements, aligning royalties with the genuine creative influences rather than leading to conflicts after a song’s release. Such discussions echo previous controversies, including the infamous “Blurred Lines” lawsuit, but now focus on algorithmic processes. With these advancements, licensing could occur prior to a track’s release rather than through post-production controversies.
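If training-data analysis could assign influence weights to specific artists or catalogs, royalty splits would follow mechanically. A minimal sketch, assuming such weights exist; how they would actually be computed is the open problem described above, and the function below is purely illustrative:

```python
def split_royalties(total: float, influence: dict[str, float]) -> dict[str, float]:
    """Split a royalty pool in proportion to estimated creative influence.

    `influence` maps rights holders to hypothetical attribution weights
    produced by a training-data analysis; weights need not sum to 1.
    """
    norm = sum(influence.values())
    return {holder: round(total * w / norm, 2) for holder, w in influence.items()}

print(split_royalties(100.0, {"Artist A": 0.6, "Artist B": 0.3, "Catalog C": 0.1}))
# → {'Artist A': 60.0, 'Artist B': 30.0, 'Catalog C': 10.0}
```

Because the split is settled from the weights directly, it could in principle be agreed before release rather than litigated afterward, which is the shift the article points to.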

Musical AI is also developing a detection mechanism that spans the entire process from content ingestion and generation to distribution, ensuring that the tracking of origins is comprehensive.

Cofounder Sean Power emphasized, “Attribution should originate when the model begins its learning process, not only when the song is completed. Our intention is to measure creative impact and not just replicate existing works.”

Deezer has also created internal tools to identify entirely AI-generated tracks upon upload. These tools help lower their visibility in both algorithmic and manual recommendations, especially when such content appears misleading or spammy. According to Chief Innovation Officer Aurélien Hérault, these systems identified approximately 20 percent of new uploads as fully AI-generated by April, more than doubling the figures from January. Identified tracks stay available on the platform but are not actively promoted, with plans for direct labeling for users anticipated in the coming weeks or months.
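The behavior Deezer describes, keeping flagged tracks available but excluding them from promotion, amounts to filtering recommendation candidates. A hypothetical sketch (the `ai_flagged` field and ranking function are illustrative, not Deezer’s implementation):

```python
def rank_recommendations(tracks: list[dict], scores: dict[str, float]) -> list[dict]:
    """Rank candidate tracks, excluding fully AI-flagged uploads from promotion.

    Flagged tracks remain in the catalog (still searchable and playable)
    but are dropped from the algorithmic recommendation pool.
    """
    eligible = [t for t in tracks if not t.get("ai_flagged", False)]
    return sorted(eligible, key=lambda t: scores[t["id"]], reverse=True)

catalog = [
    {"id": "a", "ai_flagged": False},
    {"id": "b", "ai_flagged": True},   # stays on platform, never promoted
    {"id": "c", "ai_flagged": False},
]
ranked = rank_recommendations(catalog, {"a": 0.4, "b": 0.9, "c": 0.7})
print([t["id"] for t in ranked])  # → ['c', 'a']
```

Note that the flagged track is excluded even though it has the highest score: demotion happens before ranking, not as a score penalty.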

Hérault remarked, “We’re not opposed to AI. However, it’s crucial to address the fact that much of this generated content is being used deceitfully, prioritizing exploitation of the platform over creative intent, prompting us to focus our efforts here.”

The Do Not Train Protocol (DNTP) initiative seeks to push detection upstream to the dataset stage, enabling artists and rights holders to mark their works as off-limits for model training. Visual artists already have equivalent tools; the audio sector is still catching up. Consensus on how to uniformly manage consent, transparency, and licensing remains elusive. Regulatory frameworks may eventually force that consensus, but the current landscape is fragmented. Backing from leading AI training companies has also been inconsistent, and critics argue that without independent governance and widespread adoption, the protocol could struggle to gain legitimacy.
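An opt-out protocol of this kind implies a check at ingestion time, before a work ever reaches a training set. The sketch below is hypothetical: the registry format and function names are illustrative and do not reflect the actual DNTP specification.

```python
# Hypothetical opt-out registry: individual works, or a whole catalog via "owner/*".
OPT_OUT_REGISTRY = {"artist-x/track-1", "label-y/*"}

def may_train_on(work_id: str) -> bool:
    """Return False if the work, or its owner's entire catalog, is opted out."""
    if work_id in OPT_OUT_REGISTRY:
        return False
    owner = work_id.split("/", 1)[0]
    return f"{owner}/*" not in OPT_OUT_REGISTRY

# Filter a candidate training batch down to works without an opt-out.
tracks = ["artist-x/track-1", "artist-x/track-2", "label-y/album/song"]
print([t for t in tracks if may_train_on(t)])  # → ['artist-x/track-2']
```

The governance questions raised below (who operates the registry, who audits it) are orthogonal to the lookup itself, which is deliberately trivial.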

“The opt-out protocol should be operated by a nonprofit organization and supervised by diverse stakeholders to ensure trust,” commented Mat Dryhurst. “Loading the future of consent into a single, opaque corporate entity poses significant risk—such companies could fail or, worse, compromise fundamental rights.”
