Now appearing beside artist profiles, Spotify’s new “Verified by Spotify” badge uses a green checkmark to highlight real human creators. Only accounts meeting the platform’s internal authenticity checks receive the label, distinguishing profiles run by actual musicians from algorithm-built personas. The rollout is happening gradually, changing how artists appear in searches, playlists, and recommendations.
The update arrives as concerns continue growing around AI-generated music flooding streaming services. Spotify says verification depends on signals such as active social media accounts, consistent listener activity, merchandise listings, and live performance schedules – indicators suggesting a genuine person is tied to the profile.
According to the company, these measures are designed to separate human creators from the automated content increasingly appearing online. Spotify says most artists that listeners actively search for will eventually receive verification, with artists recognized for meaningful contributions to music culture prioritized ahead of bulk-uploaded or mass-generated accounts.
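Spotify has not published how these signals are weighed, so any concrete logic is guesswork. Purely as a hypothetical illustration, signals like those described above (social activity, listener consistency, merchandise, live shows) could be combined into a simple threshold score; every name and weight below is invented:

```python
# Hypothetical sketch only: Spotify's actual verification logic is not public.
# This combines the publicly described signal types into an invented score.
from dataclasses import dataclass

@dataclass
class ArtistProfile:
    has_active_socials: bool
    listener_consistency: float  # invented metric, 0.0 to 1.0
    sells_merch: bool
    upcoming_shows: int

def authenticity_score(p: ArtistProfile) -> float:
    """Weighted sum of authenticity signals (weights are arbitrary)."""
    score = 0.0
    if p.has_active_socials:
        score += 0.3
    score += 0.3 * p.listener_consistency
    if p.sells_merch:
        score += 0.2
    if p.upcoming_shows > 0:
        score += 0.2
    return score

def is_verified(p: ArtistProfile, threshold: float = 0.6) -> bool:
    return authenticity_score(p) >= threshold
```

A touring artist with active socials and merch would clear the invented threshold easily, while a bulk-upload account with none of those signals would not; this is also exactly the shape of system critics say penalizes independent musicians who lack tours or merchandise.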
Over the coming weeks, the checkmarks will gradually appear across the platform, with influence and authenticity carrying more weight than upload volume.
The move comes as streaming platforms face mounting criticism over how they handle AI-generated tracks. While the badge confirms a profile belongs to a real person, some critics quickly pointed out that it does not indicate whether artificial intelligence was used to help create the music itself.
Questions around what counts as “real” music continue to grow as AI tools become more involved in production. Creator-rights advocate and former AI executive Ed Newton-Rex warned that systems like Spotify’s may unintentionally disadvantage independent musicians who do not tour, sell merchandise, or maintain strong social media visibility.
Instead, he suggested platforms should directly label AI-generated songs rather than relying solely on artist verification.
Experts also note that defining AI involvement in music is increasingly difficult. Professor Nick Collins from Durham University described AI-assisted music creation as a broad spectrum rather than a simple divide between human-made and machine-made work. Many songs now involve software-assisted mixing, mastering, composition, or editing, making it far harder to classify music by origin alone.
Spotify has faced years of criticism over AI-generated audio. Across forums and online communities, users have repeatedly called for clearer labels showing whether tracks were created by humans or algorithms. Some developers have even built independent tools aimed at detecting and filtering AI-generated songs on the platform.
Concerns intensified after projects like The Velvet Sundown attracted large audiences despite having no interviews, live performances, or publicly traceable history.
The group later described itself as a “synthetic music project” supported by artificial intelligence, fueling debate around transparency in digital music spaces.
Spotify’s latest verification effort appears aimed at rebuilding trust while balancing support for evolving AI technologies. The move also reflects a broader trend across digital platforms, where companies are introducing verification systems to distinguish human-created content from synthetic material as AI-generated media becomes harder to identify.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents