Taylor Swift applies for voice trademark to protect herself: Copyright law can't regulate AI voice imitation, trademark battles may fill the gap

Taylor Swift’s licensing management company has filed three trademark applications with the United States Patent and Trademark Office, aiming to protect her voice and stage image. Currently, copyright law has a structural gap in the face of AI voice imitation, and trademark law may be the closest effective tool to fill that gap.
(Background: How will gambling and prediction markets destroy the world? The darker parts are still ahead.)
(Additional context: Trump unveils a “National AI Legislative Framework”! Pushing for a single federal regulation, with a tough stance to protect U.S. AI dominance.)


Copyright law protects songs, but not the voice itself. This legal loophole means AI voice imitation currently has no clear basis for legal action. Taylor Swift's team chose a different path last week: filing three trademark applications with the United States Patent and Trademark Office, turning her voice and stage image into legal weapons that can invoke the "likelihood of confusion" standard.

The three applications were submitted by her licensing management company, TAS Rights Management: two sound marks, audio clips of her saying "Hey, it's Taylor Swift" and "Hey, it's Taylor," and one image mark covering a photo of her on stage holding a pink guitar and wearing a colorful rainbow bodysuit.

The audio clips come from promotional spots for her new album, The Life of a Showgirl, released on Amazon Music. The applications, serial numbers 99784980 and 99784979 for the sound marks and 99784977 for the image mark, have all been published and are publicly searchable. Swift's team has not said directly whether the trademarks target AI, but legal commentators are nearly unanimous: this is preparation for the AI era.

The structural gap in copyright law

Copyright protects creative works: song melodies, lyrics, and master recordings, but not “a person’s voice speaking.” This means AI tools can bypass copyright law as long as they mimic voiceprints rather than directly copying recordings.

In 2024, Universal Music Group (UMG) issued a DMCA takedown notice against an AI-generated Drake-style song, but in the end it could only rely on the producer Metro Boomin’s copyright claim via his production credit, not Drake’s voice itself. This isn’t a one-off case—it’s a blind spot that the copyright system, when designed, failed to anticipate.

IP lawyer Josh Gerben points out that trademark law's standard is "likelihood of confusion," not copyright's "substantial similarity": the threshold is lower, and therefore better suited to AI voice imitation. In other words, even if an AI-generated voice is not a word-for-word copy, it can still constitute trademark infringement if it confuses consumers.

There is already precedent: actor Matthew McConaughey had eight trademarks approved by the USPTO in December 2025, including his signature line "Alright, alright, alright" and video clips, which the industry views as a direct example of a celebrity using trademark law to fight AI impersonation.

Legal question: Does this audio clip qualify as a trademark?

Doubts about the sound marks followed immediately. Alexandra Roberts, a law professor at Northeastern University, pointed to a core issue: the audio clip in Swift's application is a promotional message for Amazon Music, not a standalone source identifier.

Traditional sound trademarks, such as NBC's three-tone chime or MGM's lion roar, function as standalone indicators of commercial source rather than being tied to a specific promotional context. If the USPTO determines that the recording does not function as a source identifier, it could issue a preliminary refusal.

Even then, Swift's team could submit supplemental specimens that better meet the requirement; the process would not be over.

UCLA law professor Xiyin Tang points to another dimension: the primary function of these trademarks may not be to win lawsuits, but to deter. Once the marks are registered, potential infringers must weigh the legal risk before producing or distributing AI voice-imitation content.

Even if the trademarks are ultimately challenged in litigation, the upfront deterrent effect will already have done its work.

The gap between legislative emptiness and platform tools

Swift's strategy amounts to a legal patch; behind it lies a much larger legislative void.

In January 2024, AI-generated deepfake pornographic images of her went viral on the X platform, with the most-viewed single post exceeding 45 million views. In August of the same year, Trump posted AI-generated images on Truth Social impersonating "Swifties for Trump," and Swift publicly said that AI deepfakes made her feel afraid.

At the federal level, the NO FAKES Act, which would establish federal protections for likeness and voice rights, is still pending in Congress. The SAG-AFTRA actors' union, Universal Music, Warner, and OpenAI have all voiced support, but whether and when the bill will pass remains unknown.
