Taylor Swift is the latest celeb to get deepfaked

If you saw an ad on Facebook or another platform recently that features Taylor Swift extolling the virtues of Le Creuset cookware and kitchen gear, you were had.

Yes, Swift is a fan of the brand. But no, that’s not her voice, realistic though it may seem.

Tay-Tay is the latest celebrity to have her voice and image deepfaked with AI technology to dupe you into thinking she endorses a product or organization.

Tom Hanks, Gayle King and YouTube personality MrBeast have similarly been deepfaked with bogus AI versions of themselves.

As the New York Times reports, someone coupled a synthetic version of Swift’s voice with old footage of her alongside Le Creuset Dutch ovens.

“In several ads,” the Times reported, “Ms. Swift’s cloned voice addressed ‘Swifties’ — her fans — and said she was ‘thrilled’ to be handing out free cookware sets. All people had to do was click on a button and answer a few questions before the end of the day.”

Le Creuset said it played no role in the deepfake ads and urged shoppers to be wary of clicking on suspicious marketing.

It’s the latest example of scammers using AI to trick people into believing a bogus offer is legitimate.

After all, if Swift supports it, it must be good.

In the past, such ploys were often transparent to anyone who took the time to wonder why an A-list celebrity would be stooping to product pitches.

But AI has grown so sophisticated so quickly that it’s harder than ever to tell what’s bogus and what isn’t.

That’s why it’s vital that lawmakers step up, fast, with regulations for this technology.

As it stands, no federal law addresses AI scams. Two bills, the Deepfakes Accountability Act in the House and the No Fakes Act in the Senate, are pending.

But those bills focus mainly on disclosure and on requiring permission to use a celebrity’s voice or likeness.

That’s a start. But scammers aren’t known for following rules or seeking permission to fool people.

To nip this problem in the bud, online platforms need to be held accountable for the content they run, including ads.

It’s not that difficult. Any sensible Facebook employee should have wondered whether someone of Swift’s stature would really be handing out free cookware. If an ad’s veracity can’t be confirmed, it shouldn’t run.

Period.

Digital technology continues to be light years ahead of existing laws and regulations. We need to catch up, and we need to do it now.

And social media platforms need to be held accountable for ensuring users aren’t taken to the cleaners (or the kitchen) by deepfakes.