Artificial intelligence is quietly reshaping the influencer landscape—and not always in ethical ways. Wellness content creator Arielle Lorre was shocked to see a video of herself endorsing a skincare product she had never used or even heard of. In a now-viral TikTok, Lorre revealed that the video—posted by skincare brand Skaind—used an AI-generated version of her likeness in a fabricated interview setting.
“They used an AI version of me that looks and sounds like me,” Lorre explained. “I never said those things, never tried the product, and never sat for that interview.” Skaind did not respond to requests for comment, but the video, used in a paid Instagram ad, raises serious concerns about consent, brand trust, and the integrity of digital marketing.
And Lorre is far from alone. Deepfake advertising has long targeted high-profile figures like podcaster Joe Rogan, but advancements in generative AI have now made it easier—and cheaper—to replicate anyone’s face and voice. Megan Duong, CEO of social-listening startup Plot, warns that AI tools are now being widely used in affiliate marketing schemes, where anyone can make content to promote a product—sometimes using a recognizable creator’s likeness without consent.
“It’s not always about deceiving the brand or creator,” Duong says. “The focus is just on getting viewers to watch long enough to convert a sale. The ethical implications often get ignored.”
What makes the problem worse is how difficult it can be for creators to even detect the misuse. Lorre found out about the ad only after followers tagged her in the comments. But Duong points out that smaller influencers may never be aware their image is being used at all—especially as affiliate marketers and platforms like Whop allow brands to repost popular user-generated content, including AI-made clips.
In Lorre’s case, Skaind reportedly told her via DM that its marketing team had sourced the video through an AI platform without realizing it depicted a real person who holds rights to her own likeness. That explanation offers little comfort in an era when deepfakes are proliferating faster than platforms can respond.
Kyle Dulay, co-founder of Collabstr, a platform that connects brands with content creators, believes social media platforms will soon be forced to step in. “If the platforms don’t act, they risk being flooded with low-quality, AI-generated junk,” he says. Dulay envisions a future where creators can license their likeness for AI use—turning a major risk into a potential revenue stream.
Still, without better safeguards, the growing use of AI in influencer marketing could erode public trust and put both creators and brands at risk. Until clear regulations are in place, creators are left to police their own digital doubles—and hope their followers can tell the difference.