

'Fueling sexism': AI 'bikini interview' videos flood internet
The videos are strikingly lifelike, featuring bikini-clad women conducting street interviews and eliciting lewd comments -- but they are entirely fake, generated by AI tools increasingly used to flood social media with sexist content.
Such AI slop -- mass-produced content created by cheap artificial intelligence tools that turn simple text prompts into hyper-realistic visuals -- is increasingly drowning out authentic posts and blurring the line between fiction and reality.
The trend has spawned a cottage industry of AI influencers churning out large volumes of sexualized clips with minimal effort, often driven by platform incentive programs that financially reward viral content.
Hordes of AI clips, laden with locker-room humor, purport to show scantily clad female interviewers on the streets of India or the United Kingdom -- sparking concern about the harm such synthetic content may pose to women.
AFP's fact-checkers traced hundreds of such videos on Instagram, many in Hindi, that purportedly show male interviewees casually delivering misogynistic punchlines and sexualized remarks -- sometimes even grabbing the women -- while crowds of men gawk or laugh in the background.
Many videos racked up tens of millions of views -- and some further monetized that traction by promoting an adult chat app to "make new female friends."
The fabricated clips were so lifelike that some users in the comments questioned whether the featured women were real.
A sample of these videos analyzed by the US cybersecurity firm GetReal Security showed they were created using Google's Veo 3 AI generator, known for hyper-realistic visuals.
- 'Gendered harm' -
"Misogyny that usually stayed hidden in locker room chats and groups is now being dressed up as AI visuals," Nirali Bhatia, an India-based cyber psychologist, told AFP.
"This is part of AI-mediated gendered harm," she said, adding that the trend was "fueling sexism."
The trend offers a window into an internet landscape now swamped with AI-generated memes, videos and images that compete for attention with -- and increasingly eclipse -- authentic content.
"AI slop and any type of unlabeled AI-generated content slowly chips away at the little trust that remains in visual content," GetReal Security's Emmanuelle Saliba told AFP.
The most viral misogynistic content often relies on shock value -- including Instagram and TikTok clips that Wired magazine said were generated using Veo 3 and portray Black women as big-footed primates.
Videos on one popular TikTok account mockingly list what so-called gold-digging "girls gone wild" would do for money.
Women are also fodder for distressing AI-driven clickbait, with AFP's fact-checkers tracking viral videos of a fake marine trainer named "Jessica Radcliffe" being fatally attacked by an orca during a live show at a water park.
The fabricated footage rapidly spread across platforms including TikTok, Facebook and X, sparking global outrage from users who believed the woman was real.
- 'Unreal' -
Last year, Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, found 900 Instagram accounts of likely AI-generated "models" -- predominantly female and typically scantily clothed.
These thirst traps cumulatively amassed 13 million followers and posted more than 200,000 images, typically monetizing their reach by redirecting their audiences to commercial content-sharing platforms.
With AI fakery proliferating online, "the numbers now are undoubtedly much larger," Mantzarlis told AFP.
"Expect more nonsense content leveraging body standards that are not just unrealistic but literally unreal," he added.
Financially incentivized slop is becoming increasingly challenging to police as content creators -- including students and stay-at-home parents around the world -- turn to AI video production as gig work.
Many creators on YouTube and TikTok offer paid courses on how to monetize viral AI-generated material on platforms, many of which have reduced their reliance on human fact-checkers and scaled back content moderation.
Some platforms have sought to crack down on accounts promoting slop, with YouTube recently saying that creators of "inauthentic" and "mass produced" content would be ineligible for monetization.
"AI doesn't invent misogyny -- it just reflects and amplifies what's already there," AI consultant Divyendra Jadoun told AFP.
"If audiences reward this kind of content with millions of likes, the algorithms and AI creators will keep producing it. The bigger fight isn't just technological -- it's social and cultural."