The disturbing rise of hyper-realistic, AI-generated deepfake videos represents a profound societal challenge, with a particularly insidious focus: the creation of non-consensual pornography targeting women and girls. What began on forums like Reddit in 2017, initially exploiting the vast online image libraries of female celebrities, has since become a dangerously accessible tool for harassment, extortion, and reputational sabotage against women from all walks of life.
"This is violating. This is dehumanising," says Noelle Martin, an activist and researcher at the University of Western Australia who has studied image-based abuse for a decade. "The reality is that we know this could impact a person's employability. This could impact a person's interpersonal relationships and mental health." A 2019 study by AI firm Deeptrace estimated that pornography constituted a staggering 96% of all deepfake videos online, the vast majority created without the subject's consent.
A Crisis of Consent and Scale
The technical barrier to creating these forgeries has collapsed. Now, just a handful of publicly available photos can be used to graft a person's likeness into explicit material. While some applications are benign, like social media filters, the malicious uses are proliferating. Recent cases have seen girls as young as 11 targeted, with fabricated images circulated among their schoolmates.
Henry Ajder, a leading expert on generative AI, notes a dangerous disconnect in online communities where this content is made. "One of the most disturbing trends I see... is that they think it's a joke or they don't think it's serious because the results aren't hyperrealistic, not understanding that for victims, this is still really, really painful and traumatic."
The abuse often extends beyond personal violation into professional sabotage. Activist Kate Isaacs and Indian journalist Rana Ayyub have both been the targets of deepfake smear campaigns designed to discredit their work. The psychological toll is immense. "It's horrifying and shocking to see yourself depicted in a way that you didn't consent to," Martin adds.
The Legal Lag Across Europe and Beyond
Prosecuting these crimes remains exceptionally difficult. As Ajder warns, "the individual and the naked eye... is just not going to be a reliable marker for spotting fakes." Lawmakers globally are scrambling to catch up with the technology. The United Kingdom's Ministry of Justice has stated that sharing deepfakes without consent could lead to imprisonment, while in the United States, states including California and Virginia have enacted specific criminal statutes.
In the European Union, the existing Digital Services Act (DSA) does not directly address non-consensual deepfakes. However, the newly negotiated EU AI Act is poised to establish a more robust legal framework, classifying certain high-risk AI applications and imposing strict obligations on developers. This forms part of a broader European data dilemma, balancing innovation with fundamental rights to privacy and dignity.
The FBI has reported an increase in sextortion cases in which deepfakes, fabricated from images sourced from social media or video chats, are used to blackmail victims, including minors and non-consenting adults. This mirrors the malicious use of synthetic media in other contexts, such as the AI-generated soldier deepfakes deployed to undermine Ukrainian morale.
For victims, the path to justice is fraught. "There has to be some sort of global response from government, law enforcement, from people on the ground, victim communities," argues Martin. "So there is accountability for people where they can't just ruin someone's life and get away with it and face no repercussions." The fight against this digital abuse intersects with wider struggles for accountability and human rights, reminiscent of the systemic issues documented in reports like those on the abuse in Belarusian prisons.
As the technology evolves, the urgent need for effective detection tools, comprehensive legal recourse, and a cultural shift in understanding the profound harm of non-consensual synthetic media becomes ever clearer. The challenge for Europe and its partners is to protect individuals from this new form of violation without stifling the positive potential of generative AI.