A new report from UN Women, published in partnership with City St George's, University of London, and the digital forensics lab TheNerve, reveals that artificial intelligence is fueling a surge in online violence against women in public life. The study, titled Tipping Point: Online Violence Impacts, Manifestations and Redress in the AI Age, surveyed 640 female journalists, activists, and human rights defenders from 119 countries, including many from across the European Union, the United Kingdom, and the Balkans.
The findings are stark: 27% of respondents received unsolicited sexual advances or unwanted intimate images, while 12% had personal photos—including intimate ones—shared without their consent. A further 6% were subjected to deepfakes or manipulated imagery, often of a sexual nature. These attacks, the report notes, are “often deliberate and coordinated, aiming to silence women in public life while undermining their professional credibility and personal reputations.”
AI Tools Lower the Barrier for Abuse
Deepfake technology, which uses AI to superimpose a person's likeness onto fabricated images or videos, has become cheaper and faster, enabling perpetrators to produce nonconsensual material in minutes. The report coins the term “virtual rape” to describe this phenomenon. “AI-assisted 'virtual rape' is now at the fingertips of perpetrators,” said Julie Posetti, professor of journalism at City St George's and the report's lead author. “This violence serves to fuel the reversal of women's hard-won rights in a climate of rising authoritarianism, democratic backsliding and networked misogyny.”
The psychological toll is severe: one in four women reported anxiety or depression, and 13% were diagnosed with PTSD. More than 41% said they had self-censored on social media to avoid abuse, while 19% had pulled back from speaking out in a professional context. This chilling effect is particularly concerning for European democracies, where women's participation in politics, media, and civil society is essential for robust public debate.
Institutional Failures Across Europe
The report also highlights widespread failures in institutional responses. Only 25% of cases were reported to authorities, and of those, just 15% resulted in legal action by police. A quarter of respondents who went to the police said they were subjected to victim-blaming, facing questions such as “What did you do to provoke the violence?” An equal proportion said officers made them feel responsible for protecting themselves from further harm.
Pauline Renaud, lecturer in journalism at City St George's and co-author of the study, called for better training: “We need more effective education and training of law enforcement and judicial actors to support action in cases of technology-facilitated violence against women and girls. This needs to be matched by political will to effectively regulate Big Tech companies that use their outsized financial and political power to undermine progress in this area.”
The findings come as European policymakers grapple with how to regulate AI. The EU's AI Act, which aims to classify and restrict high-risk AI applications, is a step forward, but the report suggests enforcement remains weak. As Europe searches for alternative social media platforms beyond Big Tech, the need for robust safeguards against AI-generated abuse is urgent.
For women in public life across the continent—from Berlin to Bucharest, from Paris to Podgorica—the report is a wake-up call. Without stronger legal frameworks and corporate accountability, the promise of digital participation risks becoming a tool for silencing the very voices democracies need most.