Research project
Countering Violent Extremist Content Online: A Multidisciplinary Approach
How and when should traditional content moderation tactics be integrated with online interventions?
- Duration
- 2025 - 2026
- Contact
- Graig Klein
- Funding
- Google Trust & Safety Research Award
Countering violent extremist content is not a new challenge for counterterrorism (CT) practitioners, but the breadth and diversity of online extremist content, including ‘lawful but awful’ materials that may start individuals on the path towards radicalisation, and advances in technology such as AI-generated content create new challenges. These problems require new research that can directly inform online security policies, processes, and procedures, improve online safety, and strengthen public-private partnerships. Our project focuses on countering online content, particularly extremists’ exploitation of AI, and engages with a tremendously important yet challenging opportunity – interdicting during individuals’ radicalisation processes. We propose a forward-leaning approach to investigate, hypothesise, and test best practices for moderating and countering extremists’ online content and their exploitation of AI and generative AI. By doing so, we address a fundamental critique of terrorism studies – that it is reactive in nature, providing retrospective analyses and responses to new actors, events, and trends rather than anticipating them. Our approach benefits academic research, policymakers and practitioners, industry, and lawmakers.
Generating evidence through collaboration
We explore the potential for divergent processes, interventions, and effectiveness in a targeted workshop that brings together thought, policy, and industry leaders. We then operationalise the resulting discussions in a series of survey experiments to generate data-driven evidence and metrics.