100,000 false positives for every real terrorist: Why anti-terror algorithms don't work

Timme Bisgaard Munk


Can terrorist attacks be predicted and prevented using classification algorithms? Can predictive analytics see the hidden patterns and data traces in the planning of terrorist acts? According to a number of IT firms that now offer programs to predict terrorism using predictive analytics, the answer is yes. According to the scientific and application-oriented literature, however, these programs raise a number of practical, statistical and recursive problems. Through a literature review and discussion, this paper examines specific problems involved in predicting terrorism. These problems include the opportunity cost of false positives/false negatives, the statistical quality of the prediction and the self-reinforcing, corrupting recursive effects of predictive analytics, since the method lacks an inner meta-model for its own learning- and pattern-dependent adaptation. The conclusion is that algorithms for detecting terrorism do not work: they are ineffective, risky and inappropriate, with potentially 100,000 false positives for every real terrorist that the algorithm finds.
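The order of magnitude of the "100,000 false positives" claim follows from base-rate arithmetic. As a minimal sketch (the specific population, prevalence and accuracy figures below are illustrative assumptions, not numbers from the paper), even a classifier with 99 percent sensitivity and 99 percent specificity is overwhelmed by false positives when the target condition is as rare as terrorism:

```python
# Illustrative base-rate calculation: all numbers are assumptions chosen
# to show why a rare condition produces enormous false-positive ratios.

population = 100_000_000       # people screened (assumed)
terrorists = 10                # actual positives in that population (assumed)
sensitivity = 0.99             # true-positive rate of the classifier (assumed)
false_positive_rate = 0.01     # i.e. 99% specificity (assumed)

true_positives = terrorists * sensitivity                          # ~9.9
false_positives = (population - terrorists) * false_positive_rate  # ~1,000,000

ratio = false_positives / true_positives
print(f"False positives per real terrorist flagged: {ratio:,.0f}")
```

Under these assumed figures the ratio is on the order of 100,000 to 1, matching the scale of the paper's headline claim: accuracy alone cannot compensate for an extremely low base rate.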


Predictive analytics; counter-terrorism; big data; classification algorithms; terrorism


DOI: https://doi.org/10.5210/fm.v22i9.7126

A Great Cities Initiative of the University of Illinois at Chicago University Library.

© First Monday, 1995-2019. ISSN 1396-0466.