BMFTR Project DeMasKI
Project Description
Digital disinformation increasingly supplants rational engagement with alarmist, emotionalizing rhetoric, thereby contributing to the polarization and destabilization of democratic discourse. The consortium tackles this problem head-on: instead of merely detecting buzzwords, it investigates which linguistic structures systematically displace argumentative justification and make weak claims appear self-evident. To this end, two text corpora are assembled and analyzed iteratively in feedback loops: a historical corpus of ancient rhetoric and argumentation-theoretical texts (including Plato, the Sophists, and Isocrates) that explicitly discuss manipulative discourse, and a contemporary corpus of populist communication formats (social media, podcasts, campaign speeches, press conferences) in German and English, with a prospective expansion to additional European languages. From this comparison, robust structural features of anti-discursive emotionalization are derived, argumentation-theoretical counter-strategies are identified, and the role of worldview conformity in the success of manipulative practices is examined.

The methodological innovation is a hybrid, explainable AI approach that combines powerful transformer-based methods with Inductive Logic Programming, so that identified patterns are not only detected but also justified (“why-explanations”). The feature lists are continuously fed back into the AI development, validated in pilot studies, and integrated into a usable toolchain with a web application.

Prospectively, the project will yield transferable methods for the early detection of new disinformation patterns that can be extended to further languages, open educational resources, and didactic formats for democracy pedagogy and teacher training; at the same time, the consortium addresses the computing community with a toolbox and knowledge-transfer formats to establish explainable AI as a building block for a resilient, enlightened public.
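The notion of a “why-explanation” can be illustrated with a minimal Python sketch: a passage is flagged only if all conditions of a learned rule hold, and the conditions that fired are returned as the justification. The feature names and the rule below are hypothetical placeholders, not the project's actual feature inventory or model.

```python
# Minimal sketch of a "why-explanation": a detection is reported together with
# the rule conditions that justify it. Feature names and the example rule are
# hypothetical placeholders for illustration only.

# Symbolic features assumed to have been extracted from a text beforehand.
example_features = {
    "appeal_to_fear": True,        # alarmist framing present
    "us_vs_them_framing": True,    # in-group/out-group opposition
    "gives_justification": False,  # no argumentative support for the claim
}

# A hypothetical learned rule: flag a passage as anti-discursive emotionalization
# if it appeals to fear, uses us-vs-them framing, and offers no justification.
example_rule = {
    "label": "anti-discursive emotionalization",
    "requires": {
        "appeal_to_fear": True,
        "us_vs_them_framing": True,
        "gives_justification": False,
    },
}

def explain(features: dict, rule: dict):
    """Return the rule's label plus the conditions that fired, or None if the rule does not apply."""
    if all(features.get(name) == value for name, value in rule["requires"].items()):
        reasons = [f"{name} = {value}" for name, value in rule["requires"].items()]
        return {"label": rule["label"], "because": reasons}
    return None

print(explain(example_features, example_rule))
# -> {'label': 'anti-discursive emotionalization',
#     'because': ['appeal_to_fear = True', 'us_vs_them_framing = True', 'gives_justification = False']}
```

In the actual pipeline, such rules would be induced from annotated corpus data by Inductive Logic Programming rather than written by hand.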
Research focus of the University of Bamberg:
A hybrid, neurosymbolic AI approach for text classification is being developed with the goal of automatically identifying and analyzing the anti-discursive, emotionalizing structure of disinformation. To this end, statistical and transformer-based natural language processing (NLP) methods are combined with Inductive Logic Programming (ILP) to generate traceable, explainable logical rules for detecting disinformation. These logical rules identify and highlight populist, anti-discursive, and emotionalizing structures. The AI approach will be made available, in partnership with GI, as an open-source toolbox and as a web-based application.
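As a rough illustration of how the neural and symbolic components could interlock, the following Python sketch thresholds continuous transformer-derived scores into symbolic facts over which an ILP system could induce rules; all function names, scores, thresholds, and predicates are illustrative assumptions rather than the toolbox's actual interface.

```python
# Illustrative sketch of the neural-to-symbolic bridge: continuous scores from
# transformer-based components are discretized into symbolic facts (background
# knowledge) over which an ILP learner could induce logical rules.
# All names, scores, thresholds, and predicates are assumptions for illustration only.

def neural_scores(text: str) -> dict:
    """Placeholder for the transformer-based NLP components; a real system would return model scores here."""
    return {"emotion_intensity": 0.92, "threat_framing": 0.81, "argument_marker": 0.05}

def to_facts(doc_id: str, scores: dict, thresholds: dict) -> list:
    """Discretize continuous scores into Prolog-style facts usable as ILP background knowledge."""
    facts = []
    for feature, value in scores.items():
        if value >= thresholds.get(feature, 0.5):
            facts.append(f"has_feature({doc_id}, {feature}).")
    return facts

thresholds = {"emotion_intensity": 0.7, "threat_framing": 0.7, "argument_marker": 0.5}
print("\n".join(to_facts("doc1", neural_scores("example text"), thresholds)))
# has_feature(doc1, emotion_intensity).
# has_feature(doc1, threat_framing).

# A rule an ILP learner might induce over such facts (Prolog-style, hypothetical):
#   emotionalizing(D) :- has_feature(D, emotion_intensity),
#                        has_feature(D, threat_framing),
#                        not(has_feature(D, argument_marker)).
```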
