Four papers presented at ACL
BamNLP presents four papers at the ACL conference in Bangkok, Thailand next week.
We studied how the description of scientific findings changes when they are reported in news or social media. Are correlations reported as causal relations? Is the certainty of the finding reported with higher confidence? Does the reporting become more sensationalized, and does the report over-generalize?
Models that learn to automatically detect hate speech rely on the mention of particular target groups. For instance, a model may predict that a post contains hate speech because the post mentions a particular religion. Removing the model's ability to rely on such targets might, however, lower its performance, because the existence of a target is a crucial element of hate speech. In this paper, we test whether correcting for a group of targets (e.g., all religions, all genders) improves the generalizability of the model.
Reviews often mention entities or aspects that are evaluated, but not every evaluation in a text is relevant for a global, text-level judgment. This paper studies the relation between local and global sentiment assignments.
Amelie Wuehrl, Lynn Greschner, Yarik Menchaca Resendiz, and Roman Klinger. IMS_medicALY at #SMM4H 2024: Detecting impacts of outdoor spaces on social anxiety with data augmented ensembling. In The 9th Social Media Mining for Health Research and Applications Workshop and Shared Tasks (#SMM4H 2024)--Large Language Models and Generalizability for Social Media NLP at ACL 2024, Bangkok, Thailand, 2024. Association for Computational Linguistics.
This paper is a shared task contribution. We test whether models that measure social anxiety can be improved by artificial data and domain-specific models.