Decoding Hate: Exploring Language Models' Reactions to Hate Speech

Bibliographic citation

Paloma Piot and Javier Parapar. 2025. Decoding Hate: Exploring Language Models’ Reactions to Hate Speech. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL-HLT 2025, pp. 973–990, Albuquerque, New Mexico, 29 April – 4 May 2025. Association for Computational Linguistics. https://doi.org/10.18653/v1/2025.naacl-long.45

Abstract

Hate speech is a harmful form of online expression, often manifesting as derogatory posts, and it poses a significant risk in digital environments. With the rise of Large Language Models (LLMs), there is concern about their potential to replicate hate speech patterns, given their training on vast amounts of unmoderated internet data. Understanding how LLMs respond to hate speech is crucial for their responsible deployment. However, the behaviour of LLMs towards hate speech has not been extensively studied. This paper investigates the reactions of seven state-of-the-art LLMs (LLaMA 2, Vicuna, LLaMA 3, Mistral, GPT-3.5, GPT-4, and Gemini Pro) to hate speech. Through qualitative analysis, we aim to reveal the spectrum of responses these models produce, highlighting their capacity to handle hate speech inputs. We also discuss strategies to mitigate hate speech generation by LLMs, particularly through fine-tuning and guideline guardrailing. Finally, we explore the models’ responses to hate speech framed in politically correct language.

Rights

Attribution 4.0 International