TY - GEN
T1 - A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media
AU - Mozafari, Marzieh
AU - Farahbakhsh, Reza
AU - Crespi, Noël
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020/1/1
Y1 - 2020/1/1
AB - The generation of hateful and toxic content by a portion of social media users is a rising phenomenon that has motivated researchers to dedicate substantial effort to the challenging task of hateful content identification. We need not only an efficient automatic hate speech detection model based on advanced machine learning and natural language processing, but also a sufficiently large amount of annotated data to train such a model. The lack of sufficient labelled hate speech data, along with existing biases, has been the main issue in this domain of research. To address these needs, in this study we introduce a novel transfer learning approach based on an existing pre-trained language model called BERT (Bidirectional Encoder Representations from Transformers). More specifically, we investigate the ability of BERT to capture hateful context within social media content by using new fine-tuning methods based on transfer learning. To evaluate our proposed approach, we use two publicly available datasets that have been annotated for racism, sexism, hate, or offensive content on Twitter. The results show that our solution achieves considerable performance on these datasets in terms of precision and recall in comparison to existing approaches. Consequently, our model can capture some biases in the data annotation and collection process and can potentially lead to a more accurate model.
KW - BERT
KW - Fine-tuning
KW - Hate speech detection
KW - Language modeling
KW - NLP
KW - Social media
KW - Transfer learning
UR - https://www.scopus.com/pages/publications/85076696813
U2 - 10.1007/978-3-030-36687-2_77
DO - 10.1007/978-3-030-36687-2_77
M3 - Conference contribution
AN - SCOPUS:85076696813
SN - 9783030366865
T3 - Studies in Computational Intelligence
SP - 928
EP - 940
BT - Complex Networks and Their Applications VIII - Volume 1: Proceedings of the 8th International Conference on Complex Networks and Their Applications, COMPLEX NETWORKS 2019
A2 - Cherifi, Hocine
A2 - Gaito, Sabrina
A2 - Mendes, José Fernando
A2 - Moro, Esteban
A2 - Rocha, Luis Mateus
PB - Springer
T2 - 8th International Conference on Complex Networks and their Applications, COMPLEX NETWORKS 2019
Y2 - 10 December 2019 through 12 December 2019
ER -