BERTweetFR: Domain Adaptation of Pre-Trained Language Models for French Tweets

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialized from the general-domain French language model CamemBERT (Martin et al., 2020), which follows the base architecture of BERT. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks: offensiveness identification and named entity recognition. The dataset used in the offensiveness detection task was created and annotated by our team, filling a gap in such analysis datasets for French. We make our model publicly available in the transformers library with the aim of promoting future research on analysis tasks for French tweets.
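Since the abstract states the model is released through the transformers library, a minimal loading sketch follows. The Hub identifier Yanzhu/bertweetfr-base, the example tweet, and the masked-token probe are assumptions for illustration, not details given in this record; check the Hugging Face Hub for the authors' published checkpoint.

# Minimal sketch: loading BERTweetFR with the Hugging Face transformers library.
# The identifier "Yanzhu/bertweetfr-base" is an assumed Hub name, not confirmed here.
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "Yanzhu/bertweetfr-base"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Encode a French tweet with a masked token (CamemBERT-style "<mask>") and
# inspect the model's predictions over the vocabulary at each position.
inputs = tokenizer("J'adore ce nouveau modele <mask> !", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)

Because BERTweetFR is initialized from CamemBERT, the same tokenizer conventions (including the <mask> token) should apply when fine-tuning it on downstream tweet classification or tagging tasks.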

Original language: English
Title of host publication: W-NUT 2021 - 7th Workshop on Noisy User-Generated Text, Proceedings of the Conference
Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Publisher: Association for Computational Linguistics (ACL)
Pages: 445-450
Number of pages: 6
ISBN (Electronic): 9781954085909
Publication status: Published - 1 Jan 2021
Externally published: Yes
Event: 7th Workshop on Noisy User-Generated Text, W-NUT 2021 - Virtual, Online
Duration: 11 Nov 2021 → …

Publication series

Name: W-NUT 2021 - 7th Workshop on Noisy User-Generated Text, Proceedings of the Conference

Conference

Conference: 7th Workshop on Noisy User-Generated Text, W-NUT 2021
City: Virtual, Online
Period: 11/11/21 → …
