AraBART: a Pretrained Arabic Sequence-to-Sequence Model for Abstractive Summarization

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Like most natural language understanding and generation tasks, state-of-the-art models for summarization are transformer-based sequence-to-sequence architectures that are pretrained on large corpora. While most existing models focus on English, Arabic remains understudied. In this paper, we propose AraBART, the first Arabic model in which the encoder and the decoder are pretrained end-to-end, based on BART (Lewis et al., 2020). We show that AraBART achieves the best performance on multiple abstractive summarization datasets, outperforming strong baselines including a pretrained Arabic BERT-based model, multilingual BART, Arabic T5, and a multilingual T5 model. AraBART is publicly available on GitHub and the Hugging Face model hub.

Original language: English
Title of host publication: WANLP 2022 - 7th Arabic Natural Language Processing - Proceedings of the Workshop
Publisher: Association for Computational Linguistics (ACL)
Pages: 31-42
Number of pages: 12
ISBN (Electronic): 9781959429272
Publication status: Published - 1 Jan 2022
Externally published: Yes
Event: 7th Arabic Natural Language Processing Workshop, WANLP 2022, held with EMNLP 2022 - Abu Dhabi, United Arab Emirates
Duration: 8 Dec 2022 → …

Publication series

Name: WANLP 2022 - 7th Arabic Natural Language Processing - Proceedings of the Workshop

Conference

Conference: 7th Arabic Natural Language Processing Workshop, WANLP 2022, held with EMNLP 2022
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 8/12/22 → …
