North, Kai, Dmonte, Alphaeus, Ranasinghe, Tharindu, Shardlow, Matthew ORCID: https://orcid.org/0000-0003-1129-2750 and Zampieri, Marcos (2023) ALEXSIS+: improving substitute generation and selection for lexical simplification with information retrieval. In: 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), 13 July 2023, Toronto, Canada.
Published Version, available under a Creative Commons Attribution license.
Abstract
Lexical simplification (LS) automatically replaces words that are deemed difficult to understand for a given target population with simpler alternatives, whilst preserving the meaning of the original sentence. The TSAR-2022 shared task on LS provided participants with a multilingual lexical simplification test set. It contained nearly 1,200 complex words in English, Portuguese, and Spanish, and presented multiple candidate substitutions for each complex word. The competition did not make training data available; therefore, teams had to use either off-the-shelf pre-trained large language models (LLMs) or out-of-domain data to develop their LS systems. As such, participants were unable to fully explore the capabilities of LLMs by re-training and/or fine-tuning them on in-domain data. To address this important limitation, we present ALEXSIS+, a multilingual dataset in the aforementioned three languages, and ALEXSIS++, an English monolingual dataset, which together contain more than 50,000 unique sentences retrieved from news corpora and annotated with cosine similarities to the original complex word and sentence. Using these additional contexts, we are able to generate new high-quality candidate substitutions that improve LS performance on the TSAR-2022 test set regardless of the language or model.
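The selection step described in the abstract ranks substitute candidates by their cosine similarity to the original complex word. A minimal sketch in plain Python of this kind of ranking, using toy 3-dimensional vectors in place of real model embeddings (the vectors, candidate words, and function names below are illustrative, not taken from the dataset or the paper's implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_candidates(complex_vec, candidates):
    """Rank substitute candidates by cosine similarity to the complex word.

    `candidates` maps candidate word -> embedding vector. In a real
    pipeline these vectors would come from a pretrained encoder; here
    they are hypothetical toy values.
    """
    return sorted(candidates,
                  key=lambda w: cosine(complex_vec, candidates[w]),
                  reverse=True)

# Toy example: hypothetical embeddings for a complex word and two candidates
complex_word_vec = [0.9, 0.1, 0.2]
cands = {
    "hard": [0.85, 0.15, 0.25],  # close in direction to the complex word
    "soft": [0.10, 0.90, 0.30],  # points elsewhere
}
print(rank_candidates(complex_word_vec, cands))  # → ['hard', 'soft']
```

In practice the same scoring can be applied twice, once against the complex word and once against the full sentence embedding, and the two similarities combined when selecting the final substitute.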