# AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages

This repository contains the code for our paper `AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages`, which will appear at the Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) at EMNLP 2022.

## Our self-active learning framework
![model](afrolm.png)

## Languages Covered
AfroLM has been pretrained from scratch on 23 African languages: Amharic, Afan Oromo, Bambara, Ghomalá, Éwé, Fon, Hausa, Ìgbò, Kinyarwanda, Lingala, Luganda, Luo, Mooré, Chewa, Naija, Shona, Swahili, Setswana, Twi, Wolof, Xhosa, Yorùbá, and Zulu.

## Evaluation Results
AfroLM was evaluated on the MasakhaNER1.0 (10 African languages) and MasakhaNER2.0 (21 African languages) datasets, as well as on text classification and sentiment analysis tasks. AfroLM outperformed AfriBERTa, mBERT, and XLMR-base, and was very competitive with AfroXLMR. AfroLM is also very data efficient: it was pretrained on a dataset more than 14x smaller than those of its competitors. Below is the average performance of each model across datasets; please consult our paper for per-language results.

| Model | MasakhaNER | MasakhaNER2.0* | Text Classification (Yoruba/Hausa) | Sentiment Analysis (YOSM) | OOD Sentiment Analysis (Twitter -> YOSM) |
|:---:|:---:|:---:|:---:|:---:|:---:|
| `AfroLM-Large` | **80.13** | **83.26** | **82.90/91.00** | **85.40** | **68.70** |
| `AfriBERTa` | 79.10 | 81.31 | 83.22/90.86 | 82.70 | 65.90 |
| `mBERT` | 71.55 | 80.68 | --- | --- | --- |
| `XLMR-base` | 79.16 | 83.09 | --- | --- | --- |
| `AfroXLMR-base` | `81.90` | `84.55` | --- | --- | --- |

- (*) The evaluation was performed on the 11 additional languages of the dataset.

## Pretrained Models and Dataset

**Model:** [AfroLM-Large](https://huggingface.co/bonadossou/afrolm_active_learning) and **Dataset:** [AfroLM Dataset](https://huggingface.co/datasets/bonadossou/afrolm_active_learning_dataset)

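The pretraining dataset itself can be loaded with the 🤗 `datasets` library. The snippet below is a minimal, hedged sketch: it assumes only the dataset identifier above, discovers the available configurations at runtime rather than hard-coding their names, and the `train` split name is an assumption.

```python
from datasets import get_dataset_config_names, load_dataset

# Minimal sketch: list the available configurations (e.g. per-language subsets)
# instead of assuming their names, then load one of them.
configs = get_dataset_config_names("bonadossou/afrolm_active_learning_dataset")
print(configs)

# The "train" split name is an assumption; inspect the returned DatasetDict if unsure.
dataset = load_dataset("bonadossou/afrolm_active_learning_dataset", configs[0], split="train")
print(dataset[0])
```
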
## HuggingFace usage of AfroLM-large
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer = AutoTokenizer.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer.model_max_length = 256
```

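Because AfroLM is pretrained with masked language modeling, the checkpoint can also be queried for masked-token predictions. The snippet below is a hedged sketch rather than part of the original repository: it assumes the checkpoint exposes a masked-LM head, and it reads the mask token from the tokenizer instead of hard-coding one.

```python
from transformers import pipeline

# Hedged sketch: fill-mask inference, assuming the checkpoint ships a masked-LM head.
fill_mask = pipeline("fill-mask", model="bonadossou/afrolm_active_learning")

# Use the tokenizer's own mask token rather than assuming "<mask>" or "[MASK]".
mask_token = fill_mask.tokenizer.mask_token
for prediction in fill_mask(f"Jina langu ni {mask_token}."):  # Swahili: "My name is ___."
    print(prediction["token_str"], prediction["score"])
```
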
## Reproducing our results: Training and Evaluation

- To train the network, run `python active_learning.py`. You can also wrap it in a `bash` script.
- For the evaluation:
    - NER Classification: `bash ner_experiments.sh`
    - Text Classification & Sentiment Analysis: `bash text_classification_all.sh`

## Citation
We will share the proceedings citation as soon as possible. Stay tuned. If you liked our work, please give it a star.

## Reach out

Do you have a question? Please create an issue and we will reach out as soon as possible.