Upload folder using huggingface_hub
- .ipynb_checkpoints/README-checkpoint.md +97 -0
- .ipynb_checkpoints/eole-config-checkpoint.yaml +104 -0
- README.md +97 -3
- config.json +10 -0
- eole-config.yaml +104 -0
- eole-model/config.json +150 -0
- eole-model/de.spm.model +3 -0
- eole-model/en.spm.model +3 -0
- eole-model/model.00.safetensors +3 -0
- eole-model/vocab.json +0 -0
- model.bin +3 -0
- source_vocabulary.json +0 -0
- src.spm.model +3 -0
- target_vocabulary.json +0 -0
- tgt.spm.model +3 -0
.ipynb_checkpoints/README-checkpoint.md
ADDED
@@ -0,0 +1,97 @@
(Jupyter autosave checkpoint; its 97 added lines are identical to the new README.md content shown below.)
.ipynb_checkpoints/eole-config-checkpoint.yaml
ADDED
@@ -0,0 +1,104 @@
(Jupyter autosave checkpoint; its 104 added lines are identical to the new eole-config.yaml content shown below.)
README.md
CHANGED
@@ -1,3 +1,97 @@
----
-
-
+---
+language:
+- en
+- de
+tags:
+- translation
+license: cc-by-4.0
+datasets:
+- quickmt/quickmt-train.de-en
+model-index:
+- name: quickmt-de-en
+  results:
+  - task:
+      name: Translation deu-eng
+      type: translation
+      args: deu-eng
+    dataset:
+      name: flores101-devtest
+      type: flores_101
+      args: deu_Latn eng_Latn devtest
+    metrics:
+    - name: CHRF
+      type: chrf
+      value: 68.83
+    - name: BLEU
+      type: bleu
+      value: 44.20
+    - name: COMET
+      type: comet
+      value: 88.88
+---
+
+
+# `quickmt-de-en` Neural Machine Translation Model
+
+`quickmt-de-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `de` into `en`.
+
+
+## Model Information
+
+* Trained using [`eole`](https://github.com/eole-nlp/eole)
+* 185M-parameter transformer 'big' with 8 encoder layers and 2 decoder layers
+* Separate 20k source and target SentencePiece vocabularies
+* Exported to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format for fast inference
+* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.de-en/tree/main
+
+See the `eole` model configuration in this repository for further details, and the `eole-model` directory for the raw `eole` (PyTorch) model.
+
+## Usage with `quickmt`
+
+If you want to do GPU inference, you must first install the Nvidia CUDA toolkit.
+
+Next, install the `quickmt` python library and download the model:
+
+```bash
+git clone https://github.com/quickmt/quickmt.git
+pip install ./quickmt/
+
+quickmt-model-download quickmt/quickmt-de-en ./quickmt-de-en
+```
+
+Finally, use the model in Python:
+
+```python
+from quickmt import Translator
+
+# Auto-detects GPU; set to "cpu" to force CPU inference
+t = Translator("./quickmt-de-en/", device="auto")
+
+# Translate - set beam size to 5 for higher quality (but slower speed)
+sample_text = 'Dr. Ehud Ur, Professor für Medizin an der Dalhousie University in Halifax, Nova Scotia, und Vorsitzender der Abteilung für Klinik und Wissenschaft des Kanadischen Diabetesverbands gab zu bedenken, dass die Forschungsarbeit noch in den Kinderschuhen stecke.'
+t(sample_text, beam_size=5)
+
+> 'Dr. Ehud Ur, a professor of medicine at Dalhousie University in Halifax, Nova Scotia, and chair of the Department of Clinic and Science of the Canadian Diabetes Association, said the research is still in its infancy.'
+
+# Get alternative translations by sampling
+# You can pass any CTranslate2 `translate_batch` arguments, e.g.:
+t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.99)
+```
+
+The model is in CTranslate2 format and the tokenizers are SentencePiece models, so you can use `ctranslate2` directly instead of going through `quickmt`. It is also possible to use this model with, e.g., [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`.
+
+
+## Metrics
+
+`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("deu_Latn" -> "eng_Latn"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) with `ctranslate2` on an RTX 4070s GPU with batch size 32 (faster speeds are possible with a larger batch size).
+
+|                                  |   bleu |   chrf2 |   comet22 |   Time (s) |
+|:---------------------------------|-------:|--------:|----------:|-----------:|
+| quickmt/quickmt-de-en            |  44.21 |   68.83 |     88.89 |       1.03 |
+| Helsinki-NLP/opus-mt-de-en       |  40.04 |   66.16 |     87.68 |       3.47 |
+| facebook/nllb-200-distilled-600M |  42.46 |   67.07 |     88.14 |      21.36 |
+| facebook/nllb-200-distilled-1.3B |  44.44 |   68.75 |     89.08 |      37.58 |
+| facebook/m2m100_418M             |  34.27 |   61.86 |     84.52 |      17.89 |
+| facebook/m2m100_1.2B             |  40.34 |   65.99 |     87.67 |      35.27 |
+
+`quickmt-de-en` is the fastest of these models and is higher quality than `opus-mt-de-en`, `m2m100_418M`, `m2m100_1.2B` and `nllb-200-distilled-600M`.
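Since the export is a standard CTranslate2 model with SentencePiece tokenizers, direct use without `quickmt` looks roughly like the following sketch. It assumes the repository has been downloaded to `./quickmt-de-en` and that the top-level `src.spm.model` / `tgt.spm.model` files listed in this commit are the source/target tokenizers; variable names are illustrative.

```python
# Minimal sketch: use the CTranslate2 export directly, without quickmt.
# Assumes the repo is downloaded to ./quickmt-de-en and that src.spm.model /
# tgt.spm.model are the source/target SentencePiece models.
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("./quickmt-de-en", device="auto")
sp_src = spm.SentencePieceProcessor(model_file="./quickmt-de-en/src.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file="./quickmt-de-en/tgt.spm.model")

text = "Das ist ein Test."
tokens = sp_src.encode(text, out_type=str)        # text -> subword pieces
results = translator.translate_batch([tokens], beam_size=5)
print(sp_tgt.decode(results[0].hypotheses[0]))    # pieces -> text
```

The metric setup described in the Metrics section can be reproduced along these lines; this is a sketch assuming `srcs`, `hyps` and `refs` are parallel lists of source, hypothesis and reference strings (names illustrative):

```python
# Sketch of the metric computation described above.
import sacrebleu
from comet import download_model, load_from_checkpoint

bleu = sacrebleu.corpus_bleu(hyps, [refs]).score   # "bleu"
chrf = sacrebleu.corpus_chrf(hyps, [refs]).score   # "chrf2" (chrF, beta=2)

# COMET with the default wmt22-comet-da model.
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)]
comet = comet_model.predict(data, batch_size=32).system_score  # "comet22"
```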
config.json
ADDED
@@ -0,0 +1,10 @@
+{
+  "add_source_bos": false,
+  "add_source_eos": false,
+  "bos_token": "<s>",
+  "decoder_start_token": "<s>",
+  "eos_token": "</s>",
+  "layer_norm_epsilon": 1e-06,
+  "multi_query_attention": false,
+  "unk_token": "<unk>"
+}
eole-config.yaml
ADDED
@@ -0,0 +1,104 @@
+## IO
+save_data: de-en/data
+overwrite: True
+seed: 1234
+report_every: 100
+valid_metrics: ["BLEU"]
+tensorboard: true
+tensorboard_log_dir: tensorboard
+
+### Vocab
+src_vocab: de.eole.vocab
+tgt_vocab: en.eole.vocab
+src_vocab_size: 20000
+tgt_vocab_size: 20000
+vocab_size_multiple: 8
+share_vocab: false
+n_sample: 0
+
+data:
+  corpus_1:
+    path_src: hf://quickmt/quickmt-train.de-en/de
+    path_tgt: hf://quickmt/quickmt-train.de-en/en
+    path_sco: hf://quickmt/quickmt-train.de-en/sco
+    # path_src: en-ko/train.cleaned.src
+    # path_tgt: en-ko/train.cleaned.tgt
+  valid:
+    path_src: dev.de
+    path_tgt: dev.en
+
+transforms: [sentencepiece, filtertoolong]
+transforms_configs:
+  sentencepiece:
+    src_subword_model: "de.spm.model"
+    tgt_subword_model: "en.spm.model"
+  filtertoolong:
+    src_seq_length: 256
+    tgt_seq_length: 256
+
+training:
+  # Run configuration
+  model_path: model
+  train_from: model
+  keep_checkpoint: 4
+  save_checkpoint_steps: 1000
+  train_steps: 200000
+  valid_steps: 1000
+
+  # Train on a single GPU
+  world_size: 1
+  gpu_ranks: [0]
+
+  # Batching
+  batch_type: "tokens"
+  batch_size: 8192
+  valid_batch_size: 8192
+  batch_size_multiple: 8
+  accum_count: [16]
+  accum_steps: [0]
+
+  # Optimizer & Compute
+  compute_dtype: "fp16"
+  #use_amp: true
+  optim: "pagedadamw8bit"
+  learning_rate: 2.0
+  warmup_steps: 5000
+  decay_method: "noam"
+  adam_beta2: 0.998
+
+  # Data loading
+  bucket_size: 128000
+  num_workers: 4
+  prefetch_factor: 32
+
+  # Hyperparams
+  dropout_steps: [0]
+  dropout: [0.1]
+  attention_dropout: [0]
+  max_grad_norm: 2
+  label_smoothing: 0.1
+  average_decay: 0.0001
+  param_init_method: xavier_uniform
+  normalization: "tokens"
+
+model:
+  architecture: "transformer"
+  layer_norm: standard
+  share_embeddings: false
+  share_decoder_embeddings: true
+  add_ffnbias: true
+  mlp_activation_fn: gelu
+  add_estimator: false
+  add_qkvbias: false
+  norm_eps: 1e-6
+  hidden_size: 1024
+  encoder:
+    layers: 8
+  decoder:
+    layers: 2
+  heads: 8
+  transformer_ff: 4096
+  embeddings:
+    word_vec_size: 1024
+    position_encoding_type: "SinusoidalInterleaved"
+
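For reference, a rough sketch of launching training with this configuration; it assumes the `eole` CLI from the repository linked in the README exposes a `train` subcommand taking `-config`, which should be confirmed against the eole documentation:

```python
# Rough sketch: launch eole training with this config file.
# Assumption: the eole CLI accepts `eole train -config <yaml>`; verify
# against https://github.com/eole-nlp/eole before relying on this.
import subprocess

subprocess.run(["eole", "train", "-config", "eole-config.yaml"], check=True)
```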
eole-model/config.json
ADDED
@@ -0,0 +1,150 @@
+{
+  "report_every": 100,
+  "tensorboard": true,
+  "tgt_vocab_size": 20000,
+  "tensorboard_log_dir": "tensorboard",
+  "tensorboard_log_dir_dated": "tensorboard/Mar-16_23-09-35",
+  "tgt_vocab": "en.eole.vocab",
+  "valid_metrics": [
+    "BLEU"
+  ],
+  "vocab_size_multiple": 8,
+  "src_vocab": "de.eole.vocab",
+  "src_vocab_size": 20000,
+  "transforms": [
+    "sentencepiece",
+    "filtertoolong"
+  ],
+  "save_data": "de-en/data",
+  "share_vocab": false,
+  "n_sample": 0,
+  "overwrite": true,
+  "seed": 1234,
+  "training": {
+    "num_workers": 0,
+    "normalization": "tokens",
+    "learning_rate": 2.0,
+    "bucket_size": 128000,
+    "train_steps": 200000,
+    "world_size": 1,
+    "accum_count": [
+      16
+    ],
+    "param_init_method": "xavier_uniform",
+    "max_grad_norm": 2.0,
+    "optim": "pagedadamw8bit",
+    "decay_method": "noam",
+    "batch_size_multiple": 8,
+    "gpu_ranks": [
+      0
+    ],
+    "label_smoothing": 0.1,
+    "warmup_steps": 5000,
+    "adam_beta2": 0.998,
+    "batch_type": "tokens",
+    "dropout": [
+      0.1
+    ],
+    "accum_steps": [
+      0
+    ],
+    "prefetch_factor": 32,
+    "batch_size": 8192,
+    "average_decay": 0.0001,
+    "save_checkpoint_steps": 1000,
+    "valid_batch_size": 8192,
+    "model_path": "model",
+    "valid_steps": 1000,
+    "train_from": "model",
+    "attention_dropout": [
+      0.0
+    ],
+    "dropout_steps": [
+      0
+    ],
+    "keep_checkpoint": 4,
+    "compute_dtype": "torch.float16"
+  },
+  "model": {
+    "add_estimator": false,
+    "mlp_activation_fn": "gelu",
+    "share_embeddings": false,
+    "norm_eps": 1e-06,
+    "transformer_ff": 4096,
+    "heads": 8,
+    "share_decoder_embeddings": true,
+    "layer_norm": "standard",
+    "add_qkvbias": false,
+    "hidden_size": 1024,
+    "add_ffnbias": true,
+    "architecture": "transformer",
+    "position_encoding_type": "SinusoidalInterleaved",
+    "embeddings": {
+      "word_vec_size": 1024,
+      "tgt_word_vec_size": 1024,
+      "position_encoding_type": "SinusoidalInterleaved",
+      "src_word_vec_size": 1024
+    },
+    "encoder": {
+      "encoder_type": "transformer",
+      "heads": 8,
+      "layer_norm": "standard",
+      "hidden_size": 1024,
+      "add_qkvbias": false,
+      "mlp_activation_fn": "gelu",
+      "add_ffnbias": true,
+      "n_positions": null,
+      "norm_eps": 1e-06,
+      "layers": 8,
+      "src_word_vec_size": 1024,
+      "transformer_ff": 4096,
+      "position_encoding_type": "SinusoidalInterleaved"
+    },
+    "decoder": {
+      "layer_norm": "standard",
+      "heads": 8,
+      "add_qkvbias": false,
+      "hidden_size": 1024,
+      "decoder_type": "transformer",
+      "mlp_activation_fn": "gelu",
+      "tgt_word_vec_size": 1024,
+      "add_ffnbias": true,
+      "n_positions": null,
+      "norm_eps": 1e-06,
+      "layers": 2,
+      "transformer_ff": 4096,
+      "position_encoding_type": "SinusoidalInterleaved"
+    }
+  },
+  "transforms_configs": {
+    "sentencepiece": {
+      "src_subword_model": "${MODEL_PATH}/de.spm.model",
+      "tgt_subword_model": "${MODEL_PATH}/en.spm.model"
+    },
+    "filtertoolong": {
+      "src_seq_length": 256,
+      "tgt_seq_length": 256
+    }
+  },
+  "data": {
+    "corpus_1": {
+      "transforms": [
+        "sentencepiece",
+        "filtertoolong"
+      ],
+      "path_tgt": "hf://quickmt/quickmt-train.de-en/en",
+      "path_src": "hf://quickmt/quickmt-train.de-en/de",
+      "path_sco": "hf://quickmt/quickmt-train.de-en/sco",
+      "path_align": null
+    },
+    "valid": {
+      "transforms": [
+        "sentencepiece",
+        "filtertoolong"
+      ],
+      "path_src": "dev.de",
+      "path_tgt": "dev.en",
+      "path_align": null
+    }
+  }
+}
eole-model/de.spm.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a4a9e3dd26d6562764eb2678f95766afbab48354ddb5b7d63cb7cf7a02cf56a
+size 595491
eole-model/en.spm.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eaf7051931aaca182561ba5a63c5674424971d0d6001ab57facb194fe6c941bf
+size 586141
eole-model/model.00.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:704125ca03fb0d665827bfed8205b284555834b37dd85993b761880d6204c51d
+size 742170000
eole-model/vocab.json
ADDED
(The diff for this file is too large to render; see the raw file.)
model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50b1a9c0d09d49b71242750ccb8b68d312d976302b7989157b07f7df1ef8dcf3
+size 360843109
source_vocabulary.json
ADDED
(The diff for this file is too large to render; see the raw file.)
src.spm.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a4a9e3dd26d6562764eb2678f95766afbab48354ddb5b7d63cb7cf7a02cf56a
+size 595491
target_vocabulary.json
ADDED
(The diff for this file is too large to render; see the raw file.)
tgt.spm.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eaf7051931aaca182561ba5a63c5674424971d0d6001ab57facb194fe6c941bf
+size 586141