omarkamali committed on
Commit
1566b18
·
verified ·
1 Parent(s): a6db3d4

Upload all models and assets for ch (latest)

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. .gitattributes +1 -0
  2. README.md +175 -138
  3. models/embeddings/aligned/ch_128d.bin +3 -0
  4. models/embeddings/aligned/ch_128d.meta.json +1 -0
  5. models/embeddings/aligned/ch_128d.projection.npy +3 -0
  6. models/embeddings/aligned/ch_128d_metadata.json +8 -0
  7. models/embeddings/aligned/ch_32d.bin +3 -0
  8. models/embeddings/aligned/ch_32d.meta.json +1 -0
  9. models/embeddings/aligned/ch_32d.projection.npy +3 -0
  10. models/embeddings/aligned/ch_32d_metadata.json +8 -0
  11. models/embeddings/aligned/ch_64d.bin +3 -0
  12. models/embeddings/aligned/ch_64d.meta.json +1 -0
  13. models/embeddings/aligned/ch_64d.projection.npy +3 -0
  14. models/embeddings/aligned/ch_64d_metadata.json +8 -0
  15. models/embeddings/monolingual/ch_128d.bin +2 -2
  16. models/embeddings/monolingual/ch_128d_metadata.json +1 -1
  17. models/embeddings/monolingual/ch_32d.bin +2 -2
  18. models/embeddings/monolingual/ch_32d_metadata.json +1 -1
  19. models/embeddings/monolingual/ch_64d.bin +2 -2
  20. models/embeddings/monolingual/ch_64d_metadata.json +1 -1
  21. models/subword_markov/ch_markov_ctx1_subword.parquet +2 -2
  22. models/subword_markov/ch_markov_ctx1_subword_metadata.json +2 -2
  23. models/subword_markov/ch_markov_ctx2_subword.parquet +2 -2
  24. models/subword_markov/ch_markov_ctx2_subword_metadata.json +2 -2
  25. models/subword_markov/ch_markov_ctx3_subword.parquet +2 -2
  26. models/subword_markov/ch_markov_ctx3_subword_metadata.json +2 -2
  27. models/subword_markov/ch_markov_ctx4_subword.parquet +2 -2
  28. models/subword_markov/ch_markov_ctx4_subword_metadata.json +2 -2
  29. models/subword_ngram/ch_2gram_subword.parquet +2 -2
  30. models/subword_ngram/ch_2gram_subword_metadata.json +2 -2
  31. models/subword_ngram/ch_3gram_subword.parquet +2 -2
  32. models/subword_ngram/ch_3gram_subword_metadata.json +2 -2
  33. models/subword_ngram/ch_4gram_subword.parquet +2 -2
  34. models/subword_ngram/ch_4gram_subword_metadata.json +2 -2
  35. models/subword_ngram/ch_5gram_subword.parquet +3 -0
  36. models/subword_ngram/ch_5gram_subword_metadata.json +7 -0
  37. models/tokenizer/ch_tokenizer_16k.model +2 -2
  38. models/tokenizer/ch_tokenizer_16k.vocab +0 -0
  39. models/tokenizer/ch_tokenizer_8k.model +2 -2
  40. models/tokenizer/ch_tokenizer_8k.vocab +0 -0
  41. models/vocabulary/ch_vocabulary.parquet +2 -2
  42. models/vocabulary/ch_vocabulary_metadata.json +7 -7
  43. models/word_markov/ch_markov_ctx1_word.parquet +2 -2
  44. models/word_markov/ch_markov_ctx1_word_metadata.json +2 -2
  45. models/word_markov/ch_markov_ctx2_word.parquet +2 -2
  46. models/word_markov/ch_markov_ctx2_word_metadata.json +2 -2
  47. models/word_markov/ch_markov_ctx3_word.parquet +2 -2
  48. models/word_markov/ch_markov_ctx3_word_metadata.json +2 -2
  49. models/word_markov/ch_markov_ctx4_word.parquet +2 -2
  50. models/word_markov/ch_markov_ctx4_word_metadata.json +2 -2
.gitattributes CHANGED
@@ -39,3 +39,4 @@ visualizations/position_encoding_comparison.png filter=lfs diff=lfs merge=lfs -t
  visualizations/tsne_sentences.png filter=lfs diff=lfs merge=lfs -text
  visualizations/tsne_words.png filter=lfs diff=lfs merge=lfs -text
  visualizations/zipf_law.png filter=lfs diff=lfs merge=lfs -text
+ visualizations/embedding_tsne_multilingual.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  language: ch
- language_name: CH
+ language_name: Chamorro
  language_family: austronesian_oceanic_other
  tags:
  - wikilangs
@@ -10,11 +10,21 @@ tags:
  - n-gram
  - markov
  - wikipedia
+ - feature-extraction
+ - sentence-similarity
+ - tokenization
+ - n-grams
+ - markov-chain
+ - text-mining
+ - fasttext
+ - babelvec
+ - vocabulous
+ - vocabulary
  - monolingual
  - family-austronesian_oceanic_other
  license: mit
  library_name: wikilangs
- pipeline_tag: feature-extraction
+ pipeline_tag: text-generation
  datasets:
  - omarkamali/wikipedia-monthly
  dataset_info:
@@ -23,20 +33,20 @@ dataset_info:
  metrics:
  - name: best_compression_ratio
  type: compression
- value: 4.243
+ value: 4.248
  - name: best_isotropy
  type: isotropy
- value: 0.0518
+ value: 0.0563
  - name: vocabulary_size
  type: vocab
  value: 0
  generated: 2026-01-03
  ---

- # CH - Wikilangs Models
+ # Chamorro - Wikilangs Models
  ## Comprehensive Research Report & Full Ablation Study

- This repository contains NLP models trained and evaluated by Wikilangs, specifically on **CH** Wikipedia data.
+ This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Chamorro** Wikipedia data.
  We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.

  ## 📋 Repository Contents
@@ -60,7 +70,7 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
- - [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+ - [6. Morphological Analysis (Experimental)](#6--morphological-analysis-experimental)
  - [7. Summary & Recommendations](#7-summary--recommendations)
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
  - [Visualizations Index](#visualizations-index)
@@ -80,39 +90,39 @@

  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
  |------------|-------------|---------------|----------|--------------|
- | **8k** | 3.977x | 3.99 | 0.1019% | 38,272 |
- | **16k** | 4.243x 🏆 | 4.26 | 0.1087% | 35,871 |
+ | **8k** | 3.977x | 3.99 | 0.0998% | 38,069 |
+ | **16k** | 4.248x 🏆 | 4.26 | 0.1066% | 35,644 |

  ### Tokenization Examples

  Below are sample sentences tokenized with each vocabulary size:

- **Sample 1:** `Doerun, nasong-song gi Estados Unidos. Guåha 774 na tataogues na populasion i se...`
+ **Sample 1:** `+Afghanistan 125px Anthem: Millī سرود 300px Afghanistan capitat Kabul. Guåha na ...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁do er un , nasong - song ▁gi ▁estadosunidos ... (+16 more)` | 26 |
- | 16k | `▁doerun ,nasong - song ▁gi ▁estadosunidos .guåha ... (+14 more)` | 24 |
+ | 8k | `▁+ af ghanistan1 2 5 pxanthem : ... (+21 more)` | 31 |
+ | 16k | `▁+ afghanistan1 2 5 pxanthem :millī ... (+20 more)` | 30 |

- **Sample 2:** `Newhalen, nasong-song gi Estados Unidos. Guåha 190 na tataogues na populasion i ...`
+ **Sample 2:** `Cartersville, nasong-song gi Estados Unidos. Guåha 19,731 na tataogues na popula...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁newha len , ▁nasong - song ▁gi ▁estados ▁unidos . ... (+15 more)` | 25 |
- | 16k | `▁newhalen , ▁nasong - song ▁gi ▁estados ▁unidos . ▁guåha ... (+14 more)` | 24 |
+ | 8k | `▁carters ville , ▁nasong - song ▁gi ▁estados ▁unidos . ... (+18 more)` | 28 |
+ | 16k | `▁cartersville , ▁nasong - song ▁gi ▁estados ▁unidos . ▁guåha ... (+17 more)` | 27 |

- **Sample 3:** `Larsen Bay, nasong-song gi Estados Unidos. Guåha 87 na tataogues na populasion i...`
+ **Sample 3:** `Waleska, nasong-song gi Estados Unidos. Guåha 644 na tataogues na populasion i s...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁larsen ▁bay , ▁nasong - song ▁gi ▁estados ▁unidos . ... (+14 more)` | 24 |
- | 16k | `▁larsen ▁bay , ▁nasong - song ▁gi ▁estados ▁unidos . ... (+14 more)` | 24 |
+ | 8k | `▁wa les ka , ▁nasong - song ▁gi ▁estados ▁unidos ... (+16 more)` | 26 |
+ | 16k | `▁waleska , ▁nasong - song ▁gi ▁estados ▁unidos . ▁guåha ... (+14 more)` | 24 |

  ### Key Findings

- - **Best Compression:** 16k achieves 4.243x compression
- - **Lowest UNK Rate:** 8k with 0.1019% unknown tokens
+ - **Best Compression:** 16k achieves 4.248x compression
+ - **Lowest UNK Rate:** 8k with 0.0998% unknown tokens
  - **Trade-off:** Larger vocabularies improve compression but increase model size
  - **Recommendation:** 32k vocabulary provides optimal balance for production use
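To reproduce the compression figures above: a minimal sketch, assuming the `.model` files under `models/tokenizer/` are standard SentencePiece models (the `▁` word-boundary markers in the token samples suggest they are) and that compression is measured as input characters per output token.

```python
# Sketch under the assumptions above; not an official loader for this repo.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="models/tokenizer/ch_tokenizer_16k.model")

text = "Waleska, nasong-song gi Estados Unidos."
pieces = sp.encode(text, out_type=str)   # pieces like '▁waleska', ',', '▁nasong', ...
print(len(text) / len(pieces))           # should land near the ~4.25 chars/token reported
```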
 
 
@@ -129,12 +139,14 @@ Below are sample sentences tokenized with each vocabulary size:

  | N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
  |--------|---------|------------|---------|----------------|------------------|-------------------|
- | **2-gram** | Word | 181 | 7.50 | 496 | 68.1% | 100.0% |
- | **2-gram** | Subword | 228 | 7.83 | 869 | 71.1% | 100.0% |
- | **3-gram** | Word | 134 🏆 | 7.07 | 582 | 70.7% | 100.0% |
- | **3-gram** | Subword | 1,281 | 10.32 | 4,543 | 36.5% | 79.7% |
- | **4-gram** | Word | 158 | 7.30 | 842 | 66.6% | 100.0% |
- | **4-gram** | Subword | 3,667 | 11.84 | 12,416 | 26.2% | 57.0% |
+ | **2-gram** | Word | 178 | 7.48 | 491 | 68.4% | 100.0% |
+ | **2-gram** | Subword | 227 | 7.83 | 866 | 71.1% | 100.0% |
+ | **3-gram** | Word | 133 | 7.06 | 577 | 70.8% | 100.0% |
+ | **3-gram** | Subword | 1,279 | 10.32 | 4,533 | 36.5% | 79.7% |
+ | **4-gram** | Word | 156 | 7.29 | 834 | 66.8% | 100.0% |
+ | **4-gram** | Subword | 3,664 | 11.84 | 12,412 | 26.2% | 57.0% |
+ | **5-gram** | Word | 102 🏆 | 6.67 | 583 | 72.6% | 100.0% |
+ | **5-gram** | Subword | 5,287 | 12.37 | 16,015 | 24.4% | 49.4% |

  ### Top 5 N-grams by Size

@@ -153,8 +165,8 @@
  | Rank | N-gram | Count |
  |------|--------|-------|
  | 1 | `nu i senso` | 308 |
- | 2 | `na tataogues na` | 304 |
- | 3 | `na populasion i` | 304 |
+ | 2 | `na populasion i` | 304 |
+ | 3 | `na tataogues na` | 304 |
  | 4 | `tataogues na populasion` | 304 |
  | 5 | `i sengsong nu` | 299 |

@@ -164,46 +176,66 @@
  |------|--------|-------|
  | 1 | `na tataogues na populasion` | 304 |
  | 2 | `tataogues na populasion i` | 303 |
- | 3 | `na populasion i sengsong` | 299 |
- | 4 | `sengsong nu i senso` | 299 |
+ | 3 | `sengsong nu i senso` | 299 |
+ | 4 | `i sengsong nu i` | 299 |
  | 5 | `populasion i sengsong nu` | 299 |

+ **5-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `na tataogues na populasion i` | 303 |
+ | 2 | `populasion i sengsong nu i` | 299 |
+ | 3 | `i sengsong nu i senso` | 299 |
+ | 4 | `na populasion i sengsong nu` | 299 |
+ | 5 | `tataogues na populasion i sengsong` | 298 |
+
  **2-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `a _` | 4,934 |
- | 2 | `i _` | 4,206 |
- | 3 | `n a` | 2,921 |
- | 4 | `a n` | 2,812 |
- | 5 | `_ i` | 2,769 |
+ | 1 | `a _` | 4,908 |
+ | 2 | `i _` | 4,194 |
+ | 3 | `n a` | 2,916 |
+ | 4 | `a n` | 2,801 |
+ | 5 | `_ i` | 2,765 |

  **3-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `_ i _` | 2,254 |
- | 2 | `_ n a` | 1,827 |
+ | 1 | `_ i _` | 2,248 |
+ | 2 | `_ n a` | 1,823 |
  | 3 | `n a _` | 1,562 |
- | 4 | `_ g i` | 1,306 |
- | 5 | `_ m a` | 1,153 |
+ | 4 | `_ g i` | 1,298 |
+ | 5 | `_ m a` | 1,144 |

  **4-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `_ n a _` | 1,359 |
- | 2 | `_ g i _` | 957 |
- | 3 | `s o n g` | 950 |
- | 4 | `_ i _ s` | 792 |
- | 5 | `o n g _` | 757 |
+ | 1 | `_ n a _` | 1,357 |
+ | 2 | `_ g i _` | 959 |
+ | 3 | `s o n g` | 952 |
+ | 4 | `_ i _ s` | 793 |
+ | 5 | `o n g _` | 758 |
+
+ **5-grams (Subword):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `_ i _ s e` | 690 |
+ | 2 | `i _ s e n` | 687 |
+ | 3 | `s o n g _` | 653 |
+ | 4 | `_ u n i d` | 463 |
+ | 5 | `u n i d o` | 448 |

  ### Key Findings

- - **Best Perplexity:** 3-gram (word) with 134
+ - **Best Perplexity:** 5-gram (word) with 102
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
- - **Coverage:** Top-1000 patterns cover ~57% of corpus
+ - **Coverage:** Top-1000 patterns cover ~49% of corpus
  - **Recommendation:** 4-gram or 5-gram for best predictive performance
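The Perplexity and Entropy columns above are consistent with PPL = 2^H (for the word 5-gram, 2^6.67 ≈ 102). A short sketch of that computation from the stored counts; the `count` column name is an assumption about the parquet schema, not a documented interface:

```python
import numpy as np
import pandas as pd

# Assumed schema: one row per n-gram with its raw corpus count.
df = pd.read_parquet("models/subword_ngram/ch_5gram_subword.parquet")

p = (df["count"] / df["count"].sum()).to_numpy()  # empirical n-gram distribution
entropy = float(-(p * np.log2(p)).sum())          # Shannon entropy in bits
print(entropy, 2 ** entropy)                      # table above reports H=12.37, PPL=5,287
```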
  ---
 
@@ -219,14 +251,14 @@

  | Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
  |---------|---------|-------------|------------|------------------|-----------------|----------------|
- | **1** | Word | 0.4921 | 1.406 | 2.63 | 5,466 | 50.8% |
- | **1** | Subword | 1.0948 | 2.136 | 7.84 | 226 | 0.0% |
- | **2** | Word | 0.1702 | 1.125 | 1.32 | 14,200 | 83.0% |
- | **2** | Subword | 1.1295 | 2.188 | 5.29 | 1,769 | 0.0% |
- | **3** | Word | 0.0593 | 1.042 | 1.09 | 18,551 | 94.1% |
- | **3** | Subword | 0.7380 | 1.668 | 2.81 | 9,336 | 26.2% |
- | **4** | Word | 0.0213 🏆 | 1.015 | 1.03 | 19,968 | 97.9% |
- | **4** | Subword | 0.3911 | 1.311 | 1.72 | 26,134 | 60.9% |
+ | **1** | Word | 0.4903 | 1.405 | 2.61 | 5,477 | 51.0% |
+ | **1** | Subword | 1.0984 | 2.141 | 7.88 | 223 | 0.0% |
+ | **2** | Word | 0.1693 | 1.125 | 1.32 | 14,138 | 83.1% |
+ | **2** | Subword | 1.1342 | 2.195 | 5.32 | 1,755 | 0.0% |
+ | **3** | Word | 0.0592 | 1.042 | 1.09 | 18,443 | 94.1% |
+ | **3** | Subword | 0.7400 | 1.670 | 2.81 | 9,321 | 26.0% |
+ | **4** | Word | 0.0211 🏆 | 1.015 | 1.03 | 19,853 | 97.9% |
+ | **4** | Subword | 0.3920 | 1.312 | 1.72 | 26,122 | 60.8% |

  ### Generated Text Samples (Word-based)

@@ -234,27 +266,27 @@ Below are text samples generated from each word-based Markov chain model:

  **Context Size 1:**

- 1. `i islan guåhan si nanå ña ti ha an i senso ine giya guåhan i`
- 2. `na tataogues na petsona siha manma å ñao i kotturan ñiha gi i dos botkan ni`
- 3. `gi sankattan na populasion i mina tres manggimen tuba ginen i taotao ya ma ganna i`
+ 1. `i saddok segua ya siha gi i mayot maelihi gobietna i mundo ma li e società`
+ 2. `na populasion i senso unidos guåha 296 na agronomia i senso bibliografia riferensia horst lehne and`
+ 3. `gi i sengsong nu i patgon siha ma usa ginen i dos gi islan sumatra pekanbaru`

  **Context Size 2:**

  1. `i sengsong nu i senso unidos`
- 2. `nu i proteksion i tano jesukristo gi fecha ni 25 disiembre`
- 3. `populasion i sengsong nu i senso unidos`
+ 2. `nu i senso para i fondo gaige hålom hånom hao kalan guihan gue gi iya estados unidos`
+ 3. `na populasion i sengsong nu i senso unidos`

  **Context Size 3:**

- 1. `na populasion i sengsong nu i senso unidos`
- 2. `na tataogues na populasion i sengsong nu i senso unidos`
- 3. `tataogues na populasion i sengsong nu i senso unidos`
+ 1. `na tataogues na populasion i sengsong nu i senso unidos`
+ 2. `na populasion i sengsong nu i senso website sanhiyong siha rome`
+ 3. `tataogues na populasion i sengsong nu i senso yeet website sanhiyong siha commons coronel fabriciano`

  **Context Size 4:**

  1. `na tataogues na populasion i sengsong nu i senso unidos`
  2. `tataogues na populasion i sengsong nu i senso unidos`
- 3. `i sengsong nu i senso unidos`
+ 3. `na populasion i sengsong nu i senso unidos`

  ### Generated Text Samples (Subword-based)
@@ -263,34 +295,34 @@ Below are text samples generated from each subword-based Markov chain model:

  **Context Size 1:**

- 1. `_eyanso'_giterge`
- 2. `asipa_dinakoso_m`
- 3. `nsi_nandiki_u_pa`
+ 1. `_yia_a_mesotinio`
+ 2. `a_dorn._ikug._s_`
+ 3. `nusot_fai_i_i_gs`

  **Context Size 2:**

- 1. `a_åchokkas_na_kri`
- 2. `i_ta_magu_gi_i_ha`
- 3. `na'neho_"thunidos`
+ 1. `a_para_ediu_nasto`
+ 2. `i_me":_ki,_vícite`
+ 3. `na'i_achamane_pås`

  **Context Size 3:**

- 1. `_i_caste_pies_dang`
- 2. `_na_populasifiku)_`
- 3. `na_tan_atten-ñiha_`
+ 1. `_i_semak_senggen_c`
+ 2. `_na_pat_gi_wikike'`
+ 3. `na_taogues_na_gi_k`

  **Context Size 4:**

- 1. `_na_po'lu_na_aterit`
- 2. `_gi_estorio_ni'_kad`
- 3. `song-song_nu_i_akti`
+ 1. `_na_populasion_yan_`
+ 2. `_gi_para_u_matungo'`
+ 3. `song_nu_i_sengsong_`

  ### Key Findings

  - **Best Predictability:** Context-4 (word) with 97.9% predictability
  - **Branching Factor:** Decreases with context size (more deterministic)
- - **Memory Trade-off:** Larger contexts require more storage (26,134 contexts)
+ - **Memory Trade-off:** Larger contexts require more storage (26,122 contexts)
  - **Recommendation:** Context-3 or Context-4 for text generation
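Generation from these models amounts to repeated weighted sampling from a context → next-token table. A sketch, assuming a (`context`, `next`, `count`) schema for the parquet files, which is not documented:

```python
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/ch_markov_ctx2_word.parquet")

def step(context: str) -> str | None:
    rows = df[df["context"] == context]
    if rows.empty:
        return None                      # unseen context: stop generation
    return random.choices(rows["next"].tolist(), weights=rows["count"].tolist())[0]

words = ["i", "sengsong"]                # seed with a frequent bigram
for _ in range(10):
    nxt = step(" ".join(words[-2:]))     # last two words form the context
    if nxt is None:
        break
    words.append(nxt)
print(" ".join(words))                   # compare with the Context Size 2 samples above
```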
  ---
 
@@ -306,64 +338,64 @@

  | Metric | Value |
  |--------|-------|
- | Vocabulary Size | 1,918 |
- | Total Tokens | 22,697 |
- | Mean Frequency | 11.83 |
+ | Vocabulary Size | 1,919 |
+ | Total Tokens | 22,562 |
+ | Mean Frequency | 11.76 |
  | Median Frequency | 3 |
- | Frequency Std Dev | 73.74 |
+ | Frequency Std Dev | 73.53 |

  ### Most Common Words

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | i | 2,327 |
- | 2 | na | 1,513 |
- | 3 | gi | 972 |
+ | 1 | i | 2,319 |
+ | 2 | na | 1,511 |
+ | 3 | gi | 974 |
  | 4 | unidos | 448 |
  | 5 | yan | 436 |
  | 6 | sengsong | 370 |
  | 7 | guåha | 356 |
- | 8 | ni | 339 |
- | 9 | nu | 335 |
- | 10 | populasion | 333 |
+ | 8 | nu | 335 |
+ | 9 | ni | 334 |
+ | 10 | populasion | 331 |

  ### Least Common Words (from vocabulary)

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | av | 2 |
- | 2 | berit | 2 |
- | 3 | larsson | 2 |
- | 4 | hemliga | 2 |
- | 5 | tycker | 2 |
- | 6 | att | 2 |
- | 7 | var | 2 |
- | 8 | rolig | 2 |
- | 9 | ett | 2 |
- | 10 | du | 2 |
+ | 1 | säger | 2 |
+ | 2 | ett | 2 |
+ | 3 | | 2 |
+ | 4 | du | 2 |
+ | 5 | skate | 2 |
+ | 6 | med | 2 |
+ | 7 | smaskiga | 2 |
+ | 8 | löken | 2 |
+ | 9 | tychy | 2 |
+ | 10 | museon | 2 |

  ### Zipf's Law Analysis

  | Metric | Value |
  |--------|-------|
- | Zipf Coefficient | 0.9581 |
- | R² (Goodness of Fit) | 0.986461 |
+ | Zipf Coefficient | 0.9547 |
+ | R² (Goodness of Fit) | 0.986088 |
  | Adherence Quality | **excellent** |

  ### Coverage Analysis

  | Top N Words | Coverage |
  |-------------|----------|
- | Top 100 | 63.1% |
+ | Top 100 | 63.2% |
  | Top 1,000 | 91.3% |
  | Top 5,000 | 0.0% |
  | Top 10,000 | 0.0% |

  ### Key Findings

- - **Zipf Compliance:** R²=0.9865 indicates excellent adherence to Zipf's law
- - **High Frequency Dominance:** Top 100 words cover 63.1% of corpus
- - **Long Tail:** -8,082 words needed for remaining 100.0% coverage
+ - **Zipf Compliance:** R²=0.9861 indicates excellent adherence to Zipf's law
+ - **High Frequency Dominance:** Top 100 words cover 63.2% of corpus
+ - **Long Tail:** -8,081 words needed for remaining 100.0% coverage
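The Zipf coefficient and R² above come from a log-log fit of frequency against rank; a sketch, assuming the vocabulary parquet exposes a `frequency` column (the schema is not documented):

```python
import numpy as np
import pandas as pd

df = pd.read_parquet("models/vocabulary/ch_vocabulary.parquet")
freq = np.sort(df["frequency"].to_numpy())[::-1]       # descending frequencies
rank = np.arange(1, len(freq) + 1)

# Least-squares line in log-log space: log f = -s * log r + c
slope, intercept = np.polyfit(np.log(rank), np.log(freq), 1)
resid = np.log(freq) - (slope * np.log(rank) + intercept)
r2 = 1 - resid.var() / np.log(freq).var()

print(-slope, r2)   # report above: coefficient 0.9547, R² 0.9861
```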
  ---
  ## 5. Word Embeddings Evaluation
@@ -379,37 +411,40 @@

  ### 5.1 Cross-Lingual Alignment

- > *Note: Multilingual alignment visualization not available for this language.*
+ ![Alignment Quality](visualizations/embedding_alignment_quality.png)
+
+ ![Multilingual t-SNE](visualizations/embedding_tsne_multilingual.png)

  ### 5.2 Model Comparison

  | Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
  |-------|-----------|----------|------------------|---------------|----------------|
- | **mono_32d** | 32 | 0.0518 🏆 | 0.6801 | N/A | N/A |
- | **mono_64d** | 64 | 0.0071 | 0.8792 | N/A | N/A |
- | **mono_128d** | 128 | 0.0017 | 0.8741 | N/A | N/A |
+ | **mono_32d** | 32 | 0.0563 🏆 | 0.6662 | N/A | N/A |
+ | **mono_64d** | 64 | 0.0067 | 0.8730 | N/A | N/A |
+ | **mono_128d** | 128 | 0.0017 | 0.8734 | N/A | N/A |
+ | **aligned_32d** | 32 | 0.0563 | 0.6862 | 0.0332 | 0.1848 |
+ | **aligned_64d** | 64 | 0.0067 | 0.8793 | 0.0095 | 0.1090 |
+ | **aligned_128d** | 128 | 0.0017 | 0.8561 | 0.0047 | 0.0853 |

  ### Key Findings

- - **Best Isotropy:** mono_32d with 0.0518 (more uniform distribution)
- - **Semantic Density:** Average pairwise similarity of 0.8111. Lower values indicate better semantic separation.
- - **Alignment Quality:** No aligned models evaluated in this run.
+ - **Best Isotropy:** mono_32d with 0.0563 (more uniform distribution)
+ - **Semantic Density:** Average pairwise similarity of 0.8057. Lower values indicate better semantic separation.
+ - **Alignment Quality:** Aligned models achieve up to 3.3% R@1 in cross-lingual retrieval.
  - **Recommendation:** 128d aligned for best cross-lingual performance
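The aligned variants ship a per-dimension projection matrix (`ch_*d.projection.npy`) alongside the same `.bin` the monolingual models use (the LFS pointers later in this diff carry identical hashes), so mapping into the shared hub space is a matrix multiply. A sketch, assuming the `.bin` files are fastText binaries (the repo's `fasttext` tag suggests this) and that the projection applies as `v @ W`; the orientation of `W` is undocumented, so verify against a known translation pair:

```python
import fasttext
import numpy as np

model = fasttext.load_model("models/embeddings/aligned/ch_128d.bin")
W = np.load("models/embeddings/aligned/ch_128d.projection.npy")   # shape (128, 128)

v = model.get_word_vector("sengsong")    # a frequent word in this corpus
aligned = v @ W                          # hub space is English per "hub_language": "en"
print(aligned.shape)                     # (128,)
```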
  ---
  ## 6. Morphological Analysis (Experimental)

- > ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
-
  This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.

  ### 6.1 Productivity & Complexity

  | Metric | Value | Interpretation | Recommendation |
  |--------|-------|----------------|----------------|
- | Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
- | Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+ | Productivity Index | **3.506** | High morphological productivity | Reliable analysis |
+ | Idiomaticity Gap | **1.025** | High formulaic/idiomatic content | - |

  ### 6.2 Affix Inventory (Productive Units)

@@ -418,17 +453,17 @@ These are the most productive prefixes and suffixes identified by sampling the v
  #### Productive Prefixes
  | Prefix | Examples |
  |--------|----------|
- | `-ma` | matematika, matai, mansen |
+ | `-ma` | manamerikanu, maisang, manmafa |

  #### Productive Suffixes
  | Suffix | Examples |
  |--------|----------|
- | `-a` | matematika, kana, bånda |
- | `-n` | sanhayan, monhayan, fan |
- | `-on` | organisasion, aplikasion, adelanton |
- | `-an` | sanhayan, monhayan, fan |
- | `-ia` | bibliografia, termania, indonesia |
- | `-ion` | organisasion, aplikasion, atministrasion |
+ | `-a` | sina, finta, nangga |
+ | `-n` | ayman, guguan, direchon |
+ | `-on` | direchon, mision, museon |
+ | `-an` | ayman, guguan, geran |
+ | `-ia` | iglesia, cecilia, diktionaria |
+ | `-ion` | mision, administration, nasion |

  ### 6.3 Bound Stems (Lexical Roots)

@@ -443,10 +478,10 @@ This table shows which prefixes and suffixes most frequently co-occur on the sam

  | Prefix | Suffix | Frequency | Examples |
  |--------|--------|-----------|----------|
- | `-ma` | `-a` | 17 words | matematika, manfa |
- | `-ma` | `-n` | 13 words | mansen, mandarin |
- | `-ma` | `-an` | 6 words | manguayan, man |
- | `-ma` | `-on` | 4 words | matutuhon, madison |
+ | `-ma` | `-a` | 17 words | manmafa, mafana |
+ | `-ma` | `-n` | 13 words | mangginen, manmatutuhon |
+ | `-ma` | `-an` | 6 words | masasangan, maneran |
+ | `-ma` | `-on` | 4 words | manmatutuhon, matutuhon |
  | `-ma` | `-ia` | 1 words | malaysia, maria |

  ### 6.5 Recursive Morpheme Segmentation
@@ -456,25 +491,27 @@ Using **Recursive Hierarchical Substitutability**, we decompose complex words in
  | Word | Suggested Split | Confidence | Stem |
  |------|-----------------|------------|------|
  | makonsidera | **`ma-konsidera`** | 4.5 | `konsidera` |
+ | manmatutuhon | **`ma-nmatutuh-on`** | 3.0 | `nmatutuh` |
  | matutuhon | **`ma-tutuh-on`** | 3.0 | `tutuh` |
- | manguayan | **`ma-nguay-an`** | 3.0 | `nguay` |
+ | masasangan | **`ma-sasang-an`** | 3.0 | `sasang` |
  | pennsylvania | **`pennsylv-an-ia`** | 3.0 | `pennsylv` |
- | manmatutuhon | **`ma-nmatutuh-on`** | 3.0 | `nmatutuh` |
- | machulijan | **`ma-chulij-an`** | 3.0 | `chulij` |
  | manofisinan | **`ma-nofisin-an`** | 3.0 | `nofisin` |
- | masasangan | **`ma-sasang-an`** | 3.0 | `sasang` |
- | matematika | **`ma-tematika`** | 1.5 | `tematika` |
- | organisasion | **`organisas-ion`** | 1.5 | `organisas` |
- | aplikasion | **`aplikas-ion`** | 1.5 | `aplikas` |
- | adelanton | **`adelant-on`** | 1.5 | `adelant` |
- | bibliografia | **`bibliograf-ia`** | 1.5 | `bibliograf` |
+ | manguayan | **`ma-nguay-an`** | 3.0 | `nguay` |
+ | machulijan | **`ma-chulij-an`** | 3.0 | `chulij` |
  | manamerikanu | **`ma-namerikanu`** | 1.5 | `namerikanu` |
- | indonesia | **`indones-ia`** | 1.5 | `indones` |
+ | diktionaria | **`diktionar-ia`** | 1.5 | `diktionar` |
+ | administration | **`administrat-ion`** | 1.5 | `administrat` |
+ | misionarion | **`misionar-ion`** | 1.5 | `misionar` |
+ | mangginen | **`ma-ngginen`** | 1.5 | `ngginen` |
+ | toneladan | **`tonelad-an`** | 1.5 | `tonelad` |
+ | wikimedia | **`wikimed-ia`** | 1.5 | `wikimed` |

  ### 6.6 Linguistic Interpretation

  > **Automated Insight:**
- The language CH appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
+ The language Chamorro shows high morphological productivity. The subword models are significantly more efficient than word models, suggesting a rich system of affixation or compounding.
+
+ > **Note on Idiomaticity:** The high Idiomaticity Gap suggests a large number of frequent multi-word expressions or formulaic sequences that are statistically distinct from their component parts.
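An illustrative reduction of the substitutability idea behind these splits (this is not the pipeline's actual "Recursive Hierarchical Substitutability" implementation): propose a split only when stripping a candidate affix leaves a string that is independently attested.

```python
def propose_splits(word: str, vocab: set[str],
                   prefixes=("ma",), suffixes=("on", "an", "ia", "ion")):
    """Toy substitutability check: split only if the remainder is attested."""
    splits = []
    for p in prefixes:
        if word.startswith(p) and word[len(p):] in vocab:
            splits.append(f"{p}-{word[len(p):]}")
    for s in suffixes:
        if word.endswith(s) and word[:-len(s)] in vocab:
            splits.append(f"{word[:-len(s)]}-{s}")
    return splits

vocab = {"konsidera", "tutuhon"}                 # toy vocabulary
print(propose_splits("makonsidera", vocab))      # ['ma-konsidera']
```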
  ---
  ## 7. Summary & Recommendations
@@ -485,8 +522,8 @@ The language CH appears to be more isolating or has a highly fixed vocabulary. W

  | Component | Recommended | Rationale |
  |-----------|-------------|-----------|
- | Tokenizer | **16k BPE** | Best compression (4.24x) |
- | N-gram | **3-gram** | Lowest perplexity (134) |
+ | Tokenizer | **16k BPE** | Best compression (4.25x) |
+ | N-gram | **5-gram** | Lowest perplexity (102) |
  | Markov | **Context-4** | Highest predictability (97.9%) |
  | Embeddings | **100d** | Balanced semantic capture and isotropy |

@@ -701,4 +738,4 @@ MIT License - Free for academic and commercial use.
  ---
  *Generated by Wikilangs Models Pipeline*

- *Report Date: 2026-01-03 10:06:23*
+ *Report Date: 2026-01-03 20:18:48*
models/embeddings/aligned/ch_128d.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c101c6b6b1879eb87cc2d607ecdf473db825b5ff529674fab69c62998c083fe
+ size 1024531495
models/embeddings/aligned/ch_128d.meta.json ADDED
@@ -0,0 +1 @@
+ {"lang": "ch", "dim": 128, "max_seq_len": 512, "is_aligned": true}
models/embeddings/aligned/ch_128d.projection.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e6551433039f7f6017e26301151ff5ce6ef6bb5af9ee9a3ef6361945fda8609
+ size 65664
models/embeddings/aligned/ch_128d_metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "language": "ch",
+ "dimension": 128,
+ "version": "aligned",
+ "hub_language": "en",
+ "seed_vocab_size": 211,
+ "vocab_size": 511
+ }
models/embeddings/aligned/ch_32d.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e584860b58a7721475d5055a783bbe800416c805994e8a64fd732ee68bba136e
+ size 256139047
models/embeddings/aligned/ch_32d.meta.json ADDED
@@ -0,0 +1 @@
+ {"lang": "ch", "dim": 32, "max_seq_len": 512, "is_aligned": true}
models/embeddings/aligned/ch_32d.projection.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59e52077ca54e97043ac5ba55c6408418b379f87be81a6105ff4d5d066c60c7d
+ size 4224
models/embeddings/aligned/ch_32d_metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "language": "ch",
+ "dimension": 32,
+ "version": "aligned",
+ "hub_language": "en",
+ "seed_vocab_size": 211,
+ "vocab_size": 511
+ }
models/embeddings/aligned/ch_64d.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0507af78a2b20fc4f03d79877e45dd543f3b031b8c6e976e439754e45c8cb54
+ size 512269863
models/embeddings/aligned/ch_64d.meta.json ADDED
@@ -0,0 +1 @@
+ {"lang": "ch", "dim": 64, "max_seq_len": 512, "is_aligned": true}
models/embeddings/aligned/ch_64d.projection.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:981b1cfca3191dda47c139109f2b8557af78006455e1d6c19a9bb02bf8ba66e2
+ size 16512
models/embeddings/aligned/ch_64d_metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "language": "ch",
+ "dimension": 64,
+ "version": "aligned",
+ "hub_language": "en",
+ "seed_vocab_size": 211,
+ "vocab_size": 511
+ }
models/embeddings/monolingual/ch_128d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0df9cc84b978b1e115c6cee71e84a836b069e938ed89ba0990d33fb6e76d469f
- size 1024535659
+ oid sha256:2c101c6b6b1879eb87cc2d607ecdf473db825b5ff529674fab69c62998c083fe
+ size 1024531495
models/embeddings/monolingual/ch_128d_metadata.json CHANGED
@@ -11,5 +11,5 @@
  "encoding_method": "rope",
  "dim": 128
  },
- "vocab_size": 515
+ "vocab_size": 511
  }
models/embeddings/monolingual/ch_32d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5ddf21da0a6cf2c7c20309473f6133a07aa8e95d376e57c53cba79dae0c66d67
- size 256140139
+ oid sha256:e584860b58a7721475d5055a783bbe800416c805994e8a64fd732ee68bba136e
+ size 256139047
models/embeddings/monolingual/ch_32d_metadata.json CHANGED
@@ -11,5 +11,5 @@
  "encoding_method": "rope",
  "dim": 32
  },
- "vocab_size": 515
+ "vocab_size": 511
  }
models/embeddings/monolingual/ch_64d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:765adc5bcc8f79ac3c37ad1e8f88947431aa2cfc778fa7b216fa71374b23bb88
- size 512271979
+ oid sha256:d0507af78a2b20fc4f03d79877e45dd543f3b031b8c6e976e439754e45c8cb54
+ size 512269863
models/embeddings/monolingual/ch_64d_metadata.json CHANGED
@@ -11,5 +11,5 @@
  "encoding_method": "rope",
  "dim": 64
  },
- "vocab_size": 515
+ "vocab_size": 511
  }
models/subword_markov/ch_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f6da9b7f5e8c16da9d614a2d3c0d921e4219da4002fa71393d3c02e1a9f995e1
- size 17474
+ oid sha256:b928eb39259e2cc403c3a4c29796bd796bddd20ead3a2be0219b7f8eb57bab9e
+ size 17095
models/subword_markov/ch_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "subword",
  "language": "ch",
- "unique_contexts": 226,
- "total_transitions": 151624
+ "unique_contexts": 223,
+ "total_transitions": 150847
  }
models/subword_markov/ch_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:264f038954ff6478301dcba734b78924744419ec3051afb803ea4a1d438a05f0
- size 67586
+ oid sha256:a36d550286c9422c2ffb82e99c3b81ca35bb4735060577a35057479732a4a101
+ size 67152
models/subword_markov/ch_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "subword",
  "language": "ch",
- "unique_contexts": 1769,
- "total_transitions": 151064
+ "unique_contexts": 1755,
+ "total_transitions": 150287
  }
models/subword_markov/ch_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:84fdaafcb5e52af0cd9e249ff651ea983749cc925557218121f0d7a85653c797
- size 194392
+ oid sha256:1820466b4f1fdc1254d29c47d12e951b540c0b83db5d9509ae8f3d38d1cb77a9
+ size 186375
models/subword_markov/ch_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "subword",
  "language": "ch",
- "unique_contexts": 9336,
- "total_transitions": 150504
+ "unique_contexts": 9321,
+ "total_transitions": 149727
  }
models/subword_markov/ch_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6d71078f4f1468532db5ba657998c8efdbdd211b304e67e2ca77d88635a671f7
- size 397638
+ oid sha256:2ff258330d3bfed1d6305ca0cd26a119b91efa1ab927068fb949b7f8fdad2308
+ size 397669
models/subword_markov/ch_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "subword",
  "language": "ch",
- "unique_contexts": 26134,
- "total_transitions": 149944
+ "unique_contexts": 26122,
+ "total_transitions": 149167
  }
models/subword_ngram/ch_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:17c2d47a218e898e3bac6e60fc2de2f9adcc53c160b68806900ec23c9eefabc1
- size 12108
+ oid sha256:4fea02781e763f863492e64e5a0b89a54336658c443940efdf0e25624d5b9289
+ size 12093
models/subword_ngram/ch_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "subword",
  "language": "ch",
- "unique_ngrams": 869,
- "total_ngrams": 151624
+ "unique_ngrams": 866,
+ "total_ngrams": 150847
  }
models/subword_ngram/ch_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:baced867ea20a6cb5b359ec97882d2ae2f822b1ffec9ac4b650656fab50d1c45
- size 49528
+ oid sha256:61694ad1cdea12232761738196a224e6751d5f34a7e80423857130c12a055dcc
+ size 49365
models/subword_ngram/ch_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "subword",
  "language": "ch",
- "unique_ngrams": 4543,
- "total_ngrams": 151064
+ "unique_ngrams": 4533,
+ "total_ngrams": 150287
  }
models/subword_ngram/ch_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:73fe928791c8492066a094c970ee0eadde56b39c4b8aba9c2aa2e34db5083a0a
- size 140036
+ oid sha256:be94aefdd940455d1499e98b4b30528f4cfce5c7dcc59fd67270390f3a22367e
+ size 140083
models/subword_ngram/ch_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "subword",
  "language": "ch",
- "unique_ngrams": 12416,
- "total_ngrams": 150504
+ "unique_ngrams": 12412,
+ "total_ngrams": 149727
  }
models/subword_ngram/ch_5gram_subword.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4df26fdf61adb7f5a6527d8e3118bb3475f8a52fee78475a65bcf244787ff96f
+ size 192771
models/subword_ngram/ch_5gram_subword_metadata.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "n": 5,
+ "variant": "subword",
+ "language": "ch",
+ "unique_ngrams": 16015,
+ "total_ngrams": 149167
+ }
models/tokenizer/ch_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:37f30794a09f73e4b923805fe5be00975376a1e1b79fcb80c07ed2173f7a5575
- size 494430
+ oid sha256:18f7348ce3032df9911d6bc9f329581460c2873d248316edbcc20d8a61969e9f
+ size 494573
models/tokenizer/ch_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/ch_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0def86c055e5d9fbbd3798691f18b2b80932bfcd504e36c5288479a36306643b
- size 372290
+ oid sha256:3913743ec90e92e8a695cab7985b8463df151c9ad7feec39e173f38676900edf
+ size 372408
models/tokenizer/ch_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/vocabulary/ch_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a8bd347e6037147c0f3dda9c25ece68c2d3cc7280b3681049fe2758a1b89d662
- size 32241
+ oid sha256:a062c021895911666cbe242314c75f745a4095fb614139301337fcf2b7573f9a
+ size 31951
models/vocabulary/ch_vocabulary_metadata.json CHANGED
@@ -1,16 +1,16 @@
  {
  "language": "ch",
- "vocabulary_size": 1918,
+ "vocabulary_size": 1919,
  "variant": "full",
  "statistics": {
- "type_token_ratio": 0.20998403163257548,
+ "type_token_ratio": 0.21143708457483382,
  "coverage": {
- "top_100": 0.5441031100296555,
- "top_1000": 0.7878488327883811,
- "top_5000": 0.9801155805642157
+ "top_100": 0.5443502177400871,
+ "top_1000": 0.7864619145847659,
+ "top_5000": 0.9795629918251967
  },
- "hapax_count": 3605,
- "hapax_ratio": 0.6527249683143219,
+ "hapax_count": 3616,
+ "hapax_ratio": 0.6532971996386631,
  "total_documents": 560
  }
  }
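A quick consistency check on the updated statistics: the ratios appear to be computed over all word types (stored vocabulary plus hapax legomena), which is an inference from the values rather than documented behaviour.

```python
types = 1919 + 3616     # vocabulary_size + hapax_count (hapaxes not stored in the parquet)
tokens = 22562 + 3616   # README "Total Tokens" + one occurrence per hapax

print(3616 / types)     # 0.65329719... matches hapax_ratio above
print(types / tokens)   # 0.21143708... matches type_token_ratio above
```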
models/word_markov/ch_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ecb23d1eded033dd66aa506eb37c0ec91059dbd54574203440cf551687087a4c
- size 145169
+ oid sha256:813b3c2d204e98ae4e09c21a83183dc3a069fe3878146851b73985079929d47c
+ size 145123
models/word_markov/ch_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "word",
  "language": "ch",
- "unique_contexts": 5466,
- "total_transitions": 25742
+ "unique_contexts": 5477,
+ "total_transitions": 25618
  }
models/word_markov/ch_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dbdad6f3fbcd31c9b068772bab399ee602476f2fe71c046b10324a8b7427536b
- size 250356
+ oid sha256:870b39f5a9bee6b1de740a245fe298478f357659c606cb6d7e48ea57462dbad0
+ size 249550
models/word_markov/ch_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "word",
  "language": "ch",
- "unique_contexts": 14200,
- "total_transitions": 25182
+ "unique_contexts": 14138,
+ "total_transitions": 25058
  }
models/word_markov/ch_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:572c89e60a48ebaada6351a255615f4f80710d2cd65ea6e1704a8dd38f22a20b
- size 326642
+ oid sha256:52eaac2dd01cb1f46cec217fff0c8ce7d622c13b8469c5441cb3e2afe618c86e
+ size 324615
models/word_markov/ch_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "word",
  "language": "ch",
- "unique_contexts": 18551,
- "total_transitions": 24622
+ "unique_contexts": 18443,
+ "total_transitions": 24498
  }
models/word_markov/ch_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:606b33292fc4ebbcf6b2a243c7db6ada185bf0a2de7d093a32aa9618b83732a1
- size 368167
+ oid sha256:5d3784f6611df82d19a91d66477462be624da43aa7eb64431493ae6b0e755baa
+ size 366318
models/word_markov/ch_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "word",
  "language": "ch",
- "unique_contexts": 19968,
- "total_transitions": 24062
+ "unique_contexts": 19853,
+ "total_transitions": 23938
  }