---
license: cc-by-4.0
---

### **Dataset Card for Emolia**

#### **Dataset Description**

This dataset is an enhanced version of the Emilia dataset, enriched with detailed emotion annotations. The annotations were generated with models from the EmoNet suite to provide deeper insight into the emotional content of speech. This work builds on the research and models described in the blog post "Do They See What We See?".

The annotations comprise 54 scores per sample, covering a wide range of emotional and paralinguistic attributes, plus an emotion caption generated by the BUD-E Whisper model. The goal is to enable more nuanced research and development in emotionally intelligent AI.

**Emolia** also provides a **timbre speaker embedding** for each sample, computed with **Speaker-wavLM-tbr** (Orange). This model targets **global timbral characteristics** and is intended to be **less sensitive to transient prosodic/emotional cues** than typical speaker-ID embeddings. Background: timbre/prosody disentanglement has recently been explored as a way to make speaker modeling more robust, which aligns with this model's goal. ([Hugging Face][1])

---

#### **Dataset Structure & Access**

The dataset includes the original Emilia audio data **plus**:

* **Emotion annotations** (40 categories + 14 attributes + emotion caption).
* **Timbre speaker embeddings** (vector per sample from Speaker-wavLM-tbr). ([Hugging Face][1])
* **Characters per second (CPS)** statistic per sample.

**Format:** WebDataset (sharded `.tar` files) containing audio and JSON/CSV sidecars for annotations, embeddings, and CPS.
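
If the shards follow standard WebDataset conventions, they can be streamed directly with the `webdataset` library. A minimal loading sketch, assuming a hypothetical shard naming pattern and sidecar keys (`json`, `mp3`/`flac`/`wav`); check the repository's file listing for the actual names:

```python
import io
import json

import soundfile as sf    # pip install soundfile
import webdataset as wds  # pip install webdataset

# Hypothetical shard pattern -- substitute the real shard file names from this repo.
shards = "emolia-{000000..000099}.tar"

for sample in wds.WebDataset(shards):
    key = sample["__key__"]
    # The audio extension depends on the shard (mp3/flac/wav here are guesses).
    audio_bytes = sample.get("mp3") or sample.get("flac") or sample.get("wav")
    if audio_bytes is not None:
        audio, sr = sf.read(io.BytesIO(audio_bytes))
    # Assumed layout: a JSON sidecar per sample holding the emotion scores,
    # the timbre embedding, and the CPS value.
    meta = json.loads(sample["json"]) if "json" in sample else None
    break  # inspect the first sample only
```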

> Note: This repository **supersedes** earlier, interim multi-repo distributions. All links to those prior repositories have been removed.

The original `.tar` files for the Emilia dataset are also included. Files belonging to the YODAS subset can be identified by a suffix in their filenames.

---

#### **Speaker Embeddings (Timbre-Focused)**

* **Model:** Orange/**Speaker-wavLM-tbr** (WavLM-based).
* **Intent:** Produce embeddings that **globally represent speaker timbre** and **de-emphasize emotion/prosody**, improving stability when a speaker's emotional state varies. Use cosine similarity on the embeddings for clustering/comparison, analogous to ASV but with a **timbre emphasis** (see the sketch after this list). ([Hugging Face][1])
* **Background:** WavLM is a self-supervised model family designed for robust speech representations and speaker-identity preservation; disentangling timbre from prosody has been shown to improve robustness in several tasks. ([arXiv][2])
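
With two per-sample embedding vectors in hand (e.g., read from the sidecar metadata; the field name used below is a placeholder), comparison reduces to a cosine similarity. A minimal sketch:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two timbre embedding vectors."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical usage -- "timbre_embedding" is an assumed sidecar field name:
# sim = cosine_similarity(meta_a["timbre_embedding"], meta_b["timbre_embedding"])
# Higher similarity suggests the same (or a very similar-sounding) speaker,
# largely independent of the emotion expressed in either clip.
```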

---

#### **Derived Metric: Characters per Second (CPS)**

For each utterance we compute **CPS = total\_characters / audio\_duration\_seconds**.

CPS values are stored alongside the other per-sample annotations in this repository.
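
The computation itself is a one-liner; the sketch below counts every character in the transcript (whether whitespace or punctuation are excluded in the released values is not specified here):

```python
def characters_per_second(transcript: str, duration_seconds: float) -> float:
    """CPS = total_characters / audio_duration_seconds."""
    if duration_seconds <= 0:
        raise ValueError("duration_seconds must be positive")
    return len(transcript) / duration_seconds

# Example: a 42-character transcript over a 3.5-second clip -> 12.0 CPS.
print(characters_per_second("x" * 42, 3.5))
```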

---

#### **Dataset Statistics**

This combined dataset comprises approximately **215,600 hours** of speech, merging the original Emilia dataset with a large portion of the YODAS dataset. The inclusion of YODAS significantly expands both the linguistic diversity and the total volume of data.

The language distribution breaks down as follows:

| Language  | Emilia Duration (hours) | Emilia-YODAS Duration (hours) | Total Duration (hours) |
| :-------- | :---------------------- | :---------------------------- | :--------------------- |
| English   | 46.8k                   | 92.2k                         | 139.0k                 |
| Chinese   | 49.9k                   | 0.3k                          | 50.3k                  |
| German    | 1.6k                    | 5.6k                          | 7.2k                   |
| French    | 1.4k                    | 7.4k                          | 8.8k                   |
| Japanese  | 1.7k                    | 1.1k                          | 2.8k                   |
| Korean    | 0.2k                    | 7.3k                          | 7.5k                   |
| **Total** | **101.7k**              | **113.9k**                    | **215.6k**             |

---

#### **Interpretation of Scores**

The models predict raw scores for 40 emotional categories and 14 attribute dimensions. For the emotional categories, the raw scores are additionally converted into normalized softmax probabilities, indicating the relative likelihood of each emotion.
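
As a concrete illustration of that conversion, a minimal sketch (the raw scores below are random placeholders; the real per-sample annotations contain the 40 category scores):

```python
import numpy as np

def softmax(raw_scores) -> np.ndarray:
    """Normalize raw emotion-category scores into probabilities that sum to 1."""
    z = np.asarray(raw_scores, dtype=np.float64)
    z = z - z.max()          # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Placeholder raw scores for the 40 emotion categories.
raw = np.random.randn(40)
probs = softmax(raw)
print(probs.sum())     # -> 1.0
print(probs.argmax())  # index of the most likely emotion category
```

The attribute dimensions below are read directly on their raw scales: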

| Attribute             | Range    | Description                                              |
| :-------------------- | :------- | :------------------------------------------------------- |
| **Valence**           | -3 to +3 | -3: Ext. Negative, +3: Ext. Positive, 0: Neutral          |
| **Arousal**           | 0 to 4   | 0: Very Calm, 4: Very Excited, 2: Neutral                 |
| **Dominance**         | -3 to +3 | -3: Ext. Submissive, +3: Ext. Dominant, 0: Neutral        |
| **Age**               | 0 to 6   | 0: Infant/Toddler, 2: Teenager, 4: Adult, 6: Very Old     |
| **Gender**            | -2 to +2 | -2: Very Masculine, +2: Very Feminine, 0: Neutral/Unsure  |
| **Humor**             | 0 to 4   | 0: Very Serious, 4: Very Humorous, 2: Neutral             |
| **Detachment**        | 0 to 4   | 0: Very Vulnerable, 4: Very Detached, 2: Neutral          |
| **Confidence**        | 0 to 4   | 0: Very Confident, 4: Very Hesitant, 2: Neutral           |
| **Warmth**            | -2 to +2 | -2: Very Cold, +2: Very Warm, 0: Neutral                  |
| **Expressiveness**    | 0 to 4   | 0: Very Monotone, 4: Very Expressive, 2: Neutral          |
| **Pitch**             | 0 to 4   | 0: Very High-Pitched, 4: Very Low-Pitched, 2: Neutral     |
| **Softness**          | -2 to +2 | -2: Very Harsh, +2: Very Soft, 0: Neutral                 |
| **Authenticity**      | 0 to 4   | 0: Very Artificial, 4: Very Genuine, 2: Neutral           |
| **Recording Quality** | 0 to 4   | 0: Very Low, 4: Very High, 2: Decent                      |
| **Background Noise**  | 0 to 3   | 0: No Noise, 3: Intense Noise                             |

---

#### **Citation**

If you use this dataset, please cite the original Emilia dataset paper as well as the EmoNet-Voice paper.

```bibtex
@inproceedings{emilialarge,
  author={He, Haorui and Shang, Zengqiang and Wang, Chaoren and Li, Xuyuan and Gu, Yicheng and Hua, Hua and Liu, Liwei and Yang, Chen and Li, Jiaqi and Shi, Peiyang and Wang, Yuancheng and Chen, Kai and Zhang, Pengyuan and Wu, Zhizheng},
  title={Emilia: A Large-Scale, Extensive, Multilingual, and Diverse Dataset for Speech Generation},
  booktitle={arXiv:2501.15907},
  year={2025}
}

@article{emonet_voice_2025,
  author={Schuhmann, Christoph and Kaczmarczyk, Robert and Rabby, Gollam and Friedrich, Felix and Kraus, Maurice and Nadi, Kourosh and Nguyen, Huu and Kersting, Kristian and Auer, Sören},
  title={EmoNet-Voice: A Fine-Grained, Expert-Verified Benchmark for Speech Emotion Detection},
  journal={arXiv preprint arXiv:2506.09827},
  year={2025}
}
```

**Additional References (Speaker Embeddings Background):**

* **Orange/Speaker-wavLM-tbr** model card (timbre-focused speaker embeddings). ([Hugging Face][1])
* **WavLM**: self-supervised pre-training for robust speech & speaker identity preservation. ([arXiv][2])
* **Disentangling timbre and prosody embeddings** (Interspeech 2024). ([isca-archive.org][3])