---

pretty_name: Optimized 478-Point 3D Facial Landmark Dataset
language: en
license:
  - apache-2.0
tags:
  - computer-vision
  - affective-computing
  - facial-landmarks
  - mediapipe
  - emotion-recognition
  - feature-extraction
  - video-analysis
  - optimized
source_datasets:
  - thnhthngchu/video-emotion
task_categories:
  - image-classification
task_ids:
  - multi-class-image-classification
  - face-detection
citation:
  - "@misc{VideoEmotionDataset,

    title={Video Emotion},

    author={thnhthngchu},

    year={2020},

    publisher={Kaggle},

    url={https://www.kaggle.com/datasets/thnhthngchu/video-emotion}

    }"
  - "@misc{MediaPipe,

    title={MediaPipe},

    author={Google Inc.},

    year={2020},

    url={https://mediapipe.dev/}

    }"
---


# Dataset Card for 478-Point Normalized 3D Facial Landmark Dataset

## Dataset Description

This dataset provides **pre-extracted, normalized 3D facial landmark features** derived from the **Video Emotion** dataset. It is optimized for efficient training of **emotion recognition** and **facial analysis models**, bypassing the need to process large raw video files.

**License:** The extracted feature data in this Parquet file is licensed under **Apache 2.0**. Note that the original source video files may have separate licensing terms.

Each row in the Parquet file represents a single video frame and contains the corresponding emotion label together with 1,434 features: the x, y, and z coordinates of the 478 distinct facial landmarks produced by the MediaPipe Face Landmarker model.

---

## Data Fields and Structure

The data is provided in a single Parquet file, typically named **`emotion_landmark_dataset.parquet`**.

| Column Name      | Data Type          | Description                                                                                                       |
| :--------------- | :----------------- | :---------------------------------------------------------------------------------------------------------------- |
| `video_filename` | String             | The identifier of the original video file from which the frame was extracted.                                     |
| `frame_num`      | Integer            | The sequential frame index within the original video file.                                                        |
| `emotion`        | String/Categorical | The ground-truth emotion label of the source **clip**, applied to every frame of that clip. **Classes include: Angry, Disgust, Fear, Happy, Neutral, Sad.** |
| `x_0` to `x_477` | Float              | The normalized X coordinate (horizontal position) for each of the 478 landmarks (0.0 to 1.0).                     |
| `y_0` to `y_477` | Float              | The normalized Y coordinate (vertical position) for each of the 478 landmarks (0.0 to 1.0).                       |
| `z_0` to `z_477` | Float              | The normalized Z coordinate (depth, relative to the face center) for each of the 478 landmarks.                   |

**Note on Coordinates:** Since the coordinates are **normalized** (0.0 to 1.0), the x and y values must be multiplied by the pixel width and height of the original frame, respectively, to visualize them accurately.
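
The snippet below is a minimal sketch of that step: it loads the Parquet file with pandas, reshapes one row into a `(478, 3)` landmark array, and rescales x and y to pixel coordinates. The frame width and height used here are placeholder values; the real dimensions depend on the original video.

```python
import numpy as np
import pandas as pd

# Load the pre-extracted landmark features.
df = pd.read_parquet("emotion_landmark_dataset.parquet")

# Rebuild a (478, 3) array of (x, y, z) coordinates for a single frame.
row = df.iloc[0]
coords = np.stack(
    [row[[f"{axis}_{i}" for i in range(478)]].to_numpy(dtype=float)
     for axis in ("x", "y", "z")],
    axis=1,
)

# Denormalize x and y to pixel positions. FRAME_W / FRAME_H are placeholders;
# substitute the width and height of the original video frame.
FRAME_W, FRAME_H = 1280, 720
pixels_x = coords[:, 0] * FRAME_W
pixels_y = coords[:, 1] * FRAME_H
print(row["emotion"], pixels_x[:3], pixels_y[:3])
```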

---

## Data Collection and Processing

### Source Video Details (Video Emotion Dataset)

- **Source:** [Video Emotion](https://www.kaggle.com/datasets/thnhthngchu/video-emotion) (Kaggle User: thnhthngchu)
- **Domain:** Facial expressions and affective computing, covering a range of scenarios.
- **Labels:** Videos were originally labeled with clip-level emotional categories.
- **License of Original Data:** Users must refer to the licensing terms specified by the original source dataset on Kaggle.

### Feature Extraction Methodology

The features were extracted using the **MediaPipe Face Landmarker** model; a minimal extraction sketch is shown after the steps below.

1.  **Frame Extraction:** Each video file was processed frame-by-frame.
2.  **Landmark Detection:** For each frame, the 478 facial landmarks were detected.
3.  **Normalization:** All coordinates (x, y, z) are normalized to the range [0.0, 1.0] relative to the bounding box of the face or the original frame dimensions.
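
The exact extraction script is not part of this repository, but the sketch below illustrates how such a pipeline can be assembled with OpenCV and the MediaPipe Tasks Face Landmarker. The model file path, input video name, placeholder label, and options such as `num_faces=1` are illustrative assumptions, not necessarily the settings used to produce this dataset.

```python
import cv2
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

# Assumed setup: the .task model file is downloaded separately from MediaPipe.
options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

rows = []
clip_label = "Neutral"  # placeholder; in practice taken from the source clip's label
cap = cv2.VideoCapture("example_video.mp4")  # hypothetical input video
frame_num = 0
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    # MediaPipe expects an SRGB image.
    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB,
                        data=cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    result = landmarker.detect(mp_image)
    if result.face_landmarks:  # 478 normalized landmarks when a face is found
        lm = result.face_landmarks[0]
        row = {"video_filename": "example_video.mp4",
               "frame_num": frame_num, "emotion": clip_label}
        row.update({f"x_{i}": p.x for i, p in enumerate(lm)})
        row.update({f"y_{i}": p.y for i, p in enumerate(lm)})
        row.update({f"z_{i}": p.z for i, p in enumerate(lm)})
        rows.append(row)
    frame_num += 1
cap.release()
```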

---

## Usage Example and Visualization

To verify that the coordinates were extracted correctly and to explore the data visually, refer to the provided **`optimized-3d-facial-landmark-dataset-usage.ipynb`** notebook in the repository.

This Jupyter Notebook contains a runnable Python example that **loads random video frames**, correctly denormalizes the coordinates using the frame's dimensions, and plots the 478 landmarks on the face.

![Visualization](images/results.png)

---

## Potential Applications

- **Transfer Learning:** Use the landmarks as input features for lightweight classifiers (e.g., LSTMs, simple MLPs) for emotion recognition; a minimal training sketch follows this list.
- **Biometrics:** Advanced facial tracking and identity verification research.
- **Data Augmentation:** Analyze feature distribution for generating synthetic training data.
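
As an illustration of the transfer-learning use case, the following sketch trains a simple scikit-learn MLP on the landmark columns and splits by `video_filename` so that frames from the same clip never appear in both the train and test sets. The file name, hyperparameters, and split ratio are illustrative assumptions.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

df = pd.read_parquet("emotion_landmark_dataset.parquet")
feature_cols = [f"{axis}_{i}" for axis in ("x", "y", "z") for i in range(478)]
X = df[feature_cols].to_numpy()
y = df["emotion"].to_numpy()

# Group the split by source video so all frames of one clip stay on one side.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=df["video_filename"]))

clf = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=100, random_state=0)
clf.fit(X[train_idx], y[train_idx])
print(classification_report(y[test_idx], clf.predict(X[test_idx])))
```

Splitting at the frame level instead would leak near-identical neighbouring frames of the same clip into the test set and inflate the reported accuracy.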

---

## Citation

If you use this dataset in your research or project, please use the citation below and acknowledge the original source data.

- **Original Data Source:** [Video Emotion](https://www.kaggle.com/datasets/thnhthngchu/video-emotion) (Kaggle User: thnhthngchu)
- **Extraction Framework:** Google Inc. (2020). MediaPipe. <https://mediapipe.dev/>

- **This Dataset:**

```bibtex
@misc{pasindu_sewmuthu_abewickrama_singhe_2025,
  author    = {Pasindu Sewmuthu Abewickrama Singhe},
  title     = {Optimized_Video_Facial_Landmarks (Revision 7334b7d)},
  year      = {2025},
  url       = {https://huggingface.co/datasets/PSewmuthu/Optimized_Video_Facial_Landmarks},
  doi       = {10.57967/hf/6765},
  publisher = {Hugging Face}
}
```