# LLaVA-OneVision-2
Training data for the LLaVA-OneVision-2 multimodal model family, covering large-scale video and spatial reasoning corpora used in mid-training.

Each video-caption sample is a two-turn conversation: a user prompt (one of three phrasing variants, e.g. "Describe what is shown in the video.") requests a Markdown answer with six fixed headings, and the assistant caption follows that structure:

0. Context and Environment
1. Main Subject of the Video
2. Actions and Interactions
3. Motion Detail Description
4. Background Changes
5. Highlight Moments
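Because every caption shares these six headings, the assistant text can be split into per-section fields. A minimal parsing sketch: the `### N. ...` heading style matches the caption format above, while the sample caption text and function name are illustrative assumptions:

```python
import re

def split_caption_sections(answer: str) -> dict[str, str]:
    """Split an assistant caption into {heading: body} pairs using the
    fixed '### N. ...' Markdown headings shared by every caption sample."""
    # re.split with a capturing group returns [preamble, heading1, body1, heading2, body2, ...].
    parts = re.split(r"^###\s+(\d\.\s+.+?)\s*$", answer, flags=re.MULTILINE)
    return {h.strip(): b.strip() for h, b in zip(parts[1::2], parts[2::2])}

caption = (
    "### 0. Context and Environment\nA dimly lit indoor scene.\n"
    "### 1. Main Subject of the Video\nA person at a workbench.\n"
)
print(split_caption_sections(caption))
# {'0. Context and Environment': 'A dimly lit indoor scene.', ...}
```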
The repository layout:

| Subset | Format | Description |
|---|---|---|
| `mid_training_video/60s_rest/` | WebDataset (`.tar`) | 10,809 shards of ~60s video clips |
| `mid_training_video/caption_v0/split_30s.jsonl` | JSONL | Captions for 30-second video clips |
| `mid_training_video/caption_v0/split_60s.jsonl` | JSONL | Captions for 60-second video clips |
| `mid_training_video/caption_v0/split_180s.jsonl` | JSONL | Captions for 180-second video clips |
| `mid_training_video/caption_v0/split_gt10min.jsonl` | JSONL | Captions for >10-minute video clips |
| `spatial/` | WebDataset (`.tar`) | 84 shards of spatial reasoning data (refcoco, visual genome, pointing, 3D, etc.) |
| `mid_training_video/mapping/mapping_{5s,10s,30s,60s,180s,gt10min}.csv` | CSV | Maps each video clip's `dst_path` to its source `youtube_id` and `[start_time, end_time]` window |
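A minimal loading sketch, assuming a local checkout of this repository plus the `webdataset` and `pandas` packages; the shard glob pattern and the mapping CSV's exact column names are assumptions based on the fields listed above:

```python
import glob
import json

import pandas as pd
import webdataset as wds

DATA_ROOT = "."  # hypothetical path to a local checkout of this dataset

# Stream the ~60s video shards without unpacking the .tar files.
shards = sorted(glob.glob(f"{DATA_ROOT}/mid_training_video/60s_rest/*.tar"))
for sample in wds.WebDataset(shards):
    print(sorted(sample.keys()))  # inspect the per-sample keys of one record
    break

# Read the 60-second caption annotations.
with open(f"{DATA_ROOT}/mid_training_video/caption_v0/split_60s.jsonl") as f:
    captions = [json.loads(line) for line in f]
print(len(captions), "caption records")

# Trace a clip back to its source video: each mapping row relates a clip's
# dst_path to its youtube_id and [start_time, end_time] window
# (column names assumed to match those field names).
mapping = pd.read_csv(f"{DATA_ROOT}/mid_training_video/mapping/mapping_60s.csv")
print(mapping.head())
```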
The `viewer_*` configs expose small Parquet samples so the Hugging Face Dataset Viewer can render the data directly in the browser:

- `viewer_caption_30s` — 5 caption samples from 30-second clips
- `viewer_caption_60s` — 5 caption samples from 60-second clips
- `viewer_caption_180s` — 3 caption samples from 180-second clips
- `viewer_caption_gt10min` — 1 caption sample from >10-minute clips
- `viewer_spatial` — 10 spatial-reasoning samples with embedded thumbnail images, mixed across tasks (refcoco, visual genome, pointing, ca1m, osd, crosspoint, erqa, roborefer)

These previews are intended for schema inspection only. For training, use the full `mid_training_video/` and `spatial/` shards.
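For schema inspection from Python, the viewer configs can also be loaded with `datasets`; a sketch, where the repo id is a placeholder for this dataset's actual Hub id and the `train` split name is an assumption:

```python
from datasets import load_dataset

# "<org>/LLaVA-OneVision-2" is a placeholder; substitute the real Hub repo id.
preview = load_dataset("<org>/LLaVA-OneVision-2", name="viewer_caption_30s", split="train")
print(preview[0])  # one caption sample: prompt plus structured assistant answer
```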