---
license: cc
task_categories:
- multiple-choice
- visual-question-answering
- video-text-to-text
language:
- en
size_categories:
- 1K<n<10K
---

## 🔔 News

🔥[2025-12]: Our MMSI-Video-Bench has been integrated into [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).

🔥[2025-12]: We released our paper, benchmark, and evaluation code.

## 📊 Data Details

All of our data is available on [Hugging Face](https://huggingface.co/datasets/rbler/MMSI-Video-Bench) and includes the following components:

🎥 **Video Data** (`videos.zip`): Contains the video clip file (.mp4) corresponding to each sample. These files are not required by most models.

🎥 **Frame Data** (`frames.zip`): Contains the frames (.jpg) extracted from each sample's video at the **base sampling rate**, a rate chosen so that no key information is lost during sampling. Each frame file is named using the format `{timestamp}_frame_{base_interval}_{image_id}` (e.g., `00:06.00_frame_1.50_4`), where the timestamp, also shown in the **top-left corner** of the frame, indicates its **capture time in the original recording**.

🖼️ **Reference Image Data** (`ref_images.zip`): Contains the auxiliary images referenced in the questions for each sample.

📝 **Text Annotation** (`mmsivideo.json`): This file contains the annotation information for MMSI-Video-Bench. All time references in the questions correspond to capture times in the original recording and **align with** the timestamp flag on each frame. Key fields include:

```
{
  "ref_images": [Paths to auxiliary images referenced in the question, ...],
  "video_list": [
    {
      "path": Video clip file path,
      "start": Timestamp (in seconds) of the first frame of the video clip in the original recording,
      "end": Timestamp (in seconds) of the last frame of the video clip in the original recording,
      "base_fps": Base sampling rate
    },
    ...
  ],
  "frames_list": [[Paths to frames sampled at the base sampling rate, ...], ...],
  "system_prompt": "...",
  "task_prompt": Task-specific prompt,
  "user_prompt": Question text, with
```
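For reference, here is a minimal Python sketch of how these pieces fit together: it loads `mmsivideo.json`, walks one sample's clips and frames, and splits a frame filename into its `{timestamp}_frame_{base_interval}_{image_id}` parts. The helper `parse_frame_name` and the assumption that `mmsivideo.json` is a top-level list of sample dicts are illustrative, not part of the release.

```python
import json
import re

# Frame filenames follow {timestamp}_frame_{base_interval}_{image_id},
# e.g. "00:06.00_frame_1.50_4.jpg".
FRAME_NAME = re.compile(
    r"(?P<timestamp>[\d:.]+)_frame_(?P<base_interval>[\d.]+)_(?P<image_id>\d+)"
)

def parse_frame_name(path):
    """Split a frame filename into timestamp, base interval, and image id."""
    name = path.rsplit("/", 1)[-1]
    if name.endswith(".jpg"):
        name = name[: -len(".jpg")]
    match = FRAME_NAME.fullmatch(name)
    if match is None:
        raise ValueError(f"unexpected frame filename: {path}")
    return match.groupdict()

# Assumption: mmsivideo.json is a top-level list of sample dicts
# with the fields sketched above.
with open("mmsivideo.json") as f:
    samples = json.load(f)

sample = samples[0]
for clip, frames in zip(sample["video_list"], sample["frames_list"]):
    # Each clip spans [start, end] seconds of the original recording,
    # sampled at base_fps.
    print(clip["path"], clip["start"], clip["end"], clip["base_fps"])
    for frame_path in frames:
        info = parse_frame_name(frame_path)
        # The parsed timestamp matches the flag shown in the frame's
        # top-left corner: its capture time in the original recording.
        print(info["timestamp"], info["base_interval"], info["image_id"])
```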