---
license: cc-by-nc-4.0
task_categories:
- question-answering
- multiple-choice
- visual-question-answering
- image-text-to-text
language:
- en
pretty_name: OST-Bench
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: scan_id
    dtype: string
  - name: turn_id
    dtype: int64
  - name: type
    dtype: string
  - name: new_observations
    sequence: string
  - name: origin_question
    dtype: string
  - name: option
    sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_examples: 10000
configs:
- config_name: default
  data_files:
  - split: test
    path: OST_bench.json
---
This page contains the data for the paper "OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding."
Homepage | Paper | Code | arXiv
## Introduction
Download OST-Bench for evaluation only:

```bash
huggingface-cli download rbler/OST-Bench --include "OST_bench.json" "img.zip" --repo-type dataset
```
Download OST-Bench for both training and evaluation:

```bash
huggingface-cli download rbler/OST-Bench --repo-type dataset
```
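
If you prefer to download from Python, here is a minimal equivalent sketch using `huggingface_hub.snapshot_download` (the local target directory is our assumption, not part of the official instructions):

```python
from huggingface_hub import snapshot_download

# Evaluation-only download: fetch just the test annotations and images.
snapshot_download(
    repo_id="rbler/OST-Bench",
    repo_type="dataset",
    allow_patterns=["OST_bench.json", "img.zip"],
    local_dir="OST-Bench",  # hypothetical target directory
)
```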
## Dataset Description
The `imgs`/`img_train` zip files contain image data for 1.4k/7k scenes, respectively. Each scene has its own subfolder, which stores the observations captured by the agent while exploring that scene.
`OST_bench.json`/`OST_bench_train.json` contain 10k/50k data samples, respectively. Each sample represents one round of Q&A (question and answer) and includes the new observations for that round. Each sample is a dictionary with the following structure:
```
{
    "scan_id" (str): Unique identifier for the scene scan,
    "system_prompt" (str): Shared context/prompt for the multi-turn conversation,
    "turn_id" (int): Index of the current turn in the dialogue,
    "type" (str): Question subtype/category,
    "origin_question" (str): Original question text,
    "answer" (str): Ground-truth answer,
    "option" (list[str]): Multiple-choice options,
    "new_observations" (list[str]): Relative paths to new observation images (within `imgs` dir),
    "user_message" (str): Formatted input prompt for the model
}
```
Samples with the same `scan_id` belong to the same multi-turn conversation group. During model evaluation, each multi-turn conversation group is processed as a unit: the shared `system_prompt` is provided once, and new observations along with questions are fed in sequentially in order of `turn_id`.
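
For illustration, here is a minimal sketch of this grouping and turn-ordered iteration. The message-assembly details and the commented-out model call are placeholders; refer to the official evaluation code for the actual protocol:

```python
import json
from collections import defaultdict

with open("OST_bench.json") as f:
    samples = json.load(f)

# Group samples into multi-turn conversation groups by scan_id.
conversations = defaultdict(list)
for sample in samples:
    conversations[sample["scan_id"]].append(sample)

for scan_id, turns in conversations.items():
    turns.sort(key=lambda s: s["turn_id"])
    # The system prompt is shared across the whole conversation.
    messages = [{"role": "system", "content": turns[0]["system_prompt"]}]
    for turn in turns:
        # Each turn contributes new observation images plus a formatted question.
        image_paths = [f"imgs/{p}" for p in turn["new_observations"]]
        messages.append({"role": "user", "content": turn["user_message"]})
        # model_answer = your_mllm(messages, images=image_paths)  # hypothetical model call
        # messages.append({"role": "assistant", "content": model_answer})
```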
## Evaluation Instructions
Please refer to our evaluation code for details.