---
license: mit
task_categories:
- image-text-to-text
tags:
- vision-language
- spatial-reasoning
- benchmark
---
# Jigsaw-Puzzles Dataset

Jigsaw-Puzzles is a novel benchmark of 1,100 carefully curated real-world images with high spatial complexity, designed to rigorously evaluate the spatial perception, structural understanding, and reasoning capabilities of vision-language models (VLMs). The dataset minimizes reliance on domain-specific knowledge in order to isolate and assess general spatial reasoning, positioning itself as a challenging and diagnostic benchmark for advancing spatial reasoning research in VLMs.
- Paper: [Jigsaw-Puzzles: From Seeing to Understanding to Reasoning in Vision-Language Models](https://arxiv.org/abs/2505.20728)
- Project Page: https://zesen01.github.io/jigsaw-puzzles
## Citation

```bibtex
@article{lyu2025jigsaw,
  title={Jigsaw-Puzzles: From Seeing to Understanding to Reasoning in Vision-Language Models},
  author={Lyu, Zesen and Zhang, Dandan and Ye, Wei and Li, Fangdi and Jiang, Zhihang and Yang, Yao},
  journal={arXiv preprint arXiv:2505.20728},
  year={2025}
}
```