[Dataset preview: 8.55k rows with two columns, `image` (width 32 px) and `label` (class label, 298 classes; first classes: 0 abandon, 1 abdomen, 2 ability).]
Dataset Description
"EN-CLDI" (en-cldi) contains 1690 classes, which contains images paired with verbs and adjectives. Each word within this set is unique and paired with at least 22 images.
It is the English subset of CLDI (cross-lingual dictionary induction) dataset from (Hartmann and Søgaard, 2018).
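As a quick look at this structure, the class names can be listed directly. A minimal sketch, assuming the `label` column is a `ClassLabel` feature as the preview suggests (the loading call is repeated in the next section):

```python
from datasets import load_dataset

# Load the English CLDI subset (train split)
ds = load_dataset("jaagli/en-cldi", split="train")

# A ClassLabel feature exposes its class names directly;
# each name is one of the unique words described above
words = ds.features["label"].names
print(len(words))   # number of word classes
print(words[:3])    # e.g. ['abandon', 'abdomen', 'ability']
```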
How to Use
```python
from datasets import load_dataset

# Load the dataset (train split)
common_words = load_dataset("jaagli/en-cldi", split="train")
```
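To verify the coverage claim above (at least 22 images per word) and to map integer labels back to words, here is a sketch reusing the `common_words` object from the snippet:

```python
from collections import Counter

# Tally how many images each class label has
counts = Counter(common_words["label"])
print(min(counts.values()))  # per the description, this should be >= 22

# Map an integer label back to its word with ClassLabel.int2str
sample = common_words[0]
print(common_words.features["label"].int2str(sample["label"]))  # e.g. 'abandon'
```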
Citation
```bibtex
@article{li-etal-2024-vision-language,
title = "Do Vision and Language Models Share Concepts? A Vector Space Alignment Study",
author = "Li, Jiaang and
Kementchedjhieva, Yova and
Fierro, Constanza and
S{\o}gaard, Anders",
journal = "Transactions of the Association for Computational Linguistics",
volume = "12",
year = "2024",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2024.tacl-1.68/",
doi = "10.1162/tacl_a_00698",
pages = "1232--1249",
abstract = "Large-scale pretrained language models (LMs) are said to {\textquotedblleft}lack the ability to connect utterances to the world{\textquotedblright} (Bender and Koller, 2020), because they do not have {\textquotedblleft}mental models of the world{\textquotedblright} (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT, and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy, and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).1"
}
```

```bibtex
@inproceedings{hartmann-sogaard-2018-limitations,
title = "Limitations of Cross-Lingual Learning from Image Search",
author = "Hartmann, Mareike and
S{\o}gaard, Anders",
editor = "Augenstein, Isabelle and
Cao, Kris and
He, He and
Hill, Felix and
Gella, Spandana and
Kiros, Jamie and
Mei, Hongyuan and
Misra, Dipendra",
booktitle = "Proceedings of the Third Workshop on Representation Learning for {NLP}",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W18-3021",
doi = "10.18653/v1/W18-3021",
pages = "159--163",
abstract = "Cross-lingual representation learning is an important step in making NLP scale to all the world{'}s languages. Previous work on bilingual lexicon induction suggests that it is possible to learn cross-lingual representations of words based on similarities between images associated with these words. However, that work focused (almost exclusively) on the translation of nouns only. Here, we investigate whether the meaning of other parts-of-speech (POS), in particular adjectives and verbs, can be learned in the same way. Our experiments across five language pairs indicate that previous work does not scale to the problem of learning cross-lingual representations beyond simple nouns.",
}
```