---
license: apache-2.0
dataset_info:
  features:
    - name: clean
      dtype: string
    - name: perturbed
      dtype: string
    - name: attack
      dtype: string
  splits:
    - name: train
      num_bytes: 7915264
      num_examples: 218821
    - name: valid
      num_bytes: 1834227
      num_examples: 50572
    - name: test
      num_bytes: 2118775
      num_examples: 57989
  download_size: 3514148
  dataset_size: 11868266
language:
  - en
tags:
  - diagnostic
  - perturbation
  - homoglyphs
pretty_name: Ad-Word
size_categories:
  - 100K<n<1M
---

# Ad-Word Dataset

The Ad-Word dataset contains adversarial word perturbations created using 9 different attack strategies, organized into three classes: phonetic, typo, and visual attacks. The dataset, introduced in "Close or Cloze? Assessing the Robustness of Large Language Models to Adversarial Perturbations via Word Recovery", contains 7,911 words perturbed multiple times with each attack strategy, creating 327,382 pairs of clean and perturbed words organized by attack.

## Dataset Construction

The base vocabulary was constructed from the 10,000 most frequent words in the Trillion Word Corpus, excluding words shorter than four characters. The vocabulary was then augmented with:

- 250 uncommon English words added to the test set
- 100 common English borrowed words that are frequently stylized with accents (50 in train, 25 in test, 25 in validation)

These additions were sampled from the Wikitext corpus (wikitext-103-v1) to help bound the performance of models that ignore non-ASCII characters or use limited dictionaries.
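As a minimal sketch (illustrative only, not the authors' construction code), the base-vocabulary filter amounts to:

```python
def build_vocabulary(ranked_words, top_k=10_000, min_length=4):
    # Keep the top_k most frequent words, dropping words shorter than min_length.
    return [w for w in ranked_words[:top_k] if len(w) >= min_length]

# Tiny stand-in list; the real source is the Trillion Word Corpus frequency ranking.
print(build_vocabulary(["the", "of", "about", "word", "cat"]))
# -> ['about', 'word']
```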

## Attack Strategies

The perturbations are organized into three classes according to the kind of similarity each attack is meant to preserve. For instance, visual attacks substitute homoglyphs that look like the original characters but do not necessarily preserve phonetic similarity when the word is read aloud. The strategies in each class are listed below; a filtering sketch follows the list.

1. **Phonetic Attacks**
   - ANTHRO Phonetic [Le et al., 2022]
   - PhoneE (introduced in Moffett and Dhingra, 2025)
   - Zeroé Phonetic [Eger and Benz, 2020]
2. **Typo Attacks**
   - ANTHRO Typo [Le et al., 2022]
   - Zeroé Noise [Eger and Benz, 2020]
   - Zeroé Typo [Eger and Benz, 2020]
3. **Visual Attacks**
   - DCES [Eger et al., 2019]
   - ICES [Eger et al., 2019]
   - LEGIT [Seth et al., 2023]
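The `attack` column stores the lowercase strategy names shown in the table below, so a single strategy or a whole class can be selected with a filter. A minimal sketch using the standard `datasets` API:

```python
from datasets import load_dataset

adword = load_dataset("lmoffett/ad-word")

# Keep only rows produced by one strategy...
phonee_only = adword["train"].filter(lambda row: row["attack"] == "phonee")

# ...or by a whole attack class, by listing its member strategies.
visual = {"dces", "ices", "legit"}
visual_only = adword["train"].filter(lambda row: row["attack"] in visual)

print(len(phonee_only), len(visual_only))
```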

## Per-Attack Unique Clean-Perturbed Pairs

| Attack Class | Attack Name | Train | Valid | Test |
|---|---|---:|---:|---:|
| phonetic | anthro_phonetic | 17,649 | 4,098 | 4,787 |
| phonetic | phonee | 24,339 | 5,551 | 6,439 |
| phonetic | zeroe_phonetic | 28,562 | 6,514 | 7,468 |
| typo | anthro_typo | 15,437 | 3,587 | 4,137 |
| typo | zeroe_noise | 27,079 | 6,233 | 7,173 |
| typo | zeroe_typo | 19,912 | 4,721 | 5,314 |
| visual | dces | 28,722 | 6,625 | 7,560 |
| visual | ices | 29,324 | 6,762 | 7,713 |
| visual | legit | 27,796 | 6,481 | 7,398 |
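Since the dataset also loads into pandas, counts like those above can be reproduced with a groupby. A minimal sketch:

```python
from datasets import load_dataset

adword = load_dataset("lmoffett/ad-word")

# Count unique (clean, perturbed) pairs per attack within each split.
for split in ("train", "valid", "test"):
    df = adword[split].to_pandas()
    unique_pairs = df.drop_duplicates(subset=["clean", "perturbed", "attack"])
    print(split)
    print(unique_pairs.groupby("attack").size())
```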

## Dataset Structure

The dataset contains the following columns:

- `clean`: the original word
- `perturbed`: the perturbed version of the word
- `attack`: the attack strategy used to perturb the word

The dataset is split into train/valid/test splits, with each split containing an independent set of clean words perturbed by all attack strategies. There are 5,131 unique clean words in the train split, 1,214 in the valid split, and 1,584 in the test split.
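A quick way to confirm the split sizes, columns, and unique-word counts, using only the standard `datasets` API:

```python
from datasets import load_dataset

adword = load_dataset("lmoffett/ad-word")

# Print row counts, column names, and unique clean words for each split.
for split in ("train", "valid", "test"):
    ds = adword[split]
    print(split, ds.num_rows, ds.column_names)
    print("  unique clean words:", len(set(ds["clean"])))
```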

## Usage Example

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
import random

# Load the Ad-Word dataset from the Hugging Face Hub
adword = load_dataset("lmoffett/ad-word")

# Any causal LM works here; a small model keeps the example fast
model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Draw a few random clean/perturbed pairs from the test split
samples = random.sample(list(adword['test']), 3)

# Test recovery: ask the model to reconstruct the clean word
for sample in samples:
    # This is not a tuned prompt, just a simple example
    prompt = f"""This word has a typo in it. Can you figure out what the original word was?
Word with typo: "{sample['perturbed']}"
Oh, "{sample['perturbed']}" is a misspelling of the word \""""

    inputs = tokenizer(prompt, return_tensors="pt", max_length=512, truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=5)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    print('-' * 60)
    print(f"{sample['clean']} -> {sample['perturbed']}")
    print(f"{response}")
```

## References

- [Le et al., 2022] Le, Thai, et al. "Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense." arXiv preprint arXiv:2203.10346 (2022).
- [Eger and Benz, 2020] Eger, Steffen, and Yannik Benz. "From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks." Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. 2020.
- [Eger et al., 2019] Eger, Steffen, et al. "Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems." arXiv preprint arXiv:1903.11508 (2019).
- [Seth et al., 2023] Seth, Dev, et al. "Learning the Legibility of Visual Text Perturbations." arXiv preprint arXiv:2303.05077 (2023).


## Version History

### v1.0 (January 2025)

- Initial release of the Ad-Word dataset
- Set of perturbations from 9 attack strategies
- Train/valid/test splits with unique clean-perturbed pairs

## License

This dataset is licensed under Apache 2.0.

## Citation

If you use this dataset in your research, please cite the original paper:

```bibtex
@inproceedings{moffett-dhingra-2025-close,
    title = "Close or Cloze? Assessing the Robustness of Large Language Models to Adversarial Perturbations via Word Recovery",
    author = "Moffett, Luke and Dhingra, Bhuwan",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    year = "2025",
    publisher = "Association for Computational Linguistics",
    pages = "6999--7019"
}
```

## Limitations

There is no definitive measurement of the effectiveness of these attacks. The original paper provides human baselines, but many factors affect the recoverability of perturbed words. When applying these attacks to new problems, researchers should ensure that the attacks align with their expectations. For instance, the ANTHRO attacks are sourced from public internet corpora; in some cases there are very few attacks for a given word, and in many cases those attacks only involve casing changes. Such casing-only pairs can be flagged directly, as sketched below.
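A minimal sketch, using the standard `datasets` filter API, for counting pairs that differ only in letter casing:

```python
from datasets import load_dataset

adword = load_dataset("lmoffett/ad-word")

def casing_only(row):
    # A pair counts as casing-only if the words differ, but not once lowercased.
    return row["clean"] != row["perturbed"] and row["clean"].lower() == row["perturbed"].lower()

case_only = adword["train"].filter(casing_only)
print(f"{len(case_only)} casing-only pairs in the train split")
```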