---
license: apache-2.0
dataset_info:
  features:
  - name: clean
    dtype: string
  - name: perturbed
    dtype: string
  - name: attack
    dtype: string
  splits:
  - name: train
    num_bytes: 7915264
    num_examples: 218821
  - name: valid
    num_bytes: 1834227
    num_examples: 50572
  - name: test
    num_bytes: 2118775
    num_examples: 57989
  download_size: 3514148
  dataset_size: 11868266
language:
- en
tags:
- diagnostic
- perturbation
- homoglyphs
pretty_name: Ad-Word
size_categories:
- 100K<n<1M
---

# Ad-Word

Ad-Word is a diagnostic dataset of clean/perturbed English word pairs produced by 9 adversarial attack strategies. Each example pairs a clean word with its perturbed form and records the attack strategy that generated it.

## Example Usage

A minimal sketch of loading the dataset and inspecting one clean/perturbed pair (the hub ID below is an assumption; substitute the actual dataset path):

```python
from datasets import load_dataset

# Dataset ID is an assumption; replace with the actual hub path.
dataset = load_dataset("lmoffett/ad-word")

sample = dataset["train"][0]
print(f"{sample['clean']} -> {sample['perturbed']}")
print(f"{sample['attack']}")
```

## References

- [Le et al., 2022] Le, Thai, et al. "Perturbations in the wild: Leveraging human-written text perturbations for realistic adversarial attack and defense." arXiv preprint arXiv:2203.10346 (2022).
- [Eger and Benz, 2020] Eger, Steffen, and Yannik Benz. "From hero to zéroe: A benchmark of low-level adversarial attacks." Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. 2020.
- [Eger et al., 2019] Eger, Steffen, et al. "Text processing like humans do: Visually attacking and shielding NLP systems." arXiv preprint arXiv:1903.11508 (2019).
- [Seth et al., 2023] Seth, Dev, et al. "Learning the Legibility of Visual Text Perturbations." arXiv preprint arXiv:2303.05077 (2023).

## Related Resources

- Cloze or Close Code Repository (including PhoneE): [GitHub](https://github.com/lmoffett/cloze-or-close)
- LEGIT Dataset: [HuggingFace](https://huggingface.co/datasets/dvsth/LEGIT)
- Zeroé Repository: [GitHub](https://github.com/yannikbenz/zeroe)
- ANTHRO Repository: [GitHub](https://github.com/lethaiq/perturbations-in-the-wild)

## Version History

### v1.0 (January 2025)

- Initial release of the Ad-Word dataset
- Perturbations generated by 9 attack strategies
- Train/valid/test splits with unique clean-perturbed pairs

## License

This dataset is licensed under Apache 2.0.
## Citation

If you use this dataset in your research, please cite the original paper:

```bibtex
@inproceedings{moffett-dhingra-2025-close,
    title = "Close or Cloze? Assessing the Robustness of Large Language Models to Adversarial Perturbations via Word Recovery",
    author = "Moffett, Luke and Dhingra, Bhuwan",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    year = "2025",
    publisher = "Association for Computational Linguistics",
    pages = "6999--7019"
}
```

## Limitations

There is no definitive measurement of the effectiveness of these attacks. The original paper provides human baselines, but many factors affect the recoverability of perturbed words. When applying these attacks to new problems, researchers should verify that the attacks align with their expectations. For instance, the ANTHRO attacks are sourced from public internet corpora; in some cases there are very few attacks for a given word, and in many cases those attacks only involve casing changes.
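One such sanity check can be sketched directly from the dataset's `clean`/`perturbed` columns: flagging pairs whose perturbation differs from the clean word only by casing. This is a minimal sketch; the sample rows below are hypothetical stand-ins for real dataset rows.

```python
# Sketch: detect perturbations that differ from the clean word only by casing.
def casing_only(clean: str, perturbed: str) -> bool:
    return clean != perturbed and clean.lower() == perturbed.lower()

# Hypothetical rows following the dataset's schema (clean, perturbed, attack);
# with the real dataset these would come from e.g. the train split.
rows = [
    {"clean": "hello", "perturbed": "HeLLo", "attack": "anthro"},
    {"clean": "hello", "perturbed": "h3llo", "attack": "zeroe"},
]

trivial = [r for r in rows if casing_only(r["clean"], r["perturbed"])]
print(len(trivial))  # 1
```

Filtering out such casing-only pairs (or counting them per attack strategy) gives a quick sense of how substantive a given attack's perturbations are before drawing conclusions from recovery experiments.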