---
dataset_info:
  features:
  - name: audio1
    dtype: string
  - name: audio2
    dtype: string
  - name: model1
    dtype: string
  - name: model2
    dtype: string
  - name: Friendly_weighted_score_audio_1
    dtype: float64
  - name: Friendly_weighted_score_audio_2
    dtype: float64
  - name: Friendly_detailedResults
    list:
    - name: userDetails
      struct:
      - name: age
        dtype: string
      - name: country
        dtype: string
      - name: gender
        dtype: string
      - name: language
        dtype: string
      - name: occupation
        dtype: string
    - name: userScores
      struct:
      - name: audio_on
        dtype: float64
      - name: global
        dtype: float64
    - name: votedFor
      dtype: string
  - name: Natural_weighted_score_audio_1
    dtype: float64
  - name: Natural_weighted_score_audio_2
    dtype: float64
  - name: Natural_detailedResults
    list:
    - name: userDetails
      struct:
      - name: age
        dtype: string
      - name: country
        dtype: string
      - name: gender
        dtype: string
      - name: language
        dtype: string
      - name: occupation
        dtype: string
    - name: userScores
      struct:
      - name: audio_on
        dtype: float64
      - name: global
        dtype: float64
    - name: votedFor
      dtype: string
  splits:
  - name: train
    num_bytes: 4487816
    num_examples: 4269
  download_size: 407878
  dataset_size: 4487816
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-to-speech
pretty_name: Text to Audio Human Preference Benchmark
tags:
- t2a
- text-2-audio
- minimax
- google-gemini-2.5-pro-tts
- elevenlabs
- openai
- openai-gpt-4o-tts
- openai-gpt-4o-mini-tts
---
# Text to Audio Human Benchmark
This dataset contains ~32k human responses collected in under an hour using the Rapidata Python API, which is accessible to anyone and well suited for large-scale evaluation.
The annotators were asked "Which voice is more friendly?" and "Which voice sounds more natural?" respectively.
Check out the Benchmark!
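
Below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and inspecting the fields listed in the schema above. The repository id is a placeholder assumption, not a path stated in this card.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hugging Face path of this dataset.
REPO_ID = "Rapidata/text-2-audio-human-preferences"

# The card's `configs` section defines a single default config with a "train" split.
ds = load_dataset(REPO_ID, split="train")

# Each row compares two generated audio clips (audio1 vs. audio2) from two models.
row = ds[0]
print(row["model1"], "vs", row["model2"])
print("Friendly:", row["Friendly_weighted_score_audio_1"], row["Friendly_weighted_score_audio_2"])
print("Natural: ", row["Natural_weighted_score_audio_1"], row["Natural_weighted_score_audio_2"])

# Per-annotator responses with demographics, scores, and the clip they voted for.
print(row["Friendly_detailedResults"][0])
```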
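
The per-annotator `votedFor` field can be aggregated into simple per-model vote counts. The sketch below assumes `votedFor` holds the values `"audio1"` or `"audio2"`; check the actual values in the data before relying on it.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repo id, as in the previous sketch.
ds = load_dataset("Rapidata/text-2-audio-human-preferences", split="train")

wins = Counter()
for row in ds:
    for result in row["Friendly_detailedResults"]:
        # Assumption: "audio1"/"audio2" map back to model1/model2 of the comparison.
        if result["votedFor"] == "audio1":
            wins[row["model1"]] += 1
        elif result["votedFor"] == "audio2":
            wins[row["model2"]] += 1

for model, votes in wins.most_common():
    print(f"{model}: {votes} friendliness votes")
```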