exp_001_base_vanilla

This model is a fine-tuned version of openai/whisper-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0281
  • WER: 21.2201
  • Orthographic WER: 22.5616
  • CER: 8.5150
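WER and CER are edit-distance-based error rates over words and characters respectively, reported here on a 0–100 scale. A minimal pure-Python sketch of how such metrics are computed (assuming simple whitespace tokenization; this is illustrative, not the exact evaluation code used for this model):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (one-row DP)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # deletion
                        dp[j - 1] + 1,                       # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))   # substitution
            prev = cur
    return dp[n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage, matching the 0-100 scale above."""
    ref_words = reference.split()
    return 100.0 * edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate as a percentage."""
    return 100.0 * edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, one substituted word out of three gives a WER of 33.33. Orthographic WER is the same computation applied before text normalization (casing, punctuation), which is why it is slightly higher than the normalized WER.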

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • training_steps: 10000
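The hyperparameters above map onto a `Seq2SeqTrainingArguments` configuration roughly as follows. This is a hedged sketch, not the actual training script: the `output_dir` and evaluation cadence are assumptions, and the model/dataset wiring is omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training configuration implied by the card.
# per_device_train_batch_size=32 with gradient_accumulation_steps=2
# yields the effective total train batch size of 64.
training_args = Seq2SeqTrainingArguments(
    output_dir="exp_001_base_vanilla",   # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=10_000,
    eval_strategy="steps",
    eval_steps=500,                      # matches the 500-step cadence in the results table
    predict_with_generate=True,
)
```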

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER     | WER Ortho | CER     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|:---------:|:-------:|
| 2.6093        | 0.6410  | 500   | 0.5502          | 55.5654 | 59.1115   | 17.0550 |
| 1.8138        | 1.2821  | 1000  | 0.3793          | 44.3549 | 47.6208   | 14.6009 |
| 1.3138        | 1.9231  | 1500  | 0.2431          | 41.7559 | 44.2339   | 16.5479 |
| 0.7587        | 2.5641  | 2000  | 0.1512          | 33.0562 | 35.4943   | 11.5486 |
| 0.3661        | 3.2051  | 2500  | 0.1045          | 29.6133 | 31.8373   | 10.6197 |
| 0.3351        | 3.8462  | 3000  | 0.0754          | 30.1045 | 32.1406   | 11.4877 |
| 0.1857        | 4.4872  | 3500  | 0.0664          | 28.7568 | 30.6155   | 10.9869 |
| 0.0939        | 5.1282  | 4000  | 0.0549          | 28.3327 | 30.1542   | 10.8709 |
| 0.1039        | 5.7692  | 4500  | 0.0493          | 28.3159 | 30.0586   | 10.8963 |
| 0.0600        | 6.4103  | 5000  | 0.0437          | 26.5819 | 28.2966   | 10.3570 |
| 0.0486        | 7.0513  | 5500  | 0.0432          | 26.7708 | 28.4628   | 10.5080 |
| 0.0351        | 7.6923  | 6000  | 0.0388          | 25.8219 | 27.5111   | 10.2674 |
| 0.0212        | 8.3333  | 6500  | 0.0369          | 24.9276 | 26.6301   | 9.8892  |
| 0.0189        | 8.9744  | 7000  | 0.0340          | 25.5574 | 27.0789   | 10.0536 |
| 0.0117        | 9.6154  | 7500  | 0.0342          | 23.9829 | 25.5995   | 9.6716  |
| 0.0054        | 10.2564 | 8000  | 0.0310          | 23.1473 | 24.5439   | 9.1223  |
| 0.0038        | 10.8974 | 8500  | 0.0290          | 22.4126 | 23.8457   | 9.0356  |
| 0.0011        | 11.5385 | 9000  | 0.0285          | 21.6904 | 23.0063   | 8.6243  |
| 0.0005        | 12.1795 | 9500  | 0.0283          | 21.4847 | 22.8525   | 8.5869  |
| 0.0004        | 12.8205 | 10000 | 0.0281          | 21.2201 | 22.5616   | 8.5150  |
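The learning rate behind these results follows the linear schedule with warmup listed under the hyperparameters: it ramps from 0 to the base rate over the first 1000 steps, then decays linearly to 0 at step 10000. A minimal sketch of that standard schedule (illustrative, not the exact scheduler implementation):

```python
def linear_schedule_with_warmup(step, base_lr=1e-4, warmup_steps=1000, total_steps=10_000):
    """Learning rate at a given optimizer step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Decay from base_lr at the end of warmup down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```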

Framework versions

  • Transformers 5.0.0
  • Pytorch 2.10.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.22.2
Model size

  • 72.6M parameters
  • Tensor type: F32 (Safetensors)

Model tree

  • ihanif/exp_001_base_vanilla, finetuned from openai/whisper-base