Define "do_sample" explicitly in generation_config.json
#6 opened 10 days ago
by
Corellios
#5: Update config.json (opened 10 days ago by Corellios)
#4: Update inference examples to use the correct chat template (opened 11 days ago by mario-sanz; sketched below)
#2: Endless reasoning loop when serving the model with vLLM, 3 comments (opened 13 days ago by sliuau; sketched below)
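
For #6, a minimal sketch of making the sampling mode explicit via the transformers GenerationConfig API, which writes generation_config.json; the checkpoint path and the sampling values are placeholder assumptions, not this repository's shipped defaults.

```python
from transformers import GenerationConfig

# Minimal sketch: the values below are illustrative assumptions,
# not the repository's actual defaults.
config = GenerationConfig(
    do_sample=True,    # state the sampling mode explicitly
    temperature=0.6,   # placeholder value
    top_p=0.95,        # placeholder value
)

# save_pretrained writes generation_config.json into the directory.
config.save_pretrained("path/to/local/checkpoint")
```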
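For #4, a sketch of building a prompt with the chat template shipped in the tokenizer config, using the standard transformers apply_chat_template API; the model id is a placeholder for the actual repository name.

```python
from transformers import AutoTokenizer

# Placeholder model id; substitute the actual repository name.
tokenizer = AutoTokenizer.from_pretrained("org/model")

messages = [
    {"role": "user", "content": "Explain what a chat template does."},
]

# apply_chat_template renders the conversation with the template stored
# in the tokenizer config, so the prompt matches the training format.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant turn header
)
print(prompt)
```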
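For #2, one common mitigation is to bound the output length when serving with vLLM, so generation terminates even if the model never emits its end-of-reasoning marker; this is an assumption about the cause, not a confirmed fix from the discussion, and the model id and sampling values are placeholders.

```python
from vllm import LLM, SamplingParams

# Placeholder model id; substitute the actual repository name.
llm = LLM(model="org/model")

# max_tokens hard-caps the completion so a runaway reasoning trace
# cannot loop indefinitely; temperature/top_p are placeholder values.
params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    max_tokens=4096,
)

outputs = llm.generate(["Explain quantum entanglement briefly."], params)
print(outputs[0].outputs[0].text)
```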