---
base_model: HuggingFaceTB/SmolLM-135M-Instruct
datasets: HumanLLMs/Human-Like-DPO-Dataset
library_name: transformers
model_name: trainer_output
tags:
- generated_from_trainer
- reward-trainer
- trl
licence: license
---
# Model Card for trainer_output

This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) on the [HumanLLMs/Human-Like-DPO-Dataset](https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start

```python
from transformers import pipeline

text = "The capital of France is Paris."
rewarder = pipeline("text-classification", model="Galactik/trainer_output", device="cuda")
# function_to_apply="none" returns the raw reward score instead of a sigmoid-squashed probability
output = rewarder(text, function_to_apply="none")[0]
print(output["score"])
```
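For preference-style use, the reward head can also be called directly to rank candidate responses. The snippet below is a minimal sketch (the candidate strings are illustrative, and it assumes the checkpoint loads as a single-logit `AutoModelForSequenceClassification` head, which is how TRL saves reward models):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Galactik/trainer_output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:  # make sure batched padding works
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

candidates = [
    "The capital of France is Paris.",
    "capital france paris i guess??",
]
inputs = tokenizer(candidates, padding=True, return_tensors="pt")
with torch.no_grad():
    # one scalar reward per candidate; higher means "more preferred"
    rewards = model(**inputs).logits.squeeze(-1)
print(rewards.tolist())
print("preferred:", candidates[rewards.argmax().item()])
```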
## Training procedure

This model was trained with TRL's [`RewardTrainer`](https://huggingface.co/docs/trl/reward_trainer).
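The exact training script and hyperparameters are not recorded in this card. As a rough sketch, a reward model like this one can be trained with TRL's `RewardTrainer` along the following lines; the batch size, sequence length, and the prompt/response concatenation step are assumptions for illustration, not the values used for this checkpoint, and depending on the TRL version the dataset mapping may be unnecessary:

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

base_model = "HuggingFaceTB/SmolLM-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# A reward model is the base model with a single-logit classification head.
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=1)
model.config.pad_token_id = tokenizer.pad_token_id

dataset = load_dataset("HumanLLMs/Human-Like-DPO-Dataset", split="train")

def to_implicit_prompt(example):
    # Fold the prompt into each response so only "chosen"/"rejected" columns remain.
    return {
        "chosen": example["prompt"] + "\n" + example["chosen"],
        "rejected": example["prompt"] + "\n" + example["rejected"],
    }

dataset = dataset.map(to_implicit_prompt, remove_columns=["prompt"])

training_args = RewardConfig(
    output_dir="trainer_output",
    per_device_train_batch_size=8,  # assumed value, not taken from this run
    max_length=1024,                # assumed value, not taken from this run
)
trainer = RewardTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```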
### Framework versions

- TRL: 0.25.1
- Transformers: 4.57.3
- PyTorch: 2.3.0
- Datasets: 4.2.0
- Tokenizers: 0.22.1
## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```