Paper: "Model Stock: All we need is just a few fine-tuned models" (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit. The purpose of this experiment was to combine as many fine-tuning datasets as possible for the Llama 3 8B architecture by merging models trained on them.
This model was merged using the Model Stock merge method, with meta-llama/Meta-Llama-3-8B as the base.
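For context, Model Stock works layer by layer: it averages the fine-tuned weights, then interpolates back toward the pre-trained base using a ratio derived from the angle between the fine-tuned models' task vectors (w_i − w_0). Below is a minimal per-layer sketch of that rule from the paper; it is an illustration only, not mergekit's actual implementation, and the function name is made up:

```python
import torch

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Illustrative per-layer Model Stock merge (Jang et al., 2024)."""
    n = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of task vectors.
    sims = [
        torch.dot(deltas[i], deltas[j]) / (deltas[i].norm() * deltas[j].norm())
        for i in range(n)
        for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(sims).mean()
    # Interpolation ratio from the paper: t = N*cos(theta) / (1 + (N-1)*cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    # Blend the fine-tuned average back toward the pre-trained base by (1 - t).
    return t * w_avg + (1 - t) * base
```

Intuitively, when the fine-tuned weights agree (cos θ near 1), t approaches 1 and the merge is close to a plain average; when they diverge, t shrinks and the result is pulled back toward the base.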
The following models were included in the merge:
- jondurbin/bagel-8b-v1.0
- Weyaxi/Einstein-v6.1-Llama3-8B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: meta-llama/Meta-Llama-3-8B
  - model: jondurbin/bagel-8b-v1.0
  - model: Weyaxi/Einstein-v6.1-Llama3-8B
merge_method: model_stock
base_model: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
```
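To reproduce the merge, this config can be saved to a file (the `config.yaml` name below is hypothetical) and run either through the `mergekit-yaml` command-line tool or through mergekit's Python API. A sketch using the Python API as shown in mergekit's README; module paths and option names may differ between mergekit versions:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration above (saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the result to a local directory.
run_merge(
    merge_config,
    out_path="./merged-llama3-8b",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # carry over the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent one-liner with the CLI is `mergekit-yaml config.yaml ./merged-llama3-8b`.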