# 🧬 Mistral-LLaMA-Fusion: A Hybrid of Open-Weight Titans
## 🔍 Overview

Mistral-LLaMA-Fusion is an experimental merged language model combining the strengths of Mistral-7B-v0.1 and LLaMA-2-7B using the Linear Merge method via MergeKit. This hybrid model aims to balance Mistral's efficiency and architecture with LLaMA-2's robustness in reasoning and instruction following.
👤 Created by: [Matteo Khan]
🏢 Affiliation: Apprentice at TW3 Partners (Generative AI Research)
📄 License: MIT
🔗 Connect on LinkedIn
🤗 Model on Hugging Face
## 🧠 Model Details
- Model Type: Merged Language Model
- Parent Models:
  - mistralai/Mistral-7B-v0.1
  - meta-llama/Llama-2-7b-hf
- Merging Method: Linear Merge (via MergeKit)
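A linear merge is simply a weighted average of the parents' parameters, taken tensor by tensor. The toy sketch below illustrates the arithmetic with hypothetical mini "state dicts" in plain Python (stand-ins for the real 7B checkpoints), including the `normalize` option used in the configuration further down:

```python
def linear_merge(state_dicts, weights, normalize=True):
    """Weighted average per parameter: merged[k] = sum_i w_i * sd_i[k]."""
    if normalize:
        # Rescale weights so they sum to 1, as with `normalize: true`
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for key in state_dicts[0]:
        merged[key] = [
            sum(w * sd[key][i] for w, sd in zip(weights, state_dicts))
            for i in range(len(state_dicts[0][key]))
        ]
    return merged

# Toy example with the card's weights (0.6 Mistral, 0.4 LLaMA-2):
mistral = {"layer.weight": [1.0, 2.0]}  # hypothetical tiny tensors
llama = {"layer.weight": [3.0, 4.0]}
merged = linear_merge([mistral, llama], weights=[0.6, 0.4])
print(merged["layer.weight"])  # ≈ [1.8, 2.8]
```

MergeKit applies the same idea at scale, streaming the real checkpoint shards instead of holding everything in memory.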
## 🎯 Intended Use
This model is suited for research in model merging and hybridization, and can be used for:
- ✅ Text Generation
- ✅ Instruction Following
- ✅ Creative Writing
- ✅ Prompt Engineering Experiments
## ⚠️ Limitations
As with all merged models, this fusion may inherit and combine weaknesses from both parents:
- ❌ Possible generation of false, biased, or inappropriate content
- ⚠️ Unpredictable behavior in edge cases
- 📉 No guaranteed performance gain across all benchmarks
## 🔬 Merging Configuration
```yaml
merge_method: linear
dtype: float16
models:
  - model: mistralai/Mistral-7B-v0.1
    parameters:
      t: 1.0
      weight: 0.6
  - model: meta-llama/Llama-2-7b-hf
    parameters:
      t: 1.0
      weight: 0.4
parameters:
  normalize: true
  int8_mask: false
layers:
  - pattern: "model.*"
```
📌 Note: No additional fine-tuning was performed. This is a straight merge using MergeKit.
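The merge can be reproduced locally with MergeKit's command-line tool; a minimal sketch, assuming MergeKit is installed (`pip install mergekit`) and you have access to both parent checkpoints on Hugging Face. The file and output-path names are illustrative:

```shell
# Save the merge recipe shown above to a file (name is illustrative)
cat > fusion-config.yaml <<'EOF'
merge_method: linear
dtype: float16
models:
  - model: mistralai/Mistral-7B-v0.1
    parameters:
      t: 1.0
      weight: 0.6
  - model: meta-llama/Llama-2-7b-hf
    parameters:
      t: 1.0
      weight: 0.4
parameters:
  normalize: true
  int8_mask: false
EOF

# Then run MergeKit on the recipe (downloads both 7B checkpoints, so this
# needs substantial disk space; uncomment to run):
# mergekit-yaml fusion-config.yaml ./Mistral-LLaMA-Fusion --copy-tokenizer
```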
## 🌱 Why Merging?
Merging allows rapid experimentation with existing checkpoints while reducing the computational cost and carbon footprint compared to training from scratch.
## 🚀 How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MatteoKhan/Mistral-LLaMA-Fusion"

# Load the tokenizer and model; torch_dtype="auto" keeps the checkpoint's dtype
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

prompt = "Explain the benefits of merging language models."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 200 new tokens beyond the prompt
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```