---
language:
- en
license: mpl-2.0
base_model: TitleOS/Lightning-1.7B
tags:
- lightning
- hermes-3
- utility
- on-device
- text-generation
- finetune
- mlx
datasets:
- NousResearch/Hermes-3-Dataset
pipeline_tag: text-generation
inference: true
model_creator: TitleOS
library_name: mlx
---
# alexgusevski/Lightning-1.7B-mlx
This model [alexgusevski/Lightning-1.7B-mlx](https://huggingface.co/alexgusevski/Lightning-1.7B-mlx) was converted to MLX format from [TitleOS/Lightning-1.7B](https://huggingface.co/TitleOS/Lightning-1.7B) using mlx-lm version **0.28.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the MLX-converted weights and tokenizer from the Hub.
model, tokenizer = load("alexgusevski/Lightning-1.7B-mlx")

prompt = "hello"

# If the tokenizer defines a chat template, wrap the prompt as a chat message.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Generate a completion, streaming output to stdout.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
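
If you prefer the command line, `mlx-lm` also ships a `mlx_lm.generate` entry point that can load the model without writing any Python. A minimal sketch, assuming the default flags (the prompt below is only an example):

```bash
# Generate a completion from the converted model directly in the shell.
mlx_lm.generate --model alexgusevski/Lightning-1.7B-mlx --prompt "hello"
```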