---
language:
- code
license: llama2
tags:
- llama-2
- mlx
pipeline_tag: text-generation
---

![Alt text](https://media.discordapp.net/attachments/989904887330521099/1201717650791858257/Llama_Coding_on_MacBook.png?ex=65cad5c6&is=65b860c6&hm=bb5bbaf0d097d44b054e0da8aae074203744078a7e2355c6724a2f7e68be9cb5&=&format=webp&quality=lossless&width=1840&height=1840)

# mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX

This model was converted to MLX format from [`codellama/CodeLlama-13b-Instruct-hf`](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf).
Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX")
response = generate(
    model,
    tokenizer,
    prompt="[INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always do... If you are unsure about an answer, truthfully say \"I don't know\" <</SYS>> What's the meaning of life [/INST]",
    verbose=True,
)
```
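
Rather than hand-writing the `[INST]`/`<<SYS>>` tags, you can let the tokenizer build the prompt. This is a minimal sketch, assuming the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template` method and that the model ships a chat template; the example messages below are illustrative, not part of the original card.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX")

# Illustrative conversation; adjust the system and user messages to your task.
messages = [
    {"role": "system", "content": "You are a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Assumes the wrapped tokenizer provides apply_chat_template (standard HF tokenizer API)
# and that a chat template is defined for this model.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```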