resume_llama-3.2-3b_q6_K-finetuned.llamafile
Overview
This fine-tuned Llama-3.2-3B model is my AI alter ego, trained to chat about my AI expertise and projects. It demonstrates my skills in model fine-tuning, quantization, and offline deployment for privacy-focused applications.
How to Run
Local
- Download the llamafile.
- Linux: make the file executable, then run it:
  chmod +x resume_llama-3.2-3b_q6_K-finetuned.llamafile
  ./resume_llama-3.2-3b_q6_K-finetuned.llamafile
- Windows: rename the file to resume_llama-3.2-3b_q6_K-finetuned.llamafile.exe and run it.
- To experiment with parameters such as temperature or top-p, see the built-in help:
  ./resume_llama-3.2-3b_q6_K-finetuned.llamafile --help
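The Linux steps above can be sketched as a single script. This is a minimal sketch: the download URL follows the standard Hugging Face `resolve/main` pattern and the sampling flags (`--temp`, `--top-p`) are common llamafile/llama.cpp options, so verify both against the repo page and `--help` before relying on them.

```shell
# Fetch the llamafile (URL assumed from the repo id; verify on the model page)
curl -LO "https://huggingface.co/Molchevsky/resume_llama-3.2-3b_q6_K-finetuned.llamafile/resolve/main/resume_llama-3.2-3b_q6_K-finetuned.llamafile"

# Mark it executable -- a llamafile is a self-contained executable
chmod +x resume_llama-3.2-3b_q6_K-finetuned.llamafile

# Run with custom sampling parameters (flag names assumed; check --help)
./resume_llama-3.2-3b_q6_K-finetuned.llamafile --temp 0.7 --top-p 0.9
```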
Colab
Open the Colab link.
Select "Runtime -> Run All" from the menu.
Wait until a line similar to the following appears at the end of the second cell's output:
"Running on public URL: https://e787748f2a3316421f.gradio.live"
Open the URL in a new tab or window.
Enjoy the chat.
About Me
I specialize in custom AI solutions. Connect with me on LinkedIn.
Model Tree
Molchevsky/resume_llama-3.2-3b_q6_K-finetuned.llamafile is fine-tuned from the base model meta-llama/Llama-3.2-3B-Instruct.