resume_llama-3.2-3b_q6_K-finetuned.llamafile

Overview

This fine-tuned Llama-3.2-3B model is my AI alter ego, trained to chat about my AI expertise and projects. It demonstrates my skills in model fine-tuning, quantization, and offline deployment for privacy-focused applications.

How to Run

Local

  1. Download the llamafile.
  2. Linux: make it executable: chmod +x resume_llama-3.2-3b_q6_K-finetuned.llamafile
  3. Windows: rename the file to add a .exe extension, then run it.
  4. To experiment with parameters such as temperature or top-p, run ./resume_llama-3.2-3b_q6_K-finetuned.llamafile --help to see the available options.
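The local steps above can be sketched as a shell session. The filename is the one from this repo; the --temp and --top-p flags are standard llamafile/llama.cpp sampling options, shown here only as an illustration (check --help for the exact set your build supports):

```shell
# Make the downloaded llamafile executable (Linux/macOS)
chmod +x resume_llama-3.2-3b_q6_K-finetuned.llamafile

# Start the model with its built-in chat interface, default settings
./resume_llama-3.2-3b_q6_K-finetuned.llamafile

# Or experiment with sampling parameters, e.g. temperature and top-p
./resume_llama-3.2-3b_q6_K-finetuned.llamafile --temp 0.7 --top-p 0.9

# List all available options
./resume_llama-3.2-3b_q6_K-finetuned.llamafile --help
```

Everything runs locally and offline; no data leaves your machine.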

Colab

  1. Open Colab link.

  2. Click "Runtime -> Run all" in the menu.

  3. Wait until a line like the following appears at the end of the second cell's output:

    "Running on public URL: https://e787748f2a3316421f.gradio.live"

  4. Open the URL in a new tab or window.

  5. Enjoy the chat.

About Me

I specialize in custom AI solutions. Connect with me on LinkedIn.

Model tree for Molchevsky/resume_llama-3.2-3b_q6_K-finetuned.llamafile
