In a Training Loop (175.0 TFLOPS)
David Belton (DavidAU), PRO
2608 followers · 20 following
GitHub: David-AU-github
DISCORD: David_AU [drawless111]
AI & ML interests
Applications of single and multiple LLMs in specialized use cases and automation tasks. LLM, prompt, system-role, and parameter engineering via chat / API. 500+ LLMs graded.
Recent Activity
Replied to their post (about 2 hours ago):
SAVANT COMMANDER: 48B-A4B, 256k context, GATED MOE. I am going to showcase some other people's tuning work that I have put into a GATED Distill MOE (Qwen3) with 256k context. Special thanks to all the tuners (listed in the model tree and on the repo page), with a special shoutout to "TeichAI", whose distills in this model were largely built with Unsloth. Savant Commander is a specialized MOE model that lets you control which experts (of 12) are assigned to your use cases / prompts directly, by name, instead of having the choices made for you. The model is composed of 12 distills (a compressed 12x4B MOE) of top closed models (GPT5.1, OpenAI GPT-OSS 120, Gemini (3), Claude (2)) and open-source models (Kimi, GLM, Deepseek, Command-A, JanV1), all in one. 256k context, 2 experts activated. PS: There is also a "heretic" / "decensored" version, listed on the model page. https://huggingface.co/DavidAU/Qwen3-48B-A4B-Savant-Commander-GATED-12x-Closed-Open-Source-Distill-GGUF
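A minimal sketch of driving this kind of GGUF MOE from llama-cpp-python, assuming expert selection works by naming experts in the system prompt as the post describes; the expert names, GGUF filename, and prompt wording below are placeholders, not taken from the model's repo page:

```python
# Minimal sketch, assuming llama-cpp-python and a local GGUF file.
# Expert names and the filename are placeholders; the real expert list
# and recommended prompt format are documented on the model's repo page.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-48B-A4B-Savant-Commander-Q4_K_M.gguf",  # hypothetical quant/filename
    n_ctx=32768,      # the model supports up to 256k; smaller here to limit KV-cache VRAM
    n_gpu_layers=-1,  # offload all layers to GPU if they fit
)

# Name the experts you want engaged directly in the system prompt (placeholder names).
system_prompt = "Use the 'Reasoning' and 'Coding' experts for this task."

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Outline a plan for refactoring a legacy parser."},
    ],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```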
New activity (about 2 hours ago) on DavidAU/Llama-3.2-8X4B-MOE-V2-Dark-Champion-Instruct-uncensored-abliterated-21B-GGUF: "How to get this model to make sense?"
New activity (about 2 hours ago) on DavidAU/LLama-3.1-128k-Darkest-Planet-Uncensored-16.5B-GGUF: "wow"
Organizations
None yet
DavidAU's Spaces (1)
GGUF Model VRAM Calculator (Running): Calculate VRAM requirements for LLM models
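As a rough sketch of the kind of estimate such a calculator produces (not the space's actual code), VRAM for a GGUF model is approximately the quantized weight size plus the KV cache plus runtime overhead; the bits-per-weight and layer/head figures below are illustrative assumptions:

```python
# Back-of-the-envelope GGUF VRAM estimate: weights + KV cache + fixed overhead.
# Illustrative sketch only; not the calculator's actual implementation.

def estimate_vram_gb(
    n_params_b: float,       # model size in billions of parameters
    bits_per_weight: float,  # e.g. ~4.5 for Q4_K_M, 16 for FP16
    n_layers: int,
    n_kv_heads: int,
    head_dim: int,
    context_len: int,
    kv_bytes_per_elem: int = 2,  # FP16 KV cache
    overhead_gb: float = 1.0,    # scratch buffers, CUDA context, etc.
) -> float:
    weights_gb = n_params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) per layer, each n_kv_heads * head_dim per token.
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes_per_elem / 1e9
    return weights_gb + kv_gb + overhead_gb

# Example: a hypothetical 48B model at ~4.5 bits/weight with a 32k context
# (layer and head counts are placeholders, not read from a real GGUF header).
print(f"{estimate_vram_gb(48, 4.5, n_layers=48, n_kv_heads=8, head_dim=128, context_len=32768):.1f} GB")
```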