Static quants of https://huggingface.co/xai-org/grok-2.
To run these quants before https://github.com/ggml-org/llama.cpp/pull/15539 is merged, you will need to build llama.cpp from https://github.com/ggml-org/llama.cpp/tree/cisc/grok-2 (see the build sketch below).
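A minimal build-and-run sketch, assuming a Linux shell with git and CMake installed. The GGUF filename and prompt below are placeholders; substitute whichever quant file you downloaded from this repository.

```bash
# Clone the llama.cpp branch with grok-2 support (pending merge of PR #15539)
git clone --branch cisc/grok-2 https://github.com/ggml-org/llama.cpp.git
cd llama.cpp

# Build the CLI tools
cmake -B build
cmake --build build --config Release -j

# Run one of the quants (placeholder filename: use the file you downloaded)
./build/bin/llama-cli -m /path/to/grok-2.Q4_K_M.gguf -p "Hello, Grok!"
```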
Quantization levels provided: 2-bit, 3-bit, 4-bit, 5-bit, 8-bit, and 16-bit.