# granite-4-micro-gguf
granite-4-micro-gguf is a GGUF Q4_K_M quantized version of ibm-granite/granite-4-micro, providing a small, fast model for local inference, optimized for AI PCs.
## Model Description
- Developed by: ibm-granite
- Quantized by: bartowski
- Model type: granitemoehybrid
- Parameters: 2 billion
- Model Parent: ibm-granite/granite-4-micro
- Language(s) (NLP): English
- License: Apache 2.0
- Uses: Chat, general-purpose LLM
- Quantization: int4 (Q4_K_M)