granite-4-micro-gguf

granite-4-micro-gguf is a GGUF Q4_K_M quantized version of ibm-granite/granite-4-micro, providing a fast, small-footprint model optimized for local inference on AI PCs.

Model Description

  • Developed by: ibm-granite
  • Quantized by: bartowski
  • Model type: granitemoehybrid
  • Parameters: 3 billion
  • Model Parent: ibm-granite/granite-4-micro
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: Chat, general-purpose LLM
  • Quantization: int4 (Q4_K_M)
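
Getting Started

The quantized model can be run with any GGUF-compatible runtime such as llama.cpp. Below is a minimal sketch using llama-cpp-python; the repo id and GGUF filename glob are assumptions, so verify them against the repository's file listing before running.

```python
from llama_cpp import Llama

# Download and load the Q4_K_M quantized model from the Hugging Face Hub.
# repo_id and filename are assumptions -- check the actual repository.
llm = Llama.from_pretrained(
    repo_id="llmware/granite-4-micro-gguf",  # assumed repo id
    filename="*Q4_K_M.gguf",                 # glob matching the Q4_K_M file
    n_ctx=4096,                              # context window size
)

# Run a simple chat completion against the model.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```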

Model Card Contact

  • llmware on hf
  • llmware website
