liquidai-lfm2-2.6b-gguf

liquidai-lfm2-2.6b-gguf is a GGUF Q4_K_M quantized version of LiquidAI/LFM2-2.6B, providing a fast, small-footprint inference implementation optimized for AI PCs.

Model Description

  • Developed by: LiquidAI
  • Quantized by: LiquidAI
  • Model type: lfm2
  • Parameters: 2.6 billion
  • Model Parent: LiquidAI/LFM2-2.6B
  • Language(s) (NLP): English
  • License: lfm1.0
  • Uses: Chat, general-purpose LLM
  • Quantization: int4
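
Example Usage

A minimal local-inference sketch using llama-cpp-python and huggingface_hub, assuming both packages are installed (pip install llama-cpp-python huggingface_hub). The GGUF filename passed below is an assumption for illustration; check the repository's file list for the actual name.

```python
# Minimal local chat sketch using llama-cpp-python and huggingface_hub.
# Assumptions: both packages are installed, and the GGUF filename below
# matches the Q4_K_M file actually published in this repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized GGUF file from the Hugging Face Hub.
model_path = hf_hub_download(
    repo_id="llmware/liquidai-lfm2-2.6b-gguf",
    filename="liquidai-lfm2-2.6b.gguf",  # assumed filename - verify in the repo
)

# Load the model for CPU inference on an AI PC.
llm = Llama(
    model_path=model_path,
    n_ctx=4096,     # context window; reduce to lower memory use
    n_threads=8,    # CPU threads; tune for the local machine
    verbose=False,
)

# Run a simple chat turn.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in two sentences what Q4_K_M quantization does."},
    ],
    max_tokens=200,
    temperature=0.3,
)

print(response["choices"][0]["message"]["content"])
```

Any other GGUF-compatible runtime (e.g., the llama.cpp CLI) can load the same file; the snippet above is just one common path for chat-style use on a local machine.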

Model Card Contact

  • llmware on Hugging Face
  • llmware website
