Active filters: 4bit
legraphista/Phi-3-mini-4k-instruct-update2024_07_03-IMat-GGUF • Text Generation • 4B • Updated • 690
legraphista/internlm2_5-7b-chat-IMat-GGUF • Text Generation • 8B • Updated • 1.11k
legraphista/internlm2_5-7b-chat-1m-IMat-GGUF • Text Generation • 8B • Updated • 1.22k • 1
legraphista/codegeex4-all-9b-IMat-GGUF • Text Generation • 9B • Updated • 2.22k • 8
ModelCloud/DeepSeek-V2-Lite-gptq-4bit • Text Generation • 16B • Updated • 6
ModelCloud/internlm-2.5-7b-gptq-4bit • Feature Extraction • 8B • Updated • 7
ModelCloud/internlm-2.5-7b-chat-gptq-4bit • Feature Extraction • 8B • Updated • 6
ModelCloud/internlm-2.5-7b-chat-1m-gptq-4bit • Feature Extraction • 8B • Updated • 9
legraphista/NuminaMath-7B-TIR-IMat-GGUF • Text Generation • 7B • Updated • 510 • 1
legraphista/mathstral-7B-v0.1-IMat-GGUF • Text Generation • 7B • Updated • 531
Text Generation • Updated • 6
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit • Text Generation • 12B • Updated • 58 • 5
legraphista/Athene-70B-IMat-GGUF • Text Generation • 71B • Updated • 985 • 3
ModelCloud/gemma-2-27b-it-gptq-4bit • Text Generation • 28B • Updated • 27 • 12
legraphista/Mistral-Nemo-Instruct-2407-IMat-GGUF • Text Generation • 12B • Updated • 1.51k • 2
legraphista/Meta-Llama-3.1-8B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 2.05k • 6
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • 8B • Updated • 29 • 4
ModelCloud/Meta-Llama-3.1-8B-gptq-4bit • Text Generation • 8B • Updated • 131
legraphista/Meta-Llama-3.1-70B-Instruct-IMat-GGUF • Text Generation • 71B • Updated • 1.54k • 11
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit • Text Generation • 71B • Updated • 36 • 4
legraphista/Llama-Guard-3-8B-IMat-GGUF • Text Generation • 8B • Updated • 1.52k • 5
legraphista/Mistral-Large-Instruct-2407-IMat-GGUF • Text Generation • 123B • Updated • 749 • 29
jhangmez/CHATPRG-v0.2.1-Meta-Llama-3.1-8B-bnb-4bit-lora-adapters • Text Generation • Updated
jhangmez/CHATPRG-v0.2.1-Meta-Llama-3.1-8B-bnb-4bit-q4_k_m • Text Generation • 8B • Updated • 42 • 1
ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit • Text Generation • 123B • Updated • 6 • 1
legraphista/Meta-Llama-3.1-8B-Instruct-abliterated-IMat-GGUF • Text Generation • 8B • Updated • 2.88k • 1
ModelCloud/Meta-Llama-3.1-405B-Instruct-gptq-4bit • Text Generation • 410B • Updated • 5 • 2
legraphista/gemma-2-2b-it-IMat-GGUF • Text Generation • 3B • Updated • 579 • 2
legraphista/gemma-2-2b-IMat-GGUF • Text Generation • 3B • Updated • 1.06k • 1
thesven/Mistral-7B-Instruct-v0.3-GPTQ-4bit • Text Generation • 7B • Updated • 13 • 1
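
All of the repositories above ship 4-bit quantized weights: the *-IMat-GGUF entries are importance-matrix GGUF quantizations aimed at llama.cpp-compatible runtimes, while the *-gptq-4bit entries are GPTQ checkpoints aimed at Transformers-style loaders. Below is a minimal sketch of how one of the GGUF repos might be used, assuming llama-cpp-python and huggingface_hub are installed; the Q4_K_M filename is an assumption for illustration, so check the repo's file list for the exact name.

    # Sketch only: download a 4-bit GGUF file from one of the listed repos and run it locally.
    from huggingface_hub import hf_hub_download
    from llama_cpp import Llama

    repo_id = "legraphista/Meta-Llama-3.1-8B-Instruct-IMat-GGUF"
    # Hypothetical filename; IMat-GGUF repos usually publish several quantization levels.
    filename = "Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf"

    # Fetch the single GGUF file (cached locally by huggingface_hub).
    model_path = hf_hub_download(repo_id=repo_id, filename=filename)

    # Load the quantized model with llama.cpp bindings and run one chat turn.
    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        max_tokens=64,
    )
    print(out["choices"][0]["message"]["content"])

The GPTQ repositories can typically be loaded directly with transformers.AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto"), provided a GPTQ backend (e.g. GPTQModel or AutoGPTQ via optimum) is installed; consult each repo's model card for the recommended loader.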