Inference Providers
Active filters: tool-use

| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| bartowski/miscii-14b-1028-GGUF | Text Generation | 15B | 59 | 6 |
| — | — | — | 4.15k | 117 |
| — | — | 71B | 15.1k | 118 |
| QuantFactory/miscii-14b-1028-GGUF | Text Generation | 15B | 88 | 3 |
| legionarius/watt-tool-8B-GGUF | — | 8B | 59 | 2 |
| TimeLordRaps/watt-tool-8B-Q8_0-GGUF | — | 8B | 2 | — |
| Nekuromento/watt-tool-8B-Q4_K_M-GGUF | — | 8B | 2 | 1 |
| Nekuromento/watt-tool-8B-Q5_K_M-GGUF | — | 8B | 1 | — |
| Nekuromento/watt-tool-8B-Q6_K-GGUF | — | 8B | 3 | — |
| Nekuromento/watt-tool-8B-Q8_0-GGUF | — | 8B | 6 | — |
| ejschwartz/watt-tool-8B-Q4_K_M-GGUF | — | 8B | 3 | — |
| dwetzel/watt-tool-70B-GPTQ-INT4 | — | 11B | 1 | — |
| mlx-community/watt-tool-8B | — | — | 19 | 4 |
| mradermacher/watt-tool-8B-GGUF | — | 8B | 74 | 5 |
| mradermacher/watt-tool-8B-i1-GGUF | — | 8B | 136 | 1 |
| mradermacher/watt-tool-70B-GGUF | — | 71B | 45 | — |
| mradermacher/watt-tool-70B-i1-GGUF | — | 71B | 144 | — |
| Tonic/c4ai-command-a-03-2025-4bit_nf4_double | Text Generation | 114B | 5 | — |
| Tonic/c4ai-command-a-03-2025-4bit_fp4 | Text Generation | 113B | 5 | — |
| Tonic/c4ai-command-a-03-2025-4bit_nf4_no_double | Text Generation | 113B | 3 | — |
| fuzzy-mittenz/watt-tool-8B-Q4_K_M-GGUF | — | 8B | 4 | — |
| Scotto2025/watt-tool-70B-mlx-6Bit | — | 15B | 5 | — |
| Scotto2025/watt-tool-8B-mlx-8Bit | — | 2B | 6 | — |
| Salesforce/Llama-xLAM-2-70b-fc-r | Text Generation | 71B | 238 | 48 |
| Salesforce/Llama-xLAM-2-8b-fc-r | Text Generation | — | 83.8k | 58 |
| Salesforce/xLAM-2-3b-fc-r | Text Generation | 3B | 90.1k | 16 |
| Salesforce/xLAM-2-1b-fc-r | Text Generation | 2B | 55.9k | 12 |
| Salesforce/xLAM-2-3b-fc-r-gguf | Text Generation | 3B | 398 | 6 |
| Salesforce/xLAM-2-1b-fc-r-gguf | Text Generation | 2B | 1.36k | 2 |
| Salesforce/Llama-xLAM-2-8b-fc-r-gguf | Text Generation | 8B | 575 | 18 |