Model search results for the active filter "codeqwen". Each entry lists the repository, task, parameter count, download count, and like count (where shown):

Qwen/Qwen2.5-Coder-7B-Instruct • Text Generation • 8B params • 1.42M downloads • 646 likes
Qwen/Qwen2.5-Coder-7B-Instruct-GGUF • Text Generation • 8B params • 67.4k downloads • 179 likes
Qwen/Qwen2.5-Coder-32B-Instruct • Text Generation • 33B params • 731k downloads • 2k likes
Qwen/Qwen2.5-Coder-14B-Instruct-GGUF • Text Generation • 15B params • 26k downloads • 91 likes
bartowski/Qwen2.5-Coder-32B-Instruct-GGUF • Text Generation • 33B params • 26.3k downloads • 100 likes
bartowski/Qwen2.5-Coder-7B-Instruct-GGUF • Text Generation • 8B params • 14.9k downloads • 36 likes
(model name not shown) • Text Generation • 716k downloads • 81 likes
Qwen/Qwen2.5-Coder-1.5B-Instruct • Text Generation • 2B params • 1.49M downloads • 106 likes
QuantFactory/Qwen2.5-Coder-7B-Instruct-GGUF • Text Generation • 8B params • 659 downloads • 7 likes
Qwen/Qwen2.5-Coder-14B-Instruct • Text Generation • 15B params • 407k downloads • 140 likes
(model name not shown) • Text Generation • 15B params • 19.3k downloads • 61 likes
Qwen/Qwen2.5-Coder-32B-Instruct-GGUF • Text Generation • 33B params • 150k downloads • 185 likes
Qwen/Qwen2.5-Coder-14B-Instruct-AWQ • Text Generation • 15B params • 101k downloads • 15 likes
bartowski/Qwen2.5-Coder-14B-GGUF • Text Generation • 15B params • 2.79k downloads • 12 likes
lmstudio-community/Qwen2.5-Coder-32B-Instruct-MLX-4bit • Text Generation • 5B params • 35.5k downloads • 6 likes
unsloth/Qwen2.5-Coder-3B-Instruct-GGUF • 3B params • 950 downloads • 10 likes
bartowski/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF • Text Generation • 15B params • 4.51k downloads • 19 likes
ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF • Text Generation • 3B params • 3.15k downloads • 7 likes
DavidAU/Qwen3-42B-A3B-2507-Thinking-TOTAL-RECALL-v2-Medium-MASTER-CODER • Text Generation • 42B params • 46 downloads • 5 likes
mradermacher/Qwen3-42B-A3B-2507-Thinking-TOTAL-RECALL-v2-Medium-MASTER-CODER-i1-GGUF • 42B params • 583 downloads • 4 likes
DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER • Text Generation • 42B params • 2.1k downloads • 31 likes
mradermacher/Qwen3-MOE-2x4B-8B-Jan-Nano-Instruct-II-i1-GGUF • 7B params • 78 downloads • 1 like
patryks1/Qwen2.5-Coder-1.5B-Q4_K_M-GGUF • Text Generation • 2B params • 102 downloads • 1 like
mradermacher/Qwen3-MOE-6x1.7B-10.2B-Shining-Madness-Uncensored-i1-GGUF • 7B params • 2.7k downloads • 4 likes
study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 • Text Generation • 7B params • 51 downloads
study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int8 • Text Generation • 7B params • 48 downloads • 1 like
(model name not shown) • Text Generation • 8B params • 250k downloads • 134 likes
lmstudio-community/Qwen2.5-Coder-7B-Instruct-GGUF • Text Generation • 8B params • 6.77k downloads • 20 likes
Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF • Text Generation • 2B params • 36.9k downloads • 37 likes
bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF • Text Generation • 2B params • 1.56k downloads • 10 likes
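
The repositories above follow the Hugging Face Hub naming scheme, so the non-quantized instruct checkpoints can be pulled directly with the transformers library; the *-GGUF entries are quantized conversions aimed at llama.cpp-compatible runtimes (e.g. LM Studio) rather than transformers. Below is a minimal sketch, assuming transformers and accelerate are installed, that loads Qwen/Qwen2.5-Coder-7B-Instruct from the list and runs one chat-style completion; the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id taken from the listing above; GGUF/AWQ/GPTQ variants need their own loaders.
model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # spread layers across available GPU(s)/CPU via accelerate
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."},
]

# Build the prompt with the model's own chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```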