petergilani committed on
Commit f318c5c · verified · 1 Parent(s): 52af4e3

Update model card for petergilani/MiniMax-M2.5-mix3-6bit

Files changed (1): README.md (+1 -3)
README.md CHANGED

@@ -32,14 +32,12 @@ Mixed precision quantized version of MiniMax M2.5 using mlx-lm with `--quant-predicate mixed_3_6`
 | Property | Value |
 |----------|-------|
 | **Base Model** | [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5) |
-| **Quantization Method** | mlx-lm v0.30.7 with `--quant-predicate mixed_3_6` |
+| **Quantization** | mlx-lm v0.30.7 with `--quant-predicate mixed_3_6` |
 | **Library** | [mlx-lm](https://github.com/huggingface/mlx-lm) |
 | **License** | modified-mit |
 
 ## Inference Parameters
 
-Recommended generation parameters (from original model):
-
 | Parameter | Value |
 |-----------|-------|
 | **temperature** | 1.0 |
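For context, a quantization like the one this card describes would typically be produced with mlx-lm's convert entry point and then run with its generate CLI. This is a hedged sketch, not the author's exact invocation: the output directory name and prompt are illustrative, and flag spellings should be checked against the mlx-lm v0.30.7 documentation.

```shell
# Sketch: mixed 3/6-bit quantization with mlx-lm (assumes Apple Silicon).
# The --mlx-path value below is illustrative, not the author's actual path.
pip install mlx-lm==0.30.7

mlx_lm.convert \
  --hf-path MiniMaxAI/MiniMax-M2.5 \
  --mlx-path ./MiniMax-M2.5-mix3-6bit \
  -q \
  --quant-predicate mixed_3_6

# Inference with the card's recommended temperature of 1.0.
mlx_lm.generate \
  --model ./MiniMax-M2.5-mix3-6bit \
  --temp 1.0 \
  --prompt "Hello"
```

The `mixed_3_6` predicate quantizes most weights to 3 bits while keeping selected layers at 6 bits, trading a small size increase for better quality than uniform 3-bit quantization.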