Commit cd97f59
Parent(s): 69af314

Update README.md (#28)

- Update README.md (8c843dd7298d252b3c38faaacc70c8a2fc95a0a8)

Co-authored-by: unicorn chan <UnicornChan@users.noreply.huggingface.co>
README.md CHANGED

@@ -169,10 +169,13 @@ We recommend using [vLLM](https://github.com/vllm-project/vllm) to serve MiniMax

 We recommend using [Transformers](https://github.com/huggingface/transformers) to serve MiniMax-M2.1. Please refer to our [Transformers Deployment Guide](./docs/transformers_deploy_guide.md).

+### KTransformers
+
+We recommend using [KTransformers](https://github.com/kvcache-ai/ktransformers) to serve MiniMax-M2.1. Please refer to the [KTransformers Deployment Guide](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/kt-kernel/MiniMax-M2.1-Tutorial.md).
+
 ### Other Inference Engines

 - [MLX-LM](./docs/mlx_deploy_guide.md)
-- [KTransformers](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/kt-kernel/MiniMax-M2.1-Tutorial.md)

 ### Inference Parameters
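The updated README points to engine-specific deployment guides rather than inline code. As a minimal sketch of the Transformers serving path it recommends: the Hub id `MiniMaxAI/MiniMax-M2.1`, the `trust_remote_code` flag, and the generation settings below are assumptions, and the linked Transformers Deployment Guide remains the authoritative reference.

```python
# Minimal sketch of loading and querying MiniMax-M2.1 with Hugging Face Transformers.
# Assumptions: the Hub id "MiniMaxAI/MiniMax-M2.1", trust_remote_code, and the
# generation settings; see the Transformers Deployment Guide for the supported setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M2.1"  # assumed Hub id; check the model card
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs (requires accelerate)
    torch_dtype="auto",  # use the checkpoint's native dtype
    trust_remote_code=True,
)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```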