inferencerlabs committed
Commit 9ffbd0e · verified · 1 parent: ad39c62

Upload model file

Files changed (1)
README.md (+1 -1)

```diff
@@ -13,7 +13,7 @@ pipeline_tag: text-generation
 # CURRENTLY UPLOADING...
 **See GLM-5 MLX over in action - [demonstration video](https://youtu.be/3XCYruBYr-0)**
 
-#### Tested on across a M3 Ultra 512GB RAM and M4 Max 128GB RAM using [Inferencer v1.10.1 distributed compute](https://inferencer.com)
+#### Tested across a M3 Ultra 512GB RAM and M4 Max 128GB RAM with [Inferencer v1.10.1 distributed compute](https://inferencer.com)
 - Distributed inference ~12.5 tokens/s @ 1000 tokens
 - Memory usage: ~444 GB / 49GB
 
```