Commit 1231763 by Firworks (verified) · Parent: bb6a6ec

Update README.md

Files changed (1): README.md (+3 −2)
README.md CHANGED
@@ -3,6 +3,7 @@ datasets:
 - Rombo-Org/Optimized_Reasoning
 base_model:
 - CohereLabs/command-a-reasoning-08-2025
+license: cc-by-nc-4.0
 ---
 # command-a-reasoning-08-2025-nvfp4
 
@@ -18,6 +19,6 @@ Check the original model card for information about this model.
 ```sh
 sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:nightly --model Firworks/command-a-reasoning-08-2025-nvfp4 --dtype auto --max-model-len 32768
 ```
-This was tested on a B200 cloud instance.
+This was tested on an RTX Pro 6000 Blackwell cloud instance.
 
-If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark, or other modern Blackwell (or newer) cards let me know. I'm trying to make more NVFP4 models available to allow more people to try them out.
+If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark, or other modern Blackwell (or newer) cards let me know. I'm trying to make more NVFP4 models available to allow more people to try them out.
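Once the `docker run` command from the README is serving the model, vLLM exposes an OpenAI-compatible API on port 8000. A minimal client sketch using only the Python standard library (the endpoint path and response shape follow the standard OpenAI chat-completions format; the base URL `http://localhost:8000` assumes the container is running locally):

```python
# Minimal sketch of a client for the vLLM OpenAI-compatible server
# started by the docker command in the README. Assumes the server is
# reachable at http://localhost:8000.
import json
import urllib.request


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST a chat request and return the assistant's reply text."""
    payload = build_chat_request(
        "Firworks/command-a-reasoning-08-2025-nvfp4", prompt
    )
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Briefly explain NVFP4 quantization."))
```

Any OpenAI-compatible client (e.g. the official `openai` package pointed at `base_url="http://localhost:8000/v1"`) would work equally well; the stdlib version is shown only to stay dependency-free.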