IzzyPutterman committed
Commit 3f298f9 · verified · 1 Parent(s): aa2e566

Update README.md

Files changed (1)
  1. README.md +25 -24
README.md CHANGED
@@ -17,14 +17,12 @@ tags:
# Model Overview

## Description:
- The NVIDIA Llama-3.3-70B Eagle model is the Eagle head of Meta's Llama-3.3-70B model, which is an auto-regressive language model that uses a dense MLP architecture with 70 billion parameters. For more information, please check [here](https://huggingface.co/nvidia/Llama-3.3-70B-Instruct-FP4). The NVIDIA Llama-3.3-70B Eagle3 model incorporates Eagle speculative decoding with [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).
+ The NVIDIA Llama-3.3-70B Eagle model is the Eagle head of Meta's Llama-3.3-70B model, which is an auto-regressive language model that uses a dense multilayer perceptron (MLP) architecture with 70 billion parameters. For more information, please check [here](https://huggingface.co/nvidia/Llama-3.3-70B-Instruct-FP4). The NVIDIA Llama-3.3-70B Eagle3 model incorporates Eagle speculative decoding with [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).

This model is ready for commercial/non-commercial use. <br>

### License/Terms of Use:
- [nvidia-open-model-license](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/)
-
- ADDITIONAL INFORMATION: [Llama 3.3 Community Model License](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). Built with Llama.
+ Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). ADDITIONAL INFORMATION: [Llama 3.3 Community Model License](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). Built with Llama.

### Deployment Geography:
Global <br>
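The description above pairs this Eagle3 draft head with a Llama-3.3-70B-Instruct target model for speculative decoding. Below is a minimal serving sketch, assuming a recent vLLM release whose `LLM` engine accepts a `speculative_config` dictionary; the argument name, supported keys, and the `"eagle3"` method string vary between vLLM versions, so treat them as placeholders and check the docs for the version you run.

```python
# Illustrative only: speculative decoding with an Eagle3 draft head in vLLM.
# Model IDs come from this card; the speculative-decoding options are assumptions
# about the vLLM API and may differ in the version you run.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",           # assumed target (base) model
    tensor_parallel_size=4,                               # assumption: one multi-GPU node
    speculative_config={
        "method": "eagle3",                               # Eagle3-style drafting (name assumed)
        "model": "nvidia/Llama-3.3-70B-Instruct-Eagle3",  # this repository's draft head
        "num_speculative_tokens": 3,                      # draft length; tune per workload
    },
)

outputs = llm.generate(
    ["Explain speculative decoding in two sentences."],
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```

Because an Eagle3 head drafts from the target model's own hidden states, accepted-token rates are typically higher than with an independent small draft model; the realized speedup still depends on acceptance rate, batch size, and hardware.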
@@ -34,27 +32,26 @@ Developers designing AI Agent systems, chatbots, RAG systems, and other AI-power
<br>

### Release Date: <br>
- Huggingface: Oct 6th, 2025 via [https://huggingface.co/nvidia/Llama-3.3-70B-Instruct-Eagle3] <br>
+ Hugging Face 12/16/2025 via [https://huggingface.co/nvidia/Llama-3.3-70B-Instruct-Eagle3] <br>

## Model Architecture:
**Architecture Type:** Transformers <br>
**Network Architecture:** Llama-3.3-70B <br>
+ This model was developed based on [https://huggingface.co/nvidia/Llama-3.3-70B-Instruct-NVFP4] <br>
+ Number of model parameters 3.2*10^9 <br>

- ##Computational Load
- **Cumulative Compute: 4.8x10^20
- **Estimated Energy and Emissions for Model Training:
- *Total kWh = 2500
- *Total Emissions (tCO2e) = 0.8075

## Input:
**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** One Dimensional (1D): Sequences <br>
+ **Other Properties Related to Input:** 128k max context <br>

## Output:
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** One-Dimensional (1D): Sequences <br>
+ **Other Properties Related to Output:** 128k max output <br>

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
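As a back-of-the-envelope consistency check on the training energy and emissions figures deleted in this commit (not an official NVIDIA number), the implied grid emission factor works out to:

0.8075 tCO2e / 2,500 kWh ≈ 0.000323 tCO2e per kWh = 0.323 kg CO2e per kWh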
@@ -70,17 +67,19 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated sys

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

+ ## Model Version(s):
+ * v1.0-BF16: December 16th, 2026

## Training and Evaluation Datasets:

- ** The total size (in number of data points) 503.3K <br>
- ** Total number of datasets 2<br>
+ **The total size (in number of data points):** 503.3K <br>
+ **Total number of datasets:** 2<br>
** Dataset partition: Training 100%<br>

## Training Dataset:

- **Link:** [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) and [Magpie-Llama-3.1-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered), only prompts from the datasets were used for data synthesis, (the original responses from GPT were not used) for data synthesis, which is then used to train the Eagle modules. Click the links above for more information regarding the dataset. <br>
+ **Link:** [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) and [Magpie-Llama-3.1-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered), only prompts from the datasets were used for data synthesis, (the original responses from GPT were not used), which is then used to train the Eagle modules. Click the links above for more information regarding the dataset. <br>

** Data Modality
[Text]
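The training-data note above states that only the prompts from the two datasets were kept, with the original responses discarded, before synthesizing Eagle training data. A minimal sketch of that prompt-collection step is shown below; it assumes the standard Hugging Face `datasets` API, and the split and field names (`train_sft`, `messages`, `conversations`) are assumptions that should be checked against each dataset card.

```python
# Illustrative sketch: collect prompts only (no responses) from the two datasets
# named in this card, as described for Eagle training-data synthesis.
from datasets import load_dataset

def first_user_turn(turns):
    """Return the first user/human message from a chat-format record, if any."""
    for turn in turns:
        if turn.get("role", turn.get("from")) in ("user", "human"):
            return turn.get("content", turn.get("value"))
    return None

prompts = []

# ultrachat_200k: OpenAI-style "messages"; a small slice is used here for illustration.
ultrachat = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]")
prompts += [p for p in (first_user_turn(row["messages"]) for row in ultrachat) if p]

# Magpie: ShareGPT-style "conversations" (field name assumed).
magpie = load_dataset("Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered", split="train[:1000]")
prompts += [p for p in (first_user_turn(row["conversations"]) for row in magpie) if p]

print(f"collected {len(prompts)} prompts")
# These prompts would then be sent to the base model to generate the synthetic
# responses used to train the Eagle modules, as described in the card.
```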
@@ -169,41 +168,43 @@ SUBCARDS:

|Field:|Response:|
|:---:|:---:|
- |Intended Application(s) & Domain(s):| Text generation, reasoning, summarization, and question answering. |
+ |Intended Task/Domain:| Text generation, reasoning, summarization, and question answering. |
|Model Type: |Text and Image-to-text transformer |
|Intended Users:|This model is intended for developers, researchers, and customers building/utilizing LLMs, while balancing accuracy and efficiency.|
|Output:|Text String(s)|
|Describe how the model works:|Generates text by predicting the next word or token based on the context provided in the input sequence using multiple self-attention layers|
- |Technical Limitations:| The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts. Therefore, before deploying any applications of this model, developers should perform safety testing and tuning tailored to their specific applications of the model.|
- |Verified to have met prescribed quality standards?|Yes|
+ |Technical Limitations & Mitigation:| The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. Before deploying any applications of this model, developers should perform safety testing and tuning tailored to their specific applications of the model.|
+ |Verified to have met prescribed NVIDIA quality standards:|Yes|
|Performance Metrics:|Accuracy, Throughput, and user-side throughput|
|Potential Known Risk:| The model may generate answers that may be inaccurate, omit key information, or include irrelevant or redundant text producing socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. |
- |Licensing:| Your usage is governed by the following [license](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) |
+ |Licensing:| Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). ADDITIONAL INFORMATION: [Llama 3.3 Community Model License](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). Built with Llama. |

# **Bias**

|Field:|Response:|
|:---:|:---:|
- |Participation considerations from adversely impacted groups (protected classes) in model design and testing:|None|
+ |Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing:|None|
|Measures taken to mitigate against unwanted bias:|None|

# **Safety & Security**

|Field:|Response:|
|:---:|:---:|
- |Model Application(s):|Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning|
- |Describe life critical application (if present):|None Known|
- |Use Case Restrictions:|Abide by the [license](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) |
+ |Model Application Field(s):|Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning|
+ |Describe the life critical impact (if present):|Not Applicable|
+ |Use Case Restrictions:|Abide by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). ADDITIONAL INFORMATION: [Llama 3.3 Community Model License](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). Built with Llama. |
|Model and Dataset Restrictions:|The Principle of least privilege (PoLP) is applied limiting access for dataset generation. Restrictions enforce dataset access during training, and dataset license constraints adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalog.|

# **Privacy**

|Field:|Response:|
|:---:|:---:|
- |Generatable or Reverse engineerable personal data?|None|
- |Was consent obtained for any personal data used?|None Known|
+ |Generatable or reverse engineerable personal data?|No|
+ |Was consent obtained for any personal data used?|Not Applicable|
|Personal data used to create this model?|None Known|
|How often is dataset reviewed?|Before Release|
+ |Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model?|No|
|Is there provenance for all datasets used in training?|Yes|
|Does data labeling (annotation, metadata) comply with privacy laws?|Yes|
- |Applicable NVIDIA Privacy Policy|https://www.nvidia.com/en-us/about-nvidia/privacy-policy/|
+ |Is data compliant with data subject requests for data correction or removal, if such a request was made?|No, not possible with externally-sourced data.|
+ |Applicable NVIDIA Privacy Policy|https://www.nvidia.com/en-us/about-nvidia/privacy-policy/|
 
 