[CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper:

[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.

Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling and supports more programming languages.
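Since CodeGen2 supports infilling, a prompt is typically assembled from the code before and after the span to be filled, joined by a mask sentinel. The sketch below is a minimal illustration only; the sentinel strings (`<mask_1>`, `<sep>`, `<|endoftext|>`) are an assumption based on the CodeGen2 repository's scheme and should be verified against the model card and tokenizer.

```python
# Minimal sketch of assembling an infilling prompt for a causal LM.
# The sentinel names (<mask_1>, <sep>, <|endoftext|>) are assumptions,
# not guaranteed by this card; verify them against the tokenizer vocab.
def make_infill_prompt(prefix: str, suffix: str) -> str:
    """Mark the span to fill with <mask_1>, then ask the model to emit it."""
    return f"{prefix}<mask_1>{suffix}<|endoftext|><sep><mask_1>"

prompt = make_infill_prompt("def hello(name):\n    ", "\n    return greeting")
```

The model would then generate the masked span after the trailing `<mask_1>`, terminating it with `<eom>`.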
You might want to truncate the model output with `<eom>`.
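Truncating at `<eom>` can be done with plain string handling. A minimal sketch (only the `<eom>` sentinel itself is taken from this card; the helper name is hypothetical):

```python
def truncate_at_eom(generated: str, sentinel: str = "<eom>") -> str:
    """Keep only the text before the first end-of-mask sentinel, if any."""
    idx = generated.find(sentinel)
    return generated if idx == -1 else generated[:idx]

# Everything from the sentinel onward is discarded.
completion = "return a + b\n<eom>def unrelated():"
truncate_at_eom(completion)  # -> "return a + b\n"
```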
## Training data

This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows:

`c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.
## Training procedure

Please refer to the paper for more details.
## Evaluation results

We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.

## Intended use and limitations
However, the model is intended for and best at **program synthesis**.

```bibtex
@article{nijkamp2023codegen2,
  title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
  author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
  journal={arXiv preprint},
  year={2023}
}
```