# Attribution & Other Requirements

The pretraining dataset for the model was curated to include only data with permissive licenses. Despite this, the model is capable of generating source code verbatim from the dataset. The licenses of such code may necessitate attribution and adherence to other specific conditions. To facilitate compliance, we provide a [search index](https://huggingface.co/spaces/bigcode/search) that enables users to trace the origins of generated code within the pretraining data, allowing for proper attribution and adherence to licensing requirements.
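For bulk or offline compliance checks, the hosted search index above is the primary tool. As a rough programmatic alternative, the sketch below streams the pretraining corpus and looks for a verbatim match of a generated snippet. The dataset id `bigcode/starcoderdata`, its `data_dir="java"` layout, and the `content` column are assumptions based on the public StarCoder data card (the dataset is gated and requires a logged-in Hugging Face account); adapt them if the actual corpus differs.

```python
# Rough verbatim-membership check against the (assumed) pretraining corpus.
# Requires: pip install datasets, plus access to the gated dataset.
from datasets import load_dataset

def appears_in_pretraining_data(snippet: str, max_files: int = 100_000) -> bool:
    """Return True if `snippet` occurs verbatim in the first `max_files` files.

    A slow, approximate stand-in for the hosted search index; the dataset id,
    data_dir, and `content` column are assumptions, not confirmed names.
    """
    stream = load_dataset(
        "bigcode/starcoderdata",  # assumed corpus id
        data_dir="java",          # assumed location of the Java subset
        split="train",
        streaming=True,           # iterate without downloading the full corpus
    )
    for i, example in enumerate(stream):
        if snippet in example["content"]:
            return True
        if i >= max_files:
            break
    return False

snippet = "public static int clamp(int v, int lo, int hi)"
if appears_in_pretraining_data(snippet):
    print("Verbatim match: check the source file's license and attribute accordingly.")
```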

<br>

# Limitations

The NT-Java-1.1B model has been trained on publicly available datasets and is offered without any safety guarantees. As with all language models, its outputs are inherently unpredictable, and the generated code may not perform as expected. Additionally, the code may be inefficient or contain bugs and security vulnerabilities. Consequently, it is imperative for users and developers to undertake extensive safety testing and to implement robust filtering mechanisms tailored to their specific needs.
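What robust filtering looks like depends on the deployment. Purely as an illustration (the deny-list below is a hypothetical placeholder, not a vetted security control; real systems should add proper static analysis and human review), a minimal post-generation gate might reject completions that touch obviously sensitive Java APIs:

```python
# Minimal sketch of an output filter for generated Java code.
# The pattern list is illustrative only; replace it with a real
# SAST/linting pass for production use.
import re

RISKY_PATTERNS = [
    r"Runtime\.getRuntime\(\)\.exec",  # arbitrary process execution
    r"\bProcessBuilder\b",             # likewise
    r"java\.lang\.reflect",            # reflection-based sandbox escapes
    r"\bObjectInputStream\b",          # unsafe deserialization
]

def filter_completion(java_code: str) -> str | None:
    """Return the completion if it passes the deny-list, else None."""
    for pattern in RISKY_PATTERNS:
        if re.search(pattern, java_code):
            return None  # reject; caller falls back to a safe default
    return java_code

print(filter_completion('Runtime.getRuntime().exec("rm -rf /");'))  # None
print(filter_completion('int sum = a + b;'))                        # passes
```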

## Hardware

- **GPUs:** 6 NVIDIA A100 80GB
- **Training time:** 10 days

<br>

## Software

- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)

<br>

# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).

<br>

# Citation
```
@article{li2023starcoder,