Commit · b1035cf
Parent(s): 39282c7
Update README.md
README.md CHANGED
@@ -11,13 +11,13 @@ pipeline_tag: text-generation
 tags:
 - text-generation-inference
 widget:
-- text: Astronomia é uma ciência natural que estuda
+- text: "Astronomia é uma ciência natural que estuda"
   example_title: Exemplo
-- text: Em um achado chocante, o cientista descobriu um
+- text: "Em um achado chocante, o cientista descobriu um"
   example_title: Exemplo
-- text: Python é uma linguagem de
+- text: "Python é uma linguagem de"
   example_title: Exemplo
-- text: O Gato de
+- text: "O Gato de Botas é conhecido por"
   example_title: Exemplo
 inference:
   parameters:
@@ -67,7 +67,7 @@ This repository has the [source code](https://github.com/Nkluge-correa/Aira) use
 - [Accelerate](https://github.com/huggingface/accelerate)
 - [Codecarbon](https://github.com/mlco2/codecarbon)
 
-
+## Training Set-up
 
 These are the main arguments used in the training of this model:
 
@@ -177,14 +177,11 @@ for i, completion in enumerate(completions):
 
 ## Fine Tuning
 
-| Models
-
-| [Teeny Tiny Llama 162m](https://huggingface.co/nicholasKluge/
-| [
-| [
-| [Gpt2-portuguese-small](https://huggingface.co/pierreguillou/gpt2-small-portuguese) | 30.22 | 22.48 | 29.62 | 27.36 | 41.44 |
-| [Gpt2-small](https://huggingface.co/gpt2) | 29.97 | 21.48 | 31.60 | 25.79 | 40.65 |
-
+| Models | [IMDB](https://huggingface.co/datasets/christykoh/imdb_pt) | [FaQuAD-NLI](https://huggingface.co/datasets/ruanchaves/faquad-nli) | [HateBr](https://huggingface.co/datasets/ruanchaves/hatebr) |
+|---|---|---|---|
+| [Teeny Tiny Llama 162m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-162m) | 91.14 | 90.00 | 90.71 |
+| [Bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) | 92.22 | 93.07 | 91.28 |
+| [Gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) | 91.60 | 86.46 | 87.42 |
 
 ## Cite as 🤗
 
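
The widget entries changed in the first hunk are the prompts shown by the hosted inference widget on the model page. As a rough illustration of what those prompts do, here is a minimal sketch that runs the first one locally with the `transformers` text-generation pipeline; the repo id `nicholasKluge/TeenyTinyLlama-162m` is taken from the table link in this diff, and the generation arguments are placeholders rather than the values set under `inference: parameters:` (which the hunk does not show).

```python
# Minimal sketch: run the first widget prompt locally.
# The repo id comes from the fine-tuning table in this diff; the generation
# arguments are illustrative, not the card's `inference: parameters:` values.
from transformers import pipeline

generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-162m")

prompt = "Astronomia é uma ciência natural que estuda"
completions = generator(
    prompt,
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    num_return_sequences=2,
)

for i, completion in enumerate(completions):
    print(f"Completion {i}: {completion['generated_text']}")
```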
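The rebuilt fine-tuning table reports accuracy for classifiers trained on the linked Portuguese benchmarks. The sketch below shows one plausible way such a run could be set up with `transformers` and `datasets`; it is an illustration, not the authors' actual fine-tuning script, and it assumes the Portuguese IMDB port keeps the original `text`/`label` columns and train/test splits. Hyperparameters are placeholders.

```python
# Illustrative sketch only -- not the authors' fine-tuning script.
# Model and dataset ids come from the links in the table above;
# hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "nicholasKluge/TeenyTinyLlama-162m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:  # Llama-style tokenizers may lack a pad token
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Assumes the dataset keeps IMDB's `text`/`label` columns and splits.
dataset = load_dataset("christykoh/imdb_pt")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="checkpoints",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```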