Commit 38fec8e · Parent: 61624e3 · Update README.md

README.md CHANGED
---
language:
- en
license: apache-2.0
widget:
- text: The nodes of a computer network may include [MASK].
library_name: transformers
---

# NetBERT 📶

<img align="left" src="illustration.jpg" width="100"/>

NetBERT is a [BERT-base](https://huggingface.co/bert-base-cased) model further pre-trained on a large corpus of computer networking text (~23 GB).

## Usage

You can use the raw model for masked language modeling (MLM), but it is mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions, such as text classification, extractive question answering, or semantic search.

You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='antoinelouis/netbert')
unmasker("The nodes of a computer network may include [MASK].")
```
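
Under the hood, a fill-mask pipeline takes the model's output logits at the `[MASK]` position, applies a softmax over the vocabulary, and returns the highest-scoring tokens. The sketch below (not part of the original README) shows that ranking step in plain Python; the tiny vocabulary and logit values are invented purely for illustration:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(vocab, logits, k=3):
    """Rank vocabulary tokens by probability, as a fill-mask pipeline does."""
    probs = softmax(logits)
    ranked = sorted(zip(vocab, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Toy vocabulary and logits for the [MASK] position (illustrative only).
vocab = ["routers", "switches", "servers", "apples"]
logits = [4.1, 3.8, 2.5, -1.0]
print(top_k(vocab, logits))  # "routers" ranks first
```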

You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('antoinelouis/netbert')
model = AutoModel.from_pretrained('antoinelouis/netbert')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
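
For uses like semantic search, the per-token vectors in `output.last_hidden_state` are commonly collapsed into a single sentence vector by mean pooling over the non-padding positions. This pooling step is not described in the README; the sketch below shows the arithmetic in plain Python with made-up 3-dimensional "embeddings":

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, counting only positions where the mask is 1."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask == 1:
            for i in range(dim):
                sums[i] += vec[i]
            count += 1
    return [s / count for s in sums]

# Two real tokens plus one padding position (mask 0), with toy 3-d vectors.
tokens = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [9.0, 9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 2.0, 2.0] — padding is ignored
```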

## Documentation

Detailed documentation on the pre-trained model, its implementation, and the data can be found on [GitHub](https://github.com/antoiloui/netbert/blob/master/docs/index.md).

## Citation