ek-id committed
Commit 9361ba2 · 1 Parent(s): 1508b83

Add Transformers.js and WebNN example to README.md

Files changed (1): README.md (+27 -2)
README.md CHANGED
````diff
@@ -10,6 +10,7 @@ license: apache-2.0
 pipeline_tag: text-classification
 tags:
 - Intel
+- transformers.js
 model-index:
 - name: polite-guard
   results:
@@ -92,9 +93,33 @@ You can use this model directly with a pipeline for categorizing text into class
 ```python
 from transformers import pipeline
 
-classifier = pipeline("text-classification", model="Intel/polite-guard")
+classifier = pipeline("text-classification", "Intel/polite-guard")
 text = "Your input text"
-print(classifier(text))
+output = classifier(text)
+print(output)
+```
+
+The next example demonstrates how to run this model in the browser using Hugging Face's `transformers.js` library with `webnn-gpu` for hardware acceleration.
+
+```html
+<!DOCTYPE html>
+<html>
+<body>
+<h1>WebNN Transformers.js Intel/polite-guard</h1>
+<script type="module">
+import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";
+
+const classifier = await pipeline("text-classification", "Intel/polite-guard", {
+  dtype: "fp32",
+  device: "webnn-gpu", // You can also try: "webgpu", "webnn", "webnn-npu", "webnn-cpu", "wasm"
+});
+
+const text = "Your input text";
+const output = await classifier(text);
+console.log(`${text}: ${output[0].label}`);
+</script>
+</body>
+</html>
 ```
 ## Articles
````
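For context, both snippets in this commit consume the same output shape: a `transformers` text-classification pipeline returns a list of dicts with `label` and `score` keys, which is why the JavaScript example reads `output[0].label`. A minimal sketch of that access pattern is below; the classifier is stubbed out since running the real `Intel/polite-guard` model requires downloading its weights, and the `"polite"` label and score are hypothetical placeholders, not actual model output.

```python
# Stub standing in for:
#   classifier = pipeline("text-classification", "Intel/polite-guard")
# Real text-classification pipelines return [{"label": ..., "score": ...}].
def classifier(text):
    # Hypothetical result; actual labels and scores come from the model.
    return [{"label": "polite", "score": 0.99}]

text = "Your input text"
output = classifier(text)

# Same access pattern as the JS example's `output[0].label`.
print(f"{text}: {output[0]['label']}")
```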