Commit a6fe46a · S. Neuhaus committed · typos
Parent(s): 81e8315
README.md CHANGED
@@ -1,12 +1,12 @@
 # Faster Whisper Server
-`faster-whisper-server` is an OpenAI API
+`faster-whisper-server` is an OpenAI API-compatible transcription server which uses [faster-whisper](https://github.com/SYSTRAN/faster-whisper) as its backend.
 Features:
 - GPU and CPU support.
 - Easily deployable using Docker.
 - **Configurable through environment variables (see [config.py](./src/faster_whisper_server/config.py))**.
 - OpenAI API compatible.
-- Streaming support (transcription is sent via SSE as the audio is transcribed. You don't need to wait for the audio to fully be transcribed before receiving it)
-- Live transcription support (audio is sent via websocket as it's generated)
+- Streaming support (transcription is sent via [SSE](https://en.wikipedia.org/wiki/Server-sent_events) as the audio is transcribed. You don't need to wait for the audio to fully be transcribed before receiving it).
+- Live transcription support (audio is sent via websocket as it's generated).
 - Dynamic model loading / offloading. Just specify which model you want to use in the request and it will be loaded automatically. It will then be unloaded after a period of inactivity.
 
 Please create an issue if you find a bug, have a question, or a feature suggestion.

@@ -67,7 +67,7 @@ transcript = client.audio.transcriptions.create(
 print(transcript.text)
 ```
 
-###
+### cURL
 ```bash
 # If `model` isn't specified, the default model is used
 curl http://localhost:8000/v1/audio/transcriptions -F "file=@audio.wav"
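The streaming bullet in the first hunk says transcription segments arrive via SSE as they are produced. As a rough illustration of what consuming such a stream involves, here is a minimal parser for SSE `data:` lines; the payloads shown are hypothetical and this is not faster-whisper-server's documented event format.

```python
def parse_sse(lines):
    """Collect the payloads of `data:` events from an SSE stream.

    Simplified sketch: ignores `event:`/`id:` fields; a blank line
    terminates an event, and consecutive `data:` lines are joined.
    """
    events = []
    data = []
    for line in lines:
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            events.append("\n".join(data))
            data = []
    if data:  # flush a trailing event with no terminating blank line
        events.append("\n".join(data))
    return events

# Hypothetical stream: each event carries one transcribed segment.
print(parse_sse(["data: Hello", "", "data: world", ""]))  # → ['Hello', 'world']
```

In practice a client would read these lines from the open HTTP response and display each segment as it arrives, rather than waiting for the full transcript.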
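The dynamic model loading/offloading described in the last feature bullet can be sketched as an inactivity-based cache: load a model on first use, refresh its timestamp on each request, and evict it once it has been idle past a threshold. This is a hypothetical illustration of the idea, not the server's actual implementation; `ModelCache`, `loader`, and `ttl_seconds` are invented names.

```python
import time

class ModelCache:
    """Loads models on demand and offloads them after a period of inactivity."""

    def __init__(self, loader, ttl_seconds=300.0, clock=time.monotonic):
        self.loader = loader        # callable: model name -> loaded model
        self.ttl = ttl_seconds      # idle time before a model is offloaded
        self.clock = clock          # injectable for testing
        self._models = {}           # name -> (model, last_used)

    def get(self, name):
        self.evict_idle()
        entry = self._models.get(name)
        model = entry[0] if entry else self.loader(name)  # load on first use
        self._models[name] = (model, self.clock())        # refresh timestamp
        return model

    def evict_idle(self):
        now = self.clock()
        for name in list(self._models):
            _, last_used = self._models[name]
            if now - last_used > self.ttl:
                del self._models[name]  # offload after inactivity
```

A request handler would call `cache.get(request.model)` per request, so a model named in a request is loaded automatically and disappears once nobody has asked for it for a while.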