WARN: WIP (the code is rough, documentation is sparse, there may be bugs, test files aren't included, CPU inference has barely been tested, etc.)
Intro
:peach: speaches is a web server that supports real-time transcription using WebSockets.
- faster-whisper is used as the backend. Both GPU and CPU inference are supported.
- The LocalAgreement2 algorithm (paper | original implementation) is used for real-time transcription.
- Can be deployed using Docker (Compose configuration can be found in compose.yaml).
- All configuration is done through environment variables; see config.py. A sketch of overriding a setting follows this list.
- NOTE: only transcription of single-channel, raw, 16-bit little-endian audio with a 16000 Hz sample rate is supported.
- NOTE: this isn't really meant to be used as a standalone tool but rather to add transcription features to other applications. Please create an issue if you find a bug, have a question, or a feature suggestion.
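Settings can be overridden when starting the container. The snippet below is only a sketch: the WHISPER_MODEL variable name is hypothetical, and the real setting names are defined in config.py.
# hypothetical variable name; check config.py for the actual settings
docker run --gpus=all --publish 8000:8000 --env WHISPER_MODEL=medium.en fedirz/speaches:cuda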
Quick Start
Spinning up a speaches web server
docker run --gpus=all --publish 8000:8000 --mount type=bind,source=$HOME/.cache/huggingface,target=/root/.cache/huggingface fedirz/speaches:cuda
# or
docker run --publish 8000:8000 --mount type=bind,source=$HOME/.cache/huggingface,target=/root/.cache/huggingface fedirz/speaches:cpu
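Once the container is running, you can check that the server is reachable. This probes only the published port, not a specific endpoint, so any HTTP response (even an error status) means the server is up:
curl -i http://0.0.0.0:8000/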
Streaming audio data from a microphone. websocat must be installed.
ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - | websocat --binary ws://0.0.0.0:8000/v1/audio/transcriptions
# or
arecord -f S16_LE -c1 -r 16000 -t raw -D default 2>/dev/null | websocat --binary ws://0.0.0.0:8000/v1/audio/transcriptions
Streaming audio data from a file.
# record a short raw audio sample from the microphone first
ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - > output.raw
# send all data at once
cat output.raw | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions
# Output: {"text":"One,"}{"text":"One, two, three, four, five."}{"text":"One, two, three, four, five."}%
# stream at real-time speed: 16000 samples per second * 2 bytes per sample = 32000 bytes per second
cat output.raw | pv -qL 32000 | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions
# Output: {"text":"One,"}{"text":"One, two,"}{"text":"One, two, three,"}{"text":"One, two, three, four, five."}{"text":"One, two, three, four, five. one."}%
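Audio in other formats can be decoded and streamed in a single step. A sketch, assuming input.wav is any audio file ffmpeg can decode:
ffmpeg -loglevel quiet -i input.wav -ac 1 -ar 16000 -f s16le - | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions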
Transcribing a file
# convert the file if it has a different format
ffmpeg -i output.wav -ac 1 -ar 16000 -f s16le output.raw
curl -X POST -F "[email protected]" http://0.0.0.0:8000/v1/audio/transcriptions
# Output: "{\"text\":\"One, two, three, four, five.\"}"%
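The conversion and the upload can also be combined into a single pipeline. A sketch, assuming input.wav is any audio file ffmpeg can decode; curl reads the converted audio from stdin via file=@-:
ffmpeg -loglevel quiet -i input.wav -ac 1 -ar 16000 -f s16le - | curl -X POST -F "file=@-" http://0.0.0.0:8000/v1/audio/transcriptions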
Roadmap
- Support file transcription (non-streaming) of multiple formats.
- CLI client.
- Separate the web server related code from the "core", and publish "core" as a package.
- Additional documentation and code comments.
- Write benchmarks for measuring streaming transcription performance. Possible metrics:
- Latency (the time between when audio is received and when its transcription is sent)
- Accuracy (already measured in tests, but the process could be improved)
- Total seconds of audio transcribed / audio duration (each audio chunk is processed at least twice, so this ratio is greater than one)
- Get the API response closer to the format used by OpenAI.
- Integrations...