Model Card for HiFormer Vocoder

HuggingFace 🤗 - Repository

DDP training is very unstable; please use the single-GPU training script. If you still want to use DDP, I suggest uncommenting the gradient-clipping lines; that should help a lot.
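
For reference, the gradient-clipping lines mentioned above usually amount to a torch.nn.utils.clip_grad_norm_ call right before each optimizer step. The snippet below is a minimal, self-contained sketch of that pattern; the module and loss names are stand-ins, not the actual identifiers in the training script.

```python
import torch
import torch.nn as nn

# Toy stand-ins so the snippet runs on its own; in the real training script
# these would be the generator, the discriminators, and their GAN losses.
generator = nn.Linear(16, 16)
discriminator = nn.Linear(16, 1)
optim_g = torch.optim.AdamW(generator.parameters(), lr=2e-4)
optim_d = torch.optim.AdamW(discriminator.parameters(), lr=2e-4)

x = torch.randn(8, 16)

# Discriminator step, with gradient clipping right before optimizer.step().
loss_disc_all = discriminator(generator(x).detach()).pow(2).mean()
optim_d.zero_grad()
loss_disc_all.backward()
torch.nn.utils.clip_grad_norm_(discriminator.parameters(), max_norm=1000.0)
optim_d.step()

# Generator step, clipped the same way.
loss_gen_all = (1.0 - discriminator(generator(x))).pow(2).mean()
optim_g.zero_grad()
loss_gen_all.backward()
torch.nn.utils.clip_grad_norm_(generator.parameters(), max_norm=1000.0)
optim_g.step()
```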

This vocoder is a combination of HiFTNet and RingFormer. It supports Ring Attention, Conformer blocks, Neural Source Filtering, and more. This repository is experimental; expect some bugs and some hardcoded parameters.

The default setting is 44.1 kHz with 128 mel bins. If you want to change it to 24 kHz, copy the config from HiFTNet (make sure to also copy its pitch extractor, both the model and the checkpoint), change 128 to 80 at line 384 of models.py, and then uncomment the "multiscale_subband_cfg" for the 24 kHz version.
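
After switching configs, it is easy to end up with a sampling rate and mel-bin count that disagree with the hardcoded value in models.py. The snippet below is a small sanity check; the JSON key names ("num_mels", "sampling_rate") are assumptions based on typical HiFi-GAN/HiFTNet-style configs and should be verified against config_v1.json.

```python
import json

# Hedged sanity check after switching between the 44.1 kHz and 24 kHz setups.
# Key names ("num_mels", "sampling_rate") are assumptions based on typical
# HiFi-GAN/HiFTNet-style configs; adjust them to match config_v1.json.
with open("config_v1.json") as f:
    cfg = json.load(f)

sr = cfg.get("sampling_rate", 44100)
n_mels = cfg.get("num_mels", 128)

# 44.1 kHz -> 128 mel bins (default here); 24 kHz -> 80 mel bins (HiFTNet-style).
expected = {44100: 128, 24000: 80}
if expected.get(sr) != n_mels:
    raise ValueError(
        f"sampling_rate={sr} but num_mels={n_mels}; also update the hardcoded "
        "mel dimension around line 384 of models.py and the multiscale_subband_cfg."
    )
print(f"OK: {sr} Hz with {n_mels} mel bins")
```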

Huge thanks to Johnathan Duering for his help; I implemented this mostly based on his STTS2 fork.

This is highly experimental; I have not conducted a full training run. I have only verified that the loss goes down and that the eval samples sound reasonable after ~10K steps of minimal training.


NOTE: I have uploaded two checkpoints so far. One is a 24 kHz HiFormer checkpoint, trained for roughly 117K steps on LibriTTS (360 + 100) and 40 hours of other English datasets.

The other checkpoint is HiFTNet at 44.1 kHz, trained on more than 1,100 hours of multilingual data that I sourced privately; it includes Arabic, Persian, Japanese, English, and Russian, and was trained for ~100K steps. Ideally, both should be trained for up to 1M steps, so I strongly recommend fine-tuning further on your own downstream task until I pre-train these for more steps.
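
If you fine-tune from one of these checkpoints, a reasonable first step is simply to inspect its state dict before wiring it into the training script. The snippet below is a sketch under assumptions: the file name is hypothetical, and the "generator"-style key layout is borrowed from HiFi-GAN/HiFTNet conventions, not verified against these uploads.

```python
import os
import torch

# Minimal sketch for warm-starting a fine-tune from a released checkpoint.
# The file name below is hypothetical, and the state-dict layout (a dict with
# a "generator" entry, as in HiFi-GAN/HiFTNet-style repos) is an assumption;
# inspect the keys before relying on them.
ckpt_path = "g_00100000"  # replace with the actual downloaded checkpoint file

if os.path.exists(ckpt_path):
    state = torch.load(ckpt_path, map_location="cpu")
    print(list(state.keys()) if isinstance(state, dict) else type(state))
    # Typical pattern once the layout is confirmed:
    #   generator = Generator(h)                      # model class from models.py
    #   generator.load_state_dict(state["generator"])
    # then resume training with train_single_gpu.py on your own data.
else:
    print("Download a checkpoint from the HF repo first.")
```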

Pre-requisites

  1. Python >= 3.10
  2. Clone this repository:
git clone https://github.com/Respaired/HiFormer_Vocoder
cd HiFormer_Vocoder/Ringformer
  3. Install Python requirements:
pip install -r requirements.txt

Training

CUDA_VISIBLE_DEVICES=0 python train_single_gpu.py --config config_v1.json --[args]

For F0 model training, please refer to yl4579/PitchExtractor. This repo includes an F0 model pre-trained on a mixture of multilingual data for the previously mentioned configuration. To quote the HiFTNet author: "Still, you may want to train your own F0 model for the best performance, particularly for noisy or non-speech data, as we found that F0 estimation accuracy is essential for the vocoder performance."

Inference

Please refer to the notebook inference.ipynb for details.
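
The notebook is the reference for actual inference. As a rough orientation only: the vocoder consumes a mel spectrogram (plus F0 from the pitch extractor) and produces a waveform. The sketch below shows the shape of a 128-bin, 44.1 kHz input computed with torchaudio; the STFT parameters are assumptions and must be replaced by the values in config_v1.json.

```python
import torch
import torchaudio

# Rough illustration of the input the default 44.1 kHz / 128-mel vocoder expects.
# n_fft / hop_length / win_length below are assumed for illustration; the real
# values (and any log-scaling or normalization of the mel) must come from
# config_v1.json and inference.ipynb.
sr = 44100
mel_fn = torchaudio.transforms.MelSpectrogram(
    sample_rate=sr,
    n_fft=2048,       # assumed
    win_length=2048,  # assumed
    hop_length=512,   # assumed
    n_mels=128,
)

wav = torch.randn(1, sr)  # one second of dummy audio, shape (channels, samples)
mel = mel_fn(wav)         # shape (1, 128, frames)
print(mel.shape)          # the generator maps this (plus F0) back to a waveform
```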
