A Hierarchical Speaker Representation Framework for One-shot Singing Voice Conversion
Abstract
Existing singing voice conversion (SVC) systems are typically conditioned on an embedding vector, extracted from either a speaker lookup table (LUT) or a speaker recognition network (SRN), to model speaker identity. However, singing conveys richer and more expressive speaker characteristics than conversational speech, and a single embedding vector may capture only averaged, coarse-grained speaker characteristics, which is insufficient for the SVC task. To this end, this work proposes a novel hierarchical speaker representation framework for SVC, which captures fine-grained speaker characteristics at multiple levels of granularity. Specifically, a U-net-like structure is adopted, consisting of an up-sampling stream and a down-sampling stream. The up-sampling stream transforms linguistic features into audio samples, while the down-sampling stream operates in the reverse direction. The temporal statistics within each down-sampling block are expected to represent speaker characteristics at a particular granularity; they are injected into the corresponding up-sampling blocks to enhance speaker modeling. Experimental results verify that the proposed method outperforms both the LUT-based and SRN-based SVC systems. Moreover, the proposed system supports one-shot SVC with only a few seconds of reference audio.
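The abstract only outlines the architecture, so the following is a minimal PyTorch sketch of the core idea rather than the paper's implementation: each down-sampling block exposes its temporal mean and standard deviation as a per-scale speaker representation, and the matching up-sampling block consumes those statistics. The block definitions, channel sizes, and the FiLM-style (scale-and-shift) conditioning are all illustrative assumptions; the names `DownBlock`, `UpBlock`, and `HierarchicalSVC` are hypothetical.

```python
import torch
import torch.nn as nn


class DownBlock(nn.Module):
    """Down-sampling block: halves the time axis and exposes
    temporal statistics (mean, std) as per-scale speaker features."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.GELU(),
        )

    def forward(self, x):
        h = self.conv(x)                        # (B, C, T/2)
        mu = h.mean(dim=-1)                     # (B, C) temporal mean
        sigma = h.std(dim=-1)                   # (B, C) temporal std
        stats = torch.cat([mu, sigma], dim=-1)  # per-block speaker stats
        return h, stats


class UpBlock(nn.Module):
    """Up-sampling block conditioned on the matching down-block stats
    via a FiLM-style affine modulation (an assumed conditioning scheme)."""

    def __init__(self, in_ch, out_ch, stats_dim):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose1d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.GELU(),
        )
        self.film = nn.Linear(stats_dim, 2 * out_ch)

    def forward(self, x, stats):
        h = self.deconv(x)  # (B, C, 2T)
        gamma, beta = self.film(stats).chunk(2, dim=-1)
        return gamma.unsqueeze(-1) * h + beta.unsqueeze(-1)


class HierarchicalSVC(nn.Module):
    """Toy U-net: the down stream digests reference audio features and the
    up stream renders output from linguistic features, modulated at each
    scale by the corresponding speaker statistics."""

    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        chs = list(channels)
        self.down = nn.ModuleList(
            DownBlock(a, b) for a, b in zip(chs[:-1], chs[1:]))
        self.up = nn.ModuleList(
            UpBlock(b, a, 2 * b) for a, b in zip(chs[:-1], chs[1:]))

    def forward(self, linguistic, reference):
        # Down-sampling stream: collect multi-scale speaker statistics.
        stats = []
        h = reference
        for blk in self.down:
            h, s = blk(h)
            stats.append(s)
        # Up-sampling stream: consume the statistics coarse-to-fine.
        x = linguistic  # enters at the coarsest resolution
        for blk, s in zip(reversed(list(self.up)), reversed(stats)):
            x = blk(x, s)
        return x


# Quick shape check with dummy tensors (all sizes are illustrative).
model = HierarchicalSVC()
ling = torch.randn(2, 256, 25)   # linguistic features at the coarsest scale
ref = torch.randn(2, 64, 100)    # reference features at the finest scale
print(model(ling, ref).shape)    # torch.Size([2, 64, 100])
```

The point of the multi-scale statistics is that shallow blocks see short windows (capturing fine, local traits such as vocal texture) while deep blocks summarize long spans (capturing coarse, global traits), in contrast to a single utterance-level embedding.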
Compared Systems
- LUT SVC: the speaker lookup table (LUT) based SVC system
- ECAPA-TDNN SVC: the speaker embedding based SVC system, with ECAPA-TDNN as the embedding extractor (a sketch of this extraction step follows the list)
- U-net SVC: the proposed system
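The ECAPA-TDNN baseline conditions conversion on a single utterance-level embedding. As a rough illustration of that extraction step, the snippet below uses an off-the-shelf pretrained ECAPA-TDNN from SpeechBrain; the checkpoint name and the file path are assumptions for demonstration, not details from the paper.

```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier

# Pretrained ECAPA-TDNN speaker encoder (VoxCeleb weights) — an
# off-the-shelf stand-in; the paper's exact checkpoint is not given here.
encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb")

# "reference.wav" is a hypothetical few-second reference clip.
signal, sr = torchaudio.load("reference.wav")
signal = torchaudio.functional.resample(signal, sr, 16000)  # model expects 16 kHz
signal = signal.mean(dim=0, keepdim=True)                   # downmix to mono

# One fixed-size utterance-level embedding per clip: (batch, 1, 192).
embedding = encoder.encode_batch(signal)
print(embedding.shape)
```

Whatever the reference length, this baseline compresses it to one 192-dimensional vector, which is exactly the averaging the proposed hierarchical representation is designed to avoid.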
Audio Samples
In-set Evaluation (reference speakers seen during training)
Source sample from VKOW (NUS-48E)
| References (NUS-48E) | LUT SVC | ECAPA-TDNN SVC | U-net SVC (proposed) |
| --- | --- | --- | --- |
Out-set Evaluation (unseen reference speakers, one-shot)
Source sample from VKOW (NUS-48E)
| References (NHSS) | ECAPA-TDNN SVC | U-net SVC (proposed) |
| --- | --- | --- |