Cross-Lingual Neural Network Speech Synthesis Based on Multiple Embeddings

Abstract
The paper presents a novel architecture and method for speech synthesis in multiple languages, in the voices of multiple speakers, and in multiple speaking styles, even when speech from a particular speaker in the target language was not present in the training data. The method is based on applying neural network embeddings not only to combinations of speaker and style IDs but also to phones in particular phonetic contexts, without any prior linguistic knowledge of their phonetic properties. This enables the network not only to capture similarities and differences between speakers and speaking styles efficiently, but also to establish appropriate relationships between phones belonging to different languages, and ultimately to produce synthetic speech in the voice of a given speaker in a language that he or she has never spoken. The validity of the proposed approach has been confirmed through experiments with models trained on speech corpora of American English and Mexican Spanish. It has also been shown that the proposed approach supports the use of neural vocoders, i.e., that they are able to produce synthesized speech of good quality even in languages they were not trained on.
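To make the multiple-embedding idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: trainable lookup tables map phone-in-context IDs and (speaker, style) ID pairs to dense vectors that jointly condition a shared acoustic model. All names, dimensions, and the toy GRU decoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiEmbeddingTTS(nn.Module):
    def __init__(self, n_phones=512, n_speaker_styles=32,
                 phone_dim=128, spk_dim=32, hidden_dim=256, n_acoustic=80):
        super().__init__()
        # One shared table for the phones of all languages: cross-lingual
        # relationships between phones are learned from the data alone,
        # with no prior phonetic feature annotation.
        self.phone_emb = nn.Embedding(n_phones, phone_dim)
        # One entry per (speaker, style) combination.
        self.spk_style_emb = nn.Embedding(n_speaker_styles, spk_dim)
        # Toy stand-in for the acoustic model that maps the concatenated
        # embeddings to acoustic features (e.g., 80-band mel frames).
        self.decoder = nn.GRU(phone_dim + spk_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, n_acoustic)

    def forward(self, phone_ids, spk_style_id):
        # phone_ids: (batch, seq_len); spk_style_id: (batch,)
        phones = self.phone_emb(phone_ids)                     # (B, T, phone_dim)
        spk = self.spk_style_emb(spk_style_id)                 # (B, spk_dim)
        spk = spk.unsqueeze(1).expand(-1, phones.size(1), -1)  # broadcast over time
        out, _ = self.decoder(torch.cat([phones, spk], dim=-1))
        return self.proj(out)                                  # (B, T, n_acoustic)

# Cross-lingual synthesis amounts to pairing a speaker/style ID with a phone
# sequence of a language that speaker never recorded (IDs here are random).
model = MultiEmbeddingTTS()
phone_ids = torch.randint(0, 512, (1, 20))  # e.g., a Spanish phone sequence
spk_style_id = torch.tensor([3])            # e.g., an English-only speaker
mel = model(phone_ids, spk_style_id)        # (1, 20, 80)
```

Because both tables feed the same decoder, the sketch reflects the abstract's key property: speaker and phone representations are decoupled, so any learned voice can in principle be combined with any language's phone inventory.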
Year of Publication
2021
Journal
International Journal of Interactive Multimedia and Artificial Intelligence
Volume
7
Issue
Regular Issue
Number
2
Pages
110-120
Date Published
12/2021
ISSN Number
1989-1660