DOI: 10.1145/3422839.3423063

Neural Style Transfer Based Voice Mimicking for Personalized Audio Stories

Published: 12 October 2020

Abstract

This paper demonstrates CNN-based neural style transfer on audio to make storytelling a personalized experience: users record a few sentences, and the system uses them to mimic their voice. User recordings are converted to spectrograms, whose style is transferred to the spectrogram of a base voice narrating the story, analogous to neural style transfer on images. The approach stands out because it needs only a small dataset and therefore takes less time to train. The project is intended for children who prefer digital interaction and are increasingly leaving the storytelling culture behind, and for working parents who cannot spend enough time with their children. By using a parent's initial recording to narrate a given story, it serves as a bridge between storytelling and screen time: the implicit ethical themes of the stories hold children's interest while connecting them to their loved ones, ensuring an innocuous and meaningful learning experience.
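The pipeline the abstract describes rests on two building blocks: converting audio to a magnitude spectrogram, and comparing "style" via Gram (channel-correlation) matrices as in image style transfer. The sketch below illustrates both in plain NumPy on synthetic tones; it is a minimal illustration, not the paper's implementation — the actual system applies the Gram-matrix comparison to CNN feature maps of the spectrograms, and every function name here is hypothetical.

```python
import numpy as np

def stft_spectrogram(signal, n_fft=512, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    # Shape: (frequency bins, time frames) = (n_fft // 2 + 1, n_frames)
    return np.array(frames).T

def gram_matrix(features):
    """Gram matrix of per-channel correlations -- the style representation."""
    f = features.reshape(features.shape[0], -1)
    return f @ f.T / f.shape[1]

def style_loss(spec_a, spec_b):
    """Squared distance between two style representations."""
    ga, gb = gram_matrix(spec_a), gram_matrix(spec_b)
    return float(np.mean((ga - gb) ** 2))

# Two synthetic one-second "voices" at 16 kHz: a 220 Hz and a 440 Hz tone.
t = np.linspace(0, 1, 16000, endpoint=False)
voice_a = np.sin(2 * np.pi * 220 * t)
voice_b = np.sin(2 * np.pi * 440 * t)

spec_a = stft_spectrogram(voice_a)
spec_b = stft_spectrogram(voice_b)
print(spec_a.shape)                                       # (257, 122)
print(style_loss(spec_a, spec_a) < style_loss(spec_a, spec_b))  # True
```

In a full style-transfer loop, the base narration's spectrogram would be optimized by gradient descent so that its Gram representation approaches that of the user's recording while its content representation stays close to the original, after which the modified spectrogram is inverted back to audio.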



Published In

AI4TV '20: Proceedings of the 2nd International Workshop on AI for Smart TV Content Production, Access and Delivery
October 2020
50 pages
ISBN: 978-1-4503-8146-8
DOI: 10.1145/3422839

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. digital storytelling
  2. neural style transfer
  3. voice mimicking
  4. personalized storytelling

Qualifiers

  • Research-article

Conference

MM '20

