StarGAN-VC: Non-parallel many-to-many voice conversion using star generative adversarial networks. H Kameoka, T Kaneko, K Tanaka, N Hojo. 2018 IEEE Spoken Language Technology Workshop (SLT), 266-273, 2018. | 145 | 2018 |
Generative adversarial network-based postfilter for statistical parametric speech synthesis. T Kaneko, H Kameoka, N Hojo, Y Ijima, K Hiramatsu, K Kashino. 2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017. | 111 | 2017 |
CycleGAN-VC2: Improved CycleGAN-based non-parallel voice conversion. T Kaneko, H Kameoka, K Tanaka, N Hojo. ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019. | 72 | 2019 |
AttS2S-VC: Sequence-to-sequence voice conversion with attention and context preservation mechanisms. K Tanaka, H Kameoka, T Kaneko, N Hojo. ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019. | 49 | 2019 |
An Investigation of DNN-Based Speech Synthesis Using Speaker Codes. N Hojo, Y Ijima, H Mizuno. INTERSPEECH, 2278-2282, 2016. | 48 | 2016 |
ACVAE-VC: Non-parallel many-to-many voice conversion with auxiliary classifier variational autoencoder. H Kameoka, T Kaneko, K Tanaka, N Hojo. arXiv preprint arXiv:1808.05092, 2018. | 40 | 2018 |
StarGAN-VC2: Rethinking conditional methods for StarGAN-based voice conversion. T Kaneko, H Kameoka, K Tanaka, N Hojo. arXiv preprint arXiv:1907.12279, 2019. | 37 | 2019 |
DNN-based speech synthesis using speaker codes. N Hojo, Y Ijima, H Mizuno. IEICE Transactions on Information and Systems 101 (2), 462-472, 2018. | 35 | 2018 |
ACVAE-VC: Non-parallel voice conversion with auxiliary classifier variational autoencoder. H Kameoka, T Kaneko, K Tanaka, N Hojo. IEEE/ACM Transactions on Audio, Speech, and Language Processing 27 (9), 1432 …, 2019. | 24 | 2019 |
An investigation to transplant emotional expressions in DNN-based TTS synthesis. K Inoue, S Hara, M Abe, N Hojo, Y Ijima. 2017 Asia-Pacific Signal and Information Processing Association Annual …, 2017. | 24 | 2017 |
Synthetic-to-natural speech waveform conversion using cycle-consistent adversarial networks. K Tanaka, T Kaneko, N Hojo, H Kameoka. 2018 IEEE Spoken Language Technology Workshop (SLT), 632-639, 2018. | 22 | 2018 |
ConvS2S-VC: Fully convolutional sequence-to-sequence voice conversion. H Kameoka, K Tanaka, D Kwasny, T Kaneko, N Hojo. arXiv preprint arXiv:1811.01609, 2018. | 19 | 2018 |
Generative adversarial network-based approach to signal reconstruction from magnitude spectrogram. K Oyamada, H Kameoka, T Kaneko, K Tanaka, N Hojo, H Ando. 2018 26th European Signal Processing Conference (EUSIPCO), 2514-2518, 2018. | 16 | 2018 |
WaveCycleGAN2: Time-domain neural post-filter for speech waveform generation. K Tanaka, H Kameoka, T Kaneko, N Hojo. arXiv preprint arXiv:1904.02892, 2019. | 9 | 2019 |
WaveCycleGAN: Synthetic-to-natural speech waveform conversion using cycle-consistent adversarial networks. K Tanaka, T Kaneko, N Hojo, H Kameoka. arXiv preprint arXiv:1809.10288, 2018. | 5 | 2018 |
Prosody Aware Word-Level Encoder Based on BLSTM-RNNs for DNN-Based Speech Synthesis. Y Ijima, N Hojo, R Masumura, T Asami. INTERSPEECH, 764-768, 2017. | 5 | 2017 |
Many-to-many voice transformer network. H Kameoka, WC Huang, K Tanaka, T Kaneko, N Hojo, T Toda. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020. | 4 | 2020 |
Evaluating Intention Communication by TTS Using Explicit Definitions of Illocutionary Act Performance. N Hojo, N Miyazaki. Proc. Interspeech 2019, 1536-1540, 2019. | 3 | 2019 |