IEEE Circuits and Systems Magazine - Q4 2019 - 37
[35] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[36] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504-507, 2006.
[37] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proc. 25th Int. Conf. Machine Learning. ACM, 2008, pp. 1096-1103.
[38] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," J. Mach. Learn. Res., vol. 11, no. Dec, pp. 3371-3408, 2010.
[39] A. Makhzani and B. Frey, "K-sparse autoencoders," arXiv Preprint, arXiv:1312.5663, 2013.
[40] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv Preprint, arXiv:1312.6114, 2013.
[41] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, "Contractive auto-encoders: Explicit invariance during feature extraction," in Proc. 28th Int. Conf. Machine Learning. Omnipress, 2011, pp. 833-840.
[42] L. Deng and N. Jaitly, "Deep discriminative and generative models for speech pattern recognition," in Handbook of Pattern Recognition and Computer Vision. New York: World Scientific, 2016, pp. 27-52.
[43] I. Goodfellow et al., "Generative adversarial nets," in Proc. Advances in Neural Information Processing Systems, 2014, pp. 2672-2680.
[44] L. Deng and D. Yu, "Deep learning: Methods and applications," Foundations Trends Signal Process., vol. 7, no. 3-4, pp. 197-387, 2014.
[45] Z.-H. Ling, L. Deng, and D. Yu, "Modeling spectral envelopes using restricted Boltzmann machines and deep belief networks for statistical parametric speech synthesis," IEEE Trans. Audio, Speech, Lang. Process., vol. 21, no. 10, pp. 2129-2139, 2013.
[46] S. Kang, X. Qian, and H. Meng, "Multi-distribution deep belief network for speech synthesis," in Proc. 2013 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 8012-8016.
[47] H. Zen, A. Senior, and M. Schuster, "Statistical parametric speech synthesis using deep neural networks," in Proc. 2013 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 7962-7966.
[48] L.-J. Liu, L.-H. Chen, Z.-H. Ling, and L.-R. Dai, "Using bidirectional associative memories for joint spectral envelope modeling in voice conversion," in Proc. 2014 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 7884-7888.
[49] S. H. Mohammadi and A. Kain, "A voice conversion mapping function based on a stacked joint-autoencoder," in Proc. INTERSPEECH, 2016, pp. 1647-1651.
[50] J. Colonel, C. Curro, and S. Keene, "Improving neural net auto encoders for music synthesis," in Audio Engineering Society Convention 143. Audio Engineering Society, 2017.
[51] A. M. Sarroff and M. A. Casey, "Musical audio synthesis using autoencoding neural nets," in Proc. ICMC, 2014.
[52] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, vol. 1. Cambridge, MA: MIT Press, 2016.
[53] J. Gu et al., "Recent advances in convolutional neural networks," Pattern Recog., vol. 77, pp. 354-377, 2018.
[54] M. Lin, Q. Chen, and S. Yan, "Network in network," arXiv Preprint, arXiv:1312.4400, 2013.
[55] Z. C. Lipton, J. Berkowitz, and C. Elkan, "A critical review of recurrent neural networks for sequence learning," arXiv Preprint, arXiv:1506.00019, 2015.
[56] X. Glorot, A. Bordes, and Y. Bengio, "Deep sparse rectifier neural networks," in Proc. 14th Int. Conf. Artificial Intelligence and Statistics, 2011, pp. 315-323.
[57] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proc. 27th Int. Conf. Machine Learning (ICML-10), 2010, pp. 807-814.
[58] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735-1780, 1997.
[59] F. A. Gers, J. Schmidhuber, and F. Cummins, "Learning to forget: Continual prediction with LSTM," in Proc. 9th Int. Conf. Artificial Neural Networks (ICANN), 1999, pp. 850-855.
[60] K. Cho et al., "Learning phrase representations using RNN encoder-decoder for statistical machine translation," arXiv Preprint, arXiv:1406.1078, 2014.
[61] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv Preprint, arXiv:1412.3555, 2014.
[62] R. Dey and F. M. Salem, "Gate-variants of gated recurrent unit (GRU) neural networks," in Proc. 2017 IEEE 60th Int. Midwest Symp. Circuits and Systems (MWSCAS), pp. 1597-1600.
[63] Z.-H. Ling et al., "Deep learning for acoustic modeling in parametric speech generation: A systematic review of existing techniques and future trends," IEEE Signal Process. Mag., vol. 32, no. 3, pp. 35-52, 2015.
[64] H. Zen and A. Senior, "Deep mixture density networks for acoustic modeling in statistical parametric speech synthesis," in Proc. 2014 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 3844-3848.
[65] J. Lorenzo-Trueba, G. E. Henter, S. Takaki, J. Yamagishi, Y. Morino, and Y. Ochiai, "Investigating different representations for modeling and controlling multiple emotions in DNN-based speech synthesis," Speech Commun., vol. 99, pp. 135-143, 2018.
[66] J. Chorowski, R. J. Weiss, R. A. Saurous, and S. Bengio, "On using backpropagation for speech texture generation and voice conversion," in Proc. 2018 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Apr. 2018, pp. 2256-2260.
[67] Z. Wu and S. King, "Investigating gated recurrent networks for speech synthesis," in Proc. 2016 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Mar. 2016, pp. 5140-5144.
[68] B. Li and H. Zen, "Multi-language multi-speaker acoustic modeling for LSTM-RNN based statistical parametric speech synthesis," in Proc. INTERSPEECH, 2016, pp. 2468-2472.
[69] S. Mehri et al., "SampleRNN: An unconditional end-to-end neural audio generation model," arXiv Preprint, arXiv:1612.07837, 2016.
[70] Y. Fan, Y. Qian, F.-L. Xie, and F. K. Soong, "TTS synthesis with bidirectional LSTM based recurrent neural networks," in Proc. 15th Annu. Conf. Int. Speech Communication Association, 2014.
[71] L. Sun, S. Kang, K. Li, and H. Meng, "Voice conversion using deep bidirectional long short-term memory based recurrent neural networks," in Proc. 2015 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 4869-4873.
[72] H. Zhu et al., "XiaoIce Band: A melody and arrangement generation framework for pop music," in Proc. 24th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining. ACM, 2018, pp. 2837-2846.
[73] D. D. Johnson, "Generating polyphonic music using tied parallel networks," in Proc. Int. Conf. Evolutionary and Biologically Inspired Music and Art. Springer-Verlag, 2017, pp. 128-143.
[74] G. Brunner, Y. Wang, R. Wattenhofer, and J. Wiesendanger, "JamBot: Music theory aware chord based generation of polyphonic music with LSTMs," in Proc. 2017 IEEE 29th Int. Conf. Tools with Artificial Intelligence (ICTAI), pp. 519-526.
[75] M. Blaauw and J. Bonada, "A singing synthesizer based on PixelCNN," in María de Maeztu Seminar on Music Knowledge Extraction Using Machine Learning (collocated with NIPS), 2016. Accessed on: Oct. 1, 2017. [Online]. Available: http://www.dtic.upf.edu/~mblaauw/MdM_NIPS_seminar/
[76] A. van den Oord et al., "Conditional image generation with PixelCNN decoders," in Proc. Advances in Neural Information Processing Systems, 2016, pp. 4790-4798.
[77] M. Nishimura, K. Hashimoto, K. Oura, Y. Nankaku, and K. Tokuda, "Singing voice synthesis based on deep neural networks," in Proc. INTERSPEECH, 2016, pp. 2478-2482.
[78] E. Gómez, M. Blaauw, J. Bonada, P. Chandna, and H. Cuesta, "Deep learning for singing processing: Achievements, challenges and impact on singers and listeners," arXiv Preprint, arXiv:1807.03046, 2018.
[79] D. Makris, M. Kaliakatsos-Papakostas, I. Karydis, and K. L. Kermanidis, "Combining LSTM and feed forward neural networks for conditional rhythm composition," in Proc. Int. Conf. Engineering Applications of Neural Networks. Springer-Verlag, 2017, pp. 570-582.
[80] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798-1828, 2013.
[81] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proc. 2012 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 4153-4156.
[82] K. Janod et al., "Denoised bottleneck features from deep autoencoders for telephone conversation analysis," IEEE/ACM Trans. Audio, Speech Lang. Process. (TASLP), vol. 25, no. 9, pp. 1809-1820, 2017.
[83] S. Takaki and J. Yamagishi, "A deep auto-encoder based low-dimensional feature extraction from FFT spectral envelopes for statistical parametric speech synthesis," in Proc. 2016 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 5535-5539.