IEEE Circuits and Systems Magazine - Q4 2019 - 38

[84] Y. Pu et al., "Variational autoencoder for deep learning of images,
labels and captions," in Proc. Advances in Neural Information Processing
Systems, 2016, pp. 2352-2360.
[85] J. Walker, C. Doersch, A. Gupta, and M. Hebert, "An uncertain future:
Forecasting from static images using variational autoencoders," in Proc.
European Conf. Computer Vision. Springer-Verlag, 2016, pp. 835-851.
[86] K. Sohn, H. Lee, and X. Yan, "Learning structured output representation using deep conditional generative models," in Proc. Advances in
Neural Information Processing Systems, 2015, pp. 3483-3491.
[87] G. E. Hinton, "A practical guide to training restricted Boltzmann
machines," in Neural Networks: Tricks of the Trade. New York: Springer-Verlag, 2012, pp. 599-619.
[88] A. van den Oord et al., "WaveNet: A generative model for raw audio," in Proc. 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, Sept.
13-15, 2016, p. 125.
[89] A. van den Oord et al., "Parallel WaveNet: Fast high-fidelity speech synthesis," arXiv Preprint, arXiv:1711.10433, 2017.
[90] S. Ö. Arık et al., "Deep voice: Real-time neural text-to-speech," in
Proc. 34th Int. Conf. Machine Learning, Machine Learning Research, vol.
70. D. Precup and Y. W. Teh, Eds. Sydney, Australia: PMLR, International
Convention Centre, Aug. 6-11, 2017, pp. 195-204.
[91] A. Gibiansky et al., "Deep voice 2: Multi-speaker neural text-to-speech," in Advances in Neural Information Processing Systems 30, I.
Guyon et al., Eds. Curran Associates, 2017, pp. 2962-2970.
[92] W. Ping et al., "Deep voice 3: 2000-speaker neural text-to-speech,"
arXiv Preprint, arXiv:1710.07654, 2017.
[93] Y. Wang et al., "Tacotron: Towards end-to-end speech synthesis," in
Proc. INTERSPEECH, 2017, pp. 4006-4010.
[94] J. Shen et al., "Natural TTS synthesis by conditioning WaveNet on
Mel spectrogram predictions," in Proc. 2018 IEEE Int. Conf. Acoustics,
Speech and Signal Processing (ICASSP), Apr. 2018, pp. 4779-4783.
[95] H. Choi, J. Kim, J. Park, J. Kim, and M. Hahn, "Low-dimensional representation of spectral envelope using deep auto-encoder for speech
synthesis," in Proc. 2018 2nd Int. Conf. Mechatronics Systems and Control
Engineering. New York: ACM, 2018, pp. 107-111.
[96] V. Wan, Y. Agiomyrgiannakis, H. Silen, and J. Vít, "Google's next-generation real-time unit-selection synthesizer using sequence-to-sequence
LSTM-based autoencoders," in Proc. Interspeech 2017, pp. 1143-1147.
[97] Y.-J. Hu and Z.-H. Ling, "Extracting spectral features using deep
autoencoders with binary distributed hidden units for statistical parametric speech synthesis," IEEE/ACM Trans. Audio, Speech Lang. Process.
(TASLP), vol. 26, no. 4, pp. 713-724, 2018.
[98] K. Kobayashi, T. Hayashi, A. Tamamori, and T. Toda, "Statistical
voice conversion with WaveNet-based waveform generation," in Proc.
Interspeech 2017, pp. 1138-1142.
[99] R. Manzelli, V. Thakkar, A. Siahkamari, and B. Kulis, "An end to end
model for automatic music generation: Combining deep raw and symbolic audio networks," in Proc. Musical Metacreation Workshop at 9th Int.
Conf. Computational Creativity, Salamanca, Spain, 2018.
[100] R. Manzelli, V. Thakkar, A. Siahkamari, and B. Kulis, "Conditioning deep generative raw audio models for structured automatic music,"
arXiv Preprint, arXiv:1806.09905, 2018.
[101] J. Engel et al., "Neural audio synthesis of musical notes with
WaveNet autoencoders," in Proc. 34th Int. Conf. Machine Learning, Machine Learning Research, vol. 70. D. Precup and Y. W. Teh, Eds. PMLR,
Aug. 6-11 2017, pp. 1068-1077.
[102] S. Dieleman, A. van den Oord, and K. Simonyan, "The challenge of realistic music generation: Modelling raw audio at scale," arXiv Preprint,
arXiv:1806.10474, 2018.
[103] M. Blaauw and J. Bonada, "A neural parametric singing synthesizer modeling timbre and expression from natural songs," Appl. Sci.,
vol. 7, no. 12, p. 1313, 2017.
[104] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and
X. Chen, "Improved techniques for training GANs," in Proc. Advances in
Neural Information Processing Systems, 2016, pp. 2234-2242.
[105] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv Preprint, arXiv:1511.06434, 2015.
[106] A. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy, and J. Yosinski,
"Plug & play generative networks: Conditional iterative generation of
images in latent space," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2017.
[107] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee,
"Generative adversarial text to image synthesis," in Proc. 33rd Int.
Conf. Machine Learning, Machine Learning Research, vol. 48. M. F. Balcan and K. Q. Weinberger, Eds. New York: PMLR, June 20-22, 2016, pp.
1060-1069.
[108] A. Odena, C. Olah, and J. Shlens, "Conditional image synthesis
with auxiliary classifier GANs," in Proc. 34th Int. Conf. Machine Learning, Machine Learning Research, vol. 70. D. Precup and Y. W. Teh, Eds.
Sydney, Australia: PMLR, International Convention Centre, Aug. 6-11
2017, pp. 2642-2651.
[109] W. Cai, A. Doshi, and R. Valle, "Attacking speaker recognition
with deep generative models," arXiv Preprint, arXiv:1801.02384,
2018.
[110] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), July 2017.
[111] M.-Y. Liu and O. Tuzel, "Coupled generative adversarial networks,"
in Proc. Advances in Neural Information Processing Systems, 2016, pp.
469-477.
[112] P. Luc, C. Couprie, S. Chintala, and J. Verbeek, "Semantic segmentation using adversarial networks," in Proc. NIPS Workshop Adversarial
Training, 2016.
[113] W. Zhu, X. Xiang, T. D. Tran, and X. Xie, "Adversarial deep structural networks for mammographic mass segmentation," arXiv Preprint,
arXiv:1612.05970, 2016.
[114] X. Wang, A. Shrivastava, and A. Gupta, "A-fast-RCNN: Hard positive generation via adversary for object detection," in Proc. IEEE Conf.
Computer Vision and Pattern Recognition (CVPR), 2017.
[115] J. Li, X. Liang, Y. Wei, T. Xu, J. Feng, and S. Yan, "Perceptual generative adversarial networks for small object detection," in Proc. IEEE Conf.
Computer Vision and Pattern Recognition (CVPR), 2017.
[116] M. Arjovsky and L. Bottou, "Towards principled methods for training generative adversarial networks," arXiv Preprint, arXiv:1701.04862,
2017.
[117] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein, "Unrolled generative adversarial networks," arXiv Preprint, arXiv:1611.02163, 2016.
[118] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein generative
adversarial networks," in Proc. 34th Int. Conf. Machine Learning, Machine Learning Research, vol. 70. D. Precup and Y. W. Teh, Eds. Sydney,
Australia: PMLR, International Convention Centre, Aug. 6-11, 2017, pp.
214-223.
[119] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, "Improved training of Wasserstein GANs," in Proc. Advances in
Neural Information Processing Systems, 2017, pp. 5767-5777.
[120] A. Odena, "Semi-supervised learning with generative adversarial
networks," arXiv Preprint, arXiv:1606.01583, 2016.
[121] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune, "Synthesizing the preferred inputs for neurons in neural networks via deep
generator networks," in Proc. Advances in Neural Information Processing
Systems, 2016, pp. 3387-3395.
[122] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proc.
IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2017.
[123] C. Donahue, J. McAuley, and M. Puckette, "Synthesizing audio with
generative adversarial networks," arXiv Preprint, arXiv:1802.04208,
2018.
[124] J. Lorenzo-Trueba, F. Fang, X. Wang, I. Echizen, J. Yamagishi, and
T. Kinnunen, "Can we steal your vocal identity from the internet? Initial
investigation of cloning Obama's voice using GAN, WaveNet and low-quality found data," in Proc. Odyssey 2018: The Speaker and Language
Recognition Workshop, pp. 240-247.
[125] L.-C. Yang, S.-Y. Chou, and Y.-H. Yang, "MidiNet: A convolutional generative adversarial network for symbolic-domain music generation," in
Proc. 18th Int. Society for Music Information Retrieval Conf., ISMIR 2017,
Suzhou, China, Oct. 23-27, 2017, pp. 324-331.
[126] H.-W. Dong, W.-Y. Hsiao, L.-C. Yang, and Y.-H. Yang, "MuseGAN:
Multi-track sequential generative adversarial networks for symbolic
music generation and accompaniment," in Proc. 32nd AAAI Conf. Artificial Intelligence (AAAI), 2018. [Online]. Available:
https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17286
[127] S. Mun, S. Park, D. K. Han, and H. Ko, "Generative adversarial network based acoustic scene training set augmentation and selection using SVM hyper-plane," in Proc. DCASE, 2017, pp. 93-97.
[128] S. Sabour, N. Frosst, and G. E. Hinton, "Dynamic routing between
capsules," in Proc. Advances in Neural Information Processing Systems,
2017, pp. 3856-3866.
