
[9] S. Bianco et al., "Benchmark analysis of representative deep neural network architectures," IEEE Access, vol. 6, pp. 64270-64277, 2018.
[10] Y. Guo et al., "Deep learning for 3D point clouds: A survey," IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 12, pp. 4338-4364, Dec. 2021.
[11] G. Menghani, "Efficient deep learning: A survey on making deep learning models smaller, faster, and better," 2021, arXiv:2106.08962.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2012, pp. 1097-1105.
[13] N. P. Jouppi et al., "In-datacenter performance analysis of a tensor processing unit," in Proc. Int. Symp. Comput. Archit. (ISCA), 2017, pp. 1-12.
[14] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. Int. Conf. Learn. Represent. (ICLR), 2015, pp. 1-14.
[15] A. G. Howard et al., "MobileNets: Efficient convolutional neural networks for mobile vision applications," 2017, arXiv:1704.04861.
[16] R. Girshick et al., "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proc. Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2014, pp. 580-587.
[17] J. Redmon et al., "You only look once: Unified, real-time object detection," in Proc. Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 779-788.
[18] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2015, pp. 3431-3440.
[19] R. Krashinsky et al. Nvidia Ampere Architecture In-Depth. Accessed: Dec. 10, 2022. [Online]. Available: https://developer.nvidia.com/blog/nvidia-ampere-architecture-in-depth/
[20] Get Outstanding Computational Performance Without a Specialized Accelerator. Accessed: Dec. 10, 2022. [Online]. Available: https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-solution-brief.html
[21] V. Sze et al., "Efficient processing of deep neural networks: A tutorial and survey," Proc. IEEE, vol. 105, no. 12, pp. 2295-2329, Dec. 2017.
[22] Z. Du et al., "ShiDianNao: Shifting vision processing closer to the sensor," in Proc. Int. Symp. Comput. Archit. (ISCA), Jun. 2015, pp. 92-104.
[23] C. Deng et al., "TIE: Energy-efficient tensor train-based inference engine for deep neural network," in Proc. Int. Symp. Comput. Archit. (ISCA), Jun. 2019, pp. 264-277.
[24] S. Han et al., "Learning both weights and connections for efficient neural network," in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2015, pp. 1135-1143.
[25] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," in Proc. Int. Conf. Learn. Represent. (ICLR), 2016, pp. 1-14.
[26] T. Zhang et al., "A systematic DNN weight pruning framework using alternating direction method of multipliers," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 184-199.
[27] S. Anwar, K. Hwang, and W. Sung, "Structured pruning of deep convolutional neural networks," ACM J. Emerg. Technol. Comput. Syst., vol. 13, no. 3, pp. 1-18, 2017.
[28] S. Dave et al., "Hardware acceleration of sparse and irregular tensor computations of ML models: A survey and insights," Proc. IEEE, vol. 109, no. 10, pp. 1706-1752, Oct. 2021.
[29] L. Deng et al., "Model compression and hardware acceleration for neural networks: A comprehensive survey," Proc. IEEE, vol. 108, no. 4, pp. 485-532, Apr. 2020.
[30] J.-F. Zhang et al., "SNAP: A 1.67-21.55 TOPS/W sparse neural acceleration processor for unstructured sparse deep neural network inference in 16 nm CMOS," in Proc. Symp. VLSI Circuits (VLSI), Jun. 2019, pp. 306-307.
[31] J.-F. Zhang et al., "SNAP: An efficient sparse neural acceleration processor for unstructured sparse deep neural network inference," IEEE J. Solid-State Circuits, vol. 56, no. 2, pp. 636-647, Feb. 2021.
[32] Y.-H. Chen et al., "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks," IEEE J. Solid-State Circuits, vol. 52, no. 1, pp. 127-138, Jan. 2017.
[33] S. Han et al., "EIE: Efficient inference engine on compressed deep neural network," in Proc. Int. Symp. Comput. Archit. (ISCA), Jun. 2016, pp. 243-254.
[34] A. Parashar et al., "SCNN: An accelerator for compressed-sparse convolutional neural networks," in Proc. Int. Symp. Comput. Archit. (ISCA), Jun. 2017, pp. 27-40.
[35] Z. Yuan et al., "STICKER: An energy-efficient multi-sparsity compatible accelerator for convolutional neural networks in 65-nm CMOS," IEEE J. Solid-State Circuits, vol. 55, no. 2, pp. 465-477, Feb. 2020.
[36] Y.-H. Chen et al., "Eyeriss v2: A flexible accelerator for emerging deep neural networks on mobile devices," IEEE J. Emerg. Sel. Topics Circuits Syst., vol. 9, no. 2, pp. 292-308, Jun. 2019.
[37] J. Albericio et al., "Cnvlutin: Ineffectual-neuron-free deep neural network computing," in Proc. Int. Symp. Comput. Archit. (ISCA), Jun. 2016, pp. 1-13.
[38] S. Zhang et al., "Cambricon-X: An accelerator for sparse neural networks," in Proc. Int. Symp. Microarchitecture (MICRO), Oct. 2016, pp. 1-12.
[39] Y. Chen et al., "DaDianNao: A machine-learning supercomputer," in Proc. Int. Symp. Microarchitecture (MICRO), Dec. 2014, pp. 609-622.
[40] W. Wen et al., "Learning structured sparsity in deep neural networks," in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2016, pp. 1-9.
[41] Z.-G. Liu, P. N. Whatmough, and M. Mattina, "Systolic tensor array: An efficient structured-sparse GEMM accelerator for mobile CNN inference," IEEE Comput. Archit. Lett., vol. 19, no. 1, pp. 34-37, Jan./Jun. 2020.
[42] J. Yu et al., "Scalpel: Customizing DNN pruning to the underlying hardware parallelism," in Proc. Int. Symp. Comput. Archit. (ISCA), Jun. 2017, pp. 548-560.
[43] S. Narang, E. Undersander, and G. Diamos, "Block-sparse recurrent neural networks," 2017, arXiv:1711.02782.
[44] Z. Liu et al., "Learning efficient convolutional networks through network slimming," in Proc. Int. Conf. Comput. Vis. (ICCV), Oct. 2017, pp. 2755-2763.
[45] H. Li et al., "Pruning filters for efficient ConvNets," 2016, arXiv:1608.08710.
[46] J. Albericio et al., "Bit-pragmatic deep neural network computing," in Proc. Int. Symp. Microarchitecture (MICRO), Oct. 2017, pp. 382-394.
[47] S. Han et al., "ESE: Efficient speech recognition engine with sparse LSTM on FPGA," in Proc. Int. Symp. Field Program. Gate Arrays (FPGA), Feb. 2017, pp. 75-84.
[48] H. Wang, Z. Zhang, and S. Han, "SpAtten: Efficient sparse attention architecture with cascade token and head pruning," in Proc. Int. Symp. High-Perform. Comput. Archit. (HPCA), Feb./Mar. 2021, pp. 97-110.
[49] Z. Qu et al., "DOTA: Detect and omit weak attentions for scalable Transformer acceleration," in Proc. Int. Conf. Architectural Support Program. Lang. Oper. Syst. (ASPLOS), Feb. 2022, pp. 14-26.
[50] N. P. Jouppi et al., "Ten lessons from three generations shaped Google's TPUv4i: Industrial product," in Proc. Int. Symp. Comput. Archit. (ISCA), Jun. 2021, pp. 1-14.
[51] Nvidia Deep Learning Accelerator (NVDLA). Accessed: Dec. 10, 2022. [Online]. Available: http://nvdla.org/
[52] B. Zimmer et al., "A 0.32-128 TOPS, scalable multi-chip-module-based deep neural network inference accelerator with ground-referenced signaling in 16 nm," IEEE J. Solid-State Circuits, vol. 55, no. 4, pp. 920-932, Apr. 2020.
[53] J. W. Poulton et al., "A 1.17-pJ/b, 25-Gb/s/pin ground-referenced single-ended serial link for off- and on-package communication using a process- and temperature-adaptive voltage regulator," IEEE J. Solid-State Circuits, vol. 54, no. 1, pp. 43-54, Jan. 2019.
[54] R. Venkatesan et al., "A 0.11 pJ/OP, 0.32-128 TOPS, scalable multi-chip-module-based deep neural network accelerator designed with a high-productivity VLSI methodology," in Proc. IEEE Hot Chips Symp. (HCS), Aug. 2019, pp. 1-24.
[55] S.-G. Cho et al., "PETRA: A 22 nm 6.97 TFLOPS/W AIB-enabled configurable matrix and convolution accelerator integrated with an Intel Stratix 10 FPGA," in Proc. Symp. VLSI Circuits (VLSI), Jun. 2021, pp. 1-2.
[56] R. Mahajan et al., "Embedded multi-die interconnect bridge (EMIB)-A high density, high bandwidth packaging interconnect," in Proc. IEEE Electron. Compon. Technol. Conf. (ECTC), May/Jun. 2016, pp. 557-565.
[57] D. Greenhill et al., "3.3 A 14 nm 1 GHz FPGA with 2.5D transceiver integration," in Proc. Int. Solid-State Circuits Conf. (ISSCC), Feb. 2017, pp. 54-55.
[58] C. Liu, J. Botimer, and Z. Zhang, "A 256 Gb/s/mm-shoreline AIB-compatible 16 nm FinFET CMOS chiplet for 2.5D integration with Stratix 10 FPGA on EMIB and tiling on silicon interposer," in Proc. IEEE Custom Integr. Circuits Conf. (CICC), Apr. 2021, pp. 1-2.