[13] R. Op den Akker and D. R. Traum, "A comparison of addressee detection methods for multiparty conversations," in Proc. 13th Semdial Workshop Semantics Pragmatics Dialogue, Stockholm, Sweden, 2009, pp. 99-106.
[14] H. J. Nock, G. Iyengar, and C. Neti, "Speaker localisation using audio-visual synchrony: An empirical study," in Proc. Int. Conf. Image Video Retrieval, Springer-Verlag, 2003, pp. 488-499.
[15] T. Stafylakis and G. Tzimiropoulos, "Combining residual networks with LSTMs for lipreading," in Proc. 18th Annu. Conf. Int. Speech Commun. Assoc., Stockholm, Sweden, F. Lacerda, Ed., Aug. 20-24, 2017, pp. 3652-3656.
[16] J. Roth et al., "AVA active speaker: An audio-visual dataset for active speaker detection," in Proc. ICASSP IEEE Int. Conf. Acoust., Speech Signal Process., 2020, pp. 4492-4496, doi: 10.1109/ICASSP40776.2020.9053900.
[17] D. B. Jayagopi et al., "The vernissage corpus: A conversational human-robot interaction dataset," in Proc. 8th ACM/IEEE Int. Conf. Hum.-Robot Interact. (HRI), 2013, pp. 149-150, doi: 10.1109/HRI.2013.6483545.
[18] P. Holthaus et al., "How to address smart homes with a social robot? A multimodal corpus of user interactions with an intelligent environment," in Proc. 10th Int. Conf. Lang. Resour. Eval. (LREC'16), 2016, pp. 3440-3446.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Adv. Neural Inf. Process. Syst., 2012, vol. 25, pp. 1-9.
[20] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Proc. Adv. Neural Inf. Process. Syst., 2015, vol. 28, pp. 1-9.
[21] K. S. Tai, R. Socher, and C. D. Manning, "Improved semantic representations from tree-structured long short-term memory networks," 2015, arXiv:1503.00075.
[22] G. E. Dahl, D. Yu, L. Deng, and A. Acero, "Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition," IEEE Trans. Audio, Speech, Lang. Process., vol. 20, no. 1, pp. 30-42, Jan. 2012, doi: 10.1109/TASL.2011.2134090.
[23] N. Jovanovic et al., "A corpus for studying addressing behaviour in multi-party dialogues," Lang. Resour. Eval., vol. 40, no. 1, pp. 5-23, 2006, doi: 10.1007/s10579-006-9006-4.
[24] T. Tsai, A. Stolcke, and M. Slaney, "Multimodal addressee detection in multiparty dialogue systems," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), 2015, pp. 2314-2318, doi: 10.1109/ICASSP.2015.7178384.
[25] D. Bohus and E. Horvitz, "Facilitating multiparty dialog with gaze, gesture, and speech," in Proc. Int. Conf. Multimodal Interfaces Workshop Mach. Learn. Multimodal Interact., New York, NY, USA: Association for Computing Machinery, 2010, pp. 1-8, doi: 10.1145/1891903.1891910.
[26] W. Kraaij, T. Hain, M. Lincoln, and W. Post, "The AMI meeting corpus," in Proc. 5th Int. Conf. Methods Techn. Behavioral Res., 2005, vol. 88, pp. 1-6.
[27] M. Koutsombogera and C. Vogel, "Modeling collaborative multimodal behavior in group dialogues: The MULTISIMO corpus," in Proc. 11th Int. Conf. Lang. Resour. Eval. (LREC), 2018, pp. 1-7.
[28] A. Recasens, A. Khosla, C. Vondrick, and A. Torralba, "Where are they looking?" in Proc. Adv. Neural Inf. Process. Syst., C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, Eds. Curran Associates, 2015, vol. 28, pp. 1-9.
[29] H. Op den Akker and R. Op den Akker, "Are you being addressed? Real-time addressee detection to support remote participants in hybrid meetings," in Proc. SIGDIAL Conf., 2009, pp. 21-28.
[30] N. Baba, H.-H. Huang, and Y. I. Nakano, "Identifying utterances addressed to an agent in multiparty human-agent conversations," in Proc. Int. Workshop Intell. Virtual Agents, Springer-Verlag, 2011, pp. 255-261, doi: 10.1007/978-3-642-23974-8_28.
[31] U. Malik, M. Barange, N. Ghannad, J. Saunier, and A. Pauchet, "A generic machine learning based approach for addressee detection in multiparty interaction," in Proc. 19th ACM Int. Conf. Intell. Virtual Agents, 2019, pp. 119-126, doi: 10.1145/3308532.3329462.
[32] U. Malik, M. Barange, J. Saunier, and A. Pauchet, "A novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms," J. Multimodal User Interfaces, vol. 15, no. 2, pp. 175-188, 2021, doi: 10.1007/s12193-020-00361-9.
[33] Y. Hu, J. S. Ren, J. Dai, C. Yuan, L. Xu, and W. Wang, "Deep multimodal speaker naming," in Proc. 23rd ACM Int. Conf. Multimedia, 2015, pp. 1107-1110, doi: 10.1145/2733373.2806293.
[34] J. Ren et al., "Look, listen and learn - A multimodal LSTM for speaker identification," in Proc. 30th AAAI Conf. Artif. Intell., 2016, vol. 30, no. 1, pp. 3581-3587, doi: 10.1609/aaai.v30i1.10471.
[35] O. Canévet, W. He, P. Motlicek, and J.-M. Odobez, "The MuMMER data set for robot perception in multi-party HRI scenarios," in Proc. 29th IEEE Int. Conf. Robot Hum. Interactive Commun. (RO-MAN), 2020, pp. 1294-1300, doi: 10.1109/RO-MAN47096.2020.9223340.
[36] I. H. Witten, E. Frank, L. E. Trigg, M. A. Hall, G. Holmes, and S. J. Cunningham, "Weka: Practical machine learning tools and techniques with Java implementations," in Proc. ICONIP/ANZIIS/ANNES Workshop Emerg. Knowl. Eng. Connectionist-Based Inf. Syst., 1999, pp. 192-196.
[37] K. H. Brodersen, C. S. Ong, K. E. Stephan, and J. M. Buhmann, "The balanced accuracy and its posterior distribution," in Proc. 20th Int. Conf. Pattern Recognit., 2010, pp. 3121-3124, doi: 10.1109/ICPR.2010.764.
[38] R. Tao, Z. Pan, R. K. Das, X. Qian, M. Z. Shou, and H. Li, "Is someone speaking? Exploring long-term temporal features for audio-visual active speaker detection," in Proc. 29th ACM Int. Conf. Multimedia, 2021, pp. 3927-3935, doi: 10.1145/3474085.3475587.
[39] T. Afouras, J. S. Chung, and A. Zisserman, "The conversation: Deep audio-visual speech enhancement," 2018, arXiv:1804.04121.
[40] Y.-H. Zhang, J. Xiao, S. Yang, and S. Shan, "Multi-task learning for audio-visual active speaker detection," in Proc. ActivityNet Large-Scale Activity Recognit. Challenge, 2019, pp. 1-4.
[41] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 7132-7141, doi: 10.1109/CVPR.2018.00745.
[42] J. S. Chung et al., "In defence of metric learning for speaker recognition," 2020, arXiv:2003.11982.
[43] J. S. Chung and A. Zisserman, "Out of time: Automated lip sync in the wild," in Proc. Asian Conf. Comput. Vis., Springer-Verlag, 2016, pp. 251-263.
[44] A. Vaswani et al., "Attention is all you need," in Proc. Adv. Neural Inf. Process. Syst., 2017, vol. 30, pp. 1-11.
[45] T.-Y. Lin, A. RoyChowdhury, and S. Maji, "Bilinear CNN models for fine-grained visual recognition," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2015, pp. 1449-1457, doi: 10.1109/ICCV.2015.170.
[46] M. Hosseini, M. Horton, H. Paneliya, U. Kallakuri, H. Homayoun, and T. Mohsenin, "On the complexity reduction of dense layers from O(N²) to O(N log N) with cyclic sparsely connected layers," in Proc. 56th Annu. Des. Autom. Conf., New York, NY, USA: Association for Computing Machinery, 2019, pp. 1-6, doi: 10.1145/3316781.3317873.
[47] Aldebaran United Robotics Group. Accessed: Jul. 20, 2022. [Online]. Available: http://
www.aldebaran-robotics.com/
[48] "Pepper." Aldebaran United Robotics Group. Accessed: Jul. 19, 2022. [Online]. Available: https://www.softbankrobotics.com/emea/en/pepper
[49] "DaVinci Resolve 18." Blackmagic Design. Accessed: Jul. 20, 2022. [Online]. Available: https://www.blackmagicdesign.com/products/davinciresolve