Signal Processing - May 2017 - 41
connecting points EL and FL is an example of a time-intensity
panning curve. Here, an ICTD of −0.5 ms combined with an
ICLD of −12 dB will result in a virtual source aligned with the
left loudspeaker. As the ICTD and ICLD are increased toward
zero, the virtual source shifts from the left direction to the
midline direction. Increasing the ICTD and ICLD further to
0.5 ms and 12 dB, respectively, shifts the virtual source to the
direction of the right loudspeaker.
Based on whether interchannel time and level differences
are obtained naturally while recording an acoustic scene or are
introduced artificially, stereophony can be divided into two
categories: recorded (true) stereophony and synthetic stereophony [27]. Recorded stereophony is constrained by the characteristics of the available physical microphones, primarily
in terms of their directivity patterns. Microphones with first-order directivity patterns are typically used because of their affordability and availability. These microphones have directivity patterns of the following types:
C_L(θ) = (1 − a_L) + a_L cos(θ − θ_L),  (3)
C_R(θ) = (1 − a_R) + a_R cos(θ − θ_R),  (4)
where C_L(θ) and C_R(θ) are the directivity patterns that represent the directional sensitivity of the left and the right microphones, respectively; θ is the angle defined counterclockwise from the acoustic axis of the corresponding microphone; and θ_L and θ_R are the rotation angles of the left and the right microphones, respectively. Designing stereophonic microphone pairs then requires optimizing the ICTD and ICLD by careful selection of 1) a_L and a_R, 2) θ_L and θ_R, and 3) the distance D between the two microphones.
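The first-order patterns of (3) and (4) can be evaluated directly. A minimal sketch in Python (the function name `directivity` is ours, not from the article):

```python
import numpy as np

def directivity(theta, a, theta_rot=0.0):
    """First-order microphone directivity: (1 - a) + a*cos(theta - theta_rot).

    a = 0   -> omnidirectional
    a = 0.5 -> cardioid
    a = 1   -> bidirectional (figure-of-eight)
    theta is measured from the microphone's acoustic axis (radians),
    and theta_rot is the capsule's rotation angle.
    """
    return (1.0 - a) + a * np.cos(theta - theta_rot)

# On-axis sensitivity is 1 for any first-order pattern:
print(directivity(0.0, 0.5))                   # -> 1.0
# A cardioid has a null directly behind it:
print(directivity(np.pi, 0.5))                 # -> 0.0
# A bidirectional microphone has nulls at +/-90 degrees:
print(round(directivity(np.pi / 2, 1.0), 10))  # -> 0.0
```

Sweeping `a` between 0 and 1 interpolates between the omnidirectional and bidirectional limits, passing through the cardioid at a = 0.5.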
One of the first stereophonic recording microphone pairs was developed by Alan Blumlein and consisted of two coincident bidirectional microphones (i.e., a_L = a_R = 1) positioned at right angles to each other. Many different microphone configurations have been devised since then. These can be categorized roughly into three groups: coincident, near coincident, and spaced [28]. Coincident pairs consist of two colocated (D = 0) directional microphones, so that the recorded left and right channel signals differ only in amplitude. Examples of
coincident microphone pairs are the Blumlein pair, the XY
stereo pair, and the M/S pair [28]. Spaced arrays, such as the
AB pair [28], typically use omnidirectional microphones
(a_L = a_R = 0), with a separation D that is many multiples
of the desired wavelength. This makes the ICTD the main
cue used to pan the sound source. Near-coincident recording techniques, on the other hand, use directional microphones separated by a small distance comparable to the size
of a human head and record both the ICTD and ICLD. Two
notable examples are the Nederlandse Omroep Stichting
(NOS) and Office de Radiodiffusion Télévision Française
(ORTF) pairs, which both use cardioid (a_L = a_R = 0.5) microphones and have separations of D_ORTF = 17 cm and D_NOS = 30 cm [28].
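For a near-coincident pair, both cues can be estimated from the geometry. The sketch below assumes a far-field (plane-wave) source, so the path difference between capsules spaced D apart is approximately D·sin(θ_s), and uses the first-order patterns of (3) and (4) for the level difference; `pair_cues` and its parameters are illustrative names, and the ±55° axis angles reflect the commonly quoted 110° ORTF capsule angle:

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in m/s (approximate, at room temperature)

def pair_cues(theta_s, D, a, theta_L, theta_R):
    """Far-field ICTD/ICLD estimate for a first-order microphone pair.

    Plane-wave approximation: the path difference between two capsules
    spaced D apart is D*sin(theta_s), giving ICTD ~ D*sin(theta_s)/c.
    The ICLD follows from each capsule's first-order directivity.
    All angles in radians; theta_s is the source azimuth.
    """
    ictd = D * np.sin(theta_s) / C_SOUND
    gain_L = (1 - a) + a * np.cos(theta_s - theta_L)
    gain_R = (1 - a) + a * np.cos(theta_s - theta_R)
    icld = 20 * np.log10(gain_L / gain_R)
    return ictd, icld

# ORTF-style pair: cardioids (a = 0.5), 17 cm apart, axes at +/-55 degrees.
theta = np.radians(30.0)  # source 30 degrees toward the left
ictd, icld = pair_cues(theta, 0.17, 0.5, np.radians(55), np.radians(-55))
```

For this example both cues come out positive (roughly a quarter-millisecond ICTD plus a few decibels of ICLD), illustrating how near-coincident pairs record time and level differences that reinforce each other.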
Synthetic stereophony has been predominantly based on
intensity panning, since it is thought to provide the most stable
virtual sound imaging. Indeed, the inclusion of ICTDs is sometimes considered to yield audible artifacts, such as tonal coloration due to comb filter effects. Another often-cited reason to
avoid using ICTDs is the difficulty of controlling the direction
of a virtual source by means of time delays. This view has been
recently challenged [29] and will be discussed in the "Perceptual Sound Field Reconstruction" section.
The general form of an intensity panning law relates the gains g_L and g_R of the left and right loudspeakers, respectively, to a function of the source direction θ_s and the stereophonic base angle θ_B between the loudspeakers. More specifically, a panning law has the form
(g_L(θ_s) − g_R(θ_s)) / (g_L(θ_s) + g_R(θ_s)) = f(θ_s) / f(θ_B).  (5)
The total power can be maintained via the constant power constraint g_L(θ_s)² + g_R(θ_s)² = 1. Two commonly used functions are f(θ) = sin(θ) and f(θ) = tan(θ), which give rise to the so-called sine panning law and tangent panning law, respectively.
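Equation (5) together with the constant power constraint determines the two gains uniquely. A minimal sketch for the tangent law, assuming the common convention that θ_B denotes the half-angle of the loudspeaker base and θ_s ∈ (−θ_B, θ_B); the function name `tangent_pan` is ours:

```python
import numpy as np

def tangent_pan(theta_s, theta_B):
    """Constant-power gains for the tangent panning law.

    Solves tan(theta_s)/tan(theta_B) = (gL - gR)/(gL + gR)
    together with gL**2 + gR**2 = 1. Angles in radians; theta_B is the
    half-angle of the stereophonic base, theta_s in (-theta_B, theta_B).
    """
    r = np.tan(theta_s) / np.tan(theta_B)
    k = (1.0 + r) / (1.0 - r)        # the gain ratio gL/gR
    gR = 1.0 / np.sqrt(1.0 + k * k)  # normalize for constant power
    gL = k * gR
    return gL, gR

# A centered source (theta_s = 0) receives equal gains of 1/sqrt(2),
# i.e., each loudspeaker plays at the -3-dB point:
gL, gR = tangent_pan(0.0, np.radians(30))
```

At θ_s = ±θ_B the ratio r reaches ±1 and one gain goes to zero, which is the "maximal level difference" behavior discussed next.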
The tangent panning law was derived based on perceptual
considerations independent of known psychoacoustic curves
[30]. In the context of Williams's psychoacoustic curves [see
Figure 1(b)], the tangent panning law operates along the vertical
axis, i.e., zero ICTD, and connects two points with ±∞ level
differences. Thus, as opposed to panning laws described by
Williams's curves, which specify the minimal level differences
needed to create virtual sources in loudspeaker directions, the
tangent law achieves the same effect by employing maximal
level differences.
Multichannel stereophony
An early work by Steinberg and Snow [31] in 1934 suggested
that a better auditory perspective is possible if at least three
independent microphones are used to capture a frontal sound
field and these signals are played back via three loudspeakers.
Due to hardware requirements and the technical difficulty of integrating a three-channel system into radio broadcasts, however, this finding was overshadowed by the success and widespread adoption of two-channel stereophony.
The advent of quadraphony and cinematic sound spurred
interest in multichannel systems. Traditionally, there are two
different types of multichannel audio formats: discrete and
matrix [32], [33]. In discrete multichannel audio, there is a one-to-one correspondence between channels and speakers. The
storage and transmission of multichannel audio then use the same number of channels. In matrix multichannel audio, the original channels are encoded into a smaller number of channels (e.g., two) for transmission or storage over common channels or media and then decoded back to the original channel multiplicity prior to playback. This requires appending auxiliary information to the encoded audio for use at the decoding stage. More recently, object-based formats have appeared, in which content and context are encoded separately.
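The downmix-and-recover idea behind matrix formats can be sketched with plain linear algebra. The 2×4 encoder below and its 0.7 surround coefficients are purely illustrative, not any specific commercial matrix format; because only two channels are transmitted, exact inversion is impossible, and the pseudo-inverse gives the least-squares decoder:

```python
import numpy as np

# Illustrative 4-to-2 matrix encoding: four channels (L, R, Ls, Rs) are
# folded into two transmission channels via an encoding matrix E, then
# approximately recovered with a decoding matrix D. A 2x4 encoder cannot
# be inverted exactly; the Moore-Penrose pseudo-inverse gives the
# least-squares decoder.
E = np.array([[1.0, 0.0, 0.7, 0.0],
              [0.0, 1.0, 0.0, 0.7]])
D = np.linalg.pinv(E)                 # 4x2 least-squares decoder

x = np.array([1.0, 0.5, 0.2, 0.0])    # one sample of the original 4 channels
t = E @ x                             # 2 transmitted channels
y = D @ t                             # decoded 4-channel approximation
```

Re-encoding the decoded signal reproduces the transmitted channels exactly (E @ y == t), but y itself only approximates x; real matrix systems mitigate this crosstalk with the auxiliary information mentioned above and with adaptive (steering) logic.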