Signal Processing - January 2017 - 53

number of pixels being reconstructed. Given these measurements, an image is recovered by searching for the image that
is sparsest in some transform basis (wavelets, DCT, or other)
while being consistent with the measurements.
In essence, CS provides a framework to sense signals with
far fewer measurements than their ambient dimensionality (i.e.,
Nyquist rate), which translates to practical benefits including
decreased sensor cost, bandwidth, and time of acquisition. These
benefits are most compelling for imaging modalities where sensing is expensive; examples include imaging in the nonvisible
spectrum (where sensors are costly), imaging at high spatial and
temporal resolutions (where the high bandwidth of sensed data
requires costly electronics), and medical imaging (where the time
of acquisition translates to costs or where existing equipment is
too slow to acquire certain dynamic events). In this context, architectures like the single-pixel camera (SPC) [27] provide a promising proof of concept that still images can be acquired using a
small number of coded measurements with inexpensive sensors.
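The recovery principle described above (find the sparsest signal consistent with the measurements) can be illustrated with a toy sketch. This is not the article's reconstruction algorithm; it is a minimal iterative soft-thresholding (ISTA) example for a signal that is, for simplicity, sparse directly in the canonical basis rather than a wavelet or DCT basis, with all dimensions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8          # ambient dimension, measurements, sparsity

# A k-sparse signal (sparse in the canonical basis for simplicity)
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # m << n compressive measurements

# ISTA: gradient step on ||y - A x||^2 followed by soft thresholding,
# which promotes sparsity while enforcing consistency with y
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
lam = 0.01                                     # sparsity weight (illustrative)
xh = np.zeros(n)
for _ in range(2000):
    g = xh - (A.T @ (A @ xh - y)) / L          # data-consistency gradient step
    xh = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(xh - x) / np.linalg.norm(x))  # small relative error
```

Despite using only 80 measurements for a 256-dimensional signal, the sparse signal is recovered to within a small relative error, which is the core promise of CS.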
There are numerous applications where it is desirable to
extend the CS imaging framework beyond still images to incorporate video. After all, motion is ubiquitous in the real world,
and capturing the dynamics of a scene requires us to go beyond
static images. A hidden benefit of video is that it offers tremendous opportunities for more dramatic undersampling (the ratio
of signal dimensionality to measurement dimensionality). That

is, we can exploit the rich temporal redundancies in a video to
reconstruct frames from far fewer measurements than is possible with still images. Yet the demands of video CS in terms
of the complexity of imaging architectures, signal models, and
reconstruction algorithms are significantly greater than those of
compressive still-frame imaging.
There are three major reasons that the design and implementation of CS video systems are significantly more difficult
than those of CS still-imaging systems. The first challenge is
the gap between compression and CS. State-of-the-art video
models rely on two powerful ideas: first, motion fields enable
the accurate prediction of image frames by propagating intensities across frames; second, motion fields are inherently more
compressible than the video itself. This observation has led
to today's state-of-the-art video compression algorithms (not
to be confused with CS of videos) that exploit motion information in one of many ways, including block-based motion
estimation (MPEG-1), per-pixel optical flow (H.265), and
wavelet lifting (LIMAT). Motion fields enable models that
can be tuned to the specific video that is being sensed/processed. This is a powerful premise that typically provides an
order of magnitude improvement in video compression over
image compression.
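The block-based motion estimation mentioned above (as used in MPEG-style codecs) can be sketched in a few lines. This is a schematic full-search block matcher with illustrative block and search sizes, not the implementation of any particular codec:

```python
import numpy as np

def block_match(prev, curr, bs=8, search=4):
    """Exhaustive block matching: for each bs x bs block of `curr`, find the
    displacement (within +/- search pixels) into `prev` that minimizes the
    sum of absolute differences (SAD)."""
    H, W = curr.shape
    flow = np.zeros((H // bs, W // bs, 2), dtype=int)
    for by in range(0, H - bs + 1, bs):
        for bx in range(0, W - bs + 1, bs):
            blk = curr[by:by + bs, bx:bx + bs]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if 0 <= y0 <= H - bs and 0 <= x0 <= W - bs:
                        sad = np.abs(prev[y0:y0 + bs, x0:x0 + bs] - blk).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            flow[by // bs, bx // bs] = best_dv
    return flow

# Synthetic example: a bright square translated by (2, 3) pixels
prev = np.zeros((32, 32)); prev[8:16, 8:16] = 1.0
curr = np.roll(np.roll(prev, 2, axis=0), 3, axis=1)
print(block_match(prev, curr)[1, 1])   # → [-2 -3]: the block's content came
                                       # from 2 rows up and 3 columns left
```

A codec then transmits these per-block motion vectors plus a (highly compressible) prediction residual, which is exactly why motion fields buy the order-of-magnitude gain over per-frame image compression.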
The use of motion fields for video CS raises an important
challenge. Unlike the standard video compression problem,

What Is the Nyquist Rate of a Video Signal?
Conventional videos, sampled at 24-60 frames/second
(fps), may, in fact, be highly undersampled in time:
objects in the scene can move multiple pixels between
adjacent frames. Some compressive sensing (CS) architectures, however, measure a video at a much higher temporal rate. For example, the single-pixel camera (SPC) may
take tens of thousands of serial measurements per second.
In such cases, the scene may change very little between
adjacent measurements. This raises some interesting questions: what is the Nyquist rate of a video signal, and how
does it compare to CS measurement rates?
One can gain insight into these questions by considering
the three-dimensional analog video signal that arrives at a
camera lens; both conventional and CS imaging systems
can be viewed as blurring this signal spatially (due to the
optics and the pixelated sensors) and sampling or measuring
it digitally. If a video consists of moving objects with sharp
edges, then the analog video will actually have infinite bandwidth in both the spatial and temporal dimensions. However,
it can be argued that the support of the video's spectrum will
tend to be localized into a certain bowtie shape, as shown
in blue in Figure S1. The salient feature of this shape is that
high temporal frequencies coincide only with high spatial
frequencies. Thus, because of the limited spatial resolution of both the camera optics and the pixel sensors, when the spatial bandwidth of the video is limited, so too is its temporal bandwidth, as illustrated by the black rectangle in the figure.

FIGURE S1. The limited spatial resolution of an imaging system may also limit its temporal bandwidth. (Axes: spatial frequency versus temporal frequency; the blue bowtie is the spectral support of the analog video, and the black rectangle is the temporal bandwidth of the sampled video, bounded by the spatial resolution of the optics.)
This suggests that the video sensed by architectures such as
the SPC may in fact have a finite temporal bandwidth, and
this fact can be used to reduce the computational complexity
of sensing and reconstructing the video. In particular, it is not
necessary to reconstruct at a rate of thousands of fps.
Additional details are provided in [62].
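The bowtie argument can be made concrete for a purely translating scene: a pattern moving at velocity v oscillates in time at f_t = v * f_x, so the spatial cutoff of the optics caps the temporal bandwidth. A back-of-the-envelope sketch, with all numbers illustrative rather than taken from the article:

```python
# Back-of-the-envelope temporal bandwidth of a translating video
# (illustrative numbers, not from the article)
f_spatial_cutoff = 0.5   # cycles/pixel: highest spatial frequency passed
                         # by the optics and pixel grid (sensor Nyquist)
v_max = 100.0            # pixels/second: fastest object motion in the scene

# For pure translation, each spatial frequency f_x produces a temporal
# frequency f_t = v * f_x, so the spatial cutoff bounds the temporal one:
f_temporal = v_max * f_spatial_cutoff   # 50 cycles/second
nyquist_fps = 2 * f_temporal            # 100 fps suffices for this scene

print(f_temporal, nyquist_fps)
```

Under these assumptions, reconstructing at roughly 100 fps captures everything the optics can resolve, far below the tens of thousands of serial SPC measurements per second.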



