Here is the complete set of slides used during lectures. They are compiled at the end of the year and left here for reference; updated slides are posted below as the class progresses.


Introduction

This introductory block consists of a single lecture. It previews the class’s contents and introduces the concepts of signals and information. The slides from this lecture can be found here.

Discrete Signals

This block is an introduction to discrete signals. We start by defining discrete signals and introducing some common types of discrete signals that we will be using in this course. We then discuss some useful properties of discrete signals. Next, we introduce the concept of inner products and what they mean in relation to discrete signals. We finish by introducing discrete complex exponentials, their properties, and why they are useful for us in this class.
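To make the inner-product viewpoint concrete, here is a minimal NumPy sketch (the code and variable names are our own illustration, not taken from the slides) checking that discrete complex exponentials of different frequencies are orthonormal under the inner product:

```python
import numpy as np

def inner(x, y):
    # Inner product of two discrete signals: sum of x[n] * conj(y[n]).
    return np.sum(x * np.conj(y))

def cexp(k, N):
    # Discrete complex exponential of frequency k, normalized to unit energy.
    n = np.arange(N)
    return np.exp(2j * np.pi * k * n / N) / np.sqrt(N)

N = 8
same = inner(cexp(2, N), cexp(2, N))   # exponentials of equal frequency
diff = inner(cexp(2, N), cexp(5, N))   # exponentials of different frequency
print(abs(same), abs(diff))            # ~1 and ~0: they are orthonormal
```

This orthonormality is exactly what makes complex exponentials so useful as building blocks later in the course.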

Discrete Fourier Transform (DFT)

This set of lectures introduces the DFT. We begin by defining the DFT and seeing how it relates to complex exponentials and inner products. We will look at the periodicity of the DFT and we will learn how to interpret the DFT as a rate of change of a signal. We will study the DFTs of some useful discrete signals.
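As a small illustration of the definition (a NumPy sketch of our own; we use the unnormalized convention that matches `np.fft.fft`, while the lectures may scale differently), the DFT can be computed directly as inner products with complex exponentials:

```python
import numpy as np

def dft(x):
    # DFT of x computed directly as inner products with complex exponentials:
    # X[k] = sum_n x[n] * exp(-2j*pi*k*n/N).
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                     for k in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
X = dft(x)
print(np.allclose(X, np.fft.fft(x)))   # True: matches the library DFT
```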

We will go on to introduce the inverse discrete Fourier transform (iDFT). We will prove that the iDFT is, in fact, the inverse of the DFT. We will then look at the iDFT both as an inner product (and how this compares to the DFT) and as a series of successive approximations to a signal. We will see how these approximations can be used to reconstruct a square pulse, and we will take this logic a step further to see how the DFT and iDFT can be used to “clean up” a noisy signal.
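The successive-approximation idea can be sketched as follows (a NumPy illustration with our own choice of pulse and coefficient counts): keep only the K lowest-frequency DFT coefficients, invert with the iDFT, and watch the approximation error shrink as K grows.

```python
import numpy as np

N = 64
x = np.zeros(N)
x[:16] = 1.0                 # a square pulse
X = np.fft.fft(x)

def approx(X, K):
    # Keep the DC coefficient, the K lowest positive frequencies, and their
    # conjugate-symmetric partners; zero the rest; then invert with the iDFT.
    Xk = np.zeros_like(X)
    Xk[:K + 1] = X[:K + 1]
    Xk[-K:] = X[-K:]
    return np.real(np.fft.ifft(Xk))

errs = [np.linalg.norm(x - approx(X, K)) for K in (1, 4, 16)]
print(errs[0] > errs[1] > errs[2])   # True: more coefficients, better fit
```

Zeroing the high-frequency coefficients of a noisy signal before inverting is exactly the “clean up” idea described above.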

Finally, we will investigate and prove several critical properties of the DFT and iDFT, including symmetry, energy conservation (Parseval’s Theorem), and linearity.
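Parseval’s Theorem, for instance, is easy to check numerically (a NumPy sketch; the 1/N factor below compensates for numpy’s unnormalized DFT convention):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
X = np.fft.fft(x)

# Energy in time equals energy in frequency (Parseval's Theorem).
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
print(np.isclose(energy_time, energy_freq))   # True
```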

Continuous Time Signals and the Fourier Transform

This set of lectures introduces the Fourier Transform. Unlike the DFT, which operates in discrete time, the Fourier Transform (FT) operates in continuous time (i.e. “real” time). We will see that while the DFT is useful as a computational tool, the FT will allow us to investigate some properties that are harder to see with the DFT.

This set of lectures should look remarkably similar to the last. In fact, we are repeating much of the analysis that we already did with the DFT, only this time in continuous time. We will pay special attention to the relationship between the FT and the DFT – that is, how does the DFT serve as an approximation to the Fourier Transform? We will revisit some of the key proofs that we’ve seen earlier in the course, and establish some new useful properties such as shift and modulation.
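The shift property, for example, can be verified numerically through its DFT counterpart (a NumPy sketch of our own, standing in for the continuous-time statement): a circular delay by n0 samples multiplies each DFT coefficient by a complex exponential.

```python
import numpy as np

N = 32
rng = np.random.default_rng(1)
x = rng.standard_normal(N)
n0 = 5

X = np.fft.fft(x)
X_shifted = np.fft.fft(np.roll(x, n0))    # DFT of the circularly delayed signal
k = np.arange(N)
phase = np.exp(-2j * np.pi * k * n0 / N)  # the predicted phase factor
print(np.allclose(X_shifted, X * phase))  # True: shift in time = phase in frequency
```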

Finally, we will learn about convolution and how it relates to these concepts of signal processing. We will see how switching between the time and frequency domains helps us design and implement systems such as low-pass filters.
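The key identity — convolution in time equals multiplication in frequency — can be sketched in NumPy (the moving-average filter below is our own example of a simple low-pass filter, not a design from the slides):

```python
import numpy as np

N = 64
rng = np.random.default_rng(2)
x = rng.standard_normal(N)
h = np.ones(8) / 8.0                       # moving average: a crude low-pass filter
h_pad = np.pad(h, (0, N - len(h)))         # zero-pad h to the signal length

# Circular convolution computed directly from its definition ...
y_time = np.array([np.sum(x * h_pad[(n - np.arange(N)) % N]) for n in range(N)])
# ... and computed by multiplying in the frequency domain.
y_freq = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_pad)))

print(np.allclose(y_time, y_freq))         # True: the Convolution Theorem
```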


Sampling

This set of lectures introduces the concept of sampling, one of the most important ideas in this course. What happens when we sample a continuous (i.e. “real”) time signal in order to process it on a computer (a discrete machine)? We will see that this leads to a surprising result: sampling in time leads to periodization in frequency!

To lead us to this result, we will first study another transform, the discrete time Fourier transform (DTFT) and its inverse, the iDTFT. The analysis for this transform should look familiar. We will see how this tool leads us to an analysis of a Dirac train, a mathematical entity that will help us understand sampling. We will see the importance of a key relationship: convolution in time = multiplication in frequency.

We will learn what information is lost when we sample a signal, and how bandlimited signals allow us to mitigate this loss. We will see how we can use prefiltering (specifically, low-pass filters) to “artificially” bandlimit signals.
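A quick NumPy sketch of what goes wrong without bandlimiting (the frequencies and sampling rate are our own choices): two sinusoids whose frequencies differ by a multiple of the sampling rate produce identical samples. This is aliasing, the time-domain face of periodization in frequency.

```python
import numpy as np

fs = 8.0                       # sampling rate in Hz
t = np.arange(16) / fs         # sampling instants

x_low = np.cos(2 * np.pi * 1.0 * t)    # a 1 Hz cosine ...
x_high = np.cos(2 * np.pi * 9.0 * t)   # ... and a 9 Hz cosine (1 + fs)

# The samples are indistinguishable: 9 Hz aliases onto 1 Hz.
print(np.allclose(x_low, x_high))      # True
```

Prefiltering removes the high-frequency content before sampling, so no component is left to alias.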

Linear Time Invariant (LTI) Systems

This set of lectures introduces linear time invariant systems. In these lectures we will learn about a specific, but very useful, class of systems known as LTIs. The key properties of LTIs are found in their very name: they are both linear (combinations of inputs yield the same combinations of outputs) and time invariant (a delayed input yields a delayed output).

A key feature of LTIs (also called filters) is that their behavior can be characterized entirely by their response to an impulse, in both time and frequency. This property allows us to use the tools we’ve learned thus far to design and implement filters to meet the specifications that we desire. The low-pass filter, which we have already seen, is one special case.
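This characterization can be sketched in a few lines of NumPy (the 3-tap moving average below is our own stand-in for an arbitrary LTI system): probe the system with an impulse, then reproduce its response to any input by convolving that input with the impulse response.

```python
import numpy as np

def system(x):
    # A "black-box" LTI system: y[n] = (x[n] + x[n-1] + x[n-2]) / 3,
    # with x[n] = 0 for n < 0 (a causal 3-tap moving average).
    x = np.asarray(x, dtype=float)
    xp = np.concatenate(([0.0, 0.0], x))
    return (xp[2:] + xp[1:-1] + xp[:-2]) / 3.0

N = 16
delta = np.zeros(N)
delta[0] = 1.0
h = system(delta)                      # the impulse response

rng = np.random.default_rng(3)
x = rng.standard_normal(N)
# The impulse response fully determines the output for ANY input:
print(np.allclose(system(x), np.convolve(x, h)[:N]))   # True
```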

Midterm Review

These slides are simply a review of the material we’ve covered thus far. For more detail on each topic, refer to the respective slide deck and lecture notes.

Image Processing

In this set of lectures we will learn about image processing. Having finished the theory block of the course, we now move into applications. We will treat images as two-dimensional signals and generalize the one-dimensional signal processing tools we have already learned.

We will see some of our key results again, such as the Inversion Theorem, Parseval’s Theorem, and the Convolution Theorem (these should look very familiar by now!). We will use the tools we have learned thus far to blur, de-noise, sharpen, and compress an image. We will see that the 2D DFT has some problems that were not present for the 1D DFT, and so we will introduce the Discrete Cosine Transform (DCT) to remedy this.
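As a toy sketch of frequency-domain image processing (a NumPy example with a synthetic image of our own, not the lab code): blurring an image by zeroing its high spatial frequencies is the 2D analogue of 1D low-pass filtering.

```python
import numpy as np

# A tiny synthetic "image": a bright square on a dark background.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0

F = np.fft.fftshift(np.fft.fft2(img))  # 2D DFT, zero frequency at the center
mask = np.zeros((32, 32))
mask[12:21, 12:21] = 1.0               # keep only the low spatial frequencies
blurred = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# The DC coefficient is kept, so average brightness is preserved, while
# discarding the high frequencies smooths the sharp edges of the square.
print(np.isclose(blurred.mean(), img.mean()), blurred.var() < img.var())
```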

Principal Component Analysis (PCA)

In this set of lectures we will learn about PCA. In the real world, all signals contain noise. With the DFT, we saw one way to remove noise from, compress, and reconstruct signals. Unlike the DFT, which is generic and treats all signals alike, PCA is a transform customized to the particular signals at hand, and its superior performance on these tasks reflects this.

We will see that all of the transforms we’ve learned thus far can be written as matrix multiplication. This means that “signal processing” is now just linear algebra! We will learn about random variables and probability (including mean and covariance), and see how these concepts inform PCA. We will learn how PCA uses dimensionality reduction to compress signals, and we will see why its performance is superior to the DFT.
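A minimal NumPy sketch of this pipeline (the toy 2D dataset is our own invention; the labs use real data): center the data, diagonalize the covariance matrix, and compress each sample to its coefficient along the top principal component. Note that every step is a matrix multiplication.

```python
import numpy as np

# Toy dataset: 200 two-dimensional samples that mostly vary along one direction.
rng = np.random.default_rng(4)
latent = rng.standard_normal(200)
X = np.column_stack([latent, 0.5 * latent + 0.1 * rng.standard_normal(200)])

mu = X.mean(axis=0)
C = np.cov((X - mu).T)                 # 2x2 sample covariance
eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns ascending eigenvalues
P = eigvecs[:, ::-1]                   # principal components, largest first

coeffs = (X - mu) @ P[:, :1]           # compress: 1 coefficient per sample
X_hat = coeffs @ P[:, :1].T + mu       # reconstruct from that single number

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(rel_err < 0.2)                   # True: one component captures most energy
```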

We will see that PCA allows us to do things that couldn’t be done with the DFT or DCT, such as generate a face recognition algorithm. We will learn how the algorithm works here, and implement it for ourselves in lab.