Sunday, March 31, 2019

HW4

Problems:
#5.43, 44, 45, 48, 49, 51, 55, 58
#6.10 (Moved to HW5)

Problems: [click here]
Due: Apr 15 (Monday)
Submit: To the TA in the tutorial

Monday, March 25, 2019

Lecture 15 (Mar 25)

Delta function

$$
\mathcal{F}(1)=\delta(y)
$$

$$
\mathcal{F}(\delta)=1
$$

$$
\mathcal{F}(\cos 2\pi x) = \frac{1}{2} [ \delta(y-1)+\delta(y+1)]
$$
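The last identity is Euler's formula combined with the shift $\mathcal{F}(e^{2\pi i a x})(y)=\delta(y-a)$ (a standard step, written out here for completeness):
$$
\mathcal{F}(\cos 2\pi x)(y) = \mathcal{F}\!\left( \tfrac{1}{2} e^{2\pi i x} + \tfrac{1}{2} e^{-2\pi i x} \right)(y) = \frac{1}{2}\left[ \delta(y-1)+\delta(y+1) \right] \, .
$$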

Some properties of the Fourier Transform

1)
$$
\mathcal{F}(f(ax+b))=\frac{1}{|a|} e^{\frac{2\pi i by}{a}} \hat{f} \left( \frac{y}{a} \right)
$$
2)
$$
\mathcal{F}(f')=2\pi i y \mathcal{F}(f)
$$
3) Plancherel Identity
$$
\int |f(x)|^2 dx = \int |\hat{f}(y)|^2 dy
$$
4) $\hat{f}(0)=\int f(x) \, dx$, the area under the function.
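
A quick numerical sanity check of 3) and 4) (my own sketch, not from the notes): approximate the transform of a sampled Gaussian by a scaled DFT with NumPy and compare both sides. The grid parameters are arbitrary choices.

```python
import numpy as np

# Sample f(x) = exp(-pi x^2) on a wide grid; the Gaussian decays fast enough
# that truncation error is negligible.  Grid choices are arbitrary.
N, dx = 4096, 0.01
x = (np.arange(N) - N // 2) * dx
f = np.exp(-np.pi * x**2)

# Approximate f_hat(y) = int f(x) e^{-2 pi i x y} dx by a scaled DFT;
# the frequency spacing is dy = 1/(N*dx).
fhat = dx * np.fft.fft(f)
dy = 1.0 / (N * dx)

# Property 3 (Plancherel): int |f|^2 dx = int |f_hat|^2 dy   (both ~ 1/sqrt(2))
print(np.sum(np.abs(f)**2) * dx, np.sum(np.abs(fhat)**2) * dy)

# Property 4: f_hat(0) = int f(x) dx, the area under f   (exact value 1 here)
print(fhat[0].real, np.sum(f) * dx)
```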

Convolution

$$
f*g(x)=\int_{-\infty}^{\infty} f(x-y) g(y) dy
$$

Properties:
1) Commutative
2) Linear

Convolution Theorem

$\mathcal{F}(f*g)(s)= \hat{f}(s) \cdot \hat{g}(s)$.
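
A minimal illustration of the theorem in its discrete form (my own sketch, using NumPy): for periodic sequences, the DFT of a circular convolution is the pointwise product of the DFTs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution (f*g)[n] = sum_k f[(n-k) mod N] g[k], computed directly.
conv = np.array([np.sum(f[(n - np.arange(N)) % N] * g) for n in range(N)])

# Convolution theorem (discrete form): DFT(f*g) = DFT(f) . DFT(g).
diff = np.fft.fft(conv) - np.fft.fft(f) * np.fft.fft(g)
print(np.max(np.abs(diff)))   # ~ 1e-12, i.e. zero up to rounding
```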

Reading Materials


Lecture Notes: p. 82-85, 88-90

Thursday, March 21, 2019

Lecture 14 (Mar 22)

Expansions on other intervals

Given a periodic function $f(x)$ on an interval $[-L,L]$, we have
$$
f(x)= \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \frac{n\pi x}{L}+ b_n \sin \frac{n\pi x}{L} \right]
$$
where
$$
a_n = \left<f(x),\cos \frac{n\pi x}{L} \right> \, \mbox{ and } \, b_n = \left<f(x),\sin \frac{n\pi x}{L} \right>
$$
and $<f,g>=\frac{1}{L} \int_{-L}^L f(x) g(x) dx$.

Fourier transform

$$
\mathcal{F}(f)(y)= \hat{f}(y) = \int_{-\infty}^{\infty} f(x) e^{-2\pi i xy} dx
$$
and
$$
\mathcal{F}^{-1}(\hat{f})(x) = f(x) = \int_{-\infty}^{\infty} \hat{f}(y) e^{2\pi i xy} dy \, .
$$

Fourier spectrum

$|\hat{f}(y)|$.

Examples

$$
f(x)=\left\{
\begin{array}{c}
A \mbox{ for $-x_0<x<x_0$} \\
0 \mbox{ otherwise.}
\end{array}
\right.
$$
$$
f(x)=\left\{
\begin{array}{c}
e^{-ax} \mbox{ for $x>0$} \\
0 \mbox{ otherwise.}
\end{array}
\right.
$$
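Carrying out the two integrals (a standard computation; assuming $a>0$ in the second example) gives
$$
\hat{f}(y)=\int_{-x_0}^{x_0} A e^{-2\pi i x y} \, dx = \frac{A\sin (2\pi x_0 y)}{\pi y}
\quad \mbox{and} \quad
\hat{f}(y)=\int_0^{\infty} e^{-ax} e^{-2\pi i x y} \, dx = \frac{1}{a+2\pi i y} \, ,
$$
so the spectrum of the rectangular pulse is a sinc-type profile, while the one-sided exponential has $|\hat{f}(y)|=\frac{1}{\sqrt{a^2+4\pi^2 y^2}}$.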

Some difficulties: what are $\mathcal{F}(1)$ and $\mathcal{F}(\cos 2\pi x)$?


Reading Materials


Lecture Notes p.80-82

Monday, March 18, 2019

Lecture 13 (Mar 18)

Fourier Series

Consider
$$
\mathcal{B}=\left\{ \frac{1}{\sqrt{2}} , \cos x, \cos 2x, \cdots, \sin x, \sin 2x , \cdots \right\} \, .
$$

With respect to the inner product
$$
<f,g> = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) g(x) dx \, ,
$$
$\mathcal{B}$ is orthonormal.

Fourier series:
$$
f(x)=\frac{a_0}{2} + \sum_{n=1}^{\infty} ( a_n \cos nx + b_n \sin nx ) \, ,
$$
where
$$
a_0=\frac{1}{\pi} \int_{-\pi}^{\pi} f(x) dx \, , \, a_n=\frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx dx \, , \, b_n=\frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx dx \, .
$$

Example: $f(x)=x$ on $x\in(-\pi,\pi)$, extended periodically and normalized at the jump points. The $N$-th partial sum of the Fourier series is
$$
s_N(x)= 2\sum_{n=1}^N \frac{(-1)^{n+1}\sin nx}{n} \, .
$$

Pointwise convergence

Given $x^*$: $\forall \epsilon>0$, $\exists N$ such that for all $n>N$ we have
$$
|s_n(x^*)-f(x^*)|<\epsilon \, .
$$

However, taking $x_N=\frac{2N-1}{2N+1} \pi$, we have
$$
s_N(x_N)-f(x_N) > 0.5622 \, .
$$
That is, for every $N$ there exists a point $x_N$ such that
$$
|s_N(x_N)-f(x_N)| > 8.9\% \mbox{ of the jump.}
$$
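
This overshoot is easy to observe numerically (a sketch of mine, not from the notes): evaluate the partial sum at $x_N$ for increasing $N$ and compare with the jump $2\pi$.

```python
import numpy as np

def s_N(x, N):
    """Partial sum of the Fourier series of f(x) = x on (-pi, pi)."""
    n = np.arange(1, N + 1)
    return 2.0 * np.sum((-1.0)**(n + 1) * np.sin(n * x) / n)

for N in (10, 100, 1000):
    xN = (2 * N - 1) / (2 * N + 1) * np.pi
    overshoot = s_N(xN, N) - xN          # f(x_N) = x_N here
    print(N, overshoot, overshoot / (2 * np.pi))   # fraction of the jump 2*pi
```

The printed overshoot decreases toward about $0.562$ (roughly $9\%$ of the jump $2\pi$) but stays above it, which is exactly the Gibbs phenomenon described above.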

Complex form of Fourier Series

$$
f(x)=\sum_{n=-\infty}^{\infty} c_n e^{inx}
$$
where
$c_n=<f(x),e^{inx}>=\frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} dx$ or $c_n=\frac{a_n-ib_n}{2}$ and $c_{-n}=\frac{a_n+ib_n}{2}$.
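This relation follows from Euler's formula (a standard step, written out for completeness): substituting $\cos nx=\frac{e^{inx}+e^{-inx}}{2}$ and $\sin nx=\frac{e^{inx}-e^{-inx}}{2i}$ into the real series gives
$$
a_n \cos nx + b_n \sin nx = \frac{a_n - i b_n}{2}\, e^{inx} + \frac{a_n + i b_n}{2}\, e^{-inx} \, ,
$$
and matching coefficients (with $c_0=\frac{a_0}{2}$) gives the formulas above.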

Def (Spectrum): $|c_n|$ vs. $n$.

Reading materials


Lecture Notes p.71-80

Midterm (Announcement 2)

Date: Mar 29 (Friday)
Time: 12noon-1pm
Venue: Lecture Room

Materials covered: up to and including "Why 1-norm minimization promotes sparsity?" from Lecture 12 (i.e. up to Lecture Notes p.69), plus HW3.

Project Mid-Report

Mid report due: Apr 14 (Sunday), 11:59pm. (3% of your course grade)

Apart from the proposals on this page [click here], you are also encouraged to design your own project. I am very happy to discuss your preferences with you, but please come and talk to me by Apr 4 (Thur).

Mid report


Please form a group of 1-3 students.

The mid report should be no more than three (3) pages, in LaTeX/MS Word format.

Your report should state a clear goal for the project. You can include some background material on the topic. You should also list the reference papers, books, webpages, etc., that you plan to use in your work.

You are encouraged to discuss your project with me by Apr 4 (Thur) to confirm it, so that you have around a week to work on the report.

Thursday, March 14, 2019

Lecture 12 (Mar 15)

Example on low rank matrix completion.

Why 1-norm minimization promotes sparsity?

Fourier Series

Consider
$$
\mathcal{B}=\left\{ \frac{1}{\sqrt{2}} , \cos x, \cos 2x, \cdots, \sin x, \sin 2x , \cdots \right\} \, .
$$

With respect to the inner product
$$
<f,g> = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) g(x) dx \, ,
$$
$\mathcal{B}$ is orthonormal.

Fourier series:
$$
f(x)=\frac{a_0}{2} + \sum_{n=1}^{\infty} ( a_n \cos nx + b_n \sin nx ) \, ,
$$
where
$$
a_0=\frac{1}{\pi} \int_{-\pi}^{\pi} f(x) dx \, , \, a_n=\frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx dx \, , \, b_n=\frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx dx \, .
$$

Example: $f(x)=x$ on $x\in(-\pi,\pi)$, extended periodically and normalized at the jump points. The $N$-th partial sum of the Fourier series is
$$
s_N(x)= 2\sum_{n=1}^N \frac{(-1)^{n+1}\sin nx}{n} \, .
$$
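For this example the coefficients follow from a standard integration by parts (included here for completeness): $f$ is odd, so $a_n=0$ for all $n$, and
$$
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x \sin nx \, dx = \frac{1}{\pi}\left[ -\frac{x\cos nx}{n} + \frac{\sin nx}{n^2} \right]_{-\pi}^{\pi} = \frac{2(-1)^{n+1}}{n} \, ,
$$
which gives the partial sum above.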

Reading materials


Lecture Notes: p.67-71

Monday, March 11, 2019

Lecture 10/11 (Mar 11)

Thm (Error in the low rank approximation): For any $\nu$ with $0\le \nu \le r=$rank$(A)$, let
$$
A_{\nu} = \sum_{j=1}^{\nu} \sigma_j \mathbf{u}_j \mathbf{v}_j^* \, .
$$
Then
\begin{eqnarray*}
\|A-A_{\nu}\|_2 &=& \mbox{inf}_{B\in \mathbb{R}^{M\times N}, \mbox{rank}(B)\le \nu} \|A-B\|_2 = \sigma_{\nu+1} \, , \\
\|A-A_{\nu}\|_F &=& \mbox{inf}_{B\in \mathbb{R}^{M\times N}, \mbox{rank}(B)\le \nu} \|A-B\|_F = \sqrt{\sigma_{\nu+1}^2+\cdots+\sigma_r^2} \, .
\end{eqnarray*}
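
A quick NumPy check of both error formulas (my own sketch; the matrix size, random seed, and $\nu$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A)                 # s holds sigma_1 >= ... >= sigma_6

nu = 3
A_nu = (U[:, :nu] * s[:nu]) @ Vt[:nu, :]    # A_nu = sum_{j<=nu} sigma_j u_j v_j^*

print(np.linalg.norm(A - A_nu, 2),     s[nu])                      # = sigma_{nu+1}
print(np.linalg.norm(A - A_nu, 'fro'), np.sqrt(np.sum(s[nu:]**2)))
```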

Texture Image Inpainting using low rank matrix completion


Netflix problem: predict user ratings for movies.

Reading materials

Lecture Notes: p.59-67

Thursday, March 7, 2019

Midterm

Date: Mar 29 (Friday)
Time: 12noon-1pm
Venue: Lecture Room

Materials covered: up to and including the lecture on Mar 22 (Friday).

Lecture 9 (Mar 8)

Def ($N$ singular values of $A$): the lengths of the $N$ principal semiaxes of $AS$, where $S$ is the unit sphere; i.e. $\sigma_1, \sigma_2, \cdots, \sigma_N$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_N >0$.

Def ($N$ left singular vectors of $A$): $\{ u_1, u_2, \cdots, u_N \}$, orthonormal, the directions of the principal semiaxes.

Def ($N$ right singular vectors of $A$): $\{ v_1, v_2, \cdots, v_N \}$, the preimages of the $\sigma_i u_i$, i.e. $A v_i = \sigma_i u_i$. It turns out these vectors are orthonormal.

Reduced SVD:
$$
A=\hat{U} \hat{\Sigma} V^*
$$
where $\hat{U}\in\mathbb{R}^{M\times N}$, $\hat{\Sigma}\in\mathbb{R}^{N\times N}$ and $V\in\mathbb{R}^{N\times N}$.

Full SVD

$$
A=U \Sigma V^*
$$
where $U\in\mathbb{R}^{M\times M}$, $\Sigma\in\mathbb{R}^{M\times N}$ and $V\in\mathbb{R}^{N\times N}$.

Properties:
1. $v_i$: eigenvectors of $A^*A$.
2. $u_i$: eigenvectors of $AA^*$.
3. Nonzero singular values are the square roots of the nonzero eigenvalues of $A^*A$ or $AA^*$.
4. Rank$(A)=r=$ the number of nonzero singular values.
5. Range$(A)=$span$(u_1,u_2,\cdots,u_r)$.
6. Null$(A)=$span$(v_{r+1},v_{r+2},\cdots,v_N)$.
7. $A$ is the sum of $r$ rank-one matrices,
$$
A=\sum_{i=1}^r \sigma_i u_i v_i^* \, .
$$
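
Properties 1-4 can be verified numerically (a small NumPy sketch of mine; the matrix is random):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# 3. Nonzero singular values are the square roots of the eigenvalues of A^T A.
print(s**2, np.sort(np.linalg.eigvalsh(A.T @ A))[::-1])

# 1./2. v_1 is an eigenvector of A^T A with eigenvalue sigma_1^2.
v1 = Vt[0]
print((A.T @ A) @ v1, s[0]**2 * v1)

# 4. rank(A) = number of nonzero singular values.
print(np.linalg.matrix_rank(A), np.sum(s > 1e-12))
```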

Low rank approximation of $A$

For any $\nu$ such that $0\le \nu \le r$, the matrix
$$
A_{\nu}=\sum_{i=1}^{\nu} \sigma_i u_i v_i^*
$$
is a rank-$\nu$ approximation of $A$.

Image Compression using SVD

Vector norm

A function $\| \cdot \|: \mathbb{R}^m \rightarrow \mathbb{R}$ such that
(1) $\|x\| \ge 0$, $\| x\|=0$ only if $x=0$;
(2) $\|x+y\| \le \|x\| + \|y\|$;
(3) $\|\alpha x\| = |\alpha| \cdot \|x\|$.

Examples: $\|x\|_1$, $\|x\|_2$, $\|x\|_p$, $\|x\|_{\infty}$.

Matrix norm induced by vector norm

Def: $\|A\|$: the smallest $C\in \mathbb{R}$ such that
$$
\|A x\| \le C \|x\|
$$
for all $x\in \mathbb{R}^m$. Or,
$$
\|A\| = \mbox{sup}_{x\in \mathbb{R}^m, x\ne 0} \frac{ \|Ax\|}{\|x\|} = \mbox{sup}_{\|x\|=1, x \in \mathbb{R}^m} \|Ax\| \, .
$$
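
For the 2-norm, this supremum equals the largest singular value $\sigma_1$ of $A$. A brute-force search over random unit vectors (my own NumPy sketch; matrix and seed are arbitrary) never exceeds it and gets close to it:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))

# Induced 2-norm = largest singular value (also what np.linalg.norm(A, 2) computes).
sigma1 = np.linalg.svd(A, compute_uv=False)[0]
print(sigma1, np.linalg.norm(A, 2))

# Brute force: ||Ax||_2 over many random unit vectors never exceeds sigma_1
# and gets close to it.
X = rng.standard_normal((6, 100000))
X /= np.linalg.norm(X, axis=0)
print(np.max(np.linalg.norm(A @ X, axis=0)))
```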

Reading materials

Lecture notes: p.56-60.

Wednesday, March 6, 2019

HW3

Problems:
#5.29,35,39,40,41
#6.5[the circuit board image: click here],6,7[Fig 5.26(a)],8

Problems: [click here]
Due: Mar 25 (Monday)
Submit: To the TA in the tutorial

Monday, March 4, 2019

Lecture 8 (Mar 4)

Nonlinear Transforms

Denoising: Median filter.
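
A naive NumPy version of the median filter (my own sketch, not the notes' implementation): each pixel is replaced by the median of its $k\times k$ neighborhood, which removes salt-and-pepper outliers without blurring edges the way averaging does.

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter with replicated borders."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Salt-and-pepper test: a constant image with two corrupted pixels.
img = np.full((32, 32), 0.5)
img[5, 7], img[20, 12] = 1.0, 0.0
print(np.max(np.abs(median_filter(img) - 0.5)))   # 0.0: outliers removed
```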

Two different norms for the gradient in two dimensions:
$$
\|\nabla f\|_1 = \left| \frac{\partial f}{\partial x} \right| + \left| \frac{\partial f}{\partial y} \right|
$$
$$
\|\nabla f\|_2 = \left[ \left( \frac{\partial f}{\partial x} \right)^2 + \left(\frac{\partial f}{\partial y} \right)^2 \right]^{\frac{1}{2}}
$$

Eigenvalue Decomposition (EVD)

If $A$ is not square, there is no EVD.

Even if $A$ is square, there is no guarantee that it has a complete set of eigenvectors.

Singular Value Decomposition (SVD)

Geometrical observation: The image of the unit sphere under any $M\times N$ matrix is a hyperellipse.

Assume $A\in \mathbb{R}^{M\times N}$ is full rank, $M\ge N$.

Def ($N$ singular values of $A$): the lengths of the $N$ principal semiaxes of $AS$ (the image of the unit sphere $S$), i.e. $\sigma_1, \sigma_2, \cdots, \sigma_N$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_N >0$.

Def ($N$ left singular vectors of $A$): $\{ u_1, u_2, \cdots, u_N \}$, orthonormal, the directions of the principal semiaxes.

Def ($N$ right singular vectors of $A$): $\{ v_1, v_2, \cdots, v_N \}$, the preimages of the $\sigma_i u_i$, i.e. $A v_i = \sigma_i u_i$. It turns out these vectors are orthonormal.

Reduced SVD:
$$
A=\hat{U} \hat{\Sigma} V^*
$$
where $\hat{U}\in\mathbb{R}^{M\times N}$, $\hat{\Sigma}\in\mathbb{R}^{N\times N}$ and $V\in\mathbb{R}^{N\times N}$.
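
In NumPy, `np.linalg.svd(A, full_matrices=False)` returns exactly this reduced form (a quick illustration of mine):

```python
import numpy as np

A = np.random.default_rng(3).standard_normal((7, 4))    # M = 7, N = 4, M >= N

Uhat, s, Vt = np.linalg.svd(A, full_matrices=False)     # reduced SVD
print(Uhat.shape, s.shape, Vt.shape)                    # (7, 4) (4,) (4, 4)
print(np.allclose(A, Uhat @ np.diag(s) @ Vt))           # True
```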

Reading materials

Lecture Notes: pp.50-57