Thursday, February 21, 2019

Lecture 7 (Feb 22)

Matrix representation of a linear transformation using the elementary basis of $\mathbb{R}^{M\times N}$. Compact representation of $w(s,t)$.

Box filter as an example. $TV(w * g)\le TV(g)$.
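A 1-D sketch of the TV claim (the signal, filter width, and boundary handling below are my own choices, not from the notes):

```python
import numpy as np

def tv(g):
    """Total variation of a 1-D signal: sum of absolute successive differences."""
    return np.sum(np.abs(np.diff(g)))

def box_filter(g, k=3):
    """k-point box (moving-average) filter with edge replication."""
    pad = k // 2
    gp = np.pad(g, pad, mode="edge")
    w = np.ones(k) / k
    return np.convolve(gp, w, mode="valid")

rng = np.random.default_rng(0)
g = rng.standard_normal(50)
smoothed = box_filter(g)
# Averaging with a nonnegative kernel summing to 1 never increases TV.
tv_nonincreasing = tv(smoothed) <= tv(g)
```

Each difference of the smoothed signal is an average of original differences, which is why the inequality holds for any nonnegative normalized kernel.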

Edge detection
$$
g(x,y)=f(x,y)+c \Delta f(x,y) \, ,
$$
with $c<0$.
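A small sketch of this enhancement step, using the standard 5-point discrete Laplacian (the stencil, boundary handling, and test image are my own choices):

```python
import numpy as np

def laplacian(f):
    """Discrete 5-point Laplacian with replicated boundary values."""
    fp = np.pad(f, 1, mode="edge")
    return (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:]
            - 4.0 * fp[1:-1, 1:-1])

def enhance(f, c=-1.0):
    """g = f + c * Laplacian(f), with c < 0."""
    return f + c * laplacian(f)

# A vertical step edge: the result undershoots on the dark side and
# overshoots on the bright side, which accentuates the edge.
f = np.zeros((5, 6))
f[:, 3:] = 1.0
g = enhance(f)
```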


Reading materials

Lecture Notes: p.45-49.

Monday, February 18, 2019

HW2

Problems:
#5.16, 18, 21, 22, 23, 26
#6.1, 2 [Fig2.21(a)]

Problems: [click here]
Due: Mar 4 (Monday)
Submit: To the TA in the tutorial

Lecture 5+6 (Feb 18)

Normalized histogram

$h(r_i)=n_i$ is the number of pixels in the image with intensity $r_i$. The normalized histogram $p(r_i)=n_i/MN$ is the fraction of pixels with intensity $r_i$, so that $\sum_{i} p(r_i)=1$.

Histogram equalization

Given the normalized histogram $p_r(r_i)$, we want to determine a transformation $s_i=T(r_i)$ for $i=0,1,\cdots,L-1$ such that $p_s(s_i)$ follows the uniform distribution
$$
\Rightarrow s=T(r)=(L-1) \int_0^r p_r(w) dw \, .
$$
In practice, where image intensities are integers, we use
$$
s_i=\mbox{round}\left[ (L-1) \sum_{j=0}^{i} p_r(r_j) \right] \, .
$$
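A possible NumPy implementation of the discrete formula (function name and the test image are illustrative):

```python
import numpy as np

def equalize(img, L=256):
    """Histogram equalization for an integer image with levels 0..L-1."""
    hist = np.bincount(img.ravel(), minlength=L)     # h(r_i) = n_i
    p = hist / img.size                              # p_r(r_i) = n_i / (M N)
    cdf = np.cumsum(p)                               # sum_{j<=i} p_r(r_j)
    s = np.round((L - 1) * cdf).astype(img.dtype)    # s_i = round[(L-1) * cdf]
    return s[img]                                    # apply the lookup table

rng = np.random.default_rng(1)
# A dark, low-contrast image: intensities concentrated in [0, 64).
img = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)
out = equalize(img)
```

After equalization the intensities are stretched over the full range $[0, L-1]$; in particular the brightest input level always maps to $L-1$, since the cumulative sum up to the maximum observed intensity is 1.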

Additive noise model

$g(x,y)=f(x,y)+n(x,y)$ where $f$ is the unknown clean image, $g(x,y)$ is the observed image and $n(x,y)$ is the noise. We assume $n(x,y)$ is a random variable which follows some distribution.

Gaussian noise, uniform noise, salt-and-pepper noise.
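A sketch of how the three noise models might be generated (all parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.full((64, 64), 128.0)            # constant "clean" image

# Gaussian noise: n ~ N(0, sigma^2), added at every pixel.
g_gauss = f + rng.normal(0.0, 10.0, f.shape)

# Uniform noise on [-a, a].
g_unif = f + rng.uniform(-20.0, 20.0, f.shape)

# Salt-and-pepper noise: a small fraction of pixels forced to 0 or 255.
g_sp = f.copy()
mask = rng.random(f.shape)
g_sp[mask < 0.05] = 0.0                 # pepper
g_sp[mask > 0.95] = 255.0               # salt
```

Note that salt-and-pepper noise is not additive in the sense of the model above: it replaces pixel values rather than perturbing them.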

Filtering

Convolution
$$
g(x,y)= \sum_s \sum_t w(s,t) f(x-s,y-t) \, .
$$

Properties:
(i) Linear;
(ii) Convolution is commutative (ignoring the boundary effect).

For most image applications, we assume $w(s,t)$ is compact (only non-zero in a small neighborhood of $(x,y)$) and is symmetric.
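The convolution formula above can be evaluated directly with two loops over the support of $w$; a sketch (zero padding and the $3\times 3$ box filter are my own choices):

```python
import numpy as np

def conv2d(f, w):
    """Direct evaluation of g(x,y) = sum_s sum_t w(s,t) f(x-s, y-t),
    with zero padding outside the image and (s,t) ranging over the
    (2a+1) x (2b+1) support of w centered at (0,0)."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f, ((a, a), (b, b)))
    g = np.zeros_like(f, dtype=float)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            # f(x-s, y-t) is a shifted view into the padded image.
            g += w[s + a, t + b] * fp[a - s : a - s + f.shape[0],
                                      b - t : b - t + f.shape[1]]
    return g

f = np.arange(16.0).reshape(4, 4)
w = np.ones((3, 3)) / 9.0            # 3x3 box filter
g = conv2d(f, w)
```

For a symmetric kernel like the box filter, convolution and correlation coincide, so each interior output pixel is simply the average of its $3\times 3$ neighborhood.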

Matrix representation of a linear transformation using the elementary basis of $\mathbb{R}^{M\times N}$. Compact representation of $w(s,t)$.
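One way to see the matrix representation concretely: apply the transformation to each elementary basis image $E_k$ (a single 1, zeros elsewhere) and stack the vectorized outputs as the columns of a matrix. A sketch (the sample transformation $T$, a $3\times 3$ box filter with zero padding, is my own choice):

```python
import numpy as np

def T(f):
    """A sample linear transformation: 3x3 box filter, zero padding."""
    fp = np.pad(f, 1)
    return sum(fp[i:i + f.shape[0], j:j + f.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

M, N = 4, 5
# Column k of A is vec(T(E_k)) for the k-th elementary basis image E_k.
A = np.zeros((M * N, M * N))
for k in range(M * N):
    E = np.zeros((M, N))
    E.flat[k] = 1.0
    A[:, k] = T(E).ravel()

# The matrix reproduces the transformation on any image.
rng = np.random.default_rng(3)
f = rng.standard_normal((M, N))
ok = np.allclose(A @ f.ravel(), T(f).ravel())
```

Because $w(s,t)$ is compact, each column of $A$ has only a few nonzeros, which is why such matrices are sparse and banded in practice.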

Reading Materials
Lecture notes: p.36-45
Slides: [click here]

Thursday, February 14, 2019

Lecture 4 (Feb 15)

Relationship to image inpainting
Filling in a missing pixel with the mean, mode, or median of the neighboring intensities corresponds to minimizing, respectively,
$E_2(g) =  \sum_{i=1}^N \left| f_i-g \right|^2$, $E_0(g) =  \sum_{i=1}^N \left| f_i-g \right|^0$ and $E_1(g) =  \sum_{i=1}^N \left| f_i-g \right|^1$.
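A quick numeric check that the mean, median, and mode minimize $E_2$, $E_1$, and $E_0$ respectively (the sample data and perturbations are my own):

```python
import numpy as np

f = np.array([1.0, 2.0, 2.0, 3.0, 10.0])    # neighboring intensities

def E(g, p):
    """E_p(g) = sum_i |f_i - g|^p, with the convention 0**0 = 0."""
    d = np.abs(f - g)
    return np.count_nonzero(d) if p == 0 else np.sum(d ** p)

mean, median, mode = f.mean(), np.median(f), 2.0   # mode: most frequent value

# Each statistic attains a no-larger energy than nearby perturbed guesses.
deltas = (-1.0, -0.25, 0.25, 1.0)
mean_best = all(E(mean, 2) <= E(mean + d, 2) for d in deltas)
median_best = all(E(median, 1) <= E(median + d, 1) for d in deltas)
mode_best = all(E(mode, 0) <= E(mode + d, 0) for d in deltas)
```

Note how the outlier 10 pulls the mean to 3.6 while leaving the median at 2, which is why the median is the robust choice for inpainting.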


Intensity transformation [$s=T(r)$]: most are application/image dependent.

Reading Materials
Lecture notes: p.31-35
Slides: [click here]

Monday, February 11, 2019

Lecture 3 (Feb 11)

Image Interpolation

Properties of the piecewise linear interpolation:
1. Will not create new global extrema, i.e. $\max_{x\in[x_1,x_M]} p(x) = \max_i f_i$ and  $\min_{x\in[x_1,x_M]} p(x) = \min_i f_i$.
2. $TV(\mathbf{g})=TV(\mathbf{f})$ where $TV(\cdot)$ is the total variation of the signal.
3. It is a linear transformation from $\mathbb{R}^M$ to $\mathbb{R}^{2M-1}$.

Two dimensional (image) interpolation: bilinear interpolation.
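A sketch of bilinear interpolation at a single point (the 0-based indexing convention and clamping at the border are my own choices):

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of img at real coordinates (x, y),
    where x is the row and y is the column, starting at 0."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[0] - 1)      # clamp at the image border
    y1 = min(y0 + 1, img.shape[1] - 1)
    a, b = x - x0, y - y0                   # fractional offsets in [0, 1)
    return ((1 - a) * (1 - b) * img[x0, y0] + (1 - a) * b * img[x0, y1]
            + a * (1 - b) * img[x1, y0] + a * b * img[x1, y1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
center = bilinear(img, 0.5, 0.5)   # average of the four corner values
```

Equivalently, one interpolates linearly along one axis and then along the other; the result is linear in each coordinate but quadratic along diagonals.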

Relationship to image inpainting
Filling in a missing pixel with the mean, mode, or median of the neighboring intensities corresponds to minimizing, respectively,
$E_2(g) =  \sum_{i=1}^N \left| f_i-g \right|^2$, $E_0(g) =  \sum_{i=1}^N \left| f_i-g \right|^0$ and $E_1(g) =  \sum_{i=1}^N \left| f_i-g \right|^1$.

Reading materials

Lecture notes: p.29-31

Thursday, February 7, 2019

HW1

Problems:
#5.9, 10, 11, 14, 15

Problems: [click here]
Due: Feb 25 (Monday)
Submit: To the TA in the tutorial

Lecture 2 (Feb 8)

Problems about the simple imaging model

Sampling: $\Omega \Rightarrow \Omega'=\{(x,y),x=1,2,\cdots,M, y=1,2,\cdots,N\}$.

Quantization: $[0,L-1] \Rightarrow \{0,1,2,\cdots,L-1\}$. MATLAB data type: uint8 and uint16.
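A sketch of the quantization step in NumPy, whose `uint8` plays the same role as MATLAB's (the intensity range $[0,1]$ and the rounding rule below are my assumptions):

```python
import numpy as np

# Quantization: map continuous intensities in [0, 1] to L = 256 levels.
L = 256
f = np.array([0.0, 0.1, 0.5, 0.999, 1.0])
q = np.clip(np.round(f * (L - 1)), 0, L - 1).astype(np.uint8)
```

For 16-bit images one would use $L = 65536$ and `np.uint16` instead.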

Image interpolation

Given a signal $(x_i,f(x_i))$ for $i=1,2,\cdots,M$, we want to determine an approximation to $f(x^*)$ for $x^* \ne x_i$, using either:
(i) High-order polynomial interpolation: find $p(x) \in P^{M-1}$ such that $p(x_i)=f(x_i) \Rightarrow$ one needs to invert a BIG matrix. This can be partially fixed by a better basis for interpolation, such as Lagrange interpolating polynomials or Newton's divided differences.
(ii) Piecewise polynomial interpolation: determine $p_i(x) \in P^N$ for some $N \ll M-1$, with each polynomial defined only piecewise on $x \in [x_i,x_{i+1}]$ for $i=1,\cdots,M-1$. Then
$$
p(x) = \left\{
\begin{array}{cc}
p_1(x) & \mbox{ if $x\in[x_1,x_2]$} \\
p_2(x) & \mbox{ if $x\in[x_2,x_3]$} \\
\vdots & \\
p_{M-1}(x) & \mbox{ if $x\in[x_{M-1},x_M]$.}
\end{array}
\right.
$$
In particular, if $N=1$, the interpolation is called a piecewise linear interpolation.

Some properties of the piecewise linear interpolation

Properties of the piecewise linear interpolation:
1. Will not create new global extrema, i.e. $\max_{x\in[x_1,x_M]} p(x) = \max_i f_i$ and  $\min_{x\in[x_1,x_M]} p(x) = \min_i f_i$.
2. $TV(\mathbf{g})=TV(\mathbf{f})$ where $TV(\cdot)$ is the total variation of the signal.
3. It is a linear transformation from $\mathbb{R}^M$ to $\mathbb{R}^{2M-1}$.
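Properties 1 and 2 can be checked numerically with NumPy's built-in piecewise linear interpolation; inserting midpoints also realizes the $\mathbb{R}^M \to \mathbb{R}^{2M-1}$ map of property 3 (the sample data below are my own):

```python
import numpy as np

x = np.arange(5.0)                      # M = 5 sample locations
f = np.array([3.0, 1.0, 4.0, 1.0, 5.0])

# Refine by inserting midpoints: M samples -> 2M - 1 samples.
x_fine = np.arange(0.0, 4.5, 0.5)       # 9 = 2*5 - 1 points
g = np.interp(x_fine, x, f)             # piecewise linear interpolant

def tv(v):
    """Total variation: sum of absolute successive differences."""
    return np.sum(np.abs(np.diff(v)))

no_new_extrema = (g.max() == f.max()) and (g.min() == f.min())
tv_preserved = np.isclose(tv(g), tv(f))
```

Both properties hold because each new sample lies on a segment between two original samples, so the interpolant is monotone between consecutive nodes.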

Reading materials
Lecture notes: p.21-22,25-29
Presentation file: [click here]