Monday, February 18, 2019

Lecture 5+6 (Feb 18)

Normalized histogram

$h(r_i)=n_i$ represents the number of pixels in the image with intensity $r_i$. $p(r_i)=n_i/MN$ represents the fraction of pixels with intensity $r_i$, where $MN$ is the total number of pixels, so that $\sum_{i} p(r_i)=1$.
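For concreteness, a minimal NumPy sketch of the normalized histogram (assuming an 8-bit grayscale image stored as an $M\times N$ integer array; the function name is only illustrative):

```python
import numpy as np

def normalized_histogram(img, L=256):
    """Return p(r_i) = n_i / (M*N) for i = 0, ..., L-1.
    Assumes `img` is an M x N array of integer intensities in 0..L-1."""
    M, N = img.shape
    counts = np.bincount(img.ravel(), minlength=L)  # n_i = h(r_i)
    return counts / (M * N)                         # sums to 1
```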

Histogram equalization

Given the normalized histogram $p_r(r_i)$, want to determine a transformation $s_i=T(r_i)$ for $i=0,1,\cdots,L-1$ such that $p_s(s_i)$ follows the uniform distribution
$$
\Rightarrow s=T(r)=(L-1) \int_0^r p_r(w) dw \, .
$$
In practice where image intensity is an integer, we have
$$
s_i=\mbox{round}\left[ (L-1) \sum_{j=0}^{i} p_r(r_j) \right] \, .
$$
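As a rough illustration of this rounded formula, a sketch of histogram equalization in NumPy (again assuming an 8-bit grayscale array; `histogram_equalize` is not a routine from the notes):

```python
import numpy as np

def histogram_equalize(img, L=256):
    """Apply s_i = round[(L-1) * sum_{j<=i} p_r(r_j)] as a lookup table."""
    p = np.bincount(img.ravel(), minlength=L) / img.size   # p_r(r_j)
    s = np.round((L - 1) * np.cumsum(p)).astype(np.uint8)  # table of s_i values
    return s[img]                                           # remap every pixel
```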

Additive noise model

$g(x,y)=f(x,y)+n(x,y)$ where $f$ is the unknown clean image, $g(x,y)$ is the observed image and $n(x,y)$ is the noise. We assume $n(x,y)$ is a random variable which follows some distribution.

Gaussian noise, uniform noise, salt-and-pepper noise.
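A small sketch of how such noise might be simulated (the noise parameters `sigma` and `p` are arbitrary choices for illustration, and pixel values are assumed to lie in $[0,255]$):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(f, sigma=20.0):
    """g = f + n with n ~ N(0, sigma^2), clipped back to [0, 255]."""
    n = rng.normal(0.0, sigma, size=f.shape)
    return np.clip(f.astype(float) + n, 0, 255).astype(np.uint8)

def add_salt_and_pepper(f, p=0.05):
    """Set a fraction p of pixels to 0 or 255 with equal probability."""
    g = f.copy()
    mask = rng.random(f.shape)
    g[mask < p / 2] = 0          # pepper
    g[mask > 1 - p / 2] = 255    # salt
    return g
```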

Filtering

Convolution
$$
g(x,y)= \sum_s \sum_t w(s,t) f(x-s,y-t) \, .
$$

Properties:
(i) Linear;
(ii) Convolution is commutative (ignoring the boundary effect).

For most image applications, we assume $w(s,t)$ is compact (non-zero only in a small neighborhood of the origin, so that only pixels near $(x,y)$ contribute to $g(x,y)$) and is symmetric.
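A direct, unoptimized sketch of this convolution (assuming an odd-sized kernel `w` and zero padding at the boundary; in practice a library routine such as `scipy.ndimage.convolve` would be used instead):

```python
import numpy as np

def convolve2d(f, w):
    """Compute g(x,y) = sum_s sum_t w(s,t) f(x-s, y-t) with zero padding."""
    a, b = w.shape[0] // 2, w.shape[1] // 2          # kernel half-widths
    fp = np.pad(f.astype(float), ((a, a), (b, b)))   # zero-pad the borders
    g = np.zeros(f.shape, dtype=float)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            # fp[a-s : a-s+M, b-t : b-t+N] is f(x-s, y-t) for all (x, y)
            g += w[s + a, t + b] * fp[a - s:a - s + f.shape[0],
                                      b - t:b - t + f.shape[1]]
    return g
```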

Matrix representation of the convolution as a linear transformation, obtained by applying it to the elementary basis of $\mathbb{R}^{M\times N}$; the kernel $w(s,t)$ gives a compact representation of this matrix.
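One way to make the matrix representation explicit is to apply the filter to each elementary basis image and stack the results as columns. A sketch (reusing the `convolve2d` sketch above; only feasible for small $M$, $N$):

```python
import numpy as np

def convolution_matrix(w, M, N):
    """Build the (MN x MN) matrix A such that vec(w * f) = A vec(f):
    column j of A is the filtered j-th elementary basis image."""
    A = np.zeros((M * N, M * N))
    for j in range(M * N):
        e = np.zeros((M, N))
        e.flat[j] = 1.0                       # elementary basis image e_j
        A[:, j] = convolve2d(e, w).ravel()    # vec(w * e_j)
    return A
```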

Reading Materials
Lecture notes: p.36-45
Slides: [click here]
