
Final Presentation Commit
ShrayanRoy committed May 21, 2024
1 parent c90aee3 commit 5fdeb0e
Showing 2 changed files with 42 additions and 63 deletions.
51 changes: 21 additions & 30 deletions presentation/finalpresentation.Rmd
@@ -133,6 +133,8 @@ knitr::include_graphics("pimg/sigexpall.png")

# How to estimate Spatially Varying Blur?

* Real-world images often exhibit spatially varying blur, i.e., a blur level that varies from pixel to pixel.

--

```{r ,warning=FALSE,echo=FALSE,out.width='44%',fig.align='center'}
@@ -144,6 +146,8 @@ knitr::include_graphics("pimg/moon.png")

# How to estimate Spatially Varying Blur?

* Real-world images often exhibit spatially varying blur, i.e., a blur level that varies from pixel to pixel.

```{r ,warning=FALSE,echo=FALSE,out.width='44%',fig.align='center'}
knitr::include_graphics("pimg/moon_kern.png")
@@ -153,8 +157,6 @@ knitr::include_graphics("pimg/moon_kern.png")

# Spatially Varying Blur Estimation

* Real-world images often exhibit spatially varying blur, i.e., a blur level that varies from pixel to pixel.

* Assume locally constant blur within a small patch around each pixel.

--
@@ -227,7 +229,7 @@ knitr::include_graphics("pimg/couple.png")

* An obvious solution is to pad or fill with zeros to make a rectangular array.

```{r ,warning=FALSE,echo=FALSE,out.width='60%',out.height="40%",fig.align='center',fig.cap= "Figure: Zero padding to make rectangular array"}
```{r ,warning=FALSE,echo=FALSE,out.width='65%',out.height="40%",fig.align='center'}
knitr::include_graphics("pimg/zero_pad.png")
```
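The zero-padding step just described can be sketched numerically. A minimal illustration (not the authors' code; the ragged segment values and the use of numpy are assumptions of this example):

```python
import numpy as np

# Hypothetical irregularly shaped segment, stored as rows of unequal length.
segment_rows = [[0.2, 0.5], [0.1, 0.4, 0.9], [0.7]]

# Zero-pad every row to the longest row so the segment becomes a
# rectangular array, as required by FFT-based routines.
width = max(len(row) for row in segment_rows)
padded = np.array([row + [0.0] * (width - len(row)) for row in segment_rows])
print(padded.shape)  # (3, 3)
```

The padded zeros carry no gradient information, which is exactly why they distort the ML fit discussed on these slides.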
@@ -346,7 +348,7 @@ $$L(\boldsymbol{\theta}) \propto \frac{1}{\sqrt{\text{det}(\sigma^2 \boldsymbol{

# Decorrelation Loss

* For an observed blurred image $\boldsymbol{y}$, suppose the true blur kernel is $\boldsymbol{k_{\theta_0}}$.
* For an observed blurred image gradient $\boldsymbol{y}$, suppose the true blur kernel is $\boldsymbol{k_{\theta_0}}$.

--

@@ -356,10 +358,6 @@ $$L(\boldsymbol{\theta}) \propto \frac{1}{\sqrt{\text{det}(\sigma^2 \boldsymbol{

--

* Because convolution using $\boldsymbol{k}$ increases the correlation among image gradients.

--

* We simulated defocus blur on a $255\times 255$ image using a disc kernel with $r_{true} = 3$.

* We plotted the ACF of both the horizontal and vertical gradients after deconvoluting with $r = 2,3,4$.
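The simulation described above can be sketched as follows. White noise stands in for the "sharp" gradient field, and the disc kernel and circular (FFT) convolution are modelling assumptions of this sketch, not the authors' exact code:

```python
import numpy as np

def disc_kernel(r):
    """Uniform disc PSF of radius r, normalised to sum to 1."""
    size = 2 * int(np.ceil(r)) + 1
    yy, xx = np.mgrid[:size, :size] - size // 2
    k = (xx**2 + yy**2 <= r**2).astype(float)
    return k / k.sum()

def lag1_acf(g):
    """Sample lag-1 autocorrelation along the horizontal direction."""
    g = g - g.mean()
    return (g[:, :-1] * g[:, 1:]).sum() / (g * g).sum()

rng = np.random.default_rng(0)
g_sharp = rng.normal(size=(255, 255))   # white "sharp" gradient field
K = np.fft.fft2(disc_kernel(3.0), s=g_sharp.shape)
g_blur = np.real(np.fft.ifft2(np.fft.fft2(g_sharp) * K))

# Blurring with the disc kernel introduces strong positive autocorrelation
# in the gradients; comparing lag1_acf(g_sharp) with lag1_acf(g_blur)
# shows the effect the decorrelation loss exploits.
```

For a uniform disc of radius 3, the theoretical lag-1 autocorrelation of the blurred field is the fraction of horizontally adjacent pixel pairs inside the disc, roughly 0.76, versus essentially zero for the white field.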
@@ -386,17 +384,11 @@ knitr::include_graphics("pimg/decorr2.png")

# Decorrelation Loss

* For the deconvolution step we used the conditional mean of $\boldsymbol{X_{\omega}}$ given $\boldsymbol{Y_{\omega}}$ and $\boldsymbol{K_{\omega}}$, i.e.

$$\mathbb{E}[\boldsymbol{X_{\omega}}|\boldsymbol{Y_{\omega}},\boldsymbol{K_{\omega}}] = \frac{\sigma^2g_{\omega}\boldsymbol{K_{\omega}Y_{\omega}}}{\eta^2h_{\omega} + \sigma^2g_{\omega}|\boldsymbol{K_{\omega}}|^2}$$

* Take the inverse DFT to obtain the deconvoluted horizontal and vertical gradients, denoted by $\boldsymbol{x_{h,\theta}}$ and $\boldsymbol{x_{v,\theta}}$.

--

* Consider the sum of squared ACFs of the gradients, both along and across their direction, as the loss function, denoted by

$$\text{Decorr}(\boldsymbol{\theta}) = \text{Decorr}(\boldsymbol{x_{h,\theta}},\boldsymbol{x_{v,\theta}})$$
$\ \ \ \ \text{ }$ where $\boldsymbol{x_{h,\theta}}$ and $\boldsymbol{x_{v,\theta}}$ denote the deconvoluted horizontal and vertical gradients, respectively.

* It captures the amount of correlation remaining in the image gradients after deblurring using $\boldsymbol{k_\theta}$.

@@ -405,6 +397,11 @@ $$\text{Decorr}(\boldsymbol{\theta}) = \text{Decorr}(\boldsymbol{x_{h,\theta}},\
* $\text{Decorr}(\boldsymbol{\theta})$ should be minimal at the true value of $\boldsymbol{\theta}$.

$$\hat{\boldsymbol{\theta}} = \underset{\boldsymbol{\theta}}{\text{argmin}} \ \text{Decorr}(\boldsymbol{x_{h,\theta}},\boldsymbol{x_{v,\theta}})$$
--

* For the deconvolution step we used the conditional mean of $\boldsymbol{X_{\omega}}$ given $\boldsymbol{Y_{\omega}}$ and $\boldsymbol{K_{\omega}}$, i.e.

$$\mathbb{E}[\boldsymbol{X_{\omega}}|\boldsymbol{Y_{\omega}},\boldsymbol{K_{\omega}}] = \frac{\sigma^2g_{\omega}\boldsymbol{K_{\omega}Y_{\omega}}}{\eta^2h_{\omega} + \sigma^2g_{\omega}|\boldsymbol{K_{\omega}}|^2}$$
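The conditional-mean filter can be transcribed almost directly into code. In this sketch $\sigma^2$ and $\eta^2$ play the role of the signal and noise variances, $g_{\omega}$ and $h_{\omega}$ are per-frequency weights (taken flat here, which reduces the filter to standard Wiener deconvolution), and the numerator's $\boldsymbol{K_{\omega}}$ is read as its complex conjugate, as is usual in Wiener-type filters; all of these readings are assumptions of the sketch, not the authors' implementation:

```python
import numpy as np

def conditional_mean_deconv(y, k, sigma2=1.0, eta2=0.01, g=None, h=None):
    """Sketch of E[X_w | Y_w, K_w] =
    sigma^2 g_w conj(K_w) Y_w / (eta^2 h_w + sigma^2 g_w |K_w|^2),
    followed by an inverse DFT back to the spatial domain."""
    Y = np.fft.fft2(y)
    K = np.fft.fft2(k, s=y.shape)
    g = np.ones(y.shape) if g is None else g
    h = np.ones(y.shape) if h is None else h
    X = sigma2 * g * np.conj(K) * Y / (eta2 * h + sigma2 * g * np.abs(K) ** 2)
    return np.real(np.fft.ifft2(X))

# Sanity check: with a delta kernel, K_w = 1 everywhere, so the filter
# only shrinks y by the factor 1 / (1 + eta2).
y = np.arange(16.0).reshape(4, 4)
delta = np.zeros((3, 3))
delta[0, 0] = 1.0
out = conditional_mean_deconv(y, delta)
```

The $\eta^2 h_{\omega}$ term in the denominator regularises frequencies where $|\boldsymbol{K_{\omega}}|$ is near zero, which is what keeps the deconvolution stable for disc kernels whose DFT has zeros.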

---

@@ -512,15 +509,7 @@ knitr::include_graphics("pimg/refocus1.png")

---

# Discussion

* We proposed a new method to estimate spatially varying blur from a single image of the scene.

--

* Due to the issue of zero padding with ML estimation, we developed an ad hoc procedure based on the Decorrelation Loss.

--
# Comment

* The estimated blur maps clearly depict differences in blur levels and boundaries between objects in the image.

@@ -534,17 +523,15 @@ knitr::include_graphics("pimg/refocus1.png")

--

* Choice of $\kappa$ can be incorporated into the estimation procedure.
* Further improvement is possible by choosing a suitable tuning parameter.

--

* It is hard to isolate the effect of each tuning parameter because their effects are confounded.

---

# Discussion
# Computation time

* A significant reduction in computing time.
* This results in a significant reduction in computing time.

* This is because, instead of estimating the blur for every pixel separately, we estimate it only for segments.

@@ -554,9 +541,11 @@ knitr::include_graphics("pimg/refocus1.png")

* SAM takes around 3 minutes for segmentation on a GPU runtime.

* Pixel-by-pixel blur estimation takes around 1 hour 30 minutes for the initial step alone.

--

* Pixel-by-pixel blur estimation takes around 1 hour 30 minutes for the initial step alone.
* An R package implementing our estimation procedure is available on GitHub: [DepthR](https://github.com/ShrayanRoy/DepthR)

---

@@ -577,6 +566,8 @@ https://digitalcommons.isical.ac.in/doctoral-theses/7/

* Xiang Zhu et al. “Estimating Spatially Varying Defocus Blur From A Single Image”. In: (2013). issn: 1941-0042. url: http://dx.doi.org/10.1109/TIP.2013.2279316.

* William Hadley Richardson. “Bayesian-Based Iterative Method of Image Restoration”. In: (1972). doi: http://dx.doi.org/10.1145/1276377.127646

---

class: center, middle
54 changes: 21 additions & 33 deletions presentation/finalpresentation.html
@@ -137,6 +137,8 @@

# How to estimate Spatially Varying Blur?

* Real-world images often exhibit spatially varying blur, i.e., a blur level that varies from pixel to pixel.

--

<img src="pimg/moon.png" width="44%" style="display: block; margin: auto;" />
@@ -145,14 +147,14 @@

# How to estimate Spatially Varying Blur?

* Real-world images often exhibit spatially varying blur, i.e., a blur level that varies from pixel to pixel.

<img src="pimg/moon_kern.png" width="44%" style="display: block; margin: auto;" />

---

# Spatially Varying Blur Estimation

* Real-world images often exhibit spatially varying blur, i.e., a blur level that varies from pixel to pixel.

* Assume locally constant blur within a small patch around each pixel.

--
@@ -225,10 +227,7 @@

* An obvious solution is to pad or fill with zeros to make a rectangular array.

<div class="figure" style="text-align: center">
<img src="pimg/zero_pad.png" alt="Figure: Zero padding to make rectangular array" width="60%" height="40%" />
<p class="caption">Figure: Zero padding to make rectangular array</p>
</div>
<img src="pimg/zero_pad.png" width="65%" height="40%" style="display: block; margin: auto;" />

--

@@ -344,7 +343,7 @@

# Decorrelation Loss

* For an observed blurred image `\(\boldsymbol{y}\)`, suppose the true blur kernel is `\(\boldsymbol{k_{\theta_0}}\)`.
* For an observed blurred image gradient `\(\boldsymbol{y}\)`, suppose the true blur kernel is `\(\boldsymbol{k_{\theta_0}}\)`.

--

@@ -354,10 +353,6 @@

--

* Because convolution using `\(\boldsymbol{k}\)` increases the correlation among image gradients.

--

* We simulated defocus blur on a `\(255\times 255\)` image using a disc kernel with `\(r_{true} = 3\)`.

* We plotted the ACF of both the horizontal and vertical gradients after deconvoluting with `\(r = 2,3,4\)`.
@@ -381,17 +376,11 @@

# Decorrelation Loss

* For the deconvolution step we used the conditional mean of `\(\boldsymbol{X_{\omega}}\)` given `\(\boldsymbol{Y_{\omega}}\)` and `\(\boldsymbol{K_{\omega}}\)`, i.e.

`$$\mathbb{E}[\boldsymbol{X_{\omega}}|\boldsymbol{Y_{\omega}},\boldsymbol{K_{\omega}}] = \frac{\sigma^2g_{\omega}\boldsymbol{K_{\omega}Y_{\omega}}}{\eta^2h_{\omega} + \sigma^2g_{\omega}|\boldsymbol{K_{\omega}}|^2}$$`

* Take the inverse DFT to obtain the deconvoluted horizontal and vertical gradients, denoted by `\(\boldsymbol{x_{h,\theta}}\)` and `\(\boldsymbol{x_{v,\theta}}\)`.

--

* Consider the sum of squared ACFs of the gradients, both along and across their direction, as the loss function, denoted by

`$$\text{Decorr}(\boldsymbol{\theta}) = \text{Decorr}(\boldsymbol{x_{h,\theta}},\boldsymbol{x_{v,\theta}})$$`
`\(\ \ \ \ \text{ }\)` where `\(\boldsymbol{x_{h,\theta}}\)` and `\(\boldsymbol{x_{v,\theta}}\)` denote the deconvoluted horizontal and vertical gradients, respectively.

* It captures the amount of correlation remaining in the image gradients after deblurring using `\(\boldsymbol{k_\theta}\)`.

@@ -400,6 +389,11 @@
* `\(\text{Decorr}(\boldsymbol{\theta})\)` should be minimal at the true value of `\(\boldsymbol{\theta}\)`.

`$$\hat{\boldsymbol{\theta}} = \underset{\boldsymbol{\theta}}{\text{argmin}} \ \text{Decorr}(\boldsymbol{x_{h,\theta}},\boldsymbol{x_{v,\theta}})$$`
--

* For the deconvolution step we used the conditional mean of `\(\boldsymbol{X_{\omega}}\)` given `\(\boldsymbol{Y_{\omega}}\)` and `\(\boldsymbol{K_{\omega}}\)`, i.e.

`$$\mathbb{E}[\boldsymbol{X_{\omega}}|\boldsymbol{Y_{\omega}},\boldsymbol{K_{\omega}}] = \frac{\sigma^2g_{\omega}\boldsymbol{K_{\omega}Y_{\omega}}}{\eta^2h_{\omega} + \sigma^2g_{\omega}|\boldsymbol{K_{\omega}}|^2}$$`

---

@@ -489,15 +483,7 @@

---

# Discussion

* We proposed a new method to estimate spatially varying blur from a single image of the scene.

--

* Due to the issue of zero padding with ML estimation, we developed an ad hoc procedure based on the Decorrelation Loss.

--
# Comment

* The estimated blur maps clearly depict differences in blur levels and boundaries between objects in the image.

@@ -511,17 +497,15 @@

--

* Choice of `\(\kappa\)` can be incorporated into the estimation procedure.
* Further improvement is possible by choosing a suitable tuning parameter.

--

* It is hard to isolate the effect of each tuning parameter because their effects are confounded.

---

# Discussion
# Computation time

* A significant reduction in computing time.
* This results in a significant reduction in computing time.

* This is because, instead of estimating the blur for every pixel separately, we estimate it only for segments.

@@ -531,9 +515,11 @@

* SAM takes around 3 minutes for segmentation on a GPU runtime.

* Pixel-by-pixel blur estimation takes around 1 hour 30 minutes for the initial step alone.

--

* Pixel-by-pixel blur estimation takes around 1 hour 30 minutes for the initial step alone.
* An R package implementing our estimation procedure is available on GitHub: [DepthR](https://github.com/ShrayanRoy/DepthR)

---

@@ -554,6 +540,8 @@

* Xiang Zhu et al. “Estimating Spatially Varying Defocus Blur From A Single Image”. In: (2013). issn: 1941-0042. url: http://dx.doi.org/10.1109/TIP.2013.2279316.

* William Hadley Richardson. “Bayesian-Based Iterative Method of Image Restoration”. In: (1972). doi: http://dx.doi.org/10.1145/1276377.127646

---

class: center, middle
