diff --git a/presentation/finalpresentation.Rmd b/presentation/finalpresentation.Rmd
index 32b1717..5a4e6b3 100644
--- a/presentation/finalpresentation.Rmd
+++ b/presentation/finalpresentation.Rmd
@@ -114,13 +114,13 @@ knitr::include_graphics("pimg/zhu.png")
 
 # Our Approach: Main Idea
 
-* We have used parametric models to estimate level of blur as surrogate for depth.
+* Parametric models to estimate level of blur as surrogate for depth.
 
-* Instead of doing post estimation segmentation, we will start with segmented image.
+* Instead of doing post-estimation segmentation, start with a pre-segmented image.
 
-* We estimate blur (depth) for each segment separately.
+* Estimate blur (depth) for each segment separately.
 
-* Modern segmentation algorithms such as **Segment-Anything** can be used for this.
+* Use modern segmentation algorithms such as **Segment-Anything**.
 
 
 ```{r ,warning=FALSE,echo=FALSE,out.width='35%',fig.align='center',echo=FALSE,fig.cap="Figure: Segmented Image by SAM"}
@@ -133,6 +133,8 @@ knitr::include_graphics("pimg/seg1.png")
 
 * When light rays spread from a point source and hit the camera lens, they should ideally refract and converge on the corresponding pixel of the original scene.
 
+--
+
 * However, if the source is out of focus, the refracted rays spread out over neighboring pixels as well.
 
 * This spreading pattern is called the Point Spread Function (PSF) or Blur Kernel.
@@ -188,17 +190,6 @@ Where,
 
 --
 
-```{r ,warning=FALSE,echo=FALSE,out.width='50%',fig.align='center',echo=FALSE,fig.cap= "Figure: Spatially Varying Blur Kernel"}
-
-knitr::include_graphics("pimg/svarying.png")
-```
-
----
-
-# Model for Blurred Image
-
-* Based on this observation we redefine our model for spatially varying case.
-
 * We assume that $\boldsymbol{k_t}$ is shift invariant in a neighborhood ${\boldsymbol{\eta_t}}$ of size $p_1(\boldsymbol{t}) \times p_2(\boldsymbol{t})$ containing $\boldsymbol{t}$.
 
 * Based on this assumption, our model for *spatially varying blur* is given by -
@@ -222,6 +213,10 @@
 
 # Proposed Parametric Models for Blur Kernel
 
+* In the case of blurring due to defocus, the shape of the blur kernel is **circular**, and its size controls the level of blur.
+
+--
+
 * **Uniform distribution** across a circular are defined by the radius of the circle, denoted by $r$.
 
 $$k(x,y) = \frac{1}{\pi r^2} \times \text{I}_{\{x^2 + y^2 \ \leq \ r^2\}}$$
diff --git a/presentation/finalpresentation.html b/presentation/finalpresentation.html
index fb394c4..e5a5740 100644
--- a/presentation/finalpresentation.html
+++ b/presentation/finalpresentation.html
@@ -124,13 +124,13 @@
 
 # Our Approach: Main Idea
 
-* We have used parametric models to estimate level of blur as surrogate for depth.
+* Parametric models to estimate level of blur as surrogate for depth.
 
-* Instead of doing post estimation segmentation, we will start with segmented image.
+* Instead of doing post-estimation segmentation, start with a pre-segmented image.
 
-* We estimate blur (depth) for each segment separately.
+* Estimate blur (depth) for each segment separately.
 
-* Modern segmentation algorithms such as **Segment-Anything** can be used for this.
+* Use modern segmentation algorithms such as **Segment-Anything**.
 
 <div class="figure" style="text-align: center">
 <img src="pimg/seg1.png" alt="Figure: Segmented Image by SAM" width="35%" />
@@ -143,6 +143,8 @@
 
 * When light rays spread from a point source and hit the camera lens, they should ideally refract and converge on the corresponding pixel of the original scene.
 
+--
+
 * However, if the source is out of focus, the refracted rays spread out over neighboring pixels as well.
 
 * This spreading pattern is called the Point Spread Function (PSF) or Blur Kernel.
@@ -198,17 +200,6 @@
 
 --
 
-<div class="figure" style="text-align: center">
-<img src="pimg/svarying.png" alt="Figure: Spatially Varying Blur Kernel" width="50%" />
-<p class="caption">Figure: Spatially Varying Blur Kernel</p>
-</div>
-
----
-
-# Model for Blurred Image
-
-* Based on this observation we redefine our model for spatially varying case.
-
 * We assume that `\(\boldsymbol{k_t}\)` is shift invariant in a neighborhood `\({\boldsymbol{\eta_t}}\)` of size `\(p_1(\boldsymbol{t}) \times p_2(\boldsymbol{t})\)` containing `\(\boldsymbol{t}\)`.
 
 * Based on this assumption, our model for *spatially varying blur* is given by -
@@ -232,6 +223,10 @@
 
 # Proposed Parametric Models for Blur Kernel
 
+* In the case of blurring due to defocus, the shape of the blur kernel is **circular**, and its size controls the level of blur.
+
+--
+
 * **Uniform distribution** across a circular are defined by the radius of the circle, denoted by `\(r\)`.
 
 `$$k(x,y) = \frac{1}{\pi r^2} \times \text{I}_{\{x^2 + y^2 \ \leq \ r^2\}}$$`
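For reference, the uniform circular kernel added in the last hunk, $k(x,y) = \frac{1}{\pi r^2} \times \text{I}_{\{x^2 + y^2 \leq r^2\}}$, discretises to an indicator of the disk normalised to sum to one. A minimal R sketch, illustrative only and not part of the patch; the helper name `disk_kernel` and the radius below are made up:

```r
# Sketch: discrete analogue of k(x, y) = 1/(pi r^2) * I{x^2 + y^2 <= r^2}.
# On a pixel grid we normalise by the number of pixels inside the disk,
# which plays the role of 1/(pi r^2) in the continuous formula.
disk_kernel <- function(r) {
  half <- ceiling(r)              # kernel support: r pixels around the centre
  xy   <- seq(-half, half)        # integer pixel offsets
  d2   <- outer(xy^2, xy^2, `+`)  # squared distance x^2 + y^2 at each offset
  k    <- (d2 <= r^2) * 1         # indicator of the disk
  k / sum(k)                      # weights sum to 1, as a blur kernel should
}

k3 <- disk_kernel(3)   # radius 3 pixels; a larger r means a more defocused segment
sum(k3)                # 1
```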
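And a sketch of how the spatially varying model reads once the image is pre-segmented: within each segment, taken as the neighborhood where $\boldsymbol{k_t}$ is assumed shift invariant, the image is convolved with that segment's own disk kernel. Everything here is a hypothetical illustration, not the authors' code: `segments` stands in for a label matrix derived from Segment-Anything masks, `radii` maps each label to its blur radius, boundary effects between segments are ignored, and `disk_kernel` is the helper from the previous sketch.

```r
# Sketch of the spatially varying forward model: one disk kernel per segment.
blur_by_segment <- function(img, segments, radii) {
  out <- img * 0
  for (lab in sort(unique(as.vector(segments)))) {
    k   <- disk_kernel(radii[[as.character(lab)]])  # this segment's blur level
    pad <- (nrow(k) - 1) / 2
    # zero-pad the image so the kernel window always fits, then correlate with k
    padded <- matrix(0, nrow(img) + 2 * pad, ncol(img) + 2 * pad)
    padded[pad + seq_len(nrow(img)), pad + seq_len(ncol(img))] <- img
    blurred <- img * 0
    for (i in seq_len(nrow(img))) {
      for (j in seq_len(ncol(img))) {
        blurred[i, j] <- sum(k * padded[i:(i + 2 * pad), j:(j + 2 * pad)])
      }
    }
    out[segments == lab] <- blurred[segments == lab]  # keep this segment's pixels only
  }
  out
}

# Toy example: background (label 1) more defocused than the foreground object (label 2)
img      <- matrix(runif(64 * 64), 64, 64)
segments <- matrix(1, 64, 64); segments[20:45, 20:45] <- 2
blurred  <- blur_by_segment(img, segments, radii = c("1" = 5, "2" = 1))
```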