Commit

mikoff committed Mar 24, 2024
1 parent 614022f commit 152d740
Showing 20 changed files with 287 additions and 14 deletions.
2 changes: 1 addition & 1 deletion index.xml
@@ -1,4 +1,4 @@
<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Aleksandr Mikoff's blog</title><link>https://mikoff.github.io/</link><description>Recent content on Aleksandr Mikoff's blog</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sun, 24 Mar 2024 12:00:00 +0300</lastBuildDate><atom:link href="https://mikoff.github.io/index.xml" rel="self" type="application/rss+xml"/><item><title>Notes on Computer vision</title><link>https://mikoff.github.io/notes/computer-vision-notes/</link><pubDate>Sun, 24 Mar 2024 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/notes/computer-vision-notes/</guid><description>Notes on Computer vision Link to heading I have made these notes while reading Computer vision: Models, Learning, and Inference book.
<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Aleksandr Mikoff's blog</title><link>https://mikoff.github.io/</link><description>Recent content on Aleksandr Mikoff's blog</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sun, 24 Mar 2024 21:00:00 +0300</lastBuildDate><atom:link href="https://mikoff.github.io/index.xml" rel="self" type="application/rss+xml"/><item><title>MCMC sampling</title><link>https://mikoff.github.io/posts/mcmc-sampling.md/</link><pubDate>Sun, 24 Mar 2024 21:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/mcmc-sampling.md/</guid><description>Quite often we want to sample from distributions whose CDF is computationally intractable, so numerical procedures are used to draw samples from them. In the following note I demonstrate a few approaches on one particular example: given a 2D robot's perception and a map, we want to sample the most probable poses of the robot. This problem often arises during initialization or re-initialization of the estimated robot pose, when the filter has diverged or needs to be initialized from scratch and we want to guarantee fast convergence.</description></item><item><title>Notes on Computer vision</title><link>https://mikoff.github.io/notes/computer-vision-notes/</link><pubDate>Sun, 24 Mar 2024 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/notes/computer-vision-notes/</guid><description>Notes on Computer vision Link to heading I have made these notes while reading the Computer vision: Models, Learning, and Inference book.
Coordinate systems notation Link to heading $\mathbf{R}_{wc}$ is a rotation matrix such that, after its application to the camera axes, they become collinear with the world axes. Some describe it as a matrix that rotates a vector from the camera coordinate system to the world coordinate system. However, this description can be slightly misleading, as vectors exist in space and are not physically rotated.</description></item><item><title>Probability density transform</title><link>https://mikoff.github.io/posts/probability-density-transform/</link><pubDate>Sat, 27 Jan 2024 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/probability-density-transform/</guid><description>PDF transformations Link to heading While reading the new [book]1 by Bishop I came across the pdf transformation topic. It turned out to be counter-intuitive that we need not only to transform the pdf by the selected function, but also to multiply it by the derivative of the inverse function w.r.t. the substituted variable. While delving into the details of this topic, I found the following sources to be quite useful: [2]2, [3]3, [4]4, [5]5.</description></item><item><title>Understanding deep learning: training, SGD, code samples</title><link>https://mikoff.github.io/posts/nn-training.md/</link><pubDate>Sun, 15 Oct 2023 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/nn-training.md/</guid><description>Recently, I have been reading a new [book]1 by S. Prince titled &amp;ldquo;Understanding Deep Learning.&amp;rdquo; While reading it, I made some notes and practiced with concepts that were described in great detail by the author. Having no prior experience in deep learning, I was fascinated by how clearly the author explains the concepts and main terms.
This post is:
a collection of key notes from the first seven chapters of the book that I have found useful; a numpy-only implementation of a deep neural network with variable layer sizes, trained using SGD.</description></item><item><title>Likelihood and probability normalization, log-sum-exp trick</title><link>https://mikoff.github.io/posts/likelihood-and-log-sum-exp/</link><pubDate>Fri, 11 Aug 2023 23:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/likelihood-and-log-sum-exp/</guid><description>Working with probabilities involves multiplying and normalizing their values. Since the numerical values are sometimes extremely low, this can lead to underflow problems. The problem is evident with particle filters: we have to multiply very low likelihood values that eventually vanish. The log-sum-exp trick alleviates this problem.
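The log-sum-exp trick summarized in the feed entry above can be sketched as follows. This is my own minimal illustration, not code from the post; the likelihood values are made up:

```python
import numpy as np

def normalize_log_weights(log_w):
    """Normalize weights given in log-space without underflow.

    Subtracting the maximum before exponentiating keeps the largest
    term at exp(0) = 1, so the sum can never underflow to zero.
    """
    log_w = np.asarray(log_w, dtype=float)
    w = np.exp(log_w - log_w.max())
    return w / w.sum()

# Particle-filter style example: log-likelihoods far below float range.
log_likelihoods = np.array([-1000.0, -1001.0, -1003.0])
print(np.exp(log_likelihoods).sum())          # 0.0 -- naive exponentiation underflows
print(normalize_log_weights(log_likelihoods)) # valid weights summing to 1
```

The shift by the maximum cancels in the ratio, so the normalized weights are exact up to floating-point rounding.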
3 changes: 2 additions & 1 deletion posts/index.html
@@ -1,7 +1,8 @@
<!doctype html><html lang=en><head><title>Posts · Aleksandr Mikoff's blog
</title><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=color-scheme content="light dark"><meta name=author content="Aleksandr Mikoff"><meta name=description content><meta name=keywords content><meta name=twitter:card content="summary"><meta name=twitter:title content="Posts"><meta name=twitter:description content><meta property="og:title" content="Posts"><meta property="og:description" content><meta property="og:type" content="website"><meta property="og:url" content="https://mikoff.github.io/posts/"><link rel=canonical href=https://mikoff.github.io/posts/><link rel=preload href=/fonts/fa-brands-400.woff2 as=font type=font/woff2 crossorigin><link rel=preload href=/fonts/fa-regular-400.woff2 as=font type=font/woff2 crossorigin><link rel=preload href=/fonts/fa-solid-900.woff2 as=font type=font/woff2 crossorigin><link rel=stylesheet href=/css/coder.min.577e3c5ead537873430da16f0964b754a120fd87c4e2203a00686e7c75b51378.css integrity="sha256-V348Xq1TeHNDDaFvCWS3VKEg/YfE4iA6AGhufHW1E3g=" crossorigin=anonymous media=screen><link rel=stylesheet href=/css/coder-dark.min.a00e6364bacbc8266ad1cc81230774a1397198f8cfb7bcba29b7d6fcb54ce57f.css integrity="sha256-oA5jZLrLyCZq0cyBIwd0oTlxmPjPt7y6KbfW/LVM5X8=" crossorigin=anonymous media=screen><link rel=stylesheet href=/css/image.min.c1a5dfc6bac0eb1b85bcd8abf8aba0d18e0bf02fc972f9a0b17d2962f5ca8dd5.css integrity="sha256-waXfxrrA6xuFvNir+Kug0Y4L8C/JcvmgsX0pYvXKjdU=" crossorigin=anonymous media=screen><link rel=stylesheet href=/css/spoiler.min.bf901294afff95f520a8150a4df4249576eb9c49c4f40f5f9c2de750588dd594.css integrity="sha256-v5ASlK//lfUgqBUKTfQklXbrnEnE9A9fnC3nUFiN1ZQ=" crossorigin=anonymous media=screen><link rel=stylesheet href=/plugins/academic-icons/css/academicons.min.f6abb61f6b9b2e784eba22dfb93cd399ce30ee01825791830a2737d6bfcd2be9.css integrity="sha256-9qu2H2ubLnhOuiLfuTzTmc4w7gGCV5GDCic31r/NK+k=" crossorigin=anonymous media=screen><link rel=icon type=image/svg+xml 
href=/images/favicon.svg sizes=any><link rel=icon type=image/png href=/img/favicon-32x32.png sizes=32x32><link rel=icon type=image/png href=/img/favicon-16x16.png sizes=16x16><link rel=apple-touch-icon href=/images/apple-touch-icon.png><link rel=apple-touch-icon sizes=180x180 href=/images/apple-touch-icon.png><link rel=manifest href=/site.webmanifest><link rel=mask-icon href=/images/safari-pinned-tab.svg color=#5bbad5><link rel=alternate type=application/rss+xml href=/posts/index.xml title="Aleksandr Mikoff's blog"></head><body class="preload-transitions colorscheme-auto"><div class=float-container><a id=dark-mode-toggle class=colorscheme-toggle><i class="fa-solid fa-adjust fa-fw" aria-hidden=true></i></a></div><main class=wrapper><nav class=navigation><section class=container><a class=navigation-title href=https://mikoff.github.io/>Aleksandr Mikoff's blog
</a><input type=checkbox id=menu-toggle>
<label class="menu-button float-right" for=menu-toggle><i class="fa-solid fa-bars fa-fw" aria-hidden=true></i></label><ul class=navigation-list><li class=navigation-item><a class=navigation-link href=/about/>About</a></li><li class=navigation-item><a class=navigation-link href=/posts/>Posts</a></li><li class=navigation-item><a class=navigation-link href=/tags>Tags</a></li><li class=navigation-item><a class=navigation-link href=/notes/>Notes</a></li></ul></section></nav><div class=content><section class="container list"><header><h1 class=title><a class=title-link href=https://mikoff.github.io/posts/>Posts</a></h1></header><ul><li><span class=date>January 27, 2024</span>
<label class="menu-button float-right" for=menu-toggle><i class="fa-solid fa-bars fa-fw" aria-hidden=true></i></label><ul class=navigation-list><li class=navigation-item><a class=navigation-link href=/about/>About</a></li><li class=navigation-item><a class=navigation-link href=/posts/>Posts</a></li><li class=navigation-item><a class=navigation-link href=/tags>Tags</a></li><li class=navigation-item><a class=navigation-link href=/notes/>Notes</a></li></ul></section></nav><div class=content><section class="container list"><header><h1 class=title><a class=title-link href=https://mikoff.github.io/posts/>Posts</a></h1></header><ul><li><span class=date>March 24, 2024</span>
<a class=title href=/posts/mcmc-sampling.md/>MCMC sampling</a></li><li><span class=date>January 27, 2024</span>
<a class=title href=/posts/probability-density-transform/>Probability density transform</a></li><li><span class=date>October 15, 2023</span>
<a class=title href=/posts/nn-training.md/>Understanding deep learning: training, SGD, code samples</a></li><li><span class=date>August 11, 2023</span>
<a class=title href=/posts/likelihood-and-log-sum-exp/>Likelihood and probability normalization, log-sum-exp trick</a></li><li><span class=date>November 20, 2022</span>
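The "Probability density transform" post listed above concerns the change-of-variables rule: the transformed pdf is the original pdf evaluated at the inverse transform, multiplied by the absolute derivative of the inverse. A minimal numeric sketch of my own (assuming the transform y = exp(x) applied to a standard normal, which yields the log-normal density):

```python
import numpy as np

def transformed_pdf(y):
    """pdf of y = exp(x) with x ~ N(0, 1).

    Change of variables: p_y(y) = p_x(log y) * |d(log y)/dy|
                                = p_x(log y) / y
    """
    x = np.log(y)
    p_x = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    return p_x / y  # the |dx/dy| = 1/y factor is the key correction

# Sanity check: the corrected density integrates to ~1 (trapezoid rule).
y = np.linspace(1e-6, 50.0, 200_000)
f = transformed_pdf(y)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))
print(integral)  # close to 1.0
```

Dropping the 1/y factor would leave a function that is not a density at all, which is exactly the counter-intuitive point the post makes.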
2 changes: 1 addition & 1 deletion posts/index.xml
@@ -1,4 +1,4 @@
<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on Aleksandr Mikoff's blog</title><link>https://mikoff.github.io/posts/</link><description>Recent content in Posts on Aleksandr Mikoff's blog</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sat, 27 Jan 2024 12:00:00 +0300</lastBuildDate><atom:link href="https://mikoff.github.io/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>Probability density transform</title><link>https://mikoff.github.io/posts/probability-density-transform/</link><pubDate>Sat, 27 Jan 2024 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/probability-density-transform/</guid><description>PDF transformations Link to heading While reading new [book]1 by Bishop I came across pdf transformation topic. It turned out to be counter-intuitive that we need not only transform the pdf by the selected function, but also multiply it by the derivative of the inverse function w.r.t. substituted variable. While delving into the details of this topic, I found the following sources to be quite useful: [2]2, [3]3, [4]4, [5]5.</description></item><item><title>Understanding deep learning: training, SGD, code samples</title><link>https://mikoff.github.io/posts/nn-training.md/</link><pubDate>Sun, 15 Oct 2023 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/nn-training.md/</guid><description>Recently, I have been reading a new [book]1 by S. Prince titled &amp;ldquo;Understanding Deep Learning.&amp;rdquo; While reading it, I made some notes and practiced with concepts that were described in great detail by the author. Having no prior experience in deep learning, I was fascinated by how clearly the author explains the concepts and main terms.
<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on Aleksandr Mikoff's blog</title><link>https://mikoff.github.io/posts/</link><description>Recent content in Posts on Aleksandr Mikoff's blog</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sun, 24 Mar 2024 21:00:00 +0300</lastBuildDate><atom:link href="https://mikoff.github.io/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>MCMC sampling</title><link>https://mikoff.github.io/posts/mcmc-sampling.md/</link><pubDate>Sun, 24 Mar 2024 21:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/mcmc-sampling.md/</guid><description>Quite often we want to sample from distributions whose CDF is computationally intractable, so numerical procedures are used to draw samples from them. In the following note I demonstrate a few approaches on one particular example: given a 2D robot's perception and a map, we want to sample the most probable poses of the robot. This problem often arises during initialization or re-initialization of the estimated robot pose, when the filter has diverged or needs to be initialized from scratch and we want to guarantee fast convergence.</description></item><item><title>Probability density transform</title><link>https://mikoff.github.io/posts/probability-density-transform/</link><pubDate>Sat, 27 Jan 2024 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/probability-density-transform/</guid><description>PDF transformations Link to heading While reading the new [book]1 by Bishop I came across the pdf transformation topic. It turned out to be counter-intuitive that we need not only to transform the pdf by the selected function, but also to multiply it by the derivative of the inverse function w.r.t. the substituted variable.
While delving into the details of this topic, I found the following sources to be quite useful: [2]2, [3]3, [4]4, [5]5.</description></item><item><title>Understanding deep learning: training, SGD, code samples</title><link>https://mikoff.github.io/posts/nn-training.md/</link><pubDate>Sun, 15 Oct 2023 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/nn-training.md/</guid><description>Recently, I have been reading a new [book]1 by S. Prince titled &amp;ldquo;Understanding Deep Learning.&amp;rdquo; While reading it, I made some notes and practiced with concepts that were described in great detail by the author. Having no prior experience in deep learning, I was fascinated by how clearly the author explains the concepts and main terms.
This post is:
a collection of key notes from the first seven chapters of the book that I have found useful; a numpy-only implementation of a deep neural network with variable layer sizes, trained using SGD.</description></item><item><title>Likelihood and probability normalization, log-sum-exp trick</title><link>https://mikoff.github.io/posts/likelihood-and-log-sum-exp/</link><pubDate>Fri, 11 Aug 2023 23:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/likelihood-and-log-sum-exp/</guid><description>Working with probabilities involves multiplying and normalizing their values. Since the numerical values are sometimes extremely low, this can lead to underflow problems. The problem is evident with particle filters: we have to multiply very low likelihood values that eventually vanish. The log-sum-exp trick alleviates this problem.
Approach Link to heading Log-likelihoods Link to heading Since the likelihood values can be extremely low, it is more convenient to work with the log-likelihood instead of the likelihood: $$ \log(\mathcal{L}).</description></item><item><title>Optimization on manifold</title><link>https://mikoff.github.io/posts/optimization-on-manifold/</link><pubDate>Sun, 20 Nov 2022 21:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/optimization-on-manifold/</guid><description>Optimization on manifold Link to heading In the following post I would like to summarize my view of the pose optimization problem. Such a problem often occurs in robotics and related fields. Usually we want to jointly optimize the poses, their increments, and various measurements. What we want to find is the set of parameters that minimizes the sum of residuals, or differences, between the real measurements and the measurements we derive from our state.</description></item><item><title>Bypassing censorship: tools and services</title><link>https://mikoff.github.io/posts/bypassing-censorship/</link><pubDate>Tue, 14 Jun 2022 21:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/bypassing-censorship/</guid><description>Bypassing the censorship in Russia Link to heading The level of censorship in Russia has been increasing over the last few decades and was pushed to new heights after the war started.
In the following post I would like to discuss the options we have for bypassing the restrictions (of course, you can just buy a VPN subscription and be done with it, but we are engineers, right?).</description></item><item><title>Notes on backpropagation</title><link>https://mikoff.github.io/posts/notes-on-backpropagation/</link><pubDate>Sat, 19 Feb 2022 23:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/notes-on-backpropagation/</guid><description>Notes on backpropagation Link to heading In optimization and machine learning applications, a widely used tool for finding model parameters is gradient descent. It allows one to find the maximum or minimum of the target function w.r.t. the parameters; in other words, to minimize the discrepancy between the model and the data.
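The MCMC sampling post added by this commit describes drawing probable 2D robot poses when the CDF is intractable. A random-walk Metropolis sketch of my own (the two-mode Gaussian likelihood below is a made-up stand-in for the map/perception model, not the post's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(pose):
    """Stand-in perception model: two plausible pose regions on the map."""
    modes = np.array([[2.0, 2.0], [8.0, 5.0]])
    d2 = ((pose - modes) ** 2).sum(axis=1)
    return np.logaddexp(*(-0.5 * d2))  # log of an unnormalized two-mode mixture

def metropolis(n_samples, step=0.8):
    """Random-walk Metropolis: needs only likelihood ratios, never the CDF."""
    pose = np.array([5.0, 5.0])
    ll = log_likelihood(pose)
    samples = []
    for _ in range(n_samples):
        proposal = pose + step * rng.standard_normal(2)
        ll_new = log_likelihood(proposal)
        # Accept with probability min(1, L_new / L_old), computed in log space.
        if np.log(rng.random()) < ll_new - ll:
            pose, ll = proposal, ll_new
        samples.append(pose.copy())
    return np.array(samples)

samples = metropolis(5000)  # samples concentrate around the likely pose regions
```

Because acceptance depends only on the ratio of likelihoods, the normalization constant of the pose distribution is never needed, which is the point of using MCMC here.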
Binary file added posts/mcmc-sampling.md/20240322220518.png
Binary file added posts/mcmc-sampling.md/20240322220533.png
Binary file added posts/mcmc-sampling.md/20240322233256.png
Binary file added posts/mcmc-sampling.md/20240324204042.png
