<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Aleksandr Mikoff's blog</title><link>https://mikoff.github.io/</link><description>Recent content on Aleksandr Mikoff's blog</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 10 May 2024 20:00:00 +0300</lastBuildDate><atom:link href="https://mikoff.github.io/index.xml" rel="self" type="application/rss+xml"/><item><title>Bayesian linear regression</title><link>https://mikoff.github.io/posts/bayesian-linear-regression/</link><pubDate>Fri, 10 May 2024 20:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/bayesian-linear-regression/</guid><description>Bayesian regression. Let's consider a case when we know the conditional distribution $p(\mathbf{y}_k|\boldsymbol{\theta})$, but the parameter $\boldsymbol{\theta} \in \mathbb{R}^d$ is unknown.
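For concreteness, a minimal toy instance (my own sketch; the Gaussian model and every name in it are assumptions, not taken from the post):

import numpy as np

# Toy model: y_k ~ N(theta, sigma^2), known sigma, unknown scalar theta.
rng = np.random.default_rng(0)
theta_true, sigma = 2.0, 0.5
y = rng.normal(theta_true, sigma, size=100)

# For this Gaussian model the likelihood is maximized by the sample mean.
theta_mle = y.mean()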
The classical statistical method for estimating the parameter is maximum likelihood estimation (MLE), where we maximize the joint probability of the measurements, also called the likelihood function: $$ \mathcal{L}(\boldsymbol{\theta}) = \prod_{k=1}^T p(\mathbf{y}_k\mid\boldsymbol{\theta}). $$ The maximum of the likelihood function with respect to $\boldsymbol{\theta}$ gives the ML estimate: $$ \hat{\boldsymbol{\theta}} = \operatorname*{argmax}_{\boldsymbol{\theta}}\mathcal{L}(\boldsymbol{\theta}).</description></item><item><title>MCMC sampling</title><link>https://mikoff.github.io/posts/mcmc-sampling.md/</link><pubDate>Sun, 24 Mar 2024 21:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/mcmc-sampling.md/</guid><description>Quite often we want to sample from distributions that have a computationally intractable CDF. To draw samples from them, numerical procedures are used. In the following note I would like to demonstrate a few approaches for one particular example: given a 2D robot's perception and a map, we want to sample the most probable poses of the robot. This problem often emerges during initialization or re-initialization of the estimated robot pose, when the filter has diverged or needs to be initialized from scratch and we want to guarantee fast convergence.</description></item><item><title>Notes on Computer vision</title><link>https://mikoff.github.io/notes/computer-vision-notes/</link><pubDate>Sun, 24 Mar 2024 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/notes/computer-vision-notes/</guid><description>Notes on Computer vision. I made these notes while reading the book Computer Vision: Models, Learning, and Inference.
Coordinate systems notation. $\mathbf{R}_{wc}$ is a rotation matrix such that, after its application to the camera axes, they become collinear with the world axes. Some describe it as a matrix that rotates a vector from the camera coordinate system to the world coordinate system. However, this description can be slightly misleading, as vectors exist in space and are not physically rotated.</description></item><item><title>Probability density transform</title><link>https://mikoff.github.io/posts/probability-density-transform/</link><pubDate>Sat, 27 Jan 2024 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/probability-density-transform/</guid><description>PDF transformations. While reading the new [book]1 by Bishop, I came across the topic of PDF transformations. It turned out to be counter-intuitive that we must not only transform the PDF through the chosen function, but also multiply it by the derivative of the inverse function w.r.t. the substituted variable. While delving into the details of this topic, I found the following sources quite useful: [2]2, [3]3, [4]4, [5]5.</description></item><item><title>Understanding deep learning: training, SGD, code samples</title><link>https://mikoff.github.io/posts/nn-training.md/</link><pubDate>Sun, 15 Oct 2023 12:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/nn-training.md/</guid><description>Recently, I have been reading a new [book]1 by S. Prince titled "Understanding Deep Learning." While reading it, I made notes and practiced with the concepts that the author describes in great detail. Having no prior experience in deep learning, I was fascinated by how clearly the author explains the concepts and main terms.
This post is:
a collection of key notes from the first seven chapters of the book that I found useful; a numpy-only implementation of a deep neural network with variable layer sizes, trained using SGD.</description></item><item><title>Likelihood and probability normalization, log-sum-exp trick</title><link>https://mikoff.github.io/posts/likelihood-and-log-sum-exp/</link><pubDate>Fri, 11 Aug 2023 23:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/likelihood-and-log-sum-exp/</guid><description>Working with probabilities involves multiplying and normalizing their values. Since these values are sometimes extremely small, this can lead to underflow problems. The problem is evident in particle filters: we have to multiply very small likelihood values, which eventually vanish. The log-sum-exp trick alleviates this problem.
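As a minimal sketch of the trick (my own example, not code from the post; the values are made up):

import numpy as np

def log_sum_exp(log_values):
    # Shift by the maximum so np.exp never underflows all terms to zero.
    m = np.max(log_values)
    return m + np.log(np.sum(np.exp(log_values - m)))

# Normalizing tiny particle weights via their log-likelihoods:
log_w = np.array([-1000.0, -1001.0, -1002.0])
w = np.exp(log_w - log_sum_exp(log_w))  # sums to 1, no underflow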
Approach. Log-likelihoods. Since the likelihood values can be extremely small, it is more convenient to work with the log-likelihood instead of the likelihood: $$ \log(\mathcal{L}).</description></item><item><title>Optimization on manifold</title><link>https://mikoff.github.io/posts/optimization-on-manifold/</link><pubDate>Sun, 20 Nov 2022 21:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/optimization-on-manifold/</guid><description>Optimization on manifold. In the following post I would like to summarize my view of the pose optimization problem. Such a problem often occurs in robotics and related fields. Usually we want to jointly optimize the poses, their increments, and various measurements. What we want to find is the set of parameters that minimizes the sum of residuals, or differences, between the real measurements and the measurements we derive from our state.</description></item><item><title>Bypassing censorship: tools and services</title><link>https://mikoff.github.io/posts/bypassing-censorship/</link><pubDate>Tue, 14 Jun 2022 21:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/bypassing-censorship/</guid><description>Bypassing censorship in Russia. The level of censorship in Russia has been increasing over the last few decades and was pushed to new heights after the war started. In the following post I would like to discuss the options we have for bypassing the restrictions (of course, you can just buy a VPN subscription and be done with it, but we are engineers, right?).</description></item><item><title>Notes on backpropagation</title><link>https://mikoff.github.io/posts/notes-on-backpropagation/</link><pubDate>Sat, 19 Feb 2022 23:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/notes-on-backpropagation/</guid><description>Notes on backpropagation. In optimization and machine learning applications, the widely used tool for finding model parameters is gradient descent. It finds the maximum or minimum of the target function w.r.t. the parameters; in other words, it minimizes the discrepancy between the model and the data.
However, to use this method, the gradient has to be computed. The first problem is that if our function has a complex form, the process of differentiating it is quite tricky.</description></item><item><title>Linear and logistic regressions</title><link>https://mikoff.github.io/posts/linear-and-logistic-regression.md/</link><pubDate>Thu, 10 Feb 2022 19:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/linear-and-logistic-regression.md/</guid><description>Let's assume we want to predict whether a person is male or female judging by their height.
heights = np.array([120, 135, 145, 150, 151, 155, 165, 170, 172, 175, 180, 190])
labels = np.array([0, 0, 0, 0., 0, 0, 1, 1, 1, 1, 1, 1.])
females = labels == 0
males = labels == 1
fig, ax = plt.subplots(1, 1, figsize=(6, 2))
ax.scatter(heights[females], labels[females])
ax.scatter(heights[males], labels[males])
ax.set(xlabel='height, cm', ylabel='class')
ax.</description></item><item><title>Uncertainty propagation with and without Lie groups</title><link>https://mikoff.github.io/posts/uncertainty-propagation.md/</link><pubDate>Thu, 04 Nov 2021 23:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/uncertainty-propagation.md/</guid><description>Uncertainty propagation with and without Lie groups and algebras. Correct uncertainty estimation of the pose is crucial for the performance of any navigation or positioning algorithm. One of the most natural ways of representing uncertainty, for me, is the confidence ellipse.
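A minimal sketch of how such an ellipse can be read off a 2x2 covariance (my own example; the numbers are assumptions, not from the post):

import numpy as np

# Hypothetical 2x2 position covariance.
P = np.array([[0.5, 0.3],
              [0.3, 1.0]])

# The eigenvectors give the ellipse axis directions; the square roots of
# the eigenvalues give the 1-sigma semi-axis lengths.
vals, vecs = np.linalg.eigh(P)
angle = np.arctan2(vecs[1, -1], vecs[0, -1])  # orientation of the major axis
semi_axes = 2.0 * np.sqrt(vals)               # 2-sigma semi-axes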
In the following post I would like to show the effect of uncertainty propagation using various algorithms. More importantly, I would like to show the Gaussian representations in both Cartesian and exponential coordinates.</description></item><item><title>Point cloud alignment using Lie algebra machinery</title><link>https://mikoff.github.io/posts/point-cloud-alignment-and-lie-algebra.md/</link><pubDate>Mon, 27 Jul 2020 19:30:00 +0300</pubDate><guid>https://mikoff.github.io/posts/point-cloud-alignment-and-lie-algebra.md/</guid><description>Point cloud alignment using Lie algebra machinery. Special orthogonal group and vector spaces. Today I would like to cover the importance of Lie groups for problems that often arise in robotics. The pose of a robot can be described through a rotation and a translation. Rotations, however, do not form a vector space: we are not allowed to sum rotations or multiply them by a scalar, because the resulting element would not belong to the SO(3) group.</description></item><item><title>Point cloud alignment and SVD</title><link>https://mikoff.github.io/posts/point-cloud-alignment.md/</link><pubDate>Wed, 24 Jun 2020 18:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/point-cloud-alignment.md/</guid><description>Point cloud alignment and SVD. Singular value decomposition. Recently I studied the problem of finding the rotation and translation between two point sets and decided to write a post about it. The key here is singular value decomposition, or SVD.
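A minimal sketch of this alignment (the Kabsch algorithm; my own example, and every name and number in it is an assumption, not taken from the post):

import numpy as np

# Hypothetical point sets (3xN), with B = R_true @ A + t_true.
rng = np.random.default_rng(0)
A = rng.random((3, 10))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
B = R_true @ A + t_true[:, None]

# Center both sets and take the SVD of the cross-covariance.
Ac = A - A.mean(axis=1, keepdims=True)
Bc = B - B.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Ac @ Bc.T)
D = np.diag([1, 1, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
R = Vt.T @ D @ U.T
t = B.mean(axis=1) - R @ A.mean(axis=1)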
SVD is an extremely popular technique in many types of linear problems, so it should not be surprising that the point cloud alignment problem can be solved with its help.</description></item><item><title>Nonlinear estimation: Full Bayesian, MLE and MAP</title><link>https://mikoff.github.io/posts/nonlinear-estimation-mle-map.md/</link><pubDate>Sat, 18 Apr 2020 10:51:21 +0300</pubDate><guid>https://mikoff.github.io/posts/nonlinear-estimation-mle-map.md/</guid><description>Intro. Recently I read the book "State Estimation for Robotics" and came across a good example of a one-dimensional nonlinear estimation problem: estimating the position of a landmark from stereo-camera data.
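The basic disparity-to-depth relation for a rectified stereo pair, as a minimal sketch (my own numbers; the focal length and baseline are assumptions, not from the book):

# Hypothetical rectified stereo parameters.
f = 400.0  # focal length, pixels
b = 0.25   # baseline, meters

def depth_from_disparity(u_left, u_right):
    # Disparity d = u_left - u_right; depth z = f * b / d.
    d = u_left - u_right
    return f * b / d

z = depth_from_disparity(320.0, 300.0)  # 5.0 meters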
Distance from stereo images. The camera image is a projection of the world onto the image plane. Depth perception arises from the disparity of a 3D point (landmark) between the two images obtained from the left and right cameras.</description></item><item><title>EKF SLAM</title><link>https://mikoff.github.io/posts/ekf-slam.md/</link><pubDate>Thu, 09 Apr 2020 21:00:00 +0300</pubDate><guid>https://mikoff.github.io/posts/ekf-slam.md/</guid><description>Introduction. One of the most fundamental problems in robotics is the simultaneous localization and mapping (SLAM) problem.
It is more difficult than localization in that the map is unknown and has to be estimated along the way. It is more difficult than mapping with known poses, since the poses are unknown and have to be estimated along the way.
– S. Thrun
In the following post I would like to discuss EKF SLAM and highlight the important aspects of its implementation and convergence.</description></item><item><title>Particle Filter: localizing the robot</title><link>https://mikoff.github.io/posts/particle-filter.md/</link><pubDate>Tue, 31 Mar 2020 19:59:26 +0300</pubDate><guid>https://mikoff.github.io/posts/particle-filter.md/</guid><description>Particle filter. In this post I would like to show a basic implementation of the particle filter for robot localization using distance measurements to known anchors, or landmarks. So why is the particle filter so widely used? Its widespread application lies in its versatile and universal nature. The filter is able to:
Work with nonlinearities. Handle non-Gaussian distributions. Easily fuse various information sources. Simulate the processes. My sample implementation takes less than 100 lines of Python code and can be found here.</description></item><item><title>Inverse transform sampling</title><link>https://mikoff.github.io/posts/inverse-transform-sampling.md/</link><pubDate>Sun, 09 Feb 2020 14:56:13 +0300</pubDate><guid>https://mikoff.github.io/posts/inverse-transform-sampling.md/</guid><description>Probability density and cumulative distribution functions. The probability density function $f(x)$ describes the relative likelihood that a sample drawn from the distribution falls near the value $X$. We can also use the PDF to calculate the probability that a randomly drawn sample lies in a certain range, for example $a \leq X \leq b$. This probability equals the area under the PDF curve on the given interval and can be calculated by integration: $$ P(a \leq X \leq b) = \int_a^b f(x)\,dx $$ The cumulative distribution function gives the probability (the portion of data, the frequency) of drawing a number $X$ less than or equal to $x$: $$ P(X \leq x) = F(x).</description></item><item><title>About</title><link>https://mikoff.github.io/about/</link><pubDate>Sun, 09 Feb 2020 13:40:59 +0300</pubDate><guid>https://mikoff.github.io/about/</guid><description>I am a researcher and algorithm developer in the field of indoor and outdoor navigation. I am interested in Statistics, Robotics, Positioning, and the Automotive field. This blog is about:
explorations with algorithms, experiments with data, and explanations of math concepts and computer science algorithms in simple words. You can find my CV here</description></item><item><title>Documenting the experience</title><link>https://mikoff.github.io/posts/documenting-the-experience/</link><pubDate>Sun, 09 Feb 2020 12:53:51 +0300</pubDate><guid>https://mikoff.github.io/posts/documenting-the-experience/</guid><description>Hi there! For every engineer and developer, continuous self-study is a must. Every day we read, try to solve problems, clarify concepts, write code, and visualize things that help us gain insights.
I am interested in Navigation and Positioning, Statistics, Autonomous Vehicles, and Robotics. These topics are huge and, more importantly, they concern the future of humanity.
To summarize the things I am doing and to share ideas, I decided to blog about them.</description></item></channel></rss>