In previous presentations, we [AR, AvM] talked about stochastic differential equations (middle box in {numref}`table-desc-levels`). Examples:
- Brownian motion
- Drift-diffusion processes

[Figure `table-desc-levels`: levels of description (Risken, Table 1.1)]
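Such processes are easy to simulate directly. A minimal sketch (an illustration, not from the notes), using the Euler–Maruyama scheme for a one-dimensional drift–diffusion process $dX = \mu\,dt + \sigma\,dW$; the parameters are arbitrary:

```python
# Illustrative Euler–Maruyama simulation of dX = mu*dt + sigma*dW.
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(mu, sigma, x0, dt, n_steps, n_paths):
    """Simulate n_paths trajectories of dX = mu*dt + sigma*dW; return X_T."""
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Wiener increments
        x += mu * dt + sigma * dW
    return x

# Pure Brownian motion is the special case mu = 0.
xT = euler_maruyama(mu=0.5, sigma=1.0, x0=0.0, dt=1e-3, n_steps=1000, n_paths=20000)
print(xT.mean())  # ≈ mu*T = 0.5
print(xT.var())   # ≈ sigma^2 * T = 1.0
```

For additive (constant) noise as here, all discretization conventions coincide; the distinction below only matters once the noise amplitude depends on the state.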
We also talked about the fact that the “white noise” term in a Langevin equation can be interpreted according to different conventions:
::::{margin}
:::{note}
All of these conventions are defined in terms of finite intervals.
:::
::::
- Itô convention
- Stratonovich convention
- Anticipatory (or Hänggi-Klimontovich) convention
This ambiguity of convention arises because the infinitesimal limit of white noise is mathematically well-defined (within a given convention) but not physical: real noise has a finite correlation time.
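The difference between conventions is visible in simulation. For the multiplicative-noise SDE $dX = \sigma X\,dW$, an Itô integrator (Euler–Maruyama, noise coefficient evaluated at the left endpoint) and a Stratonovich integrator (Heun, midpoint rule) converge to processes with different means: $x_0$ under Itô, $x_0 e^{\sigma^2 t/2}$ under Stratonovich. A sketch with illustrative parameters (not from the notes):

```python
# Same SDE, dX = sigma*X dW, integrated under two conventions.
import numpy as np

rng = np.random.default_rng(1)
sigma, x0, dt, n_steps, n_paths = 1.0, 1.0, 1e-3, 1000, 50000

x_ito = np.full(n_paths, x0)
x_str = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    # Itô: Euler–Maruyama evaluates the noise coefficient at the left endpoint.
    x_ito = x_ito + sigma * x_ito * dW
    # Stratonovich: Heun predictor–corrector evaluates it at the midpoint.
    pred = x_str + sigma * x_str * dW
    x_str = x_str + 0.5 * sigma * (x_str + pred) * dW

print(x_ito.mean())  # ≈ x0 = 1.0
print(x_str.mean())  # ≈ x0*exp(sigma^2/2) ≈ 1.65
```

The two schemes use identical noise realizations; only the discretization rule (i.e. the convention) differs.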
A full solution to a Langevin equation typically takes the form of a probability density function (PDF) which, if the system is Markovian, evolves according to a Fokker–Planck equation:
$$ \frac{\partial p(t, x)}{\partial t} = - \sum_i \frac{\partial}{\partial x_i} \left[D_i^{(1)}(x) \, p(t,x)\right] + \sum_{i,j} \frac{\partial^2}{\partial x_i \partial x_j} \left[D_{ij}^{(2)}(x) \, p(t, x) \right] $$ (eq-FPE-intro)
[Figure `fig-density-3-times` (Risken, Fig. 2.2): the kind of problem we want to solve. Given an initial probability distribution, how does it evolve over time? Often, but not always, the initial condition is a Dirac δ.]
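As a concrete instance of this problem, consider pure diffusion: with $D^{(1)} = 0$ and constant $D^{(2)} = D$, {eq}`eq-FPE-intro` reduces to the heat equation, and the density evolving from a Dirac δ at the origin is a Gaussian of variance $2Dt$. A minimal numerical check (illustrative, assuming one dimension and $D = 1$):

```python
# With D1 = 0 and constant D2 = D, the Fokker–Planck equation becomes the
# heat equation; from a Dirac delta at x = 0, p(t, x) is a Gaussian with
# variance 2*D*t. Check normalization and variance at three snapshot times.
import numpy as np

def p(t, x, D=1.0):
    """Transition density of Brownian motion started at x = 0."""
    return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
for t in (0.1, 1.0, 4.0):  # three snapshots, as in the figure
    mass = p(t, x).sum() * dx          # total probability: stays ≈ 1
    var = (x**2 * p(t, x)).sum() * dx  # variance: ≈ 2*D*t
    print(f"t = {t}: mass ≈ {mass:.3f}, variance ≈ {var:.3f}")
```

The widening of the Gaussian with $t$ is exactly the spreading of the density sketched in the figure.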
A stochastic process is a generalization of a random variable. Intuitively, we assign a probability distribution to each point in time ({numref}`fig-density-3-times`). More precisely, to any countable set of times, the process associates a joint distribution. So if
- $p(t_1, x_1)$ is the PDF of a random variable on $\mathbb{R}^N$;
- $p(t_2, x_2, t_1, x_1)$ is the PDF of a random variable on $\mathbb{R}^{2N}$;
- $p(t_3, x_3, t_2, x_2, t_1, x_1)$ is the PDF of a random variable on $\mathbb{R}^{3N}$;
- etc.
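For Brownian motion these joint PDFs can be written down explicitly, and the hierarchy is consistent: integrating out a time point from a joint PDF recovers the lower-order one. A numerical sketch (illustrative, assuming one dimension and $D = 1$, so that $p(t_2, x_2, t_1, x_1) = p(t_2, x_2 \mid t_1, x_1)\,p(t_1, x_1)$ with Gaussian factors):

```python
# Two-time joint PDF of Brownian motion (D = 1, started at x = 0):
#   p(t2, x2, t1, x1) = gauss(x1; 2*t1) * gauss(x2 - x1; 2*(t2 - t1)).
# Marginalising over x1 must return the one-time PDF gauss(x2; 2*t2).
import numpy as np

def gauss(x, var):
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

t1, t2 = 1.0, 3.0
x = np.linspace(-15, 15, 1501)
dx = x[1] - x[0]

x1, x2 = np.meshgrid(x, x, indexing="ij")
joint = gauss(x1, 2 * t1) * gauss(x2 - x1, 2 * (t2 - t1))

marginal = joint.sum(axis=0) * dx  # integrate out x1
expected = gauss(x, 2 * t2)        # one-time PDF p(t2, x2)
print(np.max(np.abs(marginal - expected)))  # ≈ 0
```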
One obtains a lower-dimensional distribution by marginalising over certain time points:
:::{margin}
Markov process: