# Defining Random Variables
We write a $k$-vector (of scalars) as a row vector
$$
\v{x}=
\begin{bmatrix}
x_1 &
x_2 &
\ldots &
x_k
\end{bmatrix}.
$$
The transpose of $\v{x}$ is written as
$$
\v{x}^T=
\begin{bmatrix}
x_1 \\ x_2\\ \vdots \\ x_k
\end{bmatrix}.
$$
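As a concrete illustration (a sketch, not part of the definitions), the row/column orientation can be made explicit in R, where a plain vector carries no orientation, by storing $\v{x}$ as a $1\times k$ matrix:
```{r}
# The row vector x stored as a 1 x k matrix (plain R vectors have no orientation).
x <- matrix(c(1, 2, 3), nrow = 1)  # a 3-vector written as a row
t(x)                               # its transpose, a k x 1 column vector
```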
We use uppercase letters $X,Y,Z,\ldots$ to denote random variables. Random vectors are denoted by **bold** uppercase letters $\v{X},\v{Y},\v{Z},\ldots$ and are written as row vectors. For example, $$
\v{X}=
\begin{bmatrix}
X_{[1]} &
X_{[2]} &
\ldots &
X_{[k]}
\end{bmatrix}.
$$
To distinguish random matrices from random vectors, we denote a random matrix by $\m{X}$.
The expectation of $\v{X}$ is defined as
$$
\E{\v{X}}=
\begin{bmatrix}
\E{X_{[1]}} &
\E{X_{[2]}} &
\ldots &
\E{X_{[k]}}
\end{bmatrix}.
$$
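As a sketch of this componentwise definition, we can approximate each $\E{X_{[j]}}$ by a sample mean over simulated draws; the independent standard normal components below are an arbitrary choice for illustration:
```{r}
# Approximate E[X] componentwise by sample means over simulated draws.
# Each row of the matrix X below is one realization of the random row vector.
set.seed(1)
n <- 1e5                                       # number of simulated draws
k <- 3                                         # dimension of the random vector
X <- matrix(rnorm(n * k), nrow = n, ncol = k)  # i.i.d. standard normal components
colMeans(X)                                    # approximates E[X] = (0, 0, 0)
```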
The $k\times k$ covariance matrix of $\v{X}$ is defined as
$$
\begin{aligned}
\V{\v{X}} &=\E{(\v{X}-\E{\v{X}})^T(\v{X}-\E{\v{X}})} \\
&=\begin{bmatrix}
\sigma_1^2 & \sigma_{12} & \ldots & \sigma_{1k} \\
\sigma_{21} & \sigma_{2}^2 & \ldots & \sigma_{2k} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{k1} & \sigma_{k2} & \ldots & \sigma_{k}^2 \\
\end{bmatrix}_{k\times k}
\end{aligned}
$$
where $\sigma_j^2=\V{X_{[j]}}$ and $\sigma_{ij}=\C{X_{[i]},X_{[j]}}$ for $i,j=1,2,\ldots,k$ with $i\neq j$.
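Continuing the simulation above, a sketch of this definition: centering the draws and averaging $(\v{X}-\E{\v{X}})^T(\v{X}-\E{\v{X}})$ reproduces R's built-in estimator `cov()`:
```{r}
# Sample analogue of E[(X - E[X])^T (X - E[X])], using the draws X above.
Xc <- sweep(X, 2, colMeans(X))  # subtract the (estimated) mean from each column
V  <- t(Xc) %*% Xc / (n - 1)    # k x k matrix; close to the identity here
all.equal(V, cov(X))            # agrees with R's built-in covariance estimator
```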
:::{.theorem #explin name="Linearity of Expectation"}
Let $\m{A}_{l\times k},\m{B}_{m\times l}$ be fixed matrices and $\v{c}$ a fixed vector of size $l$. If $\v{X}$ and $\v{Y}$ are random vectors of size $k$ and $m$, respectively, such that $\E{\v{X}}$ and $\E{\v{Y}}$ are finite componentwise, then
$$
\E{\v{X}\m{A}^T+\v{Y}\m{B}+\v{c}}=\E{\v{X}}\m{A}^T+\E{\v{Y}}\m{B}+\v{c}.
$$
Since vectors are written as rows, $\v{X}\m{A}^T$, $\v{Y}\m{B}$, and $\v{c}$ are all of size $l$, so the sum is well defined.
:::
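Because the sample mean is itself a linear operator, the identity can be checked exactly (up to floating-point rounding) on the simulated draws above; the matrices `A`, `B` and the vector `cvec` below are arbitrary choices for illustration:
```{r}
# Numerical check of linearity of expectation, continuing with X, n, k above.
set.seed(2)
l <- 2; m <- 4
A <- matrix(runif(l * k), nrow = l)  # fixed l x k matrix
B <- matrix(runif(m * l), nrow = m)  # fixed m x l matrix
cvec <- runif(l)                     # fixed vector of size l
Y <- matrix(rnorm(n * m), nrow = n)  # draws of the random row vector Y
lhs <- colMeans(X %*% t(A) + Y %*% B + matrix(cvec, n, l, byrow = TRUE))
rhs <- colMeans(X) %*% t(A) + colMeans(Y) %*% B + cvec
max(abs(lhs - rhs))  # ~0: exact up to rounding, since the sample mean is linear
```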