Matej Balog



Linear Algebra I

Chapters:

  1. Matrix algebra
  2. Vector spaces
  3. Linear Independence
  4. Linear Equations, Elementary Row Operations
  5. The Dimension of a Vector Space
  6. Linear Transformations
  7. The Matrix of a Linear Transformation

1 Matrix algebra

An $m \times n$ matrix $X$ is an array of real numbers \begin{equation} X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{pmatrix} = \left[ x_{ij} \right]_{m \times n} \end{equation} with $m, n \in \mathbb{N}, m, n \geq 1$ and $x_{ij} \in \mathbb{R}$. The $x_{ij}$ are called matrix entries.

Definition (matrix multiplication) Let $A = \left[ a_{ij} \right]_{m \times n}$ and $B = \left[ b_{ij} \right]_{n \times p}$. The product $AB$ is the $m \times p$ matrix with \begin{equation} \left[ AB \right]_{ij} = \sum_{k = 1}^n{a_{ik}b_{kj}} \end{equation}
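
Example For instance, with \begin{equation} A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix} \end{equation} the product $AB$ is the $2 \times 2$ matrix \begin{equation} AB = \begin{pmatrix} 1 \cdot 1 + 2 \cdot 0 + 3 \cdot 1 & 1 \cdot 0 + 2 \cdot 1 + 3 \cdot 1 \\ 4 \cdot 1 + 5 \cdot 0 + 6 \cdot 1 & 4 \cdot 0 + 5 \cdot 1 + 6 \cdot 1 \end{pmatrix} = \begin{pmatrix} 4 & 5 \\ 10 & 11 \end{pmatrix} \end{equation} Note that $BA$ is also defined here, but it is a $3 \times 3$ matrix, so $AB$ and $BA$ need not even have the same size.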

Theorem Let $A, B, C$ be matrices and $\lambda$ a scalar. Then

  1. (Associativity) $A(BC) = (AB)C$
  2. (Distributivity) $A(B + C) = AB + AC$
  3. $AI_n = A = I_nA$
  4. $\lambda(AB) = (\lambda A)B = A(\lambda B)$
whenever the products are defined.
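
Example As a quick check of associativity with concrete matrices, take \begin{equation} A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}, \qquad C = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \end{equation} Then \begin{equation} A(BC) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 2 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = (AB)C \end{equation} Note that commutativity is absent from the list above: here $BA = \begin{pmatrix} 2 & 2 \\ 1 & 2 \end{pmatrix} \not= \begin{pmatrix} 3 & 1 \\ 1 & 1 \end{pmatrix} = AB$, so matrix multiplication is not commutative in general.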

Example We can have $A^2 = 0$ with $A \not= 0$. For example \begin{equation} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = 0_{2 \times 2} \end{equation}

Definition Suppose $A$ is an $n \times n$ matrix. Then $A$ is invertible if there is an $n \times n$ matrix $X$ such that $AX = I_n$ and $XA = I_n$.
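
Example For instance, $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ is invertible with $X = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$, since \begin{equation} AX = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2 \end{equation} and likewise $XA = I_2$. On the other hand, the matrix of the previous example with $A^2 = 0$ and $A \not= 0$ is not invertible: if there were $X$ with $AX = XA = I_2$, then $A = AI_2 = A(AX) = A^2X = 0X = 0$, a contradiction.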

Theorem If $A$ is invertible with $AX = XA = I_n$, then $X$ is unique. So we can write $X = A^{-1}.$

Proof Suppose also $A\tilde{X} = \tilde{X}A = I_n$. Then $X = XI_n = X(A\tilde{X}) = (XA)\tilde{X} = I_n\tilde{X} = \tilde{X}$.

Lemma Assume $A, B$ are invertible $n \times n$ matrices. Then $AB$ is invertible with inverse $B^{-1}A^{-1}$. So $(AB)^{-1} = B^{-1}A^{-1}$.

Proof \begin{eqnarray} (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} =AI_nA^{-1} = (AI_n)A^{-1} = AA^{-1} = I_n \\ (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}I_nB = B^{-1}(I_nB) = B^{-1}B = I_n \end{eqnarray} So by definition $B^{-1}A^{-1}$ is the inverse of $AB$.
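
Example With $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$ we have $A^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}$ and $B^{-1} = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 1 \end{pmatrix}$, and indeed \begin{equation} (AB)^{-1} = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \frac{1}{2} & -\frac{1}{2} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} = B^{-1}A^{-1} \end{equation} The order matters: $A^{-1}B^{-1} = \begin{pmatrix} \frac{1}{2} & -1 \\ 0 & 1 \end{pmatrix}$, and $(AB)(A^{-1}B^{-1}) = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \not= I_2$.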

Definition Define $E_{ij}$ as the $n \times m$ matrix which has a $1$ in position $(i, j)$ and $0$ elsewhere.

Theorem $E_{ij}E_{rs} = \begin{cases} E_{is} & \text{if } j = r \\ 0 & \text{otherwise} \end{cases}$ whenever the product is defined.
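
Example In the $2 \times 2$ case, \begin{equation} E_{12}E_{21} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = E_{11}, \qquad E_{12}E_{12} = 0 \end{equation} the second product being exactly the earlier example of a nonzero matrix with $A^2 = 0$.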

Definition Let $A = \left[ a_{ij} \right]_{m \times n}$. The transpose $A^T$ of $A$ is the $n \times m$ matrix $A^T = \left[ \hat{a_{ij}} \right]$ where $\hat{a_{ij}} = a_{ji}$.

  • $A$ is symmetric if $A^T = A$
  • $A$ is skew-symmetric if $A^T = -A$
  • $A$ is orthogonal if $A$ is invertible with inverse $A^{-1} = A^T$
Note that in all three cases $A$ is required to be square.
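
Example For instance, $\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}$ is symmetric, $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ is skew-symmetric, and $\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}$ is orthogonal. Note also that the diagonal entries of a skew-symmetric matrix are all $0$, since $A^T = -A$ forces $a_{ii} = -a_{ii}$.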

Theorem If $P$ is a permutation matrix, then $P$ is orthogonal (and so invertible).

Proof Let $P = \left[ p_{ij} \right]$ be a permutation matrix.
For each $i$, let $k(i)$ be the unique index with $p_{ik(i)} = 1$, so that $p_{ik} = 0$ for $k \not= k(i)$. Then \begin{equation} \left[ PP^T \right]_{ij} = \sum_{k = 1}^n{p_{ik}\hat{p_{kj}}} = \sum_{k = 1}^n{p_{ik}p_{jk}} = p_{ik(i)} p_{jk(i)} = p_{jk(i)} \end{equation} since the term with $k = k(i)$ is the only one that can be nonzero. Now if $i = j$, then $p_{jk(i)} = p_{ik(i)} = 1$. Otherwise $k(i) \not= k(j)$, because $P$ is a permutation matrix, and so $p_{jk(i)} = 0$. Hence \begin{equation} \left[ PP^T \right]_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases} \end{equation} and $PP^T = I_n$. Then also $P^TP = I_n$, since $P^T$ is itself a permutation matrix whose transpose is $P$, so the same argument applies.
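
Example For the $3 \times 3$ permutation matrix \begin{equation} P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, \qquad P^T = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \end{equation} a direct computation gives $PP^T = P^TP = I_3$, so $P^{-1} = P^T$.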

Proposition Let $A, B$ be $m \times n$ matrices, $C$ an $n \times p$ matrix and $\lambda \in \mathbb{R}$. Then

  1. $(A^T)^T = A$
  2. $(A + B)^T = A^T + B^T$
  3. $(\lambda A)^T = \lambda A^T$
  4. $(BC)^T = C^T B^T$
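
Example As a check of (4), take $B = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and $C = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$. Then \begin{equation} (BC)^T = \begin{pmatrix} 3 & 2 \\ 7 & 4 \end{pmatrix}^T = \begin{pmatrix} 3 & 7 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix} = C^TB^T \end{equation} whereas $B^TC^T = \begin{pmatrix} 1 & 4 \\ 2 & 6 \end{pmatrix}$, so the reversal of the order in (4) is essential.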

2 Vector spaces

Theorem Let $V$ be a vector space over $\mathbb{R}$. For all $\lambda \in \mathbb{R}$ and $v \in V$

  1. $\lambda 0_V = 0_V$
  2. $0_\mathbb{R} v = 0_V$
  3. $(-\lambda)v = -(\lambda v) = \lambda (-v)$, in particular $(-1)v = 1(-v) = -v$

Proof (1) \begin{equation} 0_V = \lambda 0_V + (-\lambda 0_V) = \lambda (0_V + 0_V) + (-\lambda 0_V) = (\lambda 0_V + \lambda 0_V) + (-\lambda 0_V) = \lambda 0_V + (\lambda 0_V + (-\lambda 0_V)) = \lambda 0_V + 0_V = \lambda 0_V \end{equation} (2) \begin{equation} 0_V = 0_\mathbb{R} v + (-0_\mathbb{R} v) = (0_\mathbb{R} + 0_\mathbb{R}) v + (-0_\mathbb{R} v) = (0_\mathbb{R} v + 0_\mathbb{R} v) + (-0_\mathbb{R} v) = 0_\mathbb{R} v + (0_\mathbb{R} v + (-0_\mathbb{R} v)) = 0_\mathbb{R} v + 0_V = 0_\mathbb{R} v \end{equation} (3) \begin{eqnarray} \lambda v + (-\lambda) v = (\lambda + (-\lambda)) v = 0_\mathbb{R} v = 0_V \\ \lambda v + \lambda (-v) = \lambda (v + (-v)) = \lambda 0_V = 0_V \end{eqnarray} Then both $(-\lambda) v = - (\lambda v)$ and $\lambda (-v) = - (\lambda v)$ by uniqueness of additive inverses.

Definition We call a subspace $W$ of a vector space $V$ proper if $W \not= \{0_V\}$ and $W \not= V$.

Definition A vector space $V$ is the direct sum of vector spaces $U$ and $W$ if $U + W = V$ and $U \cap W = \{0_V\}$. Then we write $U \oplus W = V$.
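
Example In $V = \mathbb{R}^2$, the subspaces $U = \{ (x, 0) : x \in \mathbb{R} \}$ and $W = \{ (0, y) : y \in \mathbb{R} \}$ satisfy $U + W = \mathbb{R}^2$ and $U \cap W = \{ (0, 0) \} = \{ 0_V \}$, so $\mathbb{R}^2 = U \oplus W$. Taking instead $W' = \mathbb{R}^2$ still gives $U + W' = \mathbb{R}^2$, but $U \cap W' = U \not= \{ 0_V \}$, so that sum is not direct.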

3 Linear Independence

4 Linear Equations, Elementary Row Operations

5 The Dimension of a Vector Space

6 Linear Transformations

7 The Matrix of a Linear Transformation