## Diagonalizable & Minimal Polynomial

A matrix or linear map is diagonalizable over the field $F$ if and only if its minimal polynomial is a product of distinct linear factors over $F$.
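To see the criterion in action, here is a quick NumPy sketch (the two matrices are illustrative examples, not from the theorem itself): a Jordan block has minimal polynomial $(t-2)^2$, a repeated factor, while an upper-triangular matrix with distinct diagonal entries is annihilated by the product of distinct linear factors.

```python
import numpy as np

I = np.eye(2)

# A = [[2,1],[0,2]] has minimal polynomial (t-2)^2 (a repeated linear factor),
# so A is NOT diagonalizable.
A = np.array([[2.0, 1.0], [0.0, 2.0]])
print(np.allclose(A - 2*I, 0))                # False: (t-2) alone does not annihilate A
print(np.allclose((A - 2*I) @ (A - 2*I), 0))  # True:  (t-2)^2 annihilates A

# B = [[2,1],[0,3]] has minimal polynomial (t-2)(t-3) (distinct linear factors),
# so B IS diagonalizable.
B = np.array([[2.0, 1.0], [0.0, 3.0]])
print(np.allclose((B - 2*I) @ (B - 3*I), 0))  # True
```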

## Characteristic Polynomial

Let $A$ be an $n\times n$ matrix. The characteristic polynomial of $A$, denoted by $p_A(t)$, is the polynomial defined by $\displaystyle p_A(t)=\det(tI-A).$

## Cayley-Hamilton Theorem

Every square matrix over a commutative ring satisfies its own characteristic equation:

If $A$ is an $n\times n$ matrix, $p(A)=0$ where $p(\lambda)=\det(\lambda I_n-A)$.
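A quick numerical sanity check (an illustrative $2\times 2$ example, using the fact that for $2\times 2$ matrices $p(t)=t^2-\operatorname{tr}(A)\,t+\det(A)$):

```python
import numpy as np

# For a 2x2 matrix, p(t) = t^2 - tr(A) t + det(A).
A = np.array([[1.0, 2.0], [3.0, 4.0]])
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(p_of_A, 0))  # True: A satisfies its own characteristic equation
```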

## Superposition Principle

If $\mathbf{u}$ is a solution to $\mathbf{A}\mathbf{x}=\mathbf{b}$, and $\mathbf{v}$ is a solution to $\mathbf{A}\mathbf{x}=\mathbf{c}$, then $\mathbf{u}+\mathbf{v}$ is a solution to $\mathbf{A}\mathbf{x}=\mathbf{b}+\mathbf{c}$.

Proof:
$\mathbf{A}(\mathbf{u}+\mathbf{v})=\mathbf{A}\mathbf{u}+\mathbf{A}\mathbf{v}=\mathbf{b}+\mathbf{c}$
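The identity can be checked numerically, e.g. with a random (hence, almost surely invertible) matrix; this sketch assumes nothing beyond the statement above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # random 3x3, invertible with probability 1
b = rng.standard_normal(3)
c = rng.standard_normal(3)

u = np.linalg.solve(A, b)  # A u = b
v = np.linalg.solve(A, c)  # A v = c
print(np.allclose(A @ (u + v), b + c))  # True: u + v solves A x = b + c
```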

## Properties of Matrix Transpose

1) $(A^T)^T=A$
2) $(A+B)^T=A^T+B^T$
3) $(kA)^T=kA^T$
4) $(AB)^T=B^T A^T$
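All four properties can be verified numerically on random matrices (shapes chosen here only so that the sums and products are defined):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3))  # same shape as A, for the sum rule
C = rng.standard_normal((3, 4))  # compatible shape for the product rule
k = 2.5

print(np.allclose(A.T.T, A))              # rule 1: (A^T)^T = A
print(np.allclose((A + B).T, A.T + B.T))  # rule 2
print(np.allclose((k * A).T, k * A.T))    # rule 3
print(np.allclose((A @ C).T, C.T @ A.T))  # rule 4: note the reversed order
```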

## Characteristic Polynomial, Eigenvalues, Eigenvectors

Characteristic Polynomial: $\det(\lambda I-A)$
$$\begin{aligned} \lambda\ \text{is an eigenvalue of }A&\iff\det(\lambda I-A)=0\\ &\iff \lambda\ \text{is a root of the characteristic polynomial}. \end{aligned}$$

Eigenspace
The solution space of $(\lambda I-A)\mathbf{x}=0$ is called the eigenspace of $A$ associated with the eigenvalue $\lambda$. The eigenspace is denoted by $E_\lambda$.

Sum/Product of Eigenvalues
– The sum of all eigenvalues of $A$ (including repeated eigenvalues) is the same as $\operatorname{tr}(A)$ (the trace of $A$, i.e. the sum of the diagonal entries of $A$).
– The product of all eigenvalues of $A$ (including repeated eigenvalues) is the same as $\det(A)$.
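Both identities are easy to confirm numerically (the matrix here is just an illustrative example):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
eigs = np.linalg.eigvals(A)

print(np.isclose(eigs.sum(), np.trace(A)))        # True: sum of eigenvalues = tr(A) = 5
print(np.isclose(eigs.prod(), np.linalg.det(A)))  # True: product of eigenvalues = det(A) = -2
```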

## Finding Least Squares Solution Review and Others

Rotation Matrix

The rotation matrix
$\displaystyle R=\begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}$
rotates points in the $xy$-plane counterclockwise through an angle $\theta$ about the origin.

For example, rotating the vector $(1,0)$ by $45^\circ$ counterclockwise gives us:
$\displaystyle \begin{pmatrix} \cos 45^\circ & -\sin 45^\circ\\ \sin 45^\circ & \cos 45^\circ \end{pmatrix} \begin{pmatrix} 1\\ 0 \end{pmatrix} = \begin{pmatrix} \frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} \end{pmatrix}.$
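The same computation in NumPy:

```python
import numpy as np

theta = np.deg2rad(45)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = R @ np.array([1.0, 0.0])
print(np.allclose(v, [np.sqrt(2)/2, np.sqrt(2)/2]))  # True
```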

Finding Least Squares Solution

Given $A\mathbf{x}=\mathbf{b}$ (a possibly inconsistent system), solve the normal equations
$\displaystyle A^TA\mathbf{x}=A^T\mathbf{b}$ instead to get a least squares solution of the original equation.

Projection

If we know a least squares solution $\mathbf{u}$ of $A\mathbf{x}=\mathbf{b}$, we can find the projection $\mathbf{p}$ of $\mathbf{b}$ onto the column space of $A$ by $\displaystyle \mathbf{p}=A\mathbf{u}.$
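Both steps above (solving the normal equations, then projecting) can be sketched in NumPy; the system here is an illustrative example, and we cross-check against NumPy's built-in least squares solver:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])  # b is not in Col(A): the system is inconsistent

# Solve the normal equations A^T A x = A^T b
u = np.linalg.solve(A.T @ A, A.T @ b)
# Agrees with NumPy's built-in least squares solver
u_np = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(u, u_np))  # True

# Projection of b onto Col(A): p = A u; the residual b - p is orthogonal to Col(A)
p = A @ u
print(np.allclose(A.T @ (b - p), 0))  # True
```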

Dimension Theorem for Matrices (Also known as Rank-Nullity Theorem)

If $A$ is a matrix with $n$ columns, then $\displaystyle \operatorname{rank}(A)+\operatorname{nullity}(A)=n.$

($\operatorname{rank}(A)$ = number of pivot columns,

$\operatorname{nullity}(A)$ = number of non-pivot columns.)
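A SymPy sketch of the theorem (the matrix is an illustrative example; the rank and the nullspace are computed independently, then the identity is checked):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 4],
            [2, 4, 6, 8],   # = 2 * row 1, so it adds nothing to the rank
            [1, 0, 1, 0]])

rank = A.rank()               # number of pivot columns
nullity = len(A.nullspace())  # dimension of the solution space of Ax = 0
print(rank, nullity, rank + nullity == A.cols)  # 2 2 True
```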

Linear Independence and the Wronskian
A set of vector functions $\vec{f_1}(x), \dots, \vec{f_n}(x)$ from $\mathbb{R}$ to $\mathbb{R}^n$ is linearly independent on the interval $(\alpha,\beta)$ if $\displaystyle W[\vec{f_1}(x),\dots,\vec{f_n}(x)]\neq 0$ for at least one value of $x$ in $(\alpha,\beta)$. (Note that this is a sufficient condition only: the Wronskian may vanish identically even for a linearly independent set.)
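A small worked example (the two functions are hypothetical, chosen so the Wronskian is easy to compute by hand): take $\vec{f_1}(x)=(e^x, e^x)$ and $\vec{f_2}(x)=(e^{2x}, 2e^{2x})$, so $W$ is the determinant of the matrix with these as columns.

```python
import math

# f1(x) = (e^x, e^x), f2(x) = (e^(2x), 2 e^(2x))
def wronskian(x):
    # determinant of the 2x2 matrix whose columns are f1(x) and f2(x)
    return math.exp(x) * 2 * math.exp(2 * x) - math.exp(2 * x) * math.exp(x)

# W(x) = e^(3x) is never zero, so one nonzero value suffices:
# f1, f2 are linearly independent on any interval.
print(wronskian(0))  # 1.0
```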

## Dot Product and Span Summary

Dot Product
$\mathbf{u}\cdot\mathbf{v}=\|\mathbf{u}\|\|\mathbf{v}\|\cos\theta$
$\cos\theta=\frac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|\|\mathbf{v}\|}$
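For example, the second formula recovers the angle between two vectors (an illustrative pair where the answer is known to be $45^\circ$):

```python
import math

u = (1.0, 0.0)
v = (1.0, 1.0)

dot = u[0] * v[0] + u[1] * v[1]
norm_u = math.hypot(*u)
norm_v = math.hypot(*v)

theta = math.acos(dot / (norm_u * norm_v))
print(math.degrees(theta))  # approximately 45 degrees
```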

Span
$\text{span}\{\mathbf{u_1},\mathbf{u_2},\dots,\mathbf{u_k}\}=\{c_1\mathbf{u_1}+c_2\mathbf{u_2}+\dots+c_k\mathbf{u_k}\mid c_1,c_2,\dots,c_k\in\mathbb{R}\}=\text{set of all linear combinations of }$ $\{\mathbf{u_1},\mathbf{u_2},\dots,\mathbf{u_k}\}.$

Subspaces
$V\subseteq\mathbb{R}^n$ is a subspace of $\mathbb{R}^n$ if
1) $V=\text{span}\{\mathbf{u_1},\mathbf{u_2},\dots,\mathbf{u_k}\}$ for some vectors $\mathbf{u_1},\mathbf{u_2},\dots,\mathbf{u_k}$.
2) $V$ satisfies the closure properties:

(i) for all $\mathbf{u},\mathbf{v}\in V$, we must have $\mathbf{u}+\mathbf{v}\in V$.

(ii) for all $\mathbf{u}\in V$ and $c\in\mathbb{R}$, we must have $c\mathbf{u}\in V$.

3) $V$ is the solution set of a homogeneous system.

(It suffices to check any one of Conditions 1, 2, or 3.)

Remark:
For $V$ to be a subspace, the zero vector $\mathbf{0}$ must be in $V$. (Since for $\mathbf{u}\in V$ and $0\in\mathbb{R}$, we have $0\mathbf{u}=\mathbf{0}\in V$.)

Linear Independence and Dependence
$\mathbf{u_1},\mathbf{u_2},\dots,\mathbf{u_k}$ are linearly independent if the vector equation $\displaystyle c_1\mathbf{u_1}+c_2\mathbf{u_2}+\dots+c_k\mathbf{u_k}=\mathbf{0}$ has only the trivial solution, i.e. $c_1=c_2=\dots=c_k=0$.

If the system has non-trivial solutions, i.e. at least one $c_i$ not zero, then $\mathbf{u_1},\mathbf{u_2},\dots,\mathbf{u_k}$ are linearly dependent.
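In practice, independence can be checked by stacking the vectors as columns of a matrix: the set is independent exactly when the rank equals the number of vectors. A NumPy sketch with illustrative vectors:

```python
import numpy as np

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
u3 = np.array([1.0, 1.0, 0.0])  # u3 = u1 + u2, a non-trivial relation

# rank == number of vectors  <=>  only the trivial solution  <=>  independent
print(np.linalg.matrix_rank(np.column_stack([u1, u2])))      # 2 -> independent
print(np.linalg.matrix_rank(np.column_stack([u1, u2, u3])))  # 2 < 3 -> dependent
```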

## Gaussian Elimination Summary

Row echelon form (REF)
For each non-zero row, the leading entry is to the right of the leading entry of the row above.

E.g. $\begin{pmatrix} 0 & \mathbf{1} & 7 & 2\\ 0 & 0 & \mathbf{9} & 3\\ 0 & 0 & 0 & 0 \end{pmatrix}$

Note that the leading entry 9 of the second row is to the right of the leading entry 1 of the first row.

Reduced row echelon form (RREF)
A row echelon form is said to be reduced if, in each of its pivot columns, the leading entry is 1 and all other entries are 0.

E.g. $\begin{pmatrix} 1 & 0 & 0 & 2\\ 0 & 1 & 0 & 3\\ 0 & 0 & 1 & 4 \end{pmatrix}$

Elementary Row Operations
1) $cR_i$ — multiply the $i$th row by the constant $c$
2) $R_i \leftrightarrow R_j$ — swap the $i$th and the $j$th row
3) $R_i+cR_j$ — add $c$ times the $j$th row to the $i$th row.

Gaussian Elimination Summary
Gaussian Elimination is essentially using the elementary row operations (in any order) to reduce the matrix to row echelon form.

Gauss-Jordan Elimination
After reaching row echelon form, continue using elementary row operations to bring the matrix to reduced row echelon form.
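The whole Gauss-Jordan procedure fits in a short function. This is a sketch in pure Python using exact fractions (so no rounding issues), applied to the REF example from above, which reduces further to RREF:

```python
from fractions import Fraction

def rref(rows_in):
    """Gauss-Jordan elimination with exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in rows_in]
    n_rows, n_cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(n_cols):
        # find a row at or below pivot_row with a non-zero entry in this column
        pr = next((r for r in range(pivot_row, n_rows) if A[r][col] != 0), None)
        if pr is None:
            continue                               # no pivot in this column
        A[pivot_row], A[pr] = A[pr], A[pivot_row]  # swap rows (operation 2)
        piv = A[pivot_row][col]
        A[pivot_row] = [x / piv for x in A[pivot_row]]  # scale leading entry to 1 (operation 1)
        for r in range(n_rows):                    # clear the rest of the column (operation 3)
            if r != pivot_row and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == n_rows:
            break
    return A

result = rref([[0, 1, 7, 2],
               [0, 0, 9, 3],
               [0, 0, 0, 0]])
for row in result:
    print([str(x) for x in row])  # ['0','1','0','-1/3'], ['0','0','1','1/3'], ['0','0','0','0']
```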

## Vector Subspace Question (GRE 0568 Q3)

This is an interesting question on vector subspaces (a topic from linear algebra):

Question:
If V and W are 2-dimensional subspaces of $\mathbb{R}^4$, what are the possible dimensions of the subspace $V\cap W$?

(A) 1 only
(B) 2 only
(C) 0 and 1 only
(D) 0, 1, and 2 only
(E) 0, 1, 2, 3, and 4

To begin this question, we would need this theorem on the dimension of sum and intersection of subspaces (for finite dimensional subspaces):

$\dim (M+N)=\dim M+\dim N-\dim (M\cap N)$

Note that this resembles the Inclusion-Exclusion Principle, which is indeed used in the proof.

Hence, we have $\dim(V\cap W)=\dim V+\dim W-\dim (V+W)=4-\dim (V+W)$.

$\dim (V+W)$, the dimension of the sum of the subspaces $V$ and $W$, is at most 4 (since $V+W\subseteq\mathbb{R}^4$) and at least 2 (since $V+W\supseteq V$).

Thus, $\dim (V\cap W)$ can take the values 0, 1, or 2, so the answer is (D).
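All three cases are actually attained. A NumPy sketch with illustrative subspaces spanned by standard basis vectors, using $\dim(V\cap W)=\dim V+\dim W-\dim(V+W)$, where $\dim(V+W)$ is the rank of the two bases stacked together:

```python
import numpy as np

def dim_intersection(V, W):
    # rows of V and W are basis vectors of the two subspaces
    r = np.linalg.matrix_rank
    return r(V) + r(W) - r(np.vstack([V, W]))  # dim V + dim W - dim(V + W)

e1, e2, e3, e4 = np.eye(4)  # standard basis of R^4

print(dim_intersection(np.array([e1, e2]), np.array([e3, e4])))  # 0
print(dim_intersection(np.array([e1, e2]), np.array([e2, e3])))  # 1
print(dim_intersection(np.array([e1, e2]), np.array([e1, e2])))  # 2
```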

If you are looking for a lighthearted introduction to linear algebra, do check out Linear Algebra For Dummies. Like all “For Dummies” books, it is not overly abstract; rather, it presents Linear Algebra in a fun way that is accessible to anyone with just a high school math background. Linear Algebra is highly useful, and it is the tool that Larry Page and Sergey Brin used to build Google, one of the most successful companies on the planet.

Thanks for reading our blog Mathtuition88.com.

If you are a subscriber, thanks for your support! We will continue our mission of blogging interesting articles on Mathematics!

Google’s signature ranking algorithm “PageRank” is heavily based on linear algebra! Read the above book to find out more!

An engaging introduction to vectors and matrices and the algorithms that operate on them, intended for the student who knows how to program. Mathematical concepts and computational problems are motivated by applications in computer science. The reader learns by doing, writing programs to implement the mathematical concepts and using them to carry out tasks and explore the applications. Examples include: error-correcting codes, transformations in graphics, face detection, encryption and secret-sharing, integer factoring, removing perspective from an image, PageRank (Google’s ranking algorithm), and cancer detection from cell features. A companion web site, codingthematrix.com, provides data and support code. Most of the assignments can be auto-graded online. Over two hundred illustrations, including a selection of relevant xkcd comics.

Chapters: The Function, The Field, The Vector, The Vector Space, The Matrix, The Basis, Dimension, Gaussian Elimination, The Inner Product, Special Bases, The Singular Value Decomposition, The Eigenvector, The Linear Program

## 3×3 Matrix Inverse Shortcut (Fastest Method)

Source: http://www.math.columbia.edu/~bayer/LinearAlgebra/pdf/inverse.pdf

Backup copy (In case of broken link): 3×3 Matrix Inverse

This is an excellent method for finding the inverse of a 3×3 matrix, probably the fastest and easiest method for 3×3.

(Row reduction is better for 4×4 matrices and above.)

This method was first introduced to me by my student!

Featured book:

Linear Algebra and Its Applications, 4th Edition

## Linear Algebra Resources

1) https://www.math.okstate.edu/~binegar/3013-S99/3013-l16.pdf

(How to diagonalize matrices)

2) http://www.sosmath.com/matrix/expo/expo.html

(How to find exponential of matrices)

3)  http://en.wikipedia.org/wiki/Matrix_differential_equation

(Matrix Differential Equations)

4) http://www.ucl.ac.uk/CNDA/courses/coursework/JNF.pdf

(Jordan Normal Form)

5) http://www.math.niu.edu/~fletcher/jnf.pdf

(Jordan Normal Form for 2×2 Matrices)

6) http://www.math.colostate.edu/~gerhard/M345/CHP/ch9_6.pdf

(Matrix Exponential with repeated Eigenvalues)

7) http://www.math.vt.edu/people/afkhamis/class_home/notes/F08W12.pdf


(Matrix Differential equation with repeated Eigenvalues)

8) http://www.math.utah.edu/~gustafso/2250matrixexponential.pdf

(Matrix Exponential of Block Diagonal)

Featured book:

Linear Algebra and Its Applications, 4th Edition
Linear algebra is relatively easy for students during the early stages of the course, when the material is presented in a familiar, concrete setting. But when abstract concepts are introduced, students often hit a brick wall. Instructors seem to agree that certain concepts (such as linear independence, spanning, subspace, vector space, and linear transformations), are not easily understood, and require time to assimilate. Since they are fundamental to the study of linear algebra, students’ understanding of these concepts is vital to their mastery of the subject. David Lay introduces these concepts early in a familiar, concrete Rn setting, develops them gradually, and returns to them again and again throughout the text so that when discussed in the abstract, these concepts are more accessible.