Linear Algebra
Notes on linear algebra
Bases
A basis $B$ for a vector space $V$ is a set of linearly independent vectors such that $\operatorname{span}(B) = V$. A vector space may have more than one basis, but all bases for a vector space have the same number of elements. Any vector of a vector space can be expressed as a linear combination of a basis of that vector space.
The basis in which a vector is expressed can be explicitly denoted using subscript notation. For example, if $\vec{v}$ is expressed in terms of basis $B$, then we write $[\vec{v}]_B$.
More formally, given vector space $V$, the set $B$ is a basis for $V$ if $\operatorname{span}(B) = V$ and if removing any element from $B$ stops it from being a basis for $V$, i.e. $\operatorname{span}(B \setminus \{\vec{b}\}) \neq V$ for every $\vec{b} \in B$. A basis for $V$ is also maximal, which means that adding any element of $V$ to $B$ makes it a linearly dependent vector set.
Orthogonal and Orthonormal Bases
Two vectors $\vec{u}$ and $\vec{v}$ are orthogonal if $\vec{u} \cdot \vec{v} = 0$. Given a vector space $V$, a set of vectors $\{\vec{e}_1, \ldots, \vec{e}_n\}$ is an orthogonal basis for $V$ if it is a basis and each $\vec{e}_i$ is orthogonal to all the other vectors in the basis: $\vec{e}_i \cdot \vec{e}_j = 0$ for all $i \neq j$.
The unit vector of vector $\vec{v}$ is $\hat{v} = \frac{\vec{v}}{\lVert \vec{v} \rVert}$. The set $\{\hat{e}_1, \ldots, \hat{e}_n\}$ is an orthonormal basis for $V$ if every $\hat{e}_i$ is the unit vector of a vector in an orthogonal basis for $V$. A vector $\vec{v}$ can be expressed in terms of an orthonormal basis as $\vec{v} = \sum_{i=1}^{n} (\vec{v} \cdot \hat{e}_i)\hat{e}_i$.
The Gram–Schmidt process allows us to obtain an orthogonal basis $\{\vec{e}_1, \ldots, \vec{e}_n\}$ given any basis $\{\vec{v}_1, \ldots, \vec{v}_n\}$. If we have an orthogonal basis, it's trivial to obtain an orthonormal basis by calculating the unit vector of each vector in the orthogonal basis. The process is basically defined as:
- $\vec{e}_1 = \vec{v}_1$
- $\vec{e}_2 = \vec{v}_2 - \frac{\vec{v}_2 \cdot \vec{e}_1}{\vec{e}_1 \cdot \vec{e}_1}\vec{e}_1$
- $\vec{e}_3 = \vec{v}_3 - \frac{\vec{v}_3 \cdot \vec{e}_1}{\vec{e}_1 \cdot \vec{e}_1}\vec{e}_1 - \frac{\vec{v}_3 \cdot \vec{e}_2}{\vec{e}_2 \cdot \vec{e}_2}\vec{e}_2$
- etc.
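As a sanity check, here is a minimal NumPy sketch of the process above (the starting basis is a hypothetical one chosen for illustration):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal basis spanning the same space as `vectors`."""
    orthogonal = []
    for v in vectors:
        # Subtract from v its projection onto every vector found so far.
        e = v - sum(((v @ u) / (u @ u)) * u for u in orthogonal)
        orthogonal.append(e)
    return orthogonal

# Hypothetical basis of R^3.
basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]

orthogonal = gram_schmidt(basis)
orthonormal = [e / np.linalg.norm(e) for e in orthogonal]

# Every pair of distinct vectors should have a dot product of (roughly) zero.
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(orthogonal[i] @ orthogonal[j]) < 1e-10
```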
Matrix Bases
Given a matrix $A$:
Row Space: The non-zero rows of $\operatorname{rref}(A)$ are a basis for the row space $R(A)$
Column Space: The non-zero rows of $\operatorname{rref}(A^\top)$ are a basis for the column space $C(A)$. Alternatively, find the columns in $\operatorname{rref}(A)$ that contain a pivot. The corresponding columns from the original matrix $A$ are a basis for $C(A)$
Null Space: If $\operatorname{rref}(A)$ contains pivots in all columns (i.e. the columns of the matrix are linearly independent), then the null space of $A$ is just $\{\vec{0}\}$. Otherwise, the process is more elaborate:
We can set up the system of equations $A\vec{x} = \vec{0}$. If $A$ is an $m \times n$ matrix, then $\vec{x}$ contains $n$ elements.
For example, given , then the equation is .
The goal is to re-express $\vec{x}$ so that the elements corresponding to columns in $\operatorname{rref}(A)$ with pivots are expressed in terms of the elements corresponding to columns without pivots.
In our example, the columns of $\operatorname{rref}(A)$ with pivots are the first and the third one, so we need to express $x_1$ and $x_3$ in terms of the elements corresponding to the columns without pivots. Expanding the equations gives us:
- so and therefore
- so and therefore
- which is not very useful in this case
So we now know that .
Finally, we can express $\vec{x}$ as a linear combination over the terms corresponding to columns without pivots. The coefficient vectors of that linear combination are a basis for $\mathcal{N}(A)$.
In our example, , so the basis is .
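As a sketch of the same procedure with SymPy, consider a hypothetical matrix (not the one from the example above) that also has pivots in its first and third columns; `rref()` reports the pivot columns and `nullspace()` returns the basis built as described above:

```python
from sympy import Matrix

# Hypothetical matrix, not the one from the original example.
A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4],
            [0, 0, 0, 0]])

rref, pivot_columns = A.rref()
print(pivot_columns)          # (0, 2): pivots in the first and third columns

# One basis vector per free (non-pivot) column.
for vector in A.nullspace():
    print(vector.T)
    assert A * vector == Matrix([0, 0, 0])   # every basis vector solves Ax = 0
```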
Change of Basis
The identity transformation matrix changes the basis in which a vector is expressed to another basis of the same vector space. The identity transformation matrix that changes from basis $B$ to basis $B'$ is ${}_{B'}[\mathbb{1}]_{B}$. Notice an identity transformation matrix is not equal to the identity matrix $\mathbb{1}$, even though they re-use the same symbol.
Given $B = \{\vec{b}_1, \ldots, \vec{b}_n\}$ and $B' = \{\vec{b}'_1, \ldots, \vec{b}'_n\}$, then ${}_{B'}[\mathbb{1}]_{B}$ consists of all the dot products $\vec{b}'_i \cdot \vec{b}_j$ (assuming the bases are orthonormal): $\left({}_{B'}[\mathbb{1}]_{B}\right)_{ij} = \vec{b}'_i \cdot \vec{b}_j$.
Notice that ${}_{B}[\mathbb{1}]_{B'} = \left({}_{B'}[\mathbb{1}]_{B}\right)^{-1}$.
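A minimal NumPy sketch of the dot-product construction, using hypothetical orthonormal bases of $\mathbb{R}^2$:

```python
import numpy as np

# Hypothetical orthonormal bases of R^2.
B = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]        # standard basis
Bp = [np.array([1.0, 1.0]) / np.sqrt(2),                 # rotated basis
      np.array([-1.0, 1.0]) / np.sqrt(2)]

# Entry (i, j) is the dot product of the i-th vector of B' with the j-th of B.
change = np.array([[bp @ b for b in B] for bp in Bp])

v_in_B = np.array([3.0, 4.0])     # coordinates with respect to B
v_in_Bp = change @ v_in_B         # coordinates with respect to B'

# The inverse matrix changes the basis back again.
assert np.allclose(np.linalg.inv(change) @ v_in_Bp, v_in_B)
```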
Linear Independence
A set of vectors $\{\vec{v}_1, \ldots, \vec{v}_n\}$ is linearly independent if the only solution to the equation $c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{0}$ is $c_i = 0$ for all $i$.
We can also say a set where no $\vec{v}_i$ is the zero vector is linearly independent if no vector from the set is in the span of the other vectors: $\vec{v}_i \notin \operatorname{span}(\{\vec{v}_j \mid j \neq i\})$ for all $i$.
Another way to express that the set of vectors in $B$ is linearly independent is that every vector in $\operatorname{span}(B)$ has a unique expression as a linear combination of vectors in $B$.
The rows of a matrix $A$ are linearly independent if $\operatorname{rank}(A)$ equals the number of rows, i.e. if $\operatorname{rref}(A)$ contains no zero rows.
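A quick NumPy check of that criterion on hypothetical vectors: a set of rows is linearly independent exactly when the rank of the matrix holding them equals the number of rows.

```python
import numpy as np

independent = np.array([[1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0]])    # two independent rows
dependent = np.array([[1.0, 2.0, 3.0],
                      [2.0, 4.0, 6.0]])      # second row is twice the first

assert np.linalg.matrix_rank(independent) == independent.shape[0]
assert np.linalg.matrix_rank(dependent) < dependent.shape[0]
```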
Linear Combinations
A linear combination is an algebraic expression consisting of a sum of terms multiplied by constants, of the form $c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$, where the $x_i$ are terms (with exponent 1) and the $c_i$ are their corresponding constants. Linear combinations are degree-1 polynomials.
In a linear combination $c_1 \vec{e}_1 + c_2 \vec{e}_2 + \cdots + c_n \vec{e}_n$, the terms $\vec{e}_i$ correspond to the basis in which we are expressing the linear combination.
Span
The span of a set of vectors is the set of all vectors that can be constructed as linear combinations of those vectors.
Consider vectors $\vec{v}_1$, $\vec{v}_2$, and $\vec{v}_3$, where $\vec{v}_3$ can be expressed as a linear combination of the other two. Then $\operatorname{span}(\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}) = \operatorname{span}(\{\vec{v}_1, \vec{v}_2\})$.
A set of vectors $V$ is spanned by $B$ if any vector in $V$ can be expressed as a linear combination of the vectors in $B$.
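A vector lies in the span of a set of vectors exactly when appending it does not increase the rank; a small NumPy sketch with hypothetical vectors:

```python
import numpy as np

spanning = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])       # rows span the xy-plane of R^3

def in_span(vectors, v):
    """True if `v` is a linear combination of the rows of `vectors`."""
    return (np.linalg.matrix_rank(np.vstack([vectors, v]))
            == np.linalg.matrix_rank(vectors))

assert in_span(spanning, np.array([2.0, -3.0, 0.0]))     # lies in the plane
assert not in_span(spanning, np.array([0.0, 0.0, 1.0]))  # points out of it
```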
Vector Spaces
A vector space is a set that consists of a number of linearly independent vectors and all linear combinations of those vectors. Vector spaces must be closed under addition and scalar multiplication, which means that, given vector space $V$:
Any sum of two vectors in the vector space is part of the vector space: $\vec{u}, \vec{v} \in V \implies \vec{u} + \vec{v} \in V$
Any vector in the vector space multiplied by any constant is part of the vector space: $\vec{v} \in V \implies \alpha\vec{v} \in V$ for any scalar $\alpha$
An abstract vector space is defined as $(V, F, +, \cdot)$ where:
- $V$ is a set of vector-like objects, such as $\mathbb{R}^n$
- $F$ is a set of scalars, such as $\mathbb{R}$
- $+ : V \times V \to V$ is an addition function
- $\cdot : F \times V \to V$ is a scalar multiplication function
The addition function and the set $V$ must have the following properties:
- Associativity: $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$
- Commutativity: $\vec{u} + \vec{v} = \vec{v} + \vec{u}$
- Zero vector: there exists a $\vec{0} \in V$ such that $\vec{v} + \vec{0} = \vec{v}$ for all $\vec{v} \in V$
- Inverse: for every $\vec{v} \in V$ there exists a $-\vec{v} \in V$ such that $\vec{v} + (-\vec{v}) = \vec{0}$
The scalar multiplication function and the set $F$ must have the following properties:
- Distributivity: $\alpha(\vec{u} + \vec{v}) = \alpha\vec{u} + \alpha\vec{v}$ and $(\alpha + \beta)\vec{v} = \alpha\vec{v} + \beta\vec{v}$
- Associativity: $\alpha(\beta\vec{v}) = (\alpha\beta)\vec{v}$
- Unit vector: $1\vec{v} = \vec{v}$
Vector spaces may define an inner product function $\langle \cdot, \cdot \rangle$ that is:
- Symmetric: $\langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle$
- Linear: for all $\alpha, \beta \in F$ and $\vec{u}, \vec{v}, \vec{w} \in V$, $\langle \alpha\vec{u} + \beta\vec{v}, \vec{w} \rangle = \alpha\langle \vec{u}, \vec{w} \rangle + \beta\langle \vec{v}, \vec{w} \rangle$
- Positive semidefinite: $\langle \vec{v}, \vec{v} \rangle \geq 0$, where $\langle \vec{v}, \vec{v} \rangle = 0$ if and only if $\vec{v} = \vec{0}$
Defining an inner product automatically defines the length/norm operator $\lVert \vec{v} \rVert = \sqrt{\langle \vec{v}, \vec{v} \rangle}$ and the distance operator $d(\vec{u}, \vec{v}) = \lVert \vec{u} - \vec{v} \rVert$. Both operations have the following characteristics given a valid inner product definition:
$\lVert \vec{v} \rVert \geq 0$, where $\lVert \vec{v} \rVert = 0$ if and only if $\vec{v} = \vec{0}$
Triangle inequality: $\lVert \vec{u} + \vec{v} \rVert \leq \lVert \vec{u} \rVert + \lVert \vec{v} \rVert$
Cauchy–Schwarz inequality: $|\langle \vec{u}, \vec{v} \rangle| \leq \lVert \vec{u} \rVert \lVert \vec{v} \rVert$, where equality holds if and only if $\vec{u}$ and $\vec{v}$ are linearly dependent
$d(\vec{u}, \vec{v}) \geq 0$, where $d(\vec{u}, \vec{v}) = 0$ if and only if $\vec{u} = \vec{v}$
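A small numeric check of those properties for the standard dot product on $\mathbb{R}^3$, using arbitrary vectors chosen for illustration:

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0])   # arbitrary vectors
v = np.array([4.0, 0.5, -1.0])

norm = np.linalg.norm
assert np.isclose(norm(u), np.sqrt(u @ u))     # norm induced by the inner product
assert norm(u + v) <= norm(u) + norm(v)        # triangle inequality
assert abs(u @ v) <= norm(u) * norm(v)         # Cauchy-Schwarz inequality
assert np.isclose(norm(u - v), norm(v - u))    # distance is symmetric
```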
Subspaces
A vector space $W$ is a vector subspace of $V$ if:
- $W$ is contained in $V$: $W \subseteq V$
- $W$ is a vector space (closed under addition and scalar multiplication)
Notice vector subspaces always contain the zero vector, as in order for a vector space to be closed under scalar multiplication, it must hold that $0\vec{v} \in W$ for the scalar zero, and multiplication with the scalar zero always yields the zero vector: $0\vec{v} = \vec{0}$.
One way to define a vector subspace is to constrain a larger vector space. Given $\mathbb{R}^3$, we can define a bi-dimensional vector subspace as $W = \{(x, y, 0) \mid x, y \in \mathbb{R}\}$. Another way is to define vector subspaces using $\operatorname{span}$. The bi-dimensional vector subspace of $\mathbb{R}^3$ above is also defined as $W = \operatorname{span}(\{(1, 0, 0), (0, 1, 0)\})$.
Orthogonal Complement
Given vector space $V$ and a vector subspace $W \subseteq V$, $W^{\perp}$ is the orthogonal complement of $W$ in vector space $V$, defined as: $W^{\perp} = \{\vec{v} \in V \mid \vec{v} \cdot \vec{w} = 0 \text{ for all } \vec{w} \in W\}$.
Dimension
The dimension of vector space $V$, denoted $\dim(V)$, is the cardinality (number of elements) of a basis of $V$. Every possible basis of a vector space has the same cardinality.
The following laws hold given an $m \times n$ matrix $A$:
- $\operatorname{rank}(A) + \dim(\mathcal{N}(A)) = n$, where $\operatorname{rank}(A) = \dim(R(A)) = \dim(C(A))$
Zero Vector
The zero vector of a vector space $V$ is a vector $\vec{0}$ such that $\vec{v} + \vec{0} = \vec{v}$ for all $\vec{v} \in V$.
Linear Transformations (or Map)
A linear transformation (also called a linear map or linear function) is a function that maps vectors to vectors and that preserves the following property, assuming function $T$ and linear combination $\alpha\vec{u} + \beta\vec{v}$: $T(\alpha\vec{u} + \beta\vec{v}) = \alpha T(\vec{u}) + \beta T(\vec{v})$.
Which in turn implies that:
- $T(\vec{0}) = \vec{0}$, which means that linear transformations preserve zero vectors
Given a linear transformation $T : V \to W$ that maps an $n$-dimensional vector space to an $m$-dimensional vector space, if $T$ is a bijective function, then $n = m$, and it means that $T$ is a one-to-one mapping between the vector spaces.
Consider a linear transformation $T$ and two vectors $\vec{u}$ and $\vec{v}$. If we know $T(\vec{u})$ and $T(\vec{v})$, then $T(\alpha\vec{u} + \beta\vec{v}) = \alpha T(\vec{u}) + \beta T(\vec{v})$, which means we know how $T$ will behave for any linear combination of $\vec{u}$ and $\vec{v}$. This is important, as if we know how a linear transformation behaves for a basis of a vector space, then we know how it behaves for the whole vector space.
Kernel
The kernel of a linear transformation $T : V \to W$ is the set of vectors from $V$ that map to the zero vector: $\ker(T) = \{\vec{v} \in V \mid T(\vec{v}) = \vec{0}\}$. Notice that if $\ker(T) = \{\vec{0}\}$, then $T$ is an injective function.
Image Space
The image space of a linear transformation $T : V \to W$, denoted $\operatorname{Im}(T)$, is the range of $T$, which is the set of vectors from $W$ that the function can produce. Notice that if $\operatorname{Im}(T) = W$, then $T$ is a surjective function.
Matrix Representation
If we have a linear transformation $T : V \to W$, a basis $B_V$ for $V$, and a basis $B_W$ for $W$, then we can express $T$ as a matrix $M_T$ such that applying the transformation to a vector is equivalent to multiplying the matrix with the vector: given $\vec{v} \in V$ and $\vec{w} = T(\vec{v})$, then $[\vec{w}]_{B_W} = M_T [\vec{v}]_{B_V}$. Notice the matrix is not the linear transformation, but a representation of the linear transformation with respect to certain bases.
Any linear transformation that maps an $n$-dimensional vector space to an $m$-dimensional vector space can be represented as an $m \times n$ matrix.
Matrix $M_T$ “takes” a vector in basis $B_V$ and outputs a vector in basis $B_W$, so it's sometimes more explicitly denoted as ${}_{B_W}[M_T]_{B_V}$, writing the input basis at the right, and the output basis at the left.
Correspondences exist between linear transformations and their matrix representations: applying the transformation corresponds to matrix–vector multiplication, composition of transformations corresponds to matrix multiplication, the inverse transformation corresponds to the matrix inverse, the kernel corresponds to the null space of the matrix, and the image space corresponds to the column space of the matrix.
In order to find the matrix representation of a linear transformation with respect to a chosen basis, apply the linear transformation to all the vectors in the chosen basis and use the results as the columns of the matrix representation.
For example, consider , basis , and a linear transformation . Then as , , and .
We can express a matrix transformation in terms of different bases by surrounding it with the corresponding identity transformation matrices. For example, given ${}_{B}[M_T]_{B}$, then ${}_{B'}[M_T]_{B'} = {}_{B'}[\mathbb{1}]_{B} \; {}_{B}[M_T]_{B} \; {}_{B}[\mathbb{1}]_{B'}$, which basically means that we transform the input vector to the original transformation's basis, apply the linear transformation, and then change to the new basis again.
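A NumPy sketch of both ideas, using a hypothetical transformation of $\mathbb{R}^2$ (a scaling along the axes) and a hypothetical alternative basis:

```python
import numpy as np

def T(v):
    """Hypothetical linear transformation: scale x by 2 and y by 3."""
    return np.array([2.0 * v[0], 3.0 * v[1]])

# Matrix representation with respect to the standard basis: apply T to each
# basis vector and use the results as columns.
standard = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
M = np.column_stack([T(e) for e in standard])

# Hypothetical alternative basis B' = {(1, 1), (1, -1)}; Q converts
# coordinates in B' to coordinates in the standard basis.
Q = np.column_stack([[1.0, 1.0], [1.0, -1.0]])
M_prime = np.linalg.inv(Q) @ M @ Q    # the same transformation expressed in B'

# Both representations must agree on any vector.
v_prime = np.array([2.0, 5.0])        # coordinates with respect to B'
v = Q @ v_prime                       # the same vector in the standard basis
assert np.allclose(Q @ (M_prime @ v_prime), M @ v)
```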
Inverse
A linear transformation $T : V \to W$ between vector spaces of the same dimension is invertible if it's either injective, which implies $\ker(T) = \{\vec{0}\}$, or surjective, which implies $\operatorname{Im}(T) = W$. It is also invertible if $\det(M_T) \neq 0$. If such a linear transformation is invertible, then its matrix representation is invertible as well, and vice versa.
Given linear transformation $T$ and its matrix representation $M_T$ in terms of a certain basis, then $T^{-1}$ corresponds to $M_T^{-1}$. Notice that given vector $\vec{v}$, if $T$ is invertible, then $T^{-1}(T(\vec{v})) = \vec{v}$.
Affine Transformations
An affine transformation is a function that maps vector spaces, which is a combination of a linear transformation $T$ and a translation by a fixed vector $\vec{b}$: $A(\vec{x}) = T(\vec{x}) + \vec{b}$, or given the matrix representation $M_T$, $A(\vec{x}) = M_T\vec{x} + \vec{b}$.
Systems of Linear Equations
Using RREFs
We can solve a system of linear equations with $n$ unknowns by constructing an $n \times (n+1)$ augmented matrix, where the last column corresponds to the constants at the right of the equals sign, and computing its RREF. The last column of the RREF contains the solution for each corresponding pivot term.
The system of equations has no solutions if the constructed matrix is inconsistent, in which case its RREF contains a row whose coefficients are all zero but whose constant at the end is non-zero.
For example, consider and . The resulting matrix is . Then, , so the solution set is and .
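Here is a sketch of the procedure with SymPy on a hypothetical system ($x + 2y = 5$ and $3x + 4y = 6$), not the system from the example above:

```python
from sympy import Matrix

# Augmented matrix for the hypothetical system x + 2y = 5, 3x + 4y = 6.
augmented = Matrix([[1, 2, 5],
                    [3, 4, 6]])

rref, pivots = augmented.rref()
print(rref)   # Matrix([[1, 0, -4], [0, 1, 9/2]])  ->  x = -4, y = 9/2
```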
Using Inverses
We can solve a system of linear equations with $n$ unknowns by expressing it as a matrix equation $A\vec{x} = \vec{b}$: an $n \times n$ square (otherwise there is no inverse) matrix $A$ containing the coefficients, multiplied by an $n \times 1$ vector $\vec{x}$ containing the terms, all equal to an $n \times 1$ vector $\vec{b}$ containing the right-hand side constants. Using the inverse of $A$, we can re-express $A\vec{x} = \vec{b}$ as $A^{-1}A\vec{x} = A^{-1}\vec{b}$, which in turn equals $\vec{x} = A^{-1}\vec{b}$, and then compute $A^{-1}\vec{b}$ to get the solution set.
Notice we multiply as $A^{-1}\vec{b}$ and not as $\vec{b}A^{-1}$, as matrix multiplication is not commutative and the two products are not equivalent.
For example, consider and . The initial matrix equation is . The inverse of the coefficient matrix is so we can re-write our equation as , so then:
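A NumPy sketch of the same procedure, reusing the hypothetical system from the RREF example ($x + 2y = 5$ and $3x + 4y = 6$):

```python
import numpy as np

A = np.array([[1.0, 2.0],        # coefficient matrix
              [3.0, 4.0]])
b = np.array([5.0, 6.0])         # right-hand side constants

x = np.linalg.inv(A) @ b         # x = A^{-1} b
assert np.allclose(x, [-4.0, 4.5])
assert np.allclose(A @ x, b)     # the solution satisfies the original system
```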
Using Determinants (Cramer’s Rule)
Given a system of linear equations $A\vec{x} = \vec{b}$ with $n$ unknowns, consider the $n \times n$ coefficient matrix $A$ and the $n \times 1$ constants vector $\vec{b}$. The matrix $A_i$ is the matrix $A$ with the column corresponding to the term $x_i$ replaced by the constants vector. If the coefficient matrix is $A = \begin{pmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n \end{pmatrix}$ and the term $x_1$ corresponds to the first column, then $A_1 = \begin{pmatrix} \vec{b} & \vec{a}_2 & \cdots & \vec{a}_n \end{pmatrix}$. The value of $x_i$ is then $x_i = \frac{\det(A_i)}{\det(A)}$.
For example, consider and . The coefficient matrix is and the terms vector is . , so and .
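Cramer's rule applied to the same hypothetical system, as a NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0],        # x + 2y = 5
              [3.0, 4.0]])       # 3x + 4y = 6
b = np.array([5.0, 6.0])

def cramer(A, b, i):
    """Solve for the i-th unknown by replacing the i-th column of A with b."""
    A_i = A.copy()
    A_i[:, i] = b
    return np.linalg.det(A_i) / np.linalg.det(A)

x = cramer(A, b, 0)    # det([[5, 2], [6, 4]]) / det(A) =  8 / -2 = -4
y = cramer(A, b, 1)    # det([[1, 5], [3, 6]]) / det(A) = -9 / -2 = 4.5
assert np.allclose([x, y], [-4.0, 4.5])
```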
Eigenvalues and Eigenvectors
The value $\lambda$ is an eigenvalue of $A$ if there exists a non-zero vector $\vec{e}_\lambda$ (the corresponding eigenvector of $\lambda$) such that multiplying $A$ by the vector is equal to scaling the vector by the eigenvalue: $A\vec{e}_\lambda = \lambda\vec{e}_\lambda$.
The list of eigenvalues of $A$ is denoted $\operatorname{eig}(A)$ and consists of the list of $\lambda$ such that $\det(A - \lambda\mathbb{1}) = 0$. The list of eigenvalues may contain duplicates. A repeated eigenvalue is degenerate and its algebraic multiplicity corresponds to the number of times it appears in the list.
In order to find the eigenvectors of a matrix, calculate the eigenspace corresponding to each of the eigenvalues of the matrix.
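A NumPy sketch of the defining equation on a hypothetical matrix; `np.linalg.eig` returns the eigenvalues together with a matrix whose columns are the corresponding eigenvectors:

```python
import numpy as np

A = np.array([[6.0, 2.0],        # hypothetical symmetric matrix
              [2.0, 9.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

for k, eigenvalue in enumerate(eigenvalues):
    v = eigenvectors[:, k]
    assert np.allclose(A @ v, eigenvalue * v)   # A v = lambda v for each pair
```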
Eigenspaces
Given matrix $A$ and eigenvalue $\lambda$, then $E_\lambda = \mathcal{N}(A - \lambda\mathbb{1})$ is the eigenspace that corresponds to the eigenvalue $\lambda$. Eigenvectors that come from different eigenspaces are guaranteed to be linearly independent.
Every eigenspace contains at least one non-zero eigenvector that corresponds to the eigenvalue, and may contain more than one linearly independent eigenvector for degenerate eigenvalues. The number of linearly independent eigenvectors for a single eigenvalue (the dimension of its eigenspace) is the geometric multiplicity. A matrix with a degenerate eigenvalue of algebraic multiplicity $m$ but fewer than $m$ linearly independent eigenvectors for it has deficient geometric multiplicity.
The null space of a matrix is called the zero eigenspace, as applying the matrix to any vector from the null space is equivalent to multiplication by zero: $A\vec{v} = \vec{0} = 0\vec{v}$ for all $\vec{v} \in \mathcal{N}(A)$. Notice that the $A\vec{v} = 0\vec{v}$ part of the expression corresponds to the eigenvalue equation $A\vec{e}_\lambda = \lambda\vec{e}_\lambda$ where the eigenvalue is 0 and the vectors in the null space are the eigenvectors.
Characteristic Polynomial
The characteristic polynomial of a matrix $A$ is a single-variable polynomial whose roots are the eigenvalues of $A$, and it is defined as $p_A(\lambda) = \det(A - \lambda\mathbb{1})$. Therefore $\lambda$ is an eigenvalue of $A$ if $p_A(\lambda) = 0$. If $A$ is an $n \times n$ matrix, then its characteristic polynomial has degree $n$.
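NumPy can recover the coefficients of the (monic) characteristic polynomial of a square matrix, and its roots match the eigenvalues; a sketch on a hypothetical matrix:

```python
import numpy as np

A = np.array([[6.0, 2.0],
              [2.0, 9.0]])

coefficients = np.poly(A)        # [1, -15, 50]  ->  lambda^2 - 15*lambda + 50
roots = np.roots(coefficients)

assert np.allclose(sorted(roots), sorted(np.linalg.eigvals(A)))
```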
Matrices
- Any vector is an eigenvector of the identity matrix $\mathbb{1}$, corresponding to its eigenvalue $\lambda = 1$
- All the eigenvalues of a positive semidefinite matrix are greater than or equal to zero
- All the eigenvalues of a positive definite matrix are greater than zero
- A matrix $A$ is invertible if there is a matrix $A^{-1}$ such that $AA^{-1} = A^{-1}A = \mathbb{1}$
- An $n \times n$ matrix is diagonalizable if it has $n$ distinct eigenvalues (which means it has $n$ linearly independent eigenvectors)
- Given a normal matrix $A$ (one for which $A^\top A = AA^\top$), $\vec{e}_\lambda$ is an eigenvector of $A$ iff $\vec{e}_\lambda$ is an eigenvector of $A^\top$
Notice that matrix determinants and traces are operations strictly defined on the eigenvalues of a matrix, as $\det(A) = \prod_{i} \lambda_i$ and $\operatorname{Tr}(A) = \sum_{i} \lambda_i$.
The statements $\det(A) \neq 0$ and $\mathcal{N}(A) = \{\vec{0}\}$ are equivalent. We know that $\det(A) = \prod_{i} \lambda_i$, so $\det(A) \neq 0$ implies that none of the eigenvalues are zero, otherwise the product would be cancelled out. Because none of the eigenvalues are zero, the only solution to $A\vec{v} = \vec{0}$ is $\vec{v} = \vec{0}$, so $\mathcal{N}(A) = \{\vec{0}\}$.
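A quick numeric confirmation of both identities on a hypothetical matrix:

```python
import numpy as np

A = np.array([[6.0, 2.0],
              [2.0, 9.0]])
eigenvalues = np.linalg.eigvals(A)       # 10 and 5 for this matrix

assert np.isclose(np.linalg.det(A), np.prod(eigenvalues))   # determinant = product
assert np.isclose(np.trace(A), np.sum(eigenvalues))         # trace = sum
```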
Eigenbases
The diagonalized version $\Lambda$ of a matrix $A$, which consists of the eigenvalues of $A$ on the diagonal, corresponds to $A$ expressed in its eigenbasis (its natural basis).
A matrix $Q$ whose columns are the eigenvectors of $A$ is a change-of-basis operation from the eigenbasis of $A$. Therefore $Q^{-1}$ is a change-of-basis operation to the eigenbasis of $A$. Notice that $Q$ may contain the eigenvectors in any order and with any scaling factor.
Eigendecomposition
Given a linear transformation's matrix representation $A$, the eigendecomposition of $A$ expresses the transformation in the eigenbasis of $A$ using change-of-basis operations.
We can express any diagonalizable transformation as $A = Q\Lambda Q^{-1}$, where $Q$ is a change-of-basis matrix containing the eigenvectors of $A$ as columns and $\Lambda$ is $A$ expressed in its eigenbasis (containing the eigenvalues on the diagonal).
Every normal matrix has a corresponding orthogonal matrix $O$ of eigenvectors such that its eigendecomposition is $A = O\Lambda O^\top$, as for orthogonal matrices $O^{-1} = O^\top$.
Applying the transformation to a vector $\vec{v}$ as $A\vec{v}$ is equivalent to $Q\Lambda Q^{-1}\vec{v}$, which first changes the basis of $\vec{v}$ to the eigenbasis of $A$, applies the transformation, and changes the basis back again.
For example, consider a matrix $A$ whose eigenvalues are $5$ and $10$. We can find the corresponding eigenvectors by computing the eigenspaces $\mathcal{N}(A - 5\mathbb{1})$ and $\mathcal{N}(A - 10\mathbb{1})$. The resulting eigenvectors become the columns of $Q$, $\Lambda$ contains $5$ and $10$ on its diagonal, and therefore we can say that $A = Q\Lambda Q^{-1}$.
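As an illustration with NumPy, the hypothetical symmetric matrix below also has eigenvalues $5$ and $10$ (it is not necessarily the matrix from the example above):

```python
import numpy as np

A = np.array([[6.0, 2.0],        # hypothetical matrix with eigenvalues 5 and 10
              [2.0, 9.0]])

eigenvalues, Q = np.linalg.eig(A)    # columns of Q are eigenvectors of A
Lambda = np.diag(eigenvalues)        # A expressed in its eigenbasis

assert np.allclose(A, Q @ Lambda @ np.linalg.inv(Q))   # A = Q Lambda Q^{-1}

# Since A is symmetric (hence normal), Q is orthogonal: Q^{-1} = Q^T.
assert np.allclose(np.linalg.inv(Q), Q.T)
```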