Learn Linear Algebra (Get Started Here)

by Dave

Are you looking to advance in mathematics, and maybe even the scientific disciplines? Like calculus, linear algebra is crucial to scientists and engineers, playing a pivotal role in many modern advancements. In this introduction to linear algebra, we use both theory and examples to help you learn the power of this valuable tool.

Introduction to Linear Algebra

Linear algebra is commonly used in statistics, engineering, computer science, economics and of course mathematics. But what is linear algebra anyway? When I think of linear algebra, the first thing that comes to mind is the use of matrices and vectors. Technically, however, linear algebra is the study of vectors and the linear functions that act on them.

Even broken down this way, it still sounds like quite a bit. Next, let’s look more closely at how vectors, linear functions and matrices are related. Vectors are quantities that have both a length and a direction in the coordinate plane. Similarly, linear functions are functions that represent straight lines in the coordinate plane. It turns out we can write linear functions in terms of sums and scalar multiples of vectors, and these combinations are easily organized with a matrix.

You might be wondering, why bother using vectors and matrices with linear functions? Aren’t linear functions already about as easy as functions get? In fact, the very niceness of linear functions is why this branch of mathematics exists in the first place. Oftentimes, you want to begin by modeling a situation with a linear model. However, even a system of linear functions can become complicated quickly. As a result, you need a good way to organize your functions and perform operations on them. That’s why it can be useful to think of each function as a sum of smaller vector parts and then use matrices to organize them. Linear algebra will help you develop the skills to do this.
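
To make this concrete, here is a minimal NumPy sketch (the system and its numbers are invented purely for illustration) showing how a small system of linear equations can be organized into a coefficient matrix and a vector and then solved:

```python
import numpy as np

# Hypothetical 2-equation system, used only for illustration:
#   2x + 3y = 8
#    x -  y = -1
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])   # coefficient matrix organizes the linear functions
b = np.array([8.0, -1.0])     # right-hand sides collected into a vector

x = np.linalg.solve(A, b)     # solve the matrix equation A @ x = b
print(x)                      # expected: [1. 2.]
```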

Linear Algebra Starts with Linear Equations

Now that you have an idea of what linear algebra is, let’s look at how linear equations, matrices and vectors work together to solve systems.

When you put a system of linear equations into a matrix, you can change how it looks and make it easier to solve. You do this by using row operations. The process of using row operations to eliminate variables and find solutions is called Gaussian elimination.
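
Here is a short SymPy sketch of the same idea; the augmented matrix below is a made-up example, and rref() carries the row operations all the way to reduced row echelon form (the Gauss–Jordan variant of the elimination described above):

```python
from sympy import Matrix

# Augmented matrix for the hypothetical system
#    x + 2y +  z = 4
#   2x +  y -  z = 2
#    x -  y + 2z = 2
aug = Matrix([[1, 2, 1, 4],
              [2, 1, -1, 2],
              [1, -1, 2, 2]])

reduced, pivot_cols = aug.rref()   # performs the row operations for us
print(reduced)                     # last column now displays the solution x = y = z = 1
```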

After you have learned how to solve systems using Gaussian elimination, the next step is to start thinking about the connection between matrices and vectors. Many of the operations on matrices have similarities with the types of operations you can do with vectors.

Remember when you solved linear equations in an algebra course? To pin down a unique solution, you needed at least as many equations as variables. If you didn’t have enough independent equations, you couldn’t narrow down the solutions. Sometimes two equations were really the same, just written in different ways. When you place a system in a matrix, each equation becomes a row and each variable a column. This leads us to an important definition known as the rank of a matrix: the rank is the number of equations that are genuinely independent, that is, not combinations of the others.
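
As a quick illustration (with made-up numbers), NumPy can report the rank directly:

```python
import numpy as np

# Three equations, but the third row is the sum of the first two,
# so only two of them are genuinely independent.
A = np.array([[1, 2, -1],
              [3, 0,  2],
              [4, 2,  1]])   # row 3 = row 1 + row 2

print(np.linalg.matrix_rank(A))   # prints 2, not 3
```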

Linear Transformations

To a large extent, linear algebra is the study of linear transformations. A linear transformation must meet two criteria. First, it is a map that sends members of one vector space to another vector space. Second, it preserves the laws of scalar multiplication and vector addition. Furthermore, once bases are fixed, you can represent a linear transformation by a matrix as well. Therefore, maps between vector spaces and matrices are equivalent ways of looking at linear transformations.

Interestingly, many of the transformations that you looked at in geometry are linear transformations. You can check that a transformation of the plane is linear by verifying that the origin stays in the same place and that all lines remain lines after the transformation. Do you recall learning in geometry about dilations, contractions, orthogonal projections, reflections, rotations and vertical and horizontal shears? If so, you might be happy to know that they are all examples of linear transformations, and each of them can be carried out with a matrix.
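
For instance, here is a small NumPy sketch (the vectors and angle are chosen only for illustration) of three of those geometric transformations written as matrices and applied to vectors:

```python
import numpy as np

theta = np.pi / 2                      # a 90-degree rotation
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0, 0.0],
                       [0.0, -1.0]])   # reflection across the x-axis
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])         # horizontal shear

v = np.array([1.0, 0.0])
print(rotation @ v)                    # approximately [0., 1.]: v rotates onto the y-axis
print(reflection @ v)                  # [1., 0.]: unchanged, since v lies on the mirror line
print(shear @ np.array([0.0, 1.0]))    # [1., 1.]
```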

It’s nice to have these new tools and ways of thinking, but we are still missing a key piece. Remember when you used addition, subtraction, multiplication and division to rewrite equations by using inverse operations? It would be nice to have the same tool for matrix equations, and we do. If an inverse exists for a particular matrix, then the matrix is said to be invertible. Unfortunately, not every matrix has an inverse. As you will see in the lesson, only square matrices can possibly have an inverse, and not even all of them do.
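
Here is a minimal sketch of that idea, assuming NumPy and an invented 2 by 2 matrix; the determinant test decides whether an inverse exists before we compute it:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])                # an invertible square matrix

if not np.isclose(np.linalg.det(A), 0):   # a nonzero determinant means an inverse exists
    A_inv = np.linalg.inv(A)
    print(A_inv @ A)                      # approximately the identity matrix
else:
    print("A is singular, so it has no inverse")
```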

Vector Spaces and Subspaces

Let’s look a bit more closely at the properties of vector spaces. A vector space is a set of vectors that is closed under vector addition and scalar multiplication. To clarify, for the set to be a vector space, vector addition must be commutative and associative, and there must be an additive identity and additive inverses. In addition, scalar multiplication must be associative and must distribute over both scalar sums and vector sums.

Subspaces are vector spaces that sit inside larger vector spaces. In order to be a subspace, three properties must hold: the zero vector of the original vector space must be in the subset, and the subset must be closed under vector addition and scalar multiplication. Furthermore, in linear algebra, when we work in a vector space we will also need to establish a coordinate system so that we have an idea of what the space looks like geometrically.

Finite- and Infinite-Dimensional Vector Spaces

You create a linear combination when you add two or more vectors together, and each vector being added can have a scalar multiple in front of it. The set of all possible linear combinations of a list of vectors is called its span. In fact, the span is the smallest subspace containing all of the vectors in the list.

A vector space is finite-dimensional if you can write down a finite list of vectors that spans the space. If no finite list spans the space, then you call it an infinite-dimensional vector space.
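
As a rough illustration (the vectors below are made up), one way to test whether a vector lies in the span of a list is to compare ranks:

```python
import numpy as np

# Is w a linear combination of v1 and v2?
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
w  = np.array([2.0, 3.0, 5.0])        # here w = 2*v1 + 3*v2, so it lies in the span

A = np.column_stack([v1, v2])
in_span = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, w]))
print(in_span)                        # True: adding w does not increase the rank
```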

Kernels, Transformations and Linear Maps

Remember that a linear map is a transformation that preserves both vector addition and scalar multiplication. Thus, linear maps are also called linear transformations, and since every linear map is a function, in some situations they are simply called functions. A linear map sends each input vector to an output vector, and the set of all outputs is called the image of the map. Closely related to the image is the kernel, which we look at next.

The kernel of a matrix is the same as its null space. It contains all of the vectors that the matrix sends to zero, meaning that multiplying the matrix by any vector in the kernel produces the zero vector. Remember that a set of solutions like this is itself a linear space. Therefore the kernel answers the question, “which vectors does the original matrix send to zero?” So the kernel is often the solution set of a homogeneous system of equations.

Now let’s look at the rank-nullity theorem. First, recall that the rank of a matrix is the dimension of its column space, and the nullity of the matrix is the dimension of its null space. Row operations change neither quantity, so you can read both off of the reduced row echelon form: the rank is the number of pivot columns and the nullity is the number of free columns. The rank-nullity theorem says that the rank of the matrix plus the nullity of the matrix equals the number of columns.
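
Here is a small SymPy sketch, with an invented matrix, that finds a basis for the kernel and checks the rank-nullity statement numerically:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])          # rows are dependent, so the rank is small

rank = A.rank()                  # dimension of the column space
kernel = A.nullspace()           # a basis for the null space
nullity = len(kernel)

print(rank, nullity)             # 1 2
print(rank + nullity == A.cols)  # True: rank + nullity equals the number of columns
```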

Linear Independence

When you have a set of vectors, that set can be classified as either linearly dependent or linearly independent. A set of vectors is linearly dependent if at least one of the vectors in the set is a linear combination of the other vectors. Of course, it follows that if a set of vectors is not linearly dependent, it is linearly independent.
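
A quick way to test this on a computer is to compare the rank with the number of vectors; the NumPy sketch below uses made-up vectors for illustration:

```python
import numpy as np

vectors = np.array([[1.0, 0.0, 2.0],
                    [0.0, 1.0, 1.0],
                    [2.0, 1.0, 5.0]])   # third row = 2*(row 1) + (row 2)

# The rows are linearly independent exactly when the rank equals the number of rows.
independent = np.linalg.matrix_rank(vectors) == vectors.shape[0]
print(independent)   # False: one vector is a combination of the others
```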

Orthogonality in Linear Algebra

Two vectors are perpendicular to one another if their dot product is equal to zero. When talking about vectors, instead of the word perpendicular we usually say orthogonal. It is equally important to remember that a vector that is one unit in length is called a unit vector. In fact, we can have vectors that are unit vectors and mutually orthogonal. We call these vectors orthonormal, and an orthonormal set is a set made of orthonormal vectors.
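
For example, here is a tiny NumPy sketch (with invented vectors) checking orthogonality with the dot product and producing unit vectors:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([-4.0, 3.0])

print(np.dot(u, v))                # 0.0, so u and v are orthogonal
print(np.linalg.norm(u))           # 5.0, so u is not a unit vector

u_hat = u / np.linalg.norm(u)      # dividing by the length produces a unit vector
v_hat = v / np.linalg.norm(v)
print(np.dot(u_hat, u_hat), np.dot(u_hat, v_hat))   # 1.0 and 0.0: u_hat, v_hat are orthonormal
```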

Similarly, recall that a basis is a linearly independent set of vectors that spans a vector space. Naturally, then, we call a basis made only of orthonormal vectors an orthonormal basis. Furthermore, remember that we can build a matrix out of orthonormal vectors; a square matrix whose columns are orthonormal is called an orthogonal matrix.

Sometimes in linear algebra, you want to take a basis that is not orthonormal and change it to one that is. The Gram-Schmidt Process is the most common way to do this, and recording the work it performs gives the QR factorization of a matrix. You might be wondering why bother changing to an orthonormal basis at all. In fact, rewriting the basis as an orthonormal one can make the matrix much easier to manipulate and perform calculations on. To read more about orthogonal vectors and their special properties click here: Inner Products and Orthonormal Bases.
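
As a rough sketch of how this looks in practice, NumPy’s qr routine performs an orthonormalization in the spirit of Gram-Schmidt and records it as a factorization (the matrix below is made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])          # columns form a (non-orthonormal) basis of a plane

Q, R = np.linalg.qr(A)              # orthonormalize the columns, recorded as A = Q R
print(Q.T @ Q)                      # approximately the identity: Q's columns are orthonormal
print(np.allclose(Q @ R, A))        # True: the factorization reproduces A
```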

  1. Orthonormal Bases and Orthogonal Projections
  2. Orthogonal Matrix and Orthogonal Projection Matrix
  3. Gram-Schmidt Process and QR Factorization
  4. Inner Products and Orthonormal Bases

Determinants

If you have a square matrix, then you can find its determinant. Interestingly, the determinant of a matrix is just one number, which may be positive, negative, or zero. It is a single number because it measures the signed volume of the box spanned by the rows of the matrix. It is important because a matrix has an inverse exactly when its determinant is nonzero, and since only square matrices have determinants, only square matrices can be invertible. The determinant also shows up in other matrix calculations, such as finding eigenvalues and eigenvectors.

For a two by two matrix the determinant is easy to find, and for larger matrices you can use the Laplace (cofactor) expansion. Unfortunately, for large matrices this method becomes tedious, because you have to break the matrix down into many smaller pieces. In addition to the definition, you can also use Gauss–Jordan elimination to find the determinant.
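
For a concrete check (with an invented two by two matrix), the cofactor formula and an elimination-based library routine agree:

```python
import numpy as np

A = np.array([[2.0, 5.0],
              [1.0, 3.0]])

by_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # the ad - bc formula for 2 x 2 matrices
by_library = np.linalg.det(A)                          # elimination-based routine
print(by_formula, by_library)                          # both give 1.0 (up to rounding)
```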

Eigenvalues and Eigenvectors in Linear Algebra

An eigenvector of a matrix is a nonzero vector that the matrix only stretches or shrinks: multiplying the matrix by the vector gives the same result as multiplying the vector by a scalar. That scalar is called the eigenvalue belonging to the eigenvector. In other words, the transformation keeps an eigenvector pointing along the same line. You can determine whether a particular scalar is an eigenvalue of a matrix: check to see if the determinant of the matrix minus the scalar times the identity matrix is equal to zero.
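
Here is a minimal NumPy sketch of that test, using a made-up matrix and a candidate scalar:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Is 3 an eigenvalue?  Check whether det(A - 3I) = 0.
print(np.isclose(np.linalg.det(A - 3 * np.eye(2)), 0))   # True

# An eigenvector v for the eigenvalue 3 satisfies A @ v = 3 * v.
v = np.array([1.0, 1.0])
print(A @ v, 3 * v)                                       # both are [3. 3.]
```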

A diagonal matrix has all of its nonzero entries along the main diagonal; every entry off the diagonal is zero. Therefore, if a matrix is diagonalizable, you can turn it into a diagonal matrix with a change of basis. This is related to eigenvectors because a square matrix is diagonalizable if and only if it has as many linearly independent eigenvectors as it has rows (or columns).
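
The sketch below, again with invented numbers, diagonalizes a small matrix by collecting its eigenvectors into a matrix P:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                         # eigenvalues 5 and 2

eigenvalues, P = np.linalg.eig(A)                  # columns of P are eigenvectors
D = np.diag(eigenvalues)

# Two independent eigenvectors for a 2 x 2 matrix, so A = P D P^(-1).
print(np.allclose(A, P @ D @ np.linalg.inv(P)))    # True
```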

Jordan Blocks

An operator in linear algebra is a map from a vector space to itself. A linear operator is an operator that follows the rules of a linear transformation, so it preserves addition and scalar multiplication. We often want to consider subspaces that an operator maps back into themselves. This situation is so important that such a subspace has its own name: an invariant subspace.

A matrix in Jordan form (written with respect to a Jordan basis) has zeros everywhere except for the main diagonal and the diagonal just above it. The entries on the main diagonal are the eigenvalues, and the diagonal above the main diagonal consists of zeros and ones.
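
As an illustration, SymPy can compute a Jordan form directly; the matrix here is a made-up example that is not diagonalizable:

```python
from sympy import Matrix

# This matrix has the single eigenvalue 2 but only one independent eigenvector.
A = Matrix([[2, 1],
            [0, 2]])

P, J = A.jordan_form()     # returns a Jordan basis matrix P and the Jordan form J
print(J)                   # Matrix([[2, 1], [0, 2]]): eigenvalue 2 on the diagonal, a 1 above it
```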

Making the Complex Simple

Linear algebra is typically defined as the branch of mathematics dealing with vector spaces, linear equations, and linear transformations. These items are important because linear equations are very easy to solve (once you know how to do it!). Thus, they give efficient approximations for more complex equations. This holds true even for whole systems of equations in more dimensions than you can really picture in your mind, making it vital for some of the most important fields today such as quantum mechanics. And even something as everyday as compressing an image into a .jpeg file format; yup, that’s linear algebra. This Introduction to Linear Algebra book will help you master the basics in the field so you can continue on to great things.

What You’ll Learn in Linear Equations

  • Introduction to Linear Systems
  • Geometric Interpretation
  • A System with Infinitely Many Solutions
  • A System without Solutions
  • Matrices
  • Vectors
  • Gauss–Jordan Elimination
  • The Number of Solutions of a Linear System
  • Matrix Algebra

We begin this course with an introduction to linear systems of equations. We start with 2 × 2 systems in two dimensions and 3 × 3 systems in three dimensions. The idea is to get across how many solutions a linear system can have by considering the linear system geometrically. Then we analyze n × m linear systems of equations.

Next, we discuss the matrix form of a linear system and how the solution set to a linear system is written in terms of vectors. We then consider vectors, matrices, matrix operations, and various properties of vectors and matrices.

We examine in great detail techniques for solving linear systems of equations. We also talk about the row-echelon form of a matrix and using Gaussian elimination and Gauss–Jordan elimination to obtain a row-echelon form and the reduced row-echelon form, respectively.

We also discuss homogeneous systems. Towards the end of the course, we completely characterize the solutions of a linear system of equations in terms of the system’s rank. We call this the Fundamental Theorem of Linear Systems.

What You’ll Learn in Linear Transformations

  • Introduction to Linear Transformations
  • Definition of Linear Transformation
  • Characterization of Linear Transformation
  • Scalings
  • Orthogonal Projections
  • Reflections
  • Orthogonal Projections and Reflections in Space
  • Rotations
  • Rotations Combined with a Scaling
  • Matrix Multiplication
  • Matrix Algebra
  • Block Matrices
  • Powers of Transition Matrices
  • Invertible Functions
  • Invertible Matrices
  • A Criterion for Invertibility
  • The Inverse of a Block Matrix
  • Definition and Examples of Linear Maps
  • Linear Maps and Basis of Domain
  • Algebraic Operations on L(V,W)
  • Algebraic Properties of Products of Linear Maps
  • Null Space and Injectivity
  • Range and Surjectivity
  • Fundamental Theorem of Linear Maps
  • Homogeneous System of Linear Equations
  • Inhomogeneous System of Linear Equations
  • Representing a Linear Map by a Matrix
  • Matrices
  • The Matrix of the Sum of Linear Maps
  • The Matrix of a Scalar Times a Linear Map
  • Matrix Multiplication
  • The Matrix of the Product of Linear Maps
  • Linear Combination of Columns
  • Invertible Linear Maps
  • Isomorphic Vector Spaces
  • Linear Maps Thought of as Matrix Multiplication
  • Operators
  • Products of Vector Spaces
  • Products and Direct Sums
  • Quotients of Vector Spaces
  • The Dual Space and the Dual Map
  • The Null Space and Range of the Dual of a Linear Map
  • The Matrix of the Dual of a Linear Map
  • The Rank of a Matrix

We begin this course with a thorough explanation of what a (real) vector space is. We provide proof of several elementary properties of vector spaces illustrating a vector space’s axioms and how to use them.

Next, we discuss what a linear transformation is by using a matrix definition and using the standard vectors of real vector spaces. Then, we work through several examples to determine whether a given transformation is a linear transformation. After that, we characterize exactly when a transformation is linear.

In the next lesson, we study linear transformations on the Cartesian plane that are commonly used in plane geometry. In particular, we learn scalings, orthogonal projections, reflections, rotations, and shears. With each type of transformation, we motivate, prove, and then illustrate with several examples.

Next, we study what an invertible transformation is. We begin by understanding that a linear transformation is invertible precisely when it is invertible as a function, and that its inverse is again linear. Several examples are demonstrated by applying elementary row operations. After that, a characterization of when a square matrix is invertible is given in terms of 1) its reduced row-echelon form, 2) the rank of the corresponding system, and 3) the invertibility of the corresponding linear transformation. Finally, we relate invertibility back to solving linear systems, as discussed in the previous course.

What You’ll Learn in Subspaces and Their Dimension

  • The Image of a Linear Transformation
  • The Kernel of a Linear Transformation
  • Characterizations of Invertible Matrices
  • Subspaces
  • Bases and Linear Independence
  • Characterizations of Linear Independence
  • The Dimension of a Subspace
  • Number of Vectors in a Basis
  • Finding Bases of Kernel and Image
  • Rank-Nullity Theorem
  • Introduction to Coordinates
  • Linearity of Coordinates
  • The Matrix of a Linear Transformation
  • Similar Matrices

This course is a continuation of Linear Transformations. So we begin with a thorough introduction to the image and kernel of a linear transformation. Then we discuss spanning sets and properties of the image of a linear transformation. Properties of the kernel are also detailed, and importantly, various characterizations of invertible matrices are given.

By considering the properties of the image and kernel of a linear transformation, we motivate the definition of a subspace of a vector space. After that, we prove that the kernel and image are subspaces, and then we provide several examples of subspaces.

By working through examples, the idea of redundant vectors comes into focus. So we thoroughly examine the concepts of linear independence and bases. Then we cover linear relations and provide several characterizations of linear independence.

Finally, in this course, we study the dimension of a subspace. We prove that every basis for a subspace has the same number of vectors (called the dimension of the subspace). Many examples are given, and we demonstrate in great detail finding a basis for the kernel and image of a linear transformation.

At the end of the course, there is a spectacular theorem, The Fundamental Theorem of Linear Algebra. This theorem gives an impressive relationship between the kernel dimension (the number of free variables), the total number of variables, and the number of leading variables. We explain how and why this is so significant and provide additional characterizations of invertible matrices.

What You’ll Learn in Linear Spaces

  • Linear spaces
  • Subspaces
  • Span, linear independence, basis, coordinates
  • Finite dimensional linear spaces
  • Linear Transformations, Image, Kernel, Rank, Nullity
  • Definitions and Examples of Span and Linear Independence
  • Span Is the Smallest Containing Subspace
  • More on Linear Independence
  • Linear Dependence Lemma
  • Finite-Dimensional Subspaces
  • Introduction to Bases
  • Criterion for Basis
  • Spanning List Contains a Basis
  • Basis of Finite-Dimensional Vector Space
  • Linearly Independent List Extends to a Basis
  • Introduction to Dimension
  • Basis Length
  • Dimension of a Subspace
  • Linearly Independent List of the Right Length Is a Basis
  • Spanning List of the Right Length Is a Basis
  • Dimension of a Sum
  • Isomorphisms and Isomorphic Spaces
  • Properties of Isomorphisms
  • The B-matrix of a Linear Transformation
  • Change of Basis

In the next lesson, we study linear transformations with a special emphasis on isomorphisms. In particular, we detail the image, kernel, rank, and nullity of a linear transformation. Many properties of isomorphism are covered, and strategies for determining isomorphism are detailed.

At the end of the course, we illustrate through several examples the matrix of a linear transformation. The change of basis matrix and similar matrices are also featured.

What You’ll Learn in Orthogonality

  • Orthogonality, Length, Unit Vectors
  • Orthonormal Vectors
  • Orthogonal Projections
  • Orthogonal Complement
  • From Pythagoras to Cauchy
  • The Gram–Schmidt Process
  • The QR Factorization
  • Orthogonal Transformations
  • Orthogonal Matrices
  • Orthonormal Bases
  • The Transpose of a Matrix
  • The Matrix of an Orthogonal Projection
  • Characterization of Orthogonal Complements
  • Characterization of Orthogonal Projections
  • Least-Squares Approximations
  • The Matrix of an Orthogonal Projection
  • Data Fitting
  • Inner Products and Inner Product Spaces
  • Norm, Orthogonality
  • Orthogonal Projections
  • Fourier Analysis

We begin this course by studying orthogonal vectors, the length of a vector, and unit vectors. This introduction leads us to an understanding of orthonormal vectors, including some of their properties. Next, we dive into orthogonal projections, including what they are and how to find them. We spend a great deal of attention on the formula for finding the orthogonal projection and also on the orthogonal complement. After that, we generalize several well-known theorems of real vector spaces, including the Pythagorean Theorem, the Cauchy-Schwarz inequality, and the Law of Cosines.

Next, we study one of the main topics of Linear Algebra, the Gram-Schmidt Process. We demonstrate this process with several examples, and we provide proof of this fundamental result. Now because the Gram-Schmidt process represents a change of basis (from a basis to an orthonormal basis), it is most easily described in terms of a change of basis matrix. Hence, an effective way to organize and record the work performed in the Gram-Schmidt process is via QR-factorization.

Next, we learn that a linear transformation that preserves the length of vectors is called an orthogonal transformation. Interestingly, orthogonal transformations preserve orthogonality. We give a detailed study of orthogonal transformations and their products and inverses. Towards the end of this lesson, we summarize orthogonal matrices and properties of the transpose.

The final lesson of this course is for studying inner product spaces. We define what an inner product is, what an inner product space is, and provide several examples. Then we cover the norm and orthogonal projections before giving a quick view of Fourier analysis, which includes Fourier coefficients and the Fourier approximation.

What You’ll Learn in Determinants

  • The Determinant of a 3 by 3 Matrix
  • Linearity Properties of the Determinant
  • The Determinant of an n by n Matrix
  • The Determinant of a Block Matrix
  • The Determinant of the Transpose
  • Determinant of a Product
  • The Determinant of an Inverse
  • Minors and Laplace Expansion
  • Determinants and Gauss–Jordan Elimination
  • The Determinant of a Linear Transformation
  • The Determinant as Area and Volume
  • The Determinant as Expansion Factor
  • Cramer’s Rule
  • Adjoint and Inverse of a Matrix

We begin this course with an introduction to 3 by 3 determinants and a discussion of Sarrus’s rule. We then illustrate several properties of the determinant before examining the definition of an n by n determinant using patterns and inversions. After this vital definition, we work through several examples, including the determinant of a triangular matrix and determinants of block matrices.

Once it is clearly understood what a determinant is, in the next lesson, we dive deep into the determinant’s properties. The linearity properties of the determinant and finding determinants using Gaussian elimination are examined. The determinant of products, powers, inverses, and similar matrices are also covered. Following this, we consider the determinant of a linear transformation.

In the final lesson, we study several special cases; for example, an orthogonal matrix’s determinant is either 1 or -1. We also explore the geometrical interpretations of the determinants, including areas and volumes. At the end of this lesson, we discuss the determinant as an expansion factor with linear transformations.

In conclusion, we motivate, prove, and demonstrate Cramer’s rule in solving systems of linear equations.

What You’ll Learn in Eigenvalues and Eigenvectors

  • Diagonalizable Matrices
  • Eigenvectors, Eigenvalues, and Eigenbases
  • Characterizations of Invertible Matrices
  • Dynamical Systems and Eigenvectors
  • Eigenvalues and Determinants
  • Characteristic Equation
  • Characteristic Polynomial
  • Algebraic Multiplicity of an Eigenvalue
  • Eigenvalues, Determinant, and Trace
  • Eigenspaces
  • Geometric Multiplicity
  • Eigenvalues and Similarity
  • Strategy for Diagonalization
  • Equilibria for Regular Transition Matrices
  • The Eigenvalues of a Linear Transformation
  • Complex Numbers
  • Fundamental Theorem of Algebra
  • Complex Eigenvalues and Eigenvectors
  • Trace, Determinant, and Eigenvalues
  • Stable Equilibrium
  • Dynamical Systems with Complex Eigenvalues

We begin this course with a study of diagonalizable matrices. Simply put, a linear transformation T is diagonalizable if the matrix of T (with respect to some basis) is diagonal. Since we wish to diagonalize a matrix, we need a thorough understanding of eigenvalues and eigenvectors. After examining these concepts (including eigenbases) and the properties of eigenvectors, we work through several examples of diagonalization. At the end of the first lesson, we add additional characterizations of invertible matrices and begin exploring dynamical systems.

For the next two lessons, we focus on finding eigenvalues and finding eigenvectors, respectively. These studies lead us to the concepts of the algebraic multiplicity of an eigenvalue and the geometric multiplicity of an eigenvalue. This work culminates in a thorough understanding of eigenbases and a strategy for diagonalization.

In the next lesson, we develop more on dynamical systems, including the equilibria for regular transition matrices and several other topics. At the end of the course, we examine complex eigenvalues and eigenvectors.

What You’ll Learn in Operators on Vector Spaces

  • Invariant Subspaces
  • Eigenvectors
  • Upper-Triangular Matrices
  • Eigenspaces
  • Diagonal Matrices
  • Operators on Inner Product Spaces
  • Self-Adjoint Operators
  • Normal Operators
  • The Spectral Theorem
  • Positive Operators
  • Isometries
  • Polar Decomposition
  • Singular Value Decomposition
  • Operators on Complex Vector Spaces
  • Generalized Eigenvectors
  • Nilpotent Operators
  • Characteristic Polynomial
  • Cayley-Hamilton Theorem
  • Decomposition of an Operator
  • Minimal Polynomial
  • Jordan Form
  • Operators on Real Vector Spaces
  • Complexification of a Vector Space
  • Complexification of an Operator
  • The Minimal Polynomial of the Complexification
  • Eigenvalues of the Complexification
  • Characteristic Polynomial of the Complexification
  • Normal Operators on Real Inner Product Spaces
  • Isometries on Real Inner Product Spaces

This advanced course is a continuation of these courses: Linear Transformations, Linear Spaces, Subspaces and Their Dimension, Orthogonality, and Eigenvalues and Eigenvectors.

In this course, we begin by studying operators on inner product spaces. First, we discuss self-adjoint and normal operators. Then we give an in-depth examination of the Spectral Theorem, both the complex and the real version. We finish this first lesson by detailing positive operators and isometries.

In the next lesson, we study the main topic of this course: operators on complex vector spaces. To do so, we start off by learning about generalized eigenvectors and nilpotent operators. We then work through many of the details and provide lots of examples. Now, we come to the heart of the matter: the decomposition of an operator. The details here are essential, and we work thoroughly on the material, including the theory of characteristic and minimal polynomials. After that, we celebrate the Cayley-Hamilton Theorem and then end with the Jordan Form of a matrix.

In the final lesson, we begin with the complexification of a real vector space. We go through this process and the resulting properties in detail before continuing on with operators of real inner product spaces. We end this course with a discussion on isometries on real inner product spaces.

What You’ll Learn in Jordan Canonical Forms

  • Fundamentals on Vector Spaces and Linear Transformations
  • Bases and Coordinates
  • Linear Transformations and Matrices
  • Some Special Matrices and Polynomials
  • Subspaces, Complements, and Invariant Subspaces
  • The Structure of a Linear Transformation
  • Eigenvalues, Eigenvectors, and Generalized Eigenvectors
  • The Minimal Polynomial
  • Reduction to Normal Form
  • The Diagonalizable Case
  • Reduction to Jordan Canonical Form
  • The ESP of a Linear Transformation
  • The Algorithm for a Jordan Canonical Form
  • More Examples

In this mostly self-contained course, we begin with a brisk review of the fundamentals of vector spaces and linear transformations. In the beginning, we focus on bases and coordinates and work through the details of linear transformations and matrices. Then we emphasize some special types of matrices that will play an important role later. We also discuss at length: polynomials, subspaces, complements, and invariant subspaces.

The goal in the next lesson is to understand the structure of a linear transformation. We begin by reviewing eigenvalues, eigenvectors, and generalized eigenvectors. Here all the theory is laid out for understanding the Jordan canonical form. This theory is an extended argument, so we carry it out in steps.

In the final lesson, we work through several examples to demonstrate an algorithm for finding Jordan canonical form and finding a Jordan basis.

Linear Algebra

Research and review topics in linear algebra. In these articles and videos, I begin with matrices, linear maps, and vector spaces, and then continue on to more advanced topics.

Linear Algebra is an undergraduate or a graduate mathematics course. The undergraduate version often emphasizes applications and applying theorems, and the emphasis is usually placed on matrices, whereas linear maps are the graduate course’s focal point. Topics that undergraduate students learn in this course are solving linear systems of equations, vector spaces, bases, and dimensions. After that, I present linear transformations and matrices, kernel and image, orthogonality, determinants, and various applications. In the graduate approach, instructors expect students to prove theorems, and the exercises are usually in the form of proving statements.