Vector spaces and subspaces

Vector Space

A space of vectors has to allow certain operations: I can add any two vectors, and I can multiply any vector by any real number, and the results stay inside the space. Every vector space contains the 0 vector, the origin.

Examples of vector spaces :

R^2 - all 2-dimensional real vectors, like [3,2] or [0,0]. R^2 is basically the plane.

R^3 - all vectors with 3 components.

R^n - all column vectors with n real components.

NOT a vector space: take just part of the plane, say the vectors with non-negative components. You can't multiply by -5, for example, and stay inside. So it's not closed under multiplication by all real numbers.

Subspace: ex: a line within R^2.

The line must pass through the origin. Every subspace must contain the origin, because multiplying any of its vectors by 0 must keep you inside.

Column space

= all the combinations of the columns.

Say you have a matrix A. How do you get a subspace? Suppose the columns of A are in R^3. If you take those columns and all of their (linear) combinations, they form a subspace called the column space C(A).

You get a plane through the origin.

Another example:

The span of columns gives you all possible outputs.

The zero vector will always be in the column space.

Now let's go back to our Ax = b equation. Does Ax = b have a solution for every b? NO! Why? Because, for instance, we can have 4 equations and only 3 unknowns: the combinations of 3 columns cannot fill all of R^4.

Which b's allow this system to be solved? One example, aside from b = 0, is to take b equal to one of the columns of A. Then it's easy to find an x that satisfies Ax = b. But these are special cases.

I can solve Ax = b exactly when b is in the column space of A, C(A). The column space by definition contains all the combinations of the columns, so it contains all the possible Ax's. If b is a combination of the columns, then we can find x.

Also, because column 3 is dependent, it adds nothing new, so C(A) is a 2-dimensional subspace of R^4.
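A quick way to test this numerically: b is in C(A) exactly when appending b as an extra column does not raise the rank. A minimal SymPy sketch, using a hypothetical 4x3 matrix whose third column is the sum of the first two (the matrices from the original figures aren't reproduced here):

```python
import sympy as sp

# Hypothetical 4x3 example: column 3 = column 1 + column 2,
# so C(A) is only a 2-dimensional subspace of R^4.
A = sp.Matrix([[1, 1, 2],
               [2, 1, 3],
               [3, 1, 4],
               [4, 1, 5]])

def in_column_space(A, b):
    # b is in C(A) exactly when adding it as a column leaves the rank unchanged.
    return A.row_join(b).rank() == A.rank()

print(A.rank())                                     # 2
print(in_column_space(A, sp.Matrix([1, 2, 3, 4])))  # True  (b is column 1 of A)
print(in_column_space(A, sp.Matrix([1, 0, 0, 0])))  # False (this b is unreachable)
```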

Null space

Same example as above. The nullspace of A = all solutions x of Ax = 0.

(the x's are in R^3).

Now I am interested in the x's. Solving Ax = 0 here, every solution has the form [c, c, -c], in other words c[1, 1, -1].

The null space is this line in R^3 that goes through the origin.

We can say that the solutions to Ax = 0 always form a subspace. That's because if we have Av = 0 and Aw = 0, then A(v + w) = Av + Aw = 0, and also A(12v) = 12(Av) = 0, etc. It checks all the criteria of being a subspace.
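A small check of that closure argument, with a hypothetical 3x3 matrix whose nullspace is exactly the line of multiples of (1, 1, -1):

```python
import sympy as sp

# Hypothetical example: column 1 + column 2 = column 3,
# so the nullspace is the line c(1, 1, -1).
A = sp.Matrix([[1, 2, 3],
               [2, 1, 3],
               [3, 1, 4]])

v = sp.Matrix([1, 1, -1])
w = 7 * v                 # another solution on the same line

print(A * v)              # zero vector: v is in the nullspace
print(A * (v + w))        # zero vector: closed under addition
print(A * (12 * v))       # zero vector: closed under scalar multiplication
print(A.nullspace())      # [Matrix([[-1], [-1], [1]])], the same line c(1, 1, -1)
```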

How do we compute Ax=0?

Let A be a matrix.

We apply elimination, and get an echelon form (staircase form, or upper triangular shape) matrix U with 2 pivots.

(When doing the elimination, if there is nothing to be done in a column you just continue. Column 2 had zeros in and below position (2,2), so you can't get a pivot there: oh well, go on to column 3.)

We say that the rank of A = the number of pivots. In our example, rank = 2.

Here we have 2 pivot columns and 2 free columns. I can assign any numbers to the free variables x2 and x4 and still solve Ax = 0: I then solve the equations for x1 and x3.

The rank is the dimension of the column space.

Conveniently, we choose the free variables x2 = 1 and x4 = 0 to help us solve the problem (we can choose whatever values we like).

One solution to Ax = 0 (one vector in the nullspace) is then the x below. (Actually I am really solving Ux = 0 because of the elimination I did on A, but elimination doesn't change the solutions.)

Here is our algorithm:

  1. Do elimination.

  2. Find which are the pivot columns and which are the free columns. In our example, this tells us that variables x1 and x3 are pivot variables and the others (x2 and x4) are free variables.

  3. Give one free variable the value 1 and the other free variables the value 0. Then solve for the pivot variables: that gives one special solution.

  4. Then do the opposite: give the other free variable the value 1 and the rest 0, and solve again for the next special solution.

So all of our solutions for the nullspace are the combinations of those 2 special solutions: you add them with a plus sign, each multiplied by any constant, to get all possible solutions.
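Here is the whole algorithm as a SymPy sketch, on a hypothetical 3x4 matrix with pivots in columns 1 and 3 (so free variables x2 and x4, matching the discussion above):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 2],
               [2, 4, 6, 8],
               [3, 6, 8, 10]])   # hypothetical example of rank 2

R, pivot_cols = A.rref()          # step 1: elimination, all the way to rref
n = A.cols
free_cols = [j for j in range(n) if j not in pivot_cols]   # step 2

print(pivot_cols)   # (0, 2): x1 and x3 are the pivot variables
print(free_cols)    # [1, 3]: x2 and x4 are the free variables

# Steps 3-4: one special solution per free variable
# (that free variable set to 1, the other free variables set to 0).
for s in A.nullspace():
    print(s.T)      # [-2, 1, 0, 0] and [2, 0, -2, 1]
```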

What is the number of free variables in general?

For an m x n matrix of rank r, we have n - r free variables.

Reduced row echelon form

Let's take our U again:

(BTW there is a row of 0's because row 3 is a combination of rows 1 and 2. Elimination discovered that fact.)

How can we clean this matrix up further? We want 0's above and below our pivots, and all pivots equal to 1.

R = rref(A) = reduced row echelon form of A.

Now, if we want to find the solutions to Rx = 0, they are the same as the solutions to Ax = 0 and Ux = 0. They are all the same because all we did was subtract and scale equations, which we are allowed to do without changing the solutions.

Notice how there is an identity matrix I in the pivot rows and pivot columns of R, and some matrix F in the free columns:

So here is a typical rref, with the pivot columns grouped first:

R = [ I F ]
    [ 0 0 ]

Therefore the special solutions are the columns of the nullspace matrix

N = [ -F ]
    [  I ]

since RN = -F + F = 0.
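The same construction as code: read the block F out of R and assemble N column by column. A sketch with the same hypothetical 3x4 matrix as above (the loops handle the general case, where the pivot columns are not necessarily the first ones):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 2],
               [2, 4, 6, 8],
               [3, 6, 8, 10]])          # same hypothetical example

R, pivots = A.rref()
n, r = A.cols, len(pivots)
free = [j for j in range(n) if j not in pivots]

F = R.extract(list(range(r)), free)    # the F block: pivot rows, free columns
N = sp.zeros(n, n - r)                 # one column of N per free variable
for k, j in enumerate(free):
    N[j, k] = 1                        # the I block goes into the free-variable rows
for i, p in enumerate(pivots):
    for k in range(n - r):
        N[p, k] = -F[i, k]             # the -F block goes into the pivot-variable rows

print(N)        # columns are the special solutions (-2,1,0,0) and (2,0,-2,1)
print(R * N)    # zero matrix: RN = 0, hence AN = 0
```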

Another example: Let's find the null space of matrix A

Now let's solve Ax = b. Below, the matrix [A b], with b attached as an extra column, is called the augmented matrix.

Let's do elimination..

The row of zeros gives 0 = b3 - b2 - b1: it's the condition for solvability.

Ax = b is solvable exactly when b is in C(A), the column space of A.

Another way of saying it: if a combination of the rows of A gives the zero row, then the same combination of the entries of b must give 0.

So how do I find the solutions of the equations? Let's start by finding 1 solution, x_particular. We start by setting all free variables to 0. So we find our pivot and free columns, then solve Ax = b for the pivot variables.

x2 = 0 and x4 = 0 because columns 2 and 4 were the free columns.

That's just 1 solution. How do we find them all?

Find the nullspace; then the complete solution = the 1 particular solution + all the solutions from the nullspace.

In our example, the complete solution is x = x_particular + (any combination of the special solutions we already found for Ax = 0; we can reuse them because it's the same matrix).
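A sketch of the particular-plus-nullspace recipe in SymPy, with the same hypothetical matrix and a right side b = (1, 5, 6) chosen so that the condition b3 - b2 - b1 = 0 holds:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 2],
               [2, 4, 6, 8],
               [3, 6, 8, 10]])
b = sp.Matrix([1, 5, 6])     # satisfies b3 - b2 - b1 = 0, so it's solvable

# The complete solution, written with one free parameter per free variable.
x, params = A.gauss_jordan_solve(b)
print(x)

# Setting the free parameters to 0 recovers the particular solution.
print(x.subs({p: 0 for p in params}))   # Matrix([[-2], [0], [3/2], [0]])
```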

3 types of rank. Say we have an m x n matrix A of rank r. (We know that r <= m and r <= n.)

1. Full column rank.

This means that r = n. That also means there is a pivot in every column: there are n pivots, and therefore no free variables.

So what will be in the nullspace N(A)? Only the 0 vector. There are no free variables to build other vectors with.

What about our solution to Ax = b? If there is a solution, there is just 1 (x_particular alone). So there is either 1 solution or 0 solutions. Example where r = n:

What's its rank? r=n < m. So r=2.

Its rref will have the identity matrix on top, and rows of 0's below.

That's because the matrix has 2 independent rows; the other rows are combinations of the first two. For this matrix, what's the situation with Ax = b? We have 4 equations and only 2 unknowns. So there is absolutely not always a solution: with a random b, we'll have 0 solutions. We'll have a solution exactly when b is a combination of the 2 columns. Ex: col1 + col2 = [4,3,7,7].

2. Full row rank

This means r = m < n. How many pivots? m: every row has a pivot. There are more unknowns than equations, so there are nonzero solutions to Ax = 0.

I can solve Ax = b for every b: elimination produces no zero rows, so there is no condition on b.

We are left with n - r (here n - m) free variables. Since n > m, there is always at least 1 free variable, so there are infinitely many solutions.

3. Full rank

This means r = m = n. It is a square matrix and it's invertible. Also, its rref is the identity: R = I.

The nullspace of this matrix A is the 0 vector only. To solve Ax = b, what are the conditions on b (b1 and b2)? None. Because r = m, like above, we can solve for every b (there is always a solution), and because r = n, the solution is unique.

To summarize all 4 types of situations we can have:

r = m = n: R = I. Exactly 1 solution.

r = n < m: R = [ I ; 0 ]. 0 or 1 solution.

r = m < n: R = [ I F ]. Infinitely many solutions.

r < m, r < n: R = [ I F ; 0 0 ]. 0 or infinitely many solutions.

The rank tells you everything about the number of solutions.
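That summary fits in a few lines of code: compare rank(A), the rank of the augmented matrix, and n. A sketch (the helper name count_solutions is mine):

```python
import sympy as sp

def count_solutions(A, b):
    # Classify Ax = b as having 0, 1, or infinitely many solutions.
    r = A.rank()
    if A.row_join(b).rank() > r:   # b outside C(A): a zero-row condition fails
        return "0 solutions"
    if r == A.cols:                # full column rank: no free variables
        return "exactly 1 solution"
    return "infinitely many solutions"

A = sp.Matrix([[1, 2], [2, 4]])               # rank 1, n = 2
print(count_solutions(A, sp.Matrix([1, 2])))  # infinitely many solutions
print(count_solutions(A, sp.Matrix([1, 0])))  # 0 solutions
```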

Span of the set of vectors

Definition: the span of a set of vectors is the set of all linear combinations of those vectors.

Intersection, sum and union of 2 vector spaces

Is the intersection of 2 subspaces a subspace? Yes. You usually get a smaller subspace, but it is a subspace: sums and multiples of vectors in the intersection stay in the intersection.

The Union is not usually a vector space - why?

Ex: Let's take 2 subspaces P and L. P is a plane, L is a line not inside P. P union L contains just the points of the plane and the points of the line. If I add something from P and something from L, I'll usually land outside the union, so it can't be a subspace.

Linear dependence

When a vector is redundant, meaning it doesn't add anything to our span, the vectors are linearly dependent: one of the vectors can be expressed as a linear combination of the others.

Redundant vectors

Degrees of freedom

Linear independence

Definition

When the vectors (the columns of A) are independent, rank = n, N(A) = {the 0 vector} and there are no free variables.

When they are dependent, rank < n and there are free variables.

If each vector really adds another dimension to the span, then they are linearly independent.
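A minimal independence test along these lines: put the vectors into the columns of a matrix and compare the rank with n (the helper name is mine):

```python
import sympy as sp

def independent(vectors):
    # Columns are independent exactly when rank = n (no free variables).
    A = sp.Matrix.hstack(*vectors)
    return A.rank() == A.cols

v1, v2 = sp.Matrix([1, 2]), sp.Matrix([3, 1])
print(independent([v1, v2]))           # True: each vector adds a dimension
print(independent([v1, v2, v1 + v2]))  # False: the third one is redundant
```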

Exercises

Basis

Definition

A basis for a space is a sequence of vectors v1, v2, ..., vd with 2 properties:

  1. They are independent

  2. They span the space.

Another basis: there are many, many possible bases. (Actually (3, 3, 8) doesn't work as the third vector here: with it, rows 1 and 2 of the matrix are the same, so the rank is 2, not 3, and we would need a different third vector. So it's not actually a basis.)

R^n: n vectors give a basis if the n x n matrix with those vectors as its columns is invertible.
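That invertibility test as a sketch (assuming, plausibly but not certainly, that the failing example above used the vectors (1,1,2), (2,2,5), (3,3,8)):

```python
import sympy as sp

def is_basis(vectors):
    # n vectors form a basis of R^n exactly when the n x n matrix
    # with those vectors as columns is invertible (nonzero determinant).
    A = sp.Matrix.hstack(*vectors)
    return A.rows == A.cols and A.det() != 0

bad = [sp.Matrix([1, 1, 2]), sp.Matrix([2, 2, 5]), sp.Matrix([3, 3, 8])]
print(is_basis(bad))   # False: rows 1 and 2 of the matrix are equal, so rank 2

good = [sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])]
print(is_basis(good))  # True: the identity matrix is invertible
```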

Given a space, every basis for the space has the same number of vectors.

Dimension

In our definition in the basis section:

Given a space, every basis for the space has the same number of vectors.

The dimension of a space is the number of vectors in any basis for it.

Examples

The space is the column space of A:

Do the vectors span the space? Yes, by definition: the space is C(A), which is exactly what the columns span.

Are they a basis for the column space? Are they independent? No: there is something nonzero in the null space. Let's find a vector in the null space, i.e. a solution of Ax = 0. One example is x = (-1, -1, 1, 0).

Because -1 x col1 + -1 x col2 + 1 x col3 + 0 x col 4 = 0.

What is a basis for that column space? Answer: columns 1 & 2. Those are the pivot columns. Column 1 we put in our basis, same for column 2. Columns 3 and 4 we can't, because they are combinations of columns 1 and 2.

Therefore the rank of the matrix = 2. So: rank(A) = number of pivot columns = the dimension of C(A). (It's the dimension of the subspace, the column space of A; we're not talking about the matrix A itself.)

so : dim C(A) = rank

What's the dimension of the nullspace? dim N(A) = number of free variables = n - r.

Other definition from Brilliant.org:

Row space

= all the combinations of the rows (of A).

It's also all the combinations of the columns of the transpose of A: C(A^T).

Nullspace of the transpose of A

= N(A^T) = the left nullspace of A.

Four subspaces

Column space C(A), in R^m. Its dimension is the rank r; a basis is given by the pivot columns.

Row space C(A^T), in R^n. Its dimension is also the rank r.

Nullspace N(A), in R^n. Its dimension is n - r; a basis is given by the special solutions, one for each free variable, so n - r of them.

Nullspace of the transpose N(A^T), in R^m. Its dimension is m - r.

Therefore: dim(rowspace) + dim(nullspace) = n, the number of columns; and dim(colspace) + dim N(A^T) = m, the number of rows.
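A quick check of all four dimension counts, with a hypothetical 3x4 matrix of rank 2:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 1],
               [1, 1, 2, 1],
               [1, 2, 3, 1]])   # hypothetical example: row 3 = row 1

m, n, r = A.rows, A.cols, A.rank()

print(r, A.T.rank())                # 2 2: dim C(A) = dim C(A^T) = r
print(len(A.nullspace()), n - r)    # 2 2: dim N(A) = n - r
print(len(A.T.nullspace()), m - r)  # 1 1: dim N(A^T) = m - r
```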

How do you get a basis for the rowspace of A? Let A be a matrix, and we find R:

One thing we can say is that C(R) =/= C(A), because we did operations on rows, which preserve the row space but not the column space. The row spaces, however, are the same: they contain exactly the same vectors, since the rows of R and the rows of A are combinations of each other (each with 4 components here). So what is a basis for the row space of R? The first 2 rows, because the 3rd row is the zero row (it was dependent), and that's also a basis for the row space of A.

The basis we found in R is the most cleaned-up form of basis we can have for the row space, so in that sense it's the best basis.

Now let's look at the 4th space, N(A^T). We also call this the left nullspace. Why?

A^T y = 0 means, taking transposes of both sides, y^T A = 0^T. Putting y on the left, we had to make it a row vector instead of a column vector, and when we transpose a product, the order of multiplication reverses. So y multiplies A from the left, hence "left" nullspace.

How do we get a basis?

Let's take our Gauss-Jordan method again and apply it; here we're working with a non-square matrix. We row-reduce [A I]: this gives us [R E], where E contains a record of what we did, so EA = R. The rows of E that multiply A to produce the zero rows of R form a basis for the left nullspace.
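A sketch of that recipe in SymPy, reusing the hypothetical 3x4 matrix from above (row 3 = row 1, so the left nullspace is 1-dimensional):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 1],
               [1, 1, 2, 1],
               [1, 2, 3, 1]])   # hypothetical example: row 3 = row 1

m = A.rows
M, _ = sp.Matrix.hstack(A, sp.eye(m)).rref()   # rref of [A I] = [R E]
R, E = M[:, :A.cols], M[:, A.cols:]

print(E * A == R)   # True: E records the row operations
print(E[2, :])      # the row of E next to the zero row of R:
                    # a combination of rows of A giving 0, a basis of N(A^T)
print(E[2, :] * A)  # the zero row
```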

Matrix Space

Let's define a new vector space: a space where each "vector" is a matrix! For example, the space M of all 3 x 3 matrices. The same ideas work as long as you can add and scale the vectors, which in this case are matrices.

How do we find the dimension? We need to find a basis.

Say we have the above 3 matrices. They are independent, and we can add and scale them, so they form a basis of the subspace they span: that subspace has dim = 3. The standard basis of the whole space, however, is the 9 matrices that have a single entry equal to 1 and all other entries 0:

So the full 3 x 3 matrix space is of dimension 9.

The dimension of S, the space of all symmetric 3 x 3 matrices, is 6.

The dimension of U, the space of all upper triangular 3 x 3 matrices, is 6 as well.

2 other spaces: the intersection of S and U, and the sum S + U. The intersection is the diagonal matrices, of dimension 3. The sum S + U (every sum of a symmetric matrix plus an upper triangular matrix) is the whole matrix space, of dimension 9. So dim S + dim U = dim(S intersect U) + dim(S + U): 6 + 6 = 3 + 9. (It has to be the intersection here, not the union: the union isn't even a vector space, as we saw above.)

Rank 1 matrix example: every rank 1 matrix can be written as A = u v^T, where u is a column vector and v^T is a row vector. Rank 1 matrices are the building blocks of all matrices: a matrix of rank r can be written as a sum of r rank 1 matrices.
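A minimal outer-product sketch (the vectors here are made up for illustration):

```python
import sympy as sp

u = sp.Matrix([2, 3])        # column vector
v = sp.Matrix([1, 4, 5])     # v^T will be the row vector

A = u * v.T                  # outer product: a 2x3 matrix of rank 1
print(A)                     # Matrix([[2, 8, 10], [3, 12, 15]])
print(A.rank())              # 1: every row is a multiple of v^T

# The sum of two rank 1 matrices usually has rank 2,
# which is why the rank 1 matrices don't form a subspace.
B = sp.Matrix([1, 1]) * sp.Matrix([1, 0, 0]).T
print((A + B).rank())        # 2
```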

Example: Is the subset of rank 1 matrices a subspace of M?

No, not usually, because the sum of 2 rank 1 matrices in that space will most likely have rank 2. (The max rank here is 5, because we have 5 x 17 matrices.)

Other example: we have S = all vectors v = (v1, v2, v3, v4) in R^4 whose components satisfy v1 + v2 + v3 + v4 = 0.

What's the rank? what's the dimension?

We can write that equation in the form Av = 0 with A = [1 1 1 1], which means we're looking at the null space of A. This gives us:

Therefore, rank = 1 = r (there's just 1 pivot, since A has a single row), and because we know dim N(A) = n - r, the dim of S = 4 (columns) - 1 (rank) = 3.

What's a basis for the nullspace? The special solutions. To find them, we first find the pivot of A, which is the first 1, and the 3 free variables.

The free variables are v2, v3 and v4; we set each one to 1 in turn, with the others 0. THEREFORE, the basis for S: (-1, 1, 0, 0), (-1, 0, 1, 0), (-1, 0, 0, 1).

The components of each basis vector sum to 0, because Ax = 0. There have to be 3 vectors because dim S = n - r = 3.

What's the column space of A? C(A) is a subspace of R^1, because m = 1: the outputs have just 1 component.

So C(A) = R^1.

What's the last subspace, N(A^T)? N(A^T) is the 0 vector only (its dimension is m - r = 1 - 1 = 0).

One last vector space that isn't formed of column vectors:

The solutions y of the differential equation y'' + y = 0 form a vector space. What is a basis of that space? A basis means that all the guys in the space are combinations of the basis elements. One basis is simply cos(x) and sin(x).

Therefore, the dimension = 2.

Conclusion: cos(x) and sin(x) don't look like vectors, they look like functions. But since we can scale and add them, they are LIKE vectors.
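SymPy can confirm that basis by solving the equation symbolically; a quick sketch:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The solution space of y'' + y = 0: every solution is a combination
# C1*sin(x) + C2*cos(x), so {cos(x), sin(x)} is a basis and dim = 2.
sol = sp.dsolve(y(x).diff(x, 2) + y(x), y(x))
print(sol)   # Eq(y(x), C1*sin(x) + C2*cos(x))
```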

Dot product

A very useful geometric tool for understanding projections, and for understanding whether or not two vectors point in roughly the same direction.

Geometric interpretation:

Project w onto the line that passes through the origin and the tip of v. Then multiply the length of that projection by the length of v: that's the dot product (it comes out negative when the projection points opposite to v).

On a deeper level, dotting 2 vectors together is a way to translate one of them into the world of transformations: taking the dot product with v is the same as applying the 1 x n matrix v^T, a linear transformation from vectors to the number line.

It's easier then to understand that a vector isn't just an arrow in space, but the physical embodiment of a linear transformation. It's as if a vector is conceptual shorthand for a certain transformation.
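A numeric sketch of that projection reading (vectors made up for illustration):

```python
import numpy as np

v = np.array([3.0, 4.0])
w = np.array([2.0, 1.0])

# Projection of w onto the line through the origin and the tip of v.
proj = (w @ v) / (v @ v) * v
print(proj)                    # [1.2 1.6]

# Length of the projection times the length of v gives the dot product.
print(np.linalg.norm(proj) * np.linalg.norm(v))   # 10.0
print(v @ w)                                      # 10.0

# Equivalently v . w = |v| |w| cos(theta): positive when they point the same
# general way, 0 when perpendicular, negative when opposed.
print((v @ w) / (np.linalg.norm(v) * np.linalg.norm(w)))   # about 0.894
```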

Inner products
