Last time (here) we worked out this problem: there is a plane formed by the linear combinations of two vectors, a = (2,2,1) and a* = (1,0,0). We can write down a matrix A whose columns are a and a*, so that the linear combinations of the columns of A form the plane:
We then consider a vector b = (3,4,4) that is not in the plane. Another way of stating this is to say that Ax = b has no solutions. Then we ask: what is the projection of b onto the plane? Call that projection p. We want to decompose b into p, the part in the plane, and e, which is perpendicular to the plane (see the figure at the top).
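We can check this claim numerically. A short numpy sketch (my addition; the original works everything by hand): if b were in the plane, least squares would leave a zero residual.

```python
import numpy as np

# Columns of A are a = (2, 2, 1) and a* = (1, 0, 0).
A = np.array([[2.0, 1.0],
              [2.0, 0.0],
              [1.0, 0.0]])
b = np.array([3.0, 4.0, 4.0])

# If b were in the plane, Ax = b would have an exact solution
# and the least-squares residual would be zero.
x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(residual)  # nonzero, so b is not in the column space of A
```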
Since e is perpendicular to the plane, it is the shortest vector running from a point in the plane "up" to b. We solved the problem by deriving an equation for x, the vector of coefficients that combine the columns of A to give p:
The insight is that every vector in the plane is perpendicular to e:
which is exactly the same as:
So then we solved (in succession):
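That chain of steps can be sketched in numpy (my addition, not code from the original post): form the normal equations, solve for x, then recover p and e.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [2.0, 0.0],
              [1.0, 0.0]])   # columns are a and a*
b = np.array([3.0, 4.0, 4.0])

# Normal equations: (A^T A) x = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

p = A @ x_hat   # projection of b onto the plane: (3, 24/5, 12/5)
e = b - p       # error, perpendicular to the plane: (0, -4/5, 8/5)
```

A quick sanity check is that A.T @ e comes out as the zero vector, which is exactly the "every vector in the plane is perpendicular to e" insight above.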
But it occurs to me that there is another way to think about the problem.
This plane formed by the combinations of a (2,2,1) and a* (1,0,0) will have a normal vector n, where e is some scalar multiple of n. n can be found because its dot product with every vector in the plane is equal to zero. It is easy to show that it is enough to require n · a = 0 and n · a* = 0.
Probably the simplest method is to take the cross product a × a*. We write the two vectors as rows in a matrix with the unit vectors i, j, k as the first row (this isn't strictly legal, but it's easy to remember). The answer is the "determinant" of this 3 × 3 matrix:
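For these particular vectors the cross product can be verified with numpy's np.cross (again my addition):

```python
import numpy as np

a      = np.array([2.0, 2.0, 1.0])
a_star = np.array([1.0, 0.0, 0.0])

n = np.cross(a, a_star)   # normal to the plane: (0, 1, -2)

# n is perpendicular to both spanning vectors,
# hence to every vector in the plane.
assert np.isclose(n @ a, 0.0)
assert np.isclose(n @ a_star, 0.0)
```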
Without bothering about normalizing to make n a unit vector, let's turn things around.
Consider the problem as one of projection onto a line (as we did in the simpler first part of the previous post). That is, think of e as the projection of b onto n. We're looking for a number z that tells how much of n we take to get e:
The equation to solve from last time was:
I just substitute the new symbols into the old equation.
Now it's easy:
which is exactly the same as before. And it seems like this might be really useful: as we ascend into higher-dimensional space, the previous method will involve inverses and transposes of big matrices, while this method will still be just the projection of one vector onto another. The only trick will be to find n (and the shortcut requires the subspace to be a hyperplane, one dimension less than the ambient space, so that a single normal n exists).
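The whole alternative method fits in a few lines of numpy (my sketch, assuming the same a, a*, and b as above):

```python
import numpy as np

a      = np.array([2.0, 2.0, 1.0])
a_star = np.array([1.0, 0.0, 0.0])
b      = np.array([3.0, 4.0, 4.0])

n = np.cross(a, a_star)   # normal vector, here (0, 1, -2)
z = (n @ b) / (n @ n)     # how much of n we take to get e
e = z * n                 # component of b perpendicular to the plane
p = b - e                 # projection of b onto the plane

# Same answer as the normal-equations method:
# p = (3, 24/5, 12/5), e = (0, -4/5, 8/5)
```

No matrix inverse anywhere: just two dot products and a cross product.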