Linear algebra/Orthogonal matrix

Alternative notations

  • Matrix notation: $Q^{-1}=Q^\mathsf{T}$, so that $Q^\mathsf{T}Q=QQ^\mathsf{T}=I$
  • Underlined notation: $\underline{Q}^{-1}=\underline{Q}^\mathsf{T}$, so that $\underline{Q}^\mathsf{T}\underline{Q}=\underline{Q}\,\underline{Q}^\mathsf{T}=\underline{I}$
  • Index notation: $Q^{-1}_{jk}=Q_{kj}=Q^\mathsf{T}_{jk}$, so that $\sum_k Q_{ki}Q_{kj}=\sum_k Q^{-1}_{ik}Q_{kj}=\delta_{ij}$

A real square matrix is orthogonal[1] if and only if its columns form an orthonormal basis in a Euclidean space in which all numbers are real-valued and the dot product is defined in the usual fashion.[2][3] An orthonormal basis in an $N$-dimensional space is one where (1) all the basis vectors have unit magnitude,[4] and (2) any two distinct basis vectors are orthogonal, i.e., their dot product vanishes.
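As a quick numerical illustration (a minimal Python/NumPy sketch, not part of the original article), the defining property $Q^\mathsf{T}Q=I$ can be checked directly for a rotation matrix:

```python
# Minimal sketch: check that a rotation matrix is orthogonal.
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation by theta

# Columns are orthonormal  <=>  Q^T Q = Q Q^T = I  <=>  Q^{-1} = Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))     # True
print(np.allclose(np.linalg.inv(Q), Q.T))  # True
```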

Fundamental properties

Visual understanding of multiplication by the transpose of a matrix. If A is an orthogonal matrix and B is its transpose, the ij-th element of the product $AA^\mathsf{T}$ vanishes whenever $i\ne j$, because the i-th row of A is orthogonal to the j-th row of A.

Three important results that are easy to prove

Among the first things a novice should learn are those that are easy to prove.

Orthonormal basis vectors are hiding in plain sight

Theorem:

  • If the rows of a square matrix form an orthonormal set of (basis) vectors,
then the transpose of that matrix is its own inverse ($\mathbf{M}^\mathsf{T}=\mathbf{M}^{-1}$).

Visual understanding

Suppose the rows of a matrix form an orthonormal set of basis vectors, as shown in the i-th row of matrix A to the right. The ij-th element of the product AB is the dot product of the i-th row of A with the j-th column of matrix B, as shown in the upper part of the diagram, where the j-th column is highlighted in yellow. In the diagram's lower part, matrix B is replaced by its transpose, which shifts the elements in column j to a row (highlighted in cyan). This establishes that the product of A with the transpose of B creates elements that are the dot product of rows of A with rows of B.

If A is an orthogonal matrix and B is its transpose, this procedure creates matrix elements that are dot products among the rows of the orthogonal matrix.
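This can be seen numerically in the following sketch (illustrative only, assuming NumPy; the QR factorization is merely a convenient way to manufacture a random orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal matrix
B = A.T

# The ij-th element of A @ B is the dot product of row i of A with row j of A,
# so the product should be the identity matrix.
print(np.allclose(A @ B, np.eye(3)))  # True
print(A[0] @ A[1])                    # ~0: distinct rows are orthogonal
```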

Rigorous proof

This proof illustrates how subscripts are used to manipulate and understand tensors.

1. Suppose

$\mathbf{v}_i=\sum_j v_j\hat{\mathbf{e}}_j=\sum_j M_{ij}\hat{\mathbf{e}}_j$ is the i-th member of an orthonormal set of basis vectors.
Here the $\hat{\mathbf{e}}_j$ are the original unit vectors used to define the new set of unit vectors extracted from the rows of matrix $\mathbf{M}$.

2. Now we relabel how we write the sums for $\mathbf{v}_i$ and $\mathbf{v}_j$ as follows:

$\mathbf{v}_i=\sum_\alpha M_{i\alpha}\hat{\mathbf{e}}_\alpha$
$\mathbf{v}_j=\sum_\beta M_{j\beta}\hat{\mathbf{e}}_\beta$
Hint: In the first of these two equations, I replace $j$ by $\alpha$ because summed variables can be changed at will. Sometimes they are called "dummy variables" because they "do not speak" after the sum is done. For example, summing $n$ from 1 to 3 equals $1+2+3$, which is the same as summing $m$ from 1 to 3. In the second equation I relabeled my dummy variable as $\beta$ because the same dummy variable cannot serve two purposes in a single expression.

3. This yields the following expression for the dot product between our two vectors:

$\mathbf{v}_i\cdot\mathbf{v}_j=\left(\sum_\alpha M_{i\alpha}\hat{\mathbf{e}}_\alpha\right)\cdot\left(\sum_\beta M_{j\beta}\hat{\mathbf{e}}_\beta\right)=\sum_\alpha\sum_\beta\left(\hat{\mathbf{e}}_\alpha\cdot\hat{\mathbf{e}}_\beta\right)M_{i\alpha}M_{j\beta}$

4. This last term introduces the Kronecker delta symbol:

$\mathbf{v}_i\cdot\mathbf{v}_j=\sum_\alpha\sum_\beta M_{i\alpha}M_{j\beta}\underbrace{\hat{\mathbf{e}}_\alpha\cdot\hat{\mathbf{e}}_\beta}_{\delta_{\alpha\beta}}=\sum_\alpha M_{i\alpha}M_{j\alpha}$

The last sum almost looks like the product of the matrix with itself. It can be turned into a matrix product by taking the transpose of the second factor, using $M_{j\alpha}=M^\mathsf{T}_{\alpha j}$.

5. If $\mathbf{M}$ is orthogonal, then $\mathbf{M}^\mathsf{T}=\mathbf{M}^{-1}$, and we conclude that the rows of $\mathbf{M}$ (i.e., the vectors $\mathbf{v}_i$) form an orthonormal collection of vectors (i.e., a "rotated" basis for the vector space):

$\mathbf{v}_i\cdot\mathbf{v}_j=\left(\sum_\alpha M_{i\alpha}\hat{\mathbf{e}}_\alpha\right)\cdot\left(\sum_\beta M_{j\beta}\hat{\mathbf{e}}_\beta\right)=\sum_\alpha M_{i\alpha}M_{j\alpha}=\sum_\alpha M_{i\alpha}M^\mathsf{T}_{\alpha j}=\sum_\alpha M_{i\alpha}M^{-1}_{\alpha j}=\delta_{ij}$
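The index gymnastics above can be mirrored numerically. The following sketch (not from the original page; NumPy assumed) evaluates $\sum_\alpha M_{i\alpha}M_{j\alpha}$ for every pair $(i,j)$ and confirms the result equals $\delta_{ij}$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an orthogonal matrix

# Step 4 in index notation: v_i . v_j = sum over alpha of M[i,a] * M[j,a]
dots = np.einsum('ia,ja->ij', M, M)  # equivalent to M @ M.T
print(np.allclose(dots, np.eye(3)))  # True: the rows are orthonormal
```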

Change of basis for tensors


A common use of the orthogonal matrix is to express a vector known in one reference frame in terms of a "rotated"[8] frame.

Here, we let $\mathbf{M}$ denote any matrix (i.e., "tensor"), while $\mathbf{R}$ is any orthogonal matrix (typically a rotation). Let $\vec v$ and $\vec p$ be two vectors, and let $\vec v^{\,\prime}$ and $\vec p^{\,\prime}$ represent the same vectors in a rotated reference frame.

Theorem
  • If   $\vec v^{\,\prime}=\mathbf{R}\vec v$,   then:  $\mathbf{M}'=\mathbf{R}\mathbf{M}\mathbf{R}^{-1}$
Proof
  1. Define $\vec p=\mathbf{M}\vec v$.
  2. Assume $\vec v^{\,\prime}=\mathbf{R}\vec v$ and $\vec p^{\,\prime}=\mathbf{R}\vec p$.
  3. Do some tensor algebra and express $\vec p^{\,\prime}$ in terms of $\vec v^{\,\prime}$: $\vec p^{\,\prime}=\mathbf{R}\mathbf{M}\vec v=\mathbf{R}\mathbf{M}\mathbf{R}^{-1}\vec v^{\,\prime}$, so that $\mathbf{M}'=\mathbf{R}\mathbf{M}\mathbf{R}^{-1}$.

In this context, the only difference between the tensor and scalar algebras is that tensors do not always commute: $\mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A}$ does not always vanish.
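The theorem is easy to test numerically. In this sketch (an illustration added here, not from the original page; NumPy assumed), a random tensor and rotation are generated and $\mathbf{M}'=\mathbf{R}\mathbf{M}\mathbf{R}^{-1}$ is verified to map $\vec v^{\,\prime}$ to $\vec p^{\,\prime}$:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))                   # an arbitrary tensor
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an orthogonal "rotation"

v = rng.standard_normal(3)
p = M @ v                        # step 1: p = M v
v_prime, p_prime = R @ v, R @ p  # step 2: rotate both vectors

M_prime = R @ M @ R.T            # theorem: M' = R M R^{-1}, with R^{-1} = R^T
print(np.allclose(M_prime @ v_prime, p_prime))  # True: M' maps v' to p'
```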

Derivation of the rotation tensor

Rotation of basis vectors. Since it is an active transformation, the sign on θ is opposite to the case for rotating a point.
This image illustrates a proof for a passive transformation, based on the rules for the sine and cosine of the sum of two angles.

The rotation matrix is usually the first orthogonal matrix students encounter. While it is conceptually easier to rotate vectors than to rotate a coordinate system, it is algebraically easier to rotate a coordinate system. From the figure, the original unit vectors are related to the unit vectors of a rotated reference frame by:

$\hat{\mathbf{x}}=\hat{\mathbf{x}}'\cos\theta-\hat{\mathbf{y}}'\sin\theta$
$\hat{\mathbf{y}}=\hat{\mathbf{x}}'\sin\theta+\hat{\mathbf{y}}'\cos\theta$

Students will quickly see the sine and cosine components in this equation, but the minus sign might seem confusing. It comes from the fact that $\hat{\mathbf{x}}$ has a negative component when projected along the $\hat{\mathbf{y}}'$ direction. Now express the vector $\vec V$, first in the unprimed coordinate system, then in the primed one:

$\vec V=V_x\hat{\mathbf{x}}+V_y\hat{\mathbf{y}}$

To complete the proof, substitute the expressions that gave the $(\hat{\mathbf{x}},\hat{\mathbf{y}})$ unit vectors in terms of the $(\hat{\mathbf{x}}',\hat{\mathbf{y}}')$ unit vectors:

$\vec V=V_x\left(\hat{\mathbf{x}}'\cos\theta-\hat{\mathbf{y}}'\sin\theta\right)+V_y\left(\hat{\mathbf{x}}'\sin\theta+\hat{\mathbf{y}}'\cos\theta\right)$
$\vec V=+V_x\hat{\mathbf{x}}'\cos\theta-V_x\hat{\mathbf{y}}'\sin\theta+V_y\hat{\mathbf{x}}'\sin\theta+V_y\hat{\mathbf{y}}'\cos\theta$

$\vec V=\left(V_x\cos\theta+V_y\sin\theta\right)\hat{\mathbf{x}}'+\left(-V_x\sin\theta+V_y\cos\theta\right)\hat{\mathbf{y}}'$

This latter expression solves our problem, as we were seeking an expression of the form $\vec V=V'_x\hat{\mathbf{x}}'+V'_y\hat{\mathbf{y}}'$.

Note how in this formalism there is no distinction between the primed and unprimed vector $\vec V$. This tends to confuse everyone, including the author. Such confusion can be avoided when writing a textbook or article. But in the free-wheeling world of the scientific literature, as well as wikis, such chaos cannot be avoided. That's why it is good to carefully read books.

Going back to the notation of many WMF pages, we have the following formula for the components of a vector if the coordinate system is rotated by θ about the z axis:

$\begin{bmatrix}V'_x\\V'_y\end{bmatrix}=\begin{bmatrix}\cos\theta&\sin\theta\\-\sin\theta&\cos\theta\end{bmatrix}\begin{bmatrix}V_x\\V_y\end{bmatrix}$
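A short numerical check of this formula (an illustrative sketch assuming NumPy, with an arbitrary example vector):

```python
import numpy as np

theta = np.pi / 6
V = np.array([2.0, 1.0])          # components in the unprimed frame

# Components in a frame rotated by theta about the z axis.
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
V_prime = R @ V

# Agrees with the component formulas derived above.
print(np.allclose(V_prime[0],  V[0]*np.cos(theta) + V[1]*np.sin(theta)))  # True
print(np.allclose(V_prime[1], -V[0]*np.sin(theta) + V[1]*np.cos(theta)))  # True
```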

See also

Template:Linear algebra/Rotation of axes

Notes

Template:Reflist



  1. The term "orthogonal" is confusing. A better word in this context would be orthonormal. See the lede sentence in w:special:Permalink/1181197344
  2. w:Special:Permalink/1181197344#Matrix_properties
  3. The physics student's first alternative to the "usual fashion" is the dot product in special relativity, where $\vec r_1\cdot\vec r_2=x_1x_2+y_1y_2+z_1z_2-c^2t_1t_2$
  4. "Unit magnitude" means the dot product of the vector with itself equals 1
  5. Most of this page is based on https://en.wikipedia.org/w/index.php?title=Orthogonal_matrix&oldid=1028769520
  6. https://en.wikipedia.org/w/index.php?title=Permutation_matrix&oldid=1015641816#Properties
  7. https://en.wikipedia.org/w/index.php?title=Permutation_matrix&oldid=1015641816#Properties
  8. The quotation marks on "rotated" are intended to include orthogonal matrices that are also reflections of an axis through the origin.