Inverting Matrices

Inverting a matrix turns out to be quite useful, although not for the classic textbook example of solving a set of simultaneous equations, for which other, better methods exist. In particular, in phased array antenna calculations you need to invert admittance matrices (produced as an output from a MOM (method of moments) code like NEC) into mutual impedance matrices.
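For instance (a minimal sketch in Python, assuming numpy and a made-up 2x2 admittance matrix purely for illustration), getting the mutual impedance matrix is just a matter of inverting the admittance matrix:

    # Sketch: turn an admittance matrix [Y] (e.g. from a MOM code) into
    # a mutual impedance matrix [Z], which is the inverse of [Y].
    # The numbers below are made up purely for illustration.
    import numpy as np

    Y = np.array([[ 0.020 + 0.010j, -0.005 + 0.002j],
                  [-0.005 + 0.002j,  0.020 + 0.010j]])

    Z = np.linalg.inv(Y)      # mutual impedance matrix
    print(np.round(Z, 3))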

The theory of matrix inversion, etc., is better left to textbooks. Suffice it to say that the problem is one of finding a matrix [B] such that [A][B] = I, where I is the identity matrix. There are a variety of published programs for doing matrix inversion, typically by elimination, described in the next section; for example, MS Basic came with example code for it, and of course there are the routines in the Numerical Recipes books, although normally the matrix isn't composed of complex numbers. Links are at the bottom of this page. You can also compute the inverse explicitly by calculating the adjoint matrix from cofactors and scaling by the determinant, which is described in more detail below.

Inversion by elimination

For moderate and large matrices, the straightforward way is to use a form of Gaussian elimination: augment the matrix with the identity matrix, reduce the original matrix to the identity by row operations, and what's left in the augmented half is the inverse. The work grows roughly as n^3.
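A minimal sketch of how that works (in Python, with partial pivoting; the function name and structure here are just for illustration, not any particular published routine):

    # Invert a square matrix by Gauss-Jordan elimination with partial
    # pivoting.  Plain Python lists of (possibly complex) numbers;
    # written for clarity, not speed.
    def invert(a):
        n = len(a)
        # build the augmented matrix [A | I]
        aug = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
               for i, row in enumerate(a)]
        for col in range(n):
            # partial pivot: pick the remaining row with the largest
            # magnitude in this column, then swap it into place
            pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
            if abs(aug[pivot][col]) == 0.0:
                raise ValueError("matrix is singular")
            aug[col], aug[pivot] = aug[pivot], aug[col]
            # scale the pivot row so the pivot element is 1
            p = aug[col][col]
            aug[col] = [x / p for x in aug[col]]
            # subtract multiples of the pivot row from every other row
            for r in range(n):
                if r != col:
                    f = aug[r][col]
                    aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
        # the right half of the augmented matrix is now the inverse
        return [row[n:] for row in aug]

For example, invert([[4, 7], [2, 6]]) returns (to within rounding) [[0.6, -0.7], [-0.2, 0.4]].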

Inversion by using the adjoint matrix

For small matrices (2x2, 3x3, 4x4), calculating the inverse by scaling the adjoint is easier. The adjoint matrix is the transpose of the matrix of cofactors: each element is the cofactor of the corresponding element in the source matrix. The cofactor of an element is the determinant of the matrix formed by removing that element's row and column from the original matrix. The signs of the cofactors alternate, just as when computing the determinant.

For example, if the original matrix [M] is

		a b c
		d e f
		g h i

the cofactor of the upper left element is

     |e f|
     |h i|

= (ei - hf)

the cofactor of the upper center element is

     - |d f|
       |g i|

= - (di - gf)

the cofactor of the upper right element is

     |d e|
     |g h|

= (dh - ge)


and the determinant is simply det(M) = a(ei - hf) - b(di - gf) + c(dh - ge) (an expansion along the top row). If we label the cofactors in the above array as A, B, C, etc., corresponding to the elements a, b, c, etc., the adjoint matrix would be:

     A D G
     B E H
     C F I

The inverse of the original matrix is the adjoint, scaled by 1/det(M).
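Here is that recipe written out as code (a sketch in Python, using the same a..i element names as the example above; the function name is just mine):

    # Sketch of the adjoint method for a 3x3 matrix, using the a..i
    # labeling from the example above; works for complex entries too.
    def invert3x3(m):
        (a, b, c), (d, e, f), (g, h, i) = m
        # cofactors of the top row (alternating signs included)
        A = e*i - h*f
        B = -(d*i - g*f)
        C = d*h - g*e
        # remaining cofactors
        D = -(b*i - h*c)
        E = a*i - g*c
        F = -(a*h - g*b)
        G = b*f - e*c
        H = -(a*f - d*c)
        I = a*e - d*b
        # determinant by expansion along the top row
        det = a*A + b*B + c*C
        if det == 0:
            raise ValueError("matrix is singular")
        # adjoint (transposed cofactor matrix), scaled by 1/det
        return [[A/det, D/det, G/det],
                [B/det, E/det, H/det],
                [C/det, F/det, I/det]]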

I've built a few Excel spreadsheets to calculate the inverses of 2x2, 3x3, and 4x4 matrices, using the above method and Excel's complex math functions. Download them here: (inv2x2.xls 15 kb) (inv3x3.xls 20 kb) (inv4x4.xls 29 kb). Note that this technique is computationally expensive because a lot of the multiplies get repeated (for instance, you calculate the cofactors to get the determinant by cofactor expansion, then calculate them again for the adjoint), and it scales roughly as O(n!).
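A quick sanity check of the adjoint-method sketch above (assuming numpy and a made-up complex test matrix): multiplying the original matrix by its computed inverse should give the identity to within rounding.

    # Check the adjoint-method inverse: M times its inverse should be
    # (nearly) the 3x3 identity matrix.  Test values are made up.
    import numpy as np

    M = np.array([[1 + 1j, 2,      0     ],
                  [3,      1 - 2j, 1     ],
                  [0,      4,      5 + 1j]])
    Minv = np.array(invert3x3(M.tolist()))
    print(np.allclose(M @ Minv, np.eye(3)))    # prints True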

Links and references

Here are some links I found (for which I make no claims of quality) just doing a search for "matrix inversion":

A textbook reference (from my undergrad matrix math course) which describes all of the above:

Reiner, I., Introduction to Matrix Theory and Linear Algebra, Holt, Rinehart and Winston, 1971


matinv.htm - 28 June 2000 - Jim Lux
revised 25 Jan 2001 (fixed missing - in middle cofactor of 3x3 determinant)