Solving Systems of Linear Equations March 25, 2012
Yet another university-related post, but I really enjoyed it so I thought I’d share: for a GUI workshop I’m taking, we are given GUI-layout constraints as a system of linear equations, which we need to satisfy. To make life more interesting, some constraints are constant while some are parametric. There’s no magic here, just some linear algebra combined with Python’s support for operator overloading to produce a nice and compact solver for these linear systems.
Naturally, you present your system as a coefficient matrix, which is Gauss-eliminated and then “solved” for a given set of variables. I took the Gauss-Jordan elimination code from Jarno Elonen and modified it to support MxN matrices (not necessarily square). Here’s a simple example of elimination:
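To give a feel for the idea, here is a minimal sketch of Gauss-Jordan elimination over an MxN augmented matrix. This is my own simplified version, not the code from the repository (which is based on Elonen's and uses Decimals):

```python
def gauss_jordan(m, eps=1e-20):
    """Reduce an MxN matrix (a list of row lists) to reduced row-echelon form in place."""
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # partial pivoting: pick the row with the largest entry in this column
        pivot = max(range(pivot_row, rows), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) <= eps:
            continue  # no pivot in this column -> a free variable
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        # normalize the pivot row so the pivot entry becomes 1
        divisor = m[pivot_row][col]
        m[pivot_row] = [v / divisor for v in m[pivot_row]]
        # eliminate this column from every other row
        for r in range(rows):
            if r != pivot_row:
                factor = m[r][col]
                m[r] = [v - factor * p for v, p in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

# 2x + y = 5, x - y = 1  ->  x = 2, y = 1
system = [[2.0, 1.0, 5.0],
          [1.0, -1.0, 1.0]]
gauss_jordan(system)
# system is now [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]]
```

The matrix is augmented: each row holds the equation's coefficients followed by its right-hand side, so the last column ends up holding the solution.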
But of course a reduced row-echelon matrix is not our final goal - we want to get the variable assignments. For this, we have solve():
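In the fully-determined case, solve() boils down to reading the last column off the reduced matrix. A sketch of that step (the matrix here is made up to match the result quoted below; the real solve() is in the repository):

```python
# assumed RREF of a hypothetical 3-equation system: since the coefficient part
# is the identity, every variable is pinned to a constant
rref = [
    [1, 0, 0, 6],
    [0, 1, 0, -1],
    [0, 0, 1, -0.5],
]

def solve(rref, names):
    """Map each pivot variable's name to its value, assuming full rank."""
    assignments = {}
    for row in rref:
        for col, coeff in enumerate(row[:-1]):
            if coeff != 0:
                assignments[names[col]] = row[-1]
                break
    return assignments

print(solve(rref, ["x", "y", "z"]))   # {'x': 6, 'y': -1, 'z': -0.5}
```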
Modulo precision errors, we get x = 6, y = -1 and z = -0.5. If we have more equations than variables, the “extraneous” equations must be linear combinations of previous ones, or a contradiction will result. But what if we have fewer equations than variables? It means we have some degrees of freedom… how would we handle that? It’s actually simple - instead of being resolved to constant values, variables will be assigned dependent expressions:
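A sketch of how solve() might represent those dependent expressions, with each pivot variable mapped to a constant plus coefficients over the free variables (the representation here is my own; the repository's code builds proper expression objects via operator overloading):

```python
def solve(rref, names):
    """Return each pivot variable as (constant, {free_var: coeff}); free variables map to None."""
    assignments = {name: None for name in names}   # None marks a free variable
    for row in rref:
        *coeffs, rhs = row
        pivot = next((c for c, v in enumerate(coeffs) if v != 0), None)
        if pivot is None:
            continue  # all-zero row, contributes nothing
        # pivot variable = rhs - sum(coeff * var) over the columns to its right
        deps = {names[c]: -coeffs[c]
                for c in range(pivot + 1, len(coeffs)) if coeffs[c] != 0}
        assignments[names[pivot]] = (rhs, deps)
    return assignments

# two equations, three variables: x + 2z = 4, y - z = 1
rref = [
    [1, 0, 2, 4],
    [0, 1, -1, 1],
]
print(solve(rref, ["x", "y", "z"]))
# {'x': (4, {'z': -2}), 'y': (1, {'z': 1}), 'z': None}
```

Reading the output: x = 4 - 2z and y = 1 + z, while z itself is free.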
As you can see now, z is a free variable and x and y are dependent on it. Of course more than one variable may be free, and some variables may be independent of the free variables. Once a value for z is known, we can “fully evaluate” the dependent expressions:
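Full evaluation is then just substitution. A sketch, using the same (constant, coefficients) representation as above rather than the repository's actual expression objects:

```python
def evaluate(assignments, bindings):
    """Substitute concrete values for the free variables and compute the rest."""
    values = dict(bindings)
    for name, expr in assignments.items():
        if expr is None:
            continue  # free variable: its value must come from bindings
        const, deps = expr
        values[name] = const + sum(coeff * values[free] for free, coeff in deps.items())
    return values

# x = 4 - 2z, y = 1 + z, with z free
assignments = {"x": (4, {"z": -2}), "y": (1, {"z": 1}), "z": None}
print(evaluate(assignments, {"z": 3}))   # {'z': 3, 'x': -2, 'y': 4}
```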
The code is available on my github page.
Note: all numbers are represented as Decimals, to avoid loss of precision as much as possible, and I’m using an “epsilon” value of 1E-20 to equate numbers to each other (meaning, x == y iff abs(x-y) <= epsilon).
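In code, that comparison might look something like this (the names here are hypothetical; only the Decimal representation and the epsilon value come from the post):

```python
from decimal import Decimal

EPSILON = Decimal("1E-20")

def almost_equal(x, y, eps=EPSILON):
    """Treat two Decimals as equal if they differ by no more than epsilon."""
    return abs(x - y) <= eps

almost_equal(Decimal("0.5"), Decimal("0.5") + Decimal("1E-21"))   # True
almost_equal(Decimal("0.5"), Decimal("0.6"))                      # False
```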