Matrices (linear algebra)

Creating Matrices

The linear algebra module is designed to be as simple as possible. First, we import and declare our first Matrix object:
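
The examples below assume that the relevant names have been imported into the session; a minimal setup (a sketch, assuming the top-level diofant package exports Matrix, eye, zeros, ones, diag and the printing helpers, with symbols taken from diofant.abc) would be:

>>> from diofant import *
>>> from diofant.abc import x, y, z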

>>> init_printing(pretty_print=True, wrap_line=False, no_global=True)
>>> M = Matrix([[1, 0, 0], [0, 0, 0]])
>>> M
⎡1  0  0⎤
⎢       ⎥
⎣0  0  0⎦
>>> Matrix([M, (0, 0, -1)])
⎡1  0  0 ⎤
⎢        ⎥
⎢0  0  0 ⎥
⎢        ⎥
⎣0  0  -1⎦
>>> Matrix([[1, 2, 3]])
[1 2 3]
>>> Matrix([1, 2, 3])
⎡1⎤
⎢ ⎥
⎢2⎥
⎢ ⎥
⎣3⎦

In addition to creating a matrix from a list of appropriately-sized lists and/or matrices, Diofant also supports more advanced methods of matrix creation including a single list of values and dimension inputs:

>>> Matrix(2, 3, [1, 2, 3, 4, 5, 6])
⎡1  2  3⎤
⎢       ⎥
⎣4  5  6⎦

More interesting (and useful) is the ability to use a 2-variable function (or lambda) to create a matrix. Here we create an indicator function which is 1 on the diagonal and then use it to make the identity matrix:

>>> def f(i, j):
...     if i == j:
...         return 1
...     else:
...         return 0
...
>>> Matrix(4, 4, f)
⎡1  0  0  0⎤
⎢          ⎥
⎢0  1  0  0⎥
⎢          ⎥
⎢0  0  1  0⎥
⎢          ⎥
⎣0  0  0  1⎦

Finally, let’s use a lambda to create a matrix whose entries are 1 where i + j is even and 0 elsewhere:

>>> Matrix(3, 4, lambda i, j: 1 - (i + j) % 2)
⎡1  0  1  0⎤
⎢          ⎥
⎢0  1  0  1⎥
⎢          ⎥
⎣1  0  1  0⎦

There are also a few special constructors for quick matrix construction: eye creates an identity matrix, zeros and ones create matrices of all zeros and ones, respectively, and diag puts matrices or elements along the diagonal:

>>> eye(4)
⎡1  0  0  0⎤
⎢          ⎥
⎢0  1  0  0⎥
⎢          ⎥
⎢0  0  1  0⎥
⎢          ⎥
⎣0  0  0  1⎦
>>> zeros(2)
⎡0  0⎤
⎢    ⎥
⎣0  0⎦
>>> zeros(2, 5)
⎡0  0  0  0  0⎤
⎢             ⎥
⎣0  0  0  0  0⎦
>>> ones(3)
⎡1  1  1⎤
⎢       ⎥
⎢1  1  1⎥
⎢       ⎥
⎣1  1  1⎦
>>> ones(1, 3)
[1  1  1]
>>> diag(1, Matrix([[1, 2], [3, 4]]))
⎡1  0  0⎤
⎢       ⎥
⎢0  1  2⎥
⎢       ⎥
⎣0  3  4⎦

Basic Manipulation

While learning to work with matrices, let’s choose one where the entries are readily identifiable. One useful thing to know is that while matrices are 2-dimensional, the storage is not and so it is allowable - though one should be careful - to access the entries as if they were a 1-d list.

>>> M = Matrix(2, 3, [1, 2, 3, 4, 5, 6])
>>> M[4]
5

Now, the more standard entry access is a pair of indices which will always return the value at the corresponding row and column of the matrix:

>>> M[1, 2]
6
>>> M[0, 0]
1
>>> M[1, 1]
5

Since this is Python we’re also able to slice submatrices; slices always give a matrix in return, even if the dimension is 1 x 1:

>>> M[0:2, 0:2]
⎡1  2⎤
⎢    ⎥
⎣4  5⎦
>>> M[2:2, 2]
[]
>>> M[:, 2]
⎡3⎤
⎢ ⎥
⎣6⎦
>>> M[:1, 2]
[3]

In the second example above notice that the slice 2:2 gives an empty range. Note also (in keeping with 0-based indexing of Python) the first row/column is 0.

You cannot access rows or columns that are not present unless they are in a slice:

>>> M[:, 10]  # the 10-th column (not there)
Traceback (most recent call last):
...
IndexError: Index out of range: a[[0, 10]]
>>> M[:, 10:11]  # the 10-th column (if there)
[]
>>> M[:, :10]  # all columns up to the 10-th
⎡1  2  3⎤
⎢       ⎥
⎣4  5  6⎦

Slicing an empty matrix works as long as you use a slice for the coordinate that has no size:

>>> Matrix(0, 3, [])[:, 1]
[]

Slicing gives a copy of what is sliced, so modifications of one object do not affect the other:

>>> M2 = M[:, :]
>>> M2[0, 0] = 100
>>> M[0, 0] == 100
False

Notice that changing M2 didn’t change M. Since we can slice, we can also assign entries:

>>> M = Matrix(([1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]))
>>> M
⎡1   2   3   4 ⎤
⎢              ⎥
⎢5   6   7   8 ⎥
⎢              ⎥
⎢9   10  11  12⎥
⎢              ⎥
⎣13  14  15  16⎦
>>> M[2, 2] = M[0, 3] = 0
>>> M
⎡1   2   3   0 ⎤
⎢              ⎥
⎢5   6   7   8 ⎥
⎢              ⎥
⎢9   10  0   12⎥
⎢              ⎥
⎣13  14  15  16⎦

as well as assign slices:

>>> M = Matrix(([1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]))
>>> M[2:, 2:] = Matrix(2, 2, lambda i, j: 0)
>>> M
⎡1   2   3  4⎤
⎢            ⎥
⎢5   6   7  8⎥
⎢            ⎥
⎢9   10  0  0⎥
⎢            ⎥
⎣13  14  0  0⎦

All the standard arithmetic operations are supported:

>>> M = Matrix(([1, 2, 3], [4, 5, 6], [7, 8, 9]))
>>> M - M
⎡0  0  0⎤
⎢       ⎥
⎢0  0  0⎥
⎢       ⎥
⎣0  0  0⎦
>>> M + M
⎡2   4   6 ⎤
⎢          ⎥
⎢8   10  12⎥
⎢          ⎥
⎣14  16  18⎦
>>> M * M
⎡30   36   42 ⎤
⎢             ⎥
⎢66   81   96 ⎥
⎢             ⎥
⎣102  126  150⎦
>>> M2 = Matrix(3, 1, [1, 5, 0])
>>> M*M2
⎡11⎤
⎢  ⎥
⎢29⎥
⎢  ⎥
⎣47⎦
>>> M**2
⎡30   36   42 ⎤
⎢             ⎥
⎢66   81   96 ⎥
⎢             ⎥
⎣102  126  150⎦

As well as some useful vector operations:

>>> del M[0, :]
>>> M
⎡4  5  6⎤
⎢       ⎥
⎣7  8  9⎦
>>> del M[:, 1]
>>> M
⎡4  6⎤
⎢    ⎥
⎣7  9⎦
>>> v1 = Matrix([1, 2, 3])
>>> v2 = Matrix([4, 5, 6])
>>> v3 = v1.cross(v2)
>>> v1.dot(v2)
32
>>> v2.dot(v3)
0
>>> v1.dot(v3)
0

We can also "glue" together matrices of the appropriate size:

>>> M1 = eye(3)
>>> M2 = zeros(3, 4)
>>> M1.row_join(M2)
⎡1  0  0  0  0  0  0⎤
⎢                   ⎥
⎢0  1  0  0  0  0  0⎥
⎢                   ⎥
⎣0  0  1  0  0  0  0⎦
>>> M3 = zeros(4, 3)
>>> M1.col_join(M3)
⎡1  0  0⎤
⎢       ⎥
⎢0  1  0⎥
⎢       ⎥
⎢0  0  1⎥
⎢       ⎥
⎢0  0  0⎥
⎢       ⎥
⎢0  0  0⎥
⎢       ⎥
⎢0  0  0⎥
⎢       ⎥
⎣0  0  0⎦

Operations on entries

We are not restricted to having multiplication between two matrices:

>>> M = eye(3)
>>> 2*M
⎡2  0  0⎤
⎢       ⎥
⎢0  2  0⎥
⎢       ⎥
⎣0  0  2⎦
>>> 3*M
⎡3  0  0⎤
⎢       ⎥
⎢0  3  0⎥
⎢       ⎥
⎣0  0  3⎦

but we can also apply functions to our matrix entries using applyfunc(). Here we’ll declare a function that doubles any input number. Then we apply it to the 3x3 identity matrix:

>>> def f(x):
...     return 2*x
>>> eye(3).applyfunc(f)
⎡2  0  0⎤
⎢       ⎥
⎢0  2  0⎥
⎢       ⎥
⎣0  0  2⎦

One more useful matrix-wide entry application function is the substitution function. Let’s declare a matrix with symbolic entries then substitute a value. Remember we can substitute anything - even another symbol!:

>>> M = eye(3) * x
>>> M
⎡x  0  0⎤
⎢       ⎥
⎢0  x  0⎥
⎢       ⎥
⎣0  0  x⎦
>>> M.subs({x: 4})
⎡4  0  0⎤
⎢       ⎥
⎢0  4  0⎥
⎢       ⎥
⎣0  0  4⎦
>>> M.subs({x: y})
⎡y  0  0⎤
⎢       ⎥
⎢0  y  0⎥
⎢       ⎥
⎣0  0  y⎦

Linear algebra

Now that we have the basics out of the way, let’s see what we can do with the actual matrices. Of course, one of the first things that comes to mind is the determinant:

>>> M = Matrix(([1, 2, 3], [3, 6, 2], [2, 0, 1]))
>>> M.det()
-28
>>> M2 = eye(3)
>>> M2.det()
1
>>> M3 = Matrix(([1, 0, 0], [1, 0, 0], [1, 0, 0]))
>>> M3.det()
0

Another common operation is the inverse: in Diofant, this is computed by Gaussian elimination by default (for dense matrices), but we can specify that it be done by \(LU\) decomposition as well:

>>> M2.inv()
⎡1  0  0⎤
⎢       ⎥
⎢0  1  0⎥
⎢       ⎥
⎣0  0  1⎦
>>> M2.inv(method='LU')
⎡1  0  0⎤
⎢       ⎥
⎢0  1  0⎥
⎢       ⎥
⎣0  0  1⎦
>>> M.inv(method='LU')
⎡-3/14  1/14  1/2 ⎤
⎢                 ⎥
⎢-1/28  5/28  -1/4⎥
⎢                 ⎥
⎣ 3/7   -1/7   0  ⎦
>>> M * M.inv(method='LU')
⎡1  0  0⎤
⎢       ⎥
⎢0  1  0⎥
⎢       ⎥
⎣0  0  1⎦

We can perform a \(QR\) factorization which is handy for solving systems:

>>> A = Matrix([[1, 1, 1], [1, 1, 3], [2, 3, 4]])
>>> Q, R = A.QRdecomposition()
>>> Q
⎡  ___     ___      ___ ⎤
⎢╲╱ 6   -╲╱ 3    -╲╱ 2  ⎥
⎢─────  ───────  ───────⎥
⎢  6       3        2   ⎥
⎢                       ⎥
⎢  ___     ___      ___ ⎥
⎢╲╱ 6   -╲╱ 3     ╲╱ 2  ⎥
⎢─────  ───────   ───── ⎥
⎢  6       3        2   ⎥
⎢                       ⎥
⎢  ___     ___          ⎥
⎢╲╱ 6    ╲╱ 3           ⎥
⎢─────   ─────      0   ⎥
⎣  3       3            ⎦
>>> R
⎡           ___         ⎤
⎢  ___  4⋅╲╱ 6       ___⎥
⎢╲╱ 6   ───────  2⋅╲╱ 6 ⎥
⎢          3            ⎥
⎢                       ⎥
⎢          ___          ⎥
⎢        ╲╱ 3           ⎥
⎢  0     ─────      0   ⎥
⎢          3            ⎥
⎢                       ⎥
⎢                   ___ ⎥
⎣  0       0      ╲╱ 2  ⎦
>>> Q*R
⎡1  1  1⎤
⎢       ⎥
⎢1  1  3⎥
⎢       ⎥
⎣2  3  4⎦

In addition to the solvers in the solvers module, we can solve the system Ax = b by passing the b vector to the matrix A’s LUsolve function. Here we’ll cheat a little: choose A and x, then multiply to get b. Then we can solve for x and check that it’s correct:

>>> A = Matrix([[2, 3, 5], [3, 6, 2], [8, 3, 6]])
>>> x = Matrix(3, 1, [3, 7, 5])
>>> b = A*x
>>> soln = A.LUsolve(b)
>>> soln
⎡3⎤
⎢ ⎥
⎢7⎥
⎢ ⎥
⎣5⎦

There’s also a nice Gram-Schmidt orthogonalizer which will take a set of vectors and orthogonalize them with respect to each other. There is an optional argument which specifies whether or not the output should also be normalized; it defaults to False. Let’s take some vectors and orthogonalize them, once without normalization and once with:

>>> L = [Matrix([2, 3, 5]), Matrix([3, 6, 2]), Matrix([8, 3, 6])]
>>> out1 = GramSchmidt(L)
>>> out2 = GramSchmidt(L, True)

Let’s take a look at the vectors:

>>> for i in out1:
...     print(i)
...
Matrix([
[2],
[3],
[5]])
Matrix([
[ 23/19],
[ 63/19],
[-47/19]])
Matrix([
[ 1692/353],
[-1551/706],
[ -423/706]])
>>> for i in out2:
...     print(i)
...
Matrix([
[  sqrt(38)/19],
[3*sqrt(38)/38],
[5*sqrt(38)/38]])
Matrix([
[ 23*sqrt(6707)/6707],
[ 63*sqrt(6707)/6707],
[-47*sqrt(6707)/6707]])
Matrix([
[ 12*sqrt(706)/353],
[-11*sqrt(706)/706],
[ -3*sqrt(706)/706]])

We can spot-check their orthogonality with dot() and their normality with norm():

>>> out1[0].dot(out1[1])
0
>>> out1[0].dot(out1[2])
0
>>> out1[1].dot(out1[2])
0
>>> out2[0].norm()
1
>>> out2[1].norm()
1
>>> out2[2].norm()
1

So there is quite a bit that can be done with the module including eigenvalues, eigenvectors, nullspace calculation, cofactor expansion tools, and so on. From here one might want to look over the matrices.py file for all functionality.

MatrixBase Class Reference

class diofant.matrices.matrices.MatrixBase[source]

Base class for matrices.

property C

By-element conjugation.
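
For example (a small sketch, with I the imaginary unit used elsewhere on this page):

>>> Matrix([[1 - I, 2]]).C
Matrix([[1 + I, 2]])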

property D

Return Dirac conjugate (if self.rows == 4).

Examples

>>> m = Matrix((0, 1 + I, 2, 3))
>>> m.D
Matrix([[0, 1 - I, -2, -3]])
>>> m = (eye(4) + I*eye(4))
>>> m[0, 3] = 2
>>> m.D
Matrix([
[1 - I,     0,      0,      0],
[    0, 1 - I,      0,      0],
[    0,     0, -1 + I,      0],
[    2,     0,      0, -1 + I]])

If the matrix does not have 4 rows an AttributeError will be raised because this property is only defined for matrices with 4 rows.

>>> Matrix(eye(2)).D
Traceback (most recent call last):
...
AttributeError: Matrix has no attribute D.

See also

conjugate

By-element conjugation

H

Hermite conjugation

property H

Return Hermite conjugate.

Examples

>>> m = Matrix((0, 1 + I, 2, 3))
>>> m
Matrix([
[    0],
[1 + I],
[    2],
[    3]])
>>> m.H
Matrix([[0, 1 - I, 2, 3]])

See also

conjugate

By-element conjugation

D

Dirac conjugation

LDLdecomposition()[source]

Returns the LDL decomposition (L, D) of matrix A, such that L * D * L.T == A. This method eliminates the use of square roots. Further, it ensures that all the diagonal entries of L are 1. A must be a square, symmetric, positive-definite and non-singular matrix.

Examples

>>> A = Matrix(((25, 15, -5), (15, 18, 0), (-5, 0, 11)))
>>> L, D = A.LDLdecomposition()
>>> L
Matrix([
[   1,   0, 0],
[ 3/5,   1, 0],
[-1/5, 1/3, 1]])
>>> D
Matrix([
[25, 0, 0],
[ 0, 9, 0],
[ 0, 0, 9]])
>>> L * D * L.T * A.inv() == eye(A.rows)
True
LDLsolve(rhs)[source]

Solves Ax = B using LDL decomposition, for a general square and non-singular matrix.

For a non-square matrix with rows > cols, the least squares solution is returned.

Examples

>>> A = eye(2)*2
>>> B = Matrix([[1, 2], [3, 4]])
>>> A.LDLsolve(B) == B/2
True
LUdecomposition(iszerofunc=<function _iszero>)[source]

Returns the decomposition LU and the row swaps p.

Examples

>>> a = Matrix([[4, 3], [6, 3]])
>>> L, U, _ = a.LUdecomposition()
>>> L
Matrix([
[  1, 0],
[3/2, 1]])
>>> U
Matrix([
[4,    3],
[0, -3/2]])
LUdecompositionFF()[source]

Compute a fraction-free LU decomposition.

Returns 4 matrices P, L, D, U such that PA = L D**-1 U. If the elements of the matrix belong to some integral domain I, then all elements of L, D and U are guaranteed to belong to I.

Reference
  • W. Zhou & D.J. Jeffrey, “Fraction-free matrix factors: new forms for LU and QR factors”. Frontiers in Computer Science in China, Vol 2, no. 1, pp. 67-80, 2008.

LUdecomposition_Simple(iszerofunc=<function _iszero>)[source]

Returns a single matrix A containing both L and U (L’s diagonal entries are 1), together with p, the list of the row swaps (in order).

LUsolve(rhs, iszerofunc=<function _iszero>)[source]

Solve the linear system Ax = rhs for x where A = self.

This is for symbolic matrices; for real or complex ones use mpmath.lu_solve or mpmath.qr_solve.
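
Examples

A brief sketch, reusing the A and x from the tutorial above:

>>> A = Matrix([[2, 3, 5], [3, 6, 2], [8, 3, 6]])
>>> A.LUsolve(A * Matrix([3, 7, 5]))
Matrix([
[3],
[7],
[5]])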

QRdecomposition()[source]

Return Q, R where A = Q*R, Q is orthogonal and R is upper triangular.

Examples

This is the example from wikipedia:

>>> A = Matrix([[12, -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> Q, R = A.QRdecomposition()
>>> Q
Matrix([
[ 6/7, -69/175, -58/175],
[ 3/7, 158/175,   6/175],
[-2/7,    6/35,  -33/35]])
>>> R
Matrix([
[14,  21, -14],
[ 0, 175, -70],
[ 0,   0,  35]])
>>> A == Q*R
True

QR factorization of an identity matrix:

>>> A = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
>>> Q, R = A.QRdecomposition()
>>> Q
Matrix([
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
>>> R
Matrix([
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
QRsolve(b)[source]

Solve the linear system ‘Ax = b’.

‘self’ is the matrix ‘A’, the method argument is the vector ‘b’. The method returns the solution vector ‘x’. If ‘b’ is a matrix, the system is solved for each column of ‘b’ and the return value is a matrix of the same shape as ‘b’.

This method is slower (approximately by a factor of 2) but more stable for floating-point arithmetic than the LUsolve method. However, LUsolve usually uses an exact arithmetic, so you don’t need to use QRsolve.

This is mainly for educational purposes and symbolic matrices; for real (or complex) matrices use mpmath.qr_solve.
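
Examples

A short sketch, reusing the matrix from the QRdecomposition example below:

>>> A = Matrix([[12, -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> A.QRsolve(A * Matrix([1, 2, 3]))
Matrix([
[1],
[2],
[3]])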

property T

Matrix transposition.

add(b)[source]

Return self + b.

adjoint()[source]

Conjugate transpose or Hermitian conjugation.

adjugate(method='berkowitz')[source]

Returns the adjugate matrix.

Adjugate matrix is the transpose of the cofactor matrix.

https://en.wikipedia.org/wiki/Adjugate
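
Examples

A small sketch illustrating the defining identity M * adj(M) == det(M) * I:

>>> M = Matrix([[1, 2], [3, 4]])
>>> M.adjugate()
Matrix([
[ 4, -2],
[-3,  1]])
>>> M * M.adjugate() == M.det() * eye(2)
True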

atoms(*types)[source]

Returns the atoms that form the current object.

Examples

>>> Matrix([[x]])
Matrix([[x]])
>>> _.atoms()
{x}
berkowitz()[source]

The Berkowitz algorithm.

Given an N x N matrix with symbolic content, compute efficiently the coefficients of the characteristic polynomials of ‘self’ and of all its square sub-matrices obtained by removing both the i-th row and column, without division in the ground domain.

This method is particularly useful for computing determinant, principal minors and characteristic polynomial, when ‘self’ has complicated coefficients e.g. polynomials. Semi-direct usage of this algorithm is also important in computing efficiently sub-resultant PRS.

Assuming that M is a square matrix of dimension N x N and I is the N x N identity matrix, the following definition of the characteristic polynomial is being used:

charpoly(M) = det(t*I - M)

As a consequence, all polynomials generated by Berkowitz algorithm are monic.

>>> M = Matrix([[x, y, z], [1, 0, 0], [y, z, x]])
>>> p, q, r = M.berkowitz()
>>> p  # 1 x 1 M's sub-matrix
(1, -x)
>>> q  # 2 x 2 M's sub-matrix
(1, -x, -y)
>>> r  # 3 x 3 M's sub-matrix
(1, -2*x, x**2 - y*z - y, x*y - z**2)

For more information on the implemented algorithm refer to:

[1] S.J. Berkowitz, On computing the determinant in small parallel time using a small number of processors, ACM, Information Processing Letters 18, 1984, pp. 147-150.

[2] M. Keber, Division-Free computation of sub-resultants using Bezout matrices, Tech. Report MPI-I-2006-1-006, Saarbrücken, 2006.

berkowitz_charpoly(x=Dummy('lambda'), simplify=<function simplify>)[source]

Computes characteristic polynomial minors using Berkowitz method.

A PurePoly is returned so using different variables for x does not affect the comparison or the polynomials:

Examples

>>> A = Matrix([[1, 3], [2, 0]])
>>> A.berkowitz_charpoly(x) == A.berkowitz_charpoly(y)
True

Specifying x is optional; a Dummy with name lambda is used by default (which looks good when pretty-printed in unicode):

>>> A.berkowitz_charpoly().as_expr()
_lambda**2 - _lambda - 6

Be sure your provided x doesn’t clash with existing symbols:

>>> A = Matrix([[1, 2], [x, 0]])
>>> A.charpoly().as_expr()
-2*x + _lambda**2 - _lambda
>>> A.charpoly(x).as_expr()
Traceback (most recent call last):
...
GeneratorsError: polynomial ring and it's ground domain share generators
>>> A.charpoly(y).as_expr()
-2*x + y**2 - y

See also

berkowitz

berkowitz_det()[source]

Computes determinant using Berkowitz method.
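
Examples

A quick sanity check (a sketch):

>>> Matrix([[1, 2], [3, 4]]).berkowitz_det()
-2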

See also

det, berkowitz

berkowitz_eigenvals(**flags)[source]

Computes eigenvalues of a Matrix using Berkowitz method.
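
Examples

For instance (a sketch; the eigenvalues are returned as a dict of value: multiplicity):

>>> (2*eye(2)).berkowitz_eigenvals()
{2: 2}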

See also

berkowitz

berkowitz_minors()[source]

Computes principal minors using Berkowitz method.

See also

berkowitz

charpoly(x=Dummy('lambda'), simplify=<function simplify>)[source]

Computes characteristic polynomial minors using Berkowitz method.

A PurePoly is returned so using different variables for x does not affect the comparison or the polynomials:

Examples

>>> A = Matrix([[1, 3], [2, 0]])
>>> A.berkowitz_charpoly(x) == A.berkowitz_charpoly(y)
True

Specifying x is optional; a Dummy with name lambda is used by default (which looks good when pretty-printed in unicode):

>>> A.berkowitz_charpoly().as_expr()
_lambda**2 - _lambda - 6

Be sure your provided x doesn’t clash with existing symbols:

>>> A = Matrix([[1, 2], [x, 0]])
>>> A.charpoly().as_expr()
-2*x + _lambda**2 - _lambda
>>> A.charpoly(x).as_expr()
Traceback (most recent call last):
...
GeneratorsError: polynomial ring and it's ground domain share generators
>>> A.charpoly(y).as_expr()
-2*x + y**2 - y

See also

berkowitz

cholesky()[source]

Returns the Cholesky decomposition L of a matrix A such that L * L.T = A

A must be a square, symmetric, positive-definite and non-singular matrix.

Examples

>>> A = Matrix(((25, 15, -5), (15, 18, 0), (-5, 0, 11)))
>>> A.cholesky()
Matrix([
[ 5, 0, 0],
[ 3, 3, 0],
[-1, 1, 3]])
>>> A.cholesky() * A.cholesky().T
Matrix([
[25, 15, -5],
[15, 18,  0],
[-5,  0, 11]])
cholesky_solve(rhs)[source]

Solves Ax = B using Cholesky decomposition, for a general square non-singular matrix. For a non-square matrix with rows > cols, the least squares solution is returned.
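
Examples

A minimal sketch, mirroring the LDLsolve example above:

>>> A = eye(2)*2
>>> B = Matrix([[1, 2], [3, 4]])
>>> A.cholesky_solve(B) == B/2
True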

cofactor(i, j, method='berkowitz')[source]

Calculate the cofactor of an element.

cofactorMatrix(method='berkowitz')[source]

Return a matrix containing the cofactor of each element.
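
Examples

A small sketch covering both cofactor and cofactorMatrix:

>>> M = Matrix([[1, 2], [3, 4]])
>>> M.cofactor(0, 1)
-3
>>> M.cofactorMatrix()
Matrix([
[ 4, -3],
[-2,  1]])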

col_insert(pos, mti)[source]

Insert one or more columns at the given column position.

Examples

>>> M = zeros(3)
>>> V = ones(3, 1)
>>> M.col_insert(1, V)
Matrix([
[0, 1, 0, 0],
[0, 1, 0, 0],
[0, 1, 0, 0]])

See also

row_insert

col_join(bott)[source]

Concatenates two matrices along self’s last and bott’s first row

Examples

>>> M = zeros(3)
>>> V = ones(1, 3)
>>> M.col_join(V)
Matrix([
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[1, 1, 1]])

See also

row_join

condition_number()[source]

Returns the condition number of a matrix.

This is the maximum singular value divided by the minimum singular value

Examples

>>> A = Matrix([[1, 0, 0], [0, 10, 0], [0, 0, Rational(1, 10)]])
>>> A.condition_number()
100

See also

singular_values

conjugate()[source]

By-element conjugation.

copy()[source]

Returns the copy of a matrix.

cross(b)[source]

Return the cross product of \(self\) and \(b\) relaxing the condition of compatible dimensions: if each has 3 elements, a matrix of the same type and shape as \(self\) will be returned. If \(b\) has the same shape as \(self\) then common identities for the cross product (like \(a \times b = - b \times a\)) will hold.

det(method='bareiss')[source]

Computes the matrix determinant using the method “method”.

Possible values for “method”:

  • bareiss … det_bareiss
  • berkowitz … berkowitz_det
  • det_LU … det_LU_decomposition
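
Examples

For instance (a small sketch):

>>> M = Matrix([[1, 2], [3, 4]])
>>> M.det()
-2
>>> M.det(method='berkowitz')
-2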

det_LU_decomposition()[source]

Compute matrix determinant using LU decomposition

Note that this method fails if the LU decomposition itself fails. In particular, if the matrix has no inverse this method will fail.

TODO: Implement algorithm for sparse matrices (SFF), Hong R. Lee, B.David Saunders, Fraction Free Gaussian Elimination for Sparse Matrices, In Journal of Symbolic Computation, Volume 19, Issue 5, 1995, Pages 393-402, ISSN 0747-7171, https://www.sciencedirect.com/science/article/pii/S074771718571022X.

det_bareiss()[source]

Compute matrix determinant using Bareiss’ fraction-free algorithm which is an extension of the well known Gaussian elimination method. This approach is best suited for dense symbolic matrices and will result in a determinant with minimal number of fractions. It means that less term rewriting is needed on resulting formulae.

TODO: Implement algorithm for sparse matrices (SFF), Hong R. Lee, B.David Saunders, Fraction Free Gaussian Elimination for Sparse Matrices, In Journal of Symbolic Computation, Volume 19, Issue 5, 1995, Pages 393-402, ISSN 0747-7171, https://www.sciencedirect.com/science/article/pii/S074771718571022X.

See also

det, berkowitz_det

diagonal_solve(rhs)[source]

Solves Ax = B efficiently, where A is a diagonal Matrix, with non-zero diagonal entries.

Examples

>>> A = eye(2)*2
>>> B = Matrix([[1, 2], [3, 4]])
>>> A.diagonal_solve(B) == B/2
True
diagonalize(reals_only=False, sort=False, normalize=False)[source]

Return (P, D), where D is diagonal and

D = P^-1 * M * P

where M is current matrix.

Examples

>>> m = Matrix(3, 3, [1, 2, 0, 0, 3, 0, 2, -4, 2])
>>> m
Matrix([
[1,  2, 0],
[0,  3, 0],
[2, -4, 2]])
>>> (P, D) = m.diagonalize()
>>> D
Matrix([
[1, 0, 0],
[0, 2, 0],
[0, 0, 3]])
>>> P
Matrix([
[-1, 0, -1],
[ 0, 0, -1],
[ 2, 1,  2]])
>>> P.inv() * m * P
Matrix([
[1, 0, 0],
[0, 2, 0],
[0, 0, 3]])
diff(*args)[source]

Calculate the derivative of each element in the matrix.

Examples

>>> M = Matrix([[x, y], [1, 0]])
>>> M.diff(x)
Matrix([
[1, 0],
[0, 0]])

See also

integrate, limit

dot(b)[source]

Return the dot product of Matrix self and b relaxing the condition of compatible dimensions: if either the number of rows or columns are the same as the length of b then the dot product is returned. If self is a row or column vector, a scalar is returned. Otherwise, a list of results is returned (and in that case the number of columns in self must match the length of b).

Examples

>>> M = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> v = [1, 1, 1]
>>> M[0, :].dot(v)
6
>>> M[:, 0].dot(v)
12
>>> M.dot(v)
[6, 15, 24]
dual()[source]

Returns the dual of a matrix, which is:

\((1/2)*levicivita(i, j, k, l)*M(k, l)\) summed over indices \(k\) and \(l\)

Since the levicivita method is anti_symmetric for any pairwise exchange of indices, the dual of a symmetric matrix is the zero matrix. Strictly speaking the dual defined here assumes that the ‘matrix’ \(M\) is a contravariant anti_symmetric second rank tensor, so that the dual is a covariant second rank tensor.

eigenvals(**flags)[source]

Return eigenvalues using the berkowitz_eigenvals routine.

Since the roots routine doesn’t always work well with Floats, they will be replaced with Rationals before calling that routine. If this is not desired, set flag rational to False.
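
Examples

For example (a sketch; the result is a dict mapping each eigenvalue to its multiplicity):

>>> Matrix([[1, 2], [2, 1]]).eigenvals() == {3: 1, -1: 1}
True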

eigenvects(**flags)[source]

Return list of triples (eigenval, multiplicity, basis).

The flag simplify has two effects:

1) if bool(simplify) is True, as_content_primitive() will be used to tidy up normalization artifacts;

2) if nullspace needs simplification to compute the basis, the simplify flag will be passed on to the nullspace routine which will interpret it there.

If the matrix contains any Floats, they will be changed to Rationals for computation purposes, but the answers will be returned after being evaluated with evalf. If it is desired to remove small imaginary portions during the evalf step, pass a value for the chop flag.
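
Examples

A minimal sketch checking the defining property (M - lambda*I)*v == 0 for each returned basis vector:

>>> M = Matrix([[2, 1], [0, 3]])
>>> all((M - val*eye(2))*vec == zeros(2, 1)
...     for val, mult, basis in M.eigenvects() for vec in basis)
True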

evalf(dps=15, **options)[source]

Apply evalf() to each element of self.

exp()[source]

Return the exponentiation of a square matrix.

expand(deep=True, modulus=None, power_base=True, power_exp=True, mul=True, log=True, multinomial=True, basic=True, **hints)[source]

Apply core.function.expand to each entry of the matrix.

Examples

>>> Matrix(1, 1, [x*(x+1)])
Matrix([[x*(x + 1)]])
>>> _.expand()
Matrix([[x**2 + x]])
extract(rowsList, colsList)[source]

Return a submatrix by specifying a list of rows and columns. Negative indices can be given. All indices must be in the range -n <= i < n where n is the number of rows or columns.

Examples

>>> m = Matrix(4, 3, range(12))
>>> m
Matrix([
[0,  1,  2],
[3,  4,  5],
[6,  7,  8],
[9, 10, 11]])
>>> m.extract([0, 1, 3], [0, 1])
Matrix([
[0,  1],
[3,  4],
[9, 10]])

Rows or columns can be repeated:

>>> m.extract([0, 0, 1], [-1])
Matrix([
[2],
[2],
[5]])

Every other row can be taken by using range to provide the indices:

>>> m.extract(range(0, m.rows, 2), [-1])
Matrix([
[2],
[8]])
property free_symbols

Returns the free symbols within the matrix.

Examples

>>> Matrix([[x], [1]]).free_symbols
{x}
get_diag_blocks()[source]

Obtains the square sub-matrices on the main diagonal of a square matrix.

Useful for inverting symbolic matrices or solving systems of linear equations which may be decoupled by having a block diagonal structure.

Examples

>>> A = Matrix([[1, 3, 0, 0], [y, z*z, 0, 0], [0, 0, x, 0], [0, 0, 0, 0]])
>>> a1, a2, a3 = A.get_diag_blocks()
>>> a1
Matrix([
[1,    3],
[y, z**2]])
>>> a2
Matrix([[x]])
>>> a3
Matrix([[0]])
has(*patterns)[source]

Test whether any subexpression matches any of the patterns.

Examples

>>> A = Matrix(((1, x), (0.2, 3)))
>>> A.has(x)
True
>>> A.has(y)
False
>>> A.has(Float)
True
classmethod hstack(*args)[source]

Return a matrix formed by joining args horizontally (i.e. by repeated application of row_join).

Examples

>>> Matrix.hstack(eye(2), 2*eye(2))
Matrix([
[1, 0, 2, 0],
[0, 1, 0, 2]])
integrate(*args)[source]

Integrate each element of the matrix.

Examples

>>> M = Matrix([[x, y], [1, 0]])
>>> M.integrate(x)
Matrix([
[x**2/2, x*y],
[     x,   0]])
>>> M.integrate((x, 0, 2))
Matrix([
[2, 2*y],
[2,   0]])

See also

limit, diff

inv(method=None, **kwargs)[source]

Returns the inverse of the matrix.

Parameters:

method ({‘GE’, ‘LU’, ‘ADJ’, ‘CH’, ‘LDL’} or None) – Selects the algorithm for inversion. For dense matrices the available methods are {‘GE’, ‘LU’, ‘ADJ’} and the default is ‘GE’; for sparse matrices the available methods are {‘CH’, ‘LDL’} and the default is ‘LDL’.

Raises:

ValueError – If the determinant of the matrix is zero.

inv_mod(m)[source]

Returns the inverse of the matrix \(K\) (mod \(m\)), if it exists.

Method to find the matrix inverse of \(K\) (mod \(m\)) implemented in this function:

  • Compute \(\mathrm{adj}(K) = \mathrm{cof}(K)^t\), the adjoint matrix of \(K\).

  • Compute \(r = 1/\mathrm{det}(K) \pmod m\).

  • \(K^{-1} = r\cdot \mathrm{adj}(K) \pmod m\).

Examples

>>> A = Matrix(2, 2, [1, 2, 3, 4])
>>> A.inv_mod(5)
Matrix([
[3, 1],
[4, 2]])
>>> A.inv_mod(3)
Matrix([
[1, 1],
[0, 1]])
inverse_ADJ(iszerofunc=<function _iszero>)[source]

Calculates the inverse using the adjugate matrix and a determinant.

inverse_GE(iszerofunc=<function _iszero>)[source]

Calculates the inverse using Gaussian elimination.

inverse_LU(iszerofunc=<function _iszero>)[source]

Calculates the inverse using LU decomposition.

is_anti_symmetric(simplify=True)[source]

Check if matrix M is an antisymmetric matrix, that is, M is a square matrix with all M[i, j] == -M[j, i].

When simplify=True (default), the sum M[i, j] + M[j, i] is simplified before testing to see if it is zero. By default, the Diofant simplify function is used. To use a custom function set simplify to a function that accepts a single argument which returns a simplified expression. To skip simplification, set simplify to False but note that although this will be faster, it may induce false negatives.

Examples

>>> m = Matrix(2, 2, [0, 1, -1, 0])
>>> m
Matrix([
[ 0, 1],
[-1, 0]])
>>> m.is_anti_symmetric()
True
>>> m = Matrix(2, 3, [0, 0, x, -y, 0, 0])
>>> m
Matrix([
[ 0, 0, x],
[-y, 0, 0]])
>>> m.is_anti_symmetric()
False
>>> m = Matrix(3, 3, [0, x**2 + 2*x + 1, y,
...                   -(x + 1)**2, 0, x*y,
...                   -y, -x*y, 0])

Simplification of matrix elements is done by default so even though two elements which should be equal and opposite wouldn’t pass an equality test, the matrix is still reported as anti-symmetric:

>>> m[0, 1] == -m[1, 0]
False
>>> m.is_anti_symmetric()
True

If ‘simplify=False’ is used for the case when a Matrix is already simplified, this will speed things up. Here, we see that without simplification the matrix does not appear anti-symmetric:

>>> m.is_anti_symmetric(simplify=False)
False

But if the matrix were already expanded, then it would appear anti-symmetric and simplification in the is_anti_symmetric routine is not needed:

>>> m = m.expand()
>>> m.is_anti_symmetric(simplify=False)
True
is_diagonal()[source]

Check if the matrix is diagonal, that is, a matrix in which the entries outside the main diagonal are all zero.

Examples

>>> m = Matrix(2, 2, [1, 0, 0, 2])
>>> m
Matrix([
[1, 0],
[0, 2]])
>>> m.is_diagonal()
True
>>> m = Matrix(2, 2, [1, 1, 0, 2])
>>> m
Matrix([
[1, 1],
[0, 2]])
>>> m.is_diagonal()
False
>>> m = diag(1, 2, 3)
>>> m
Matrix([
[1, 0, 0],
[0, 2, 0],
[0, 0, 3]])
>>> m.is_diagonal()
True
is_diagonalizable(reals_only=False, clear_subproducts=True)[source]

Check if matrix is diagonalizable.

If reals_only==True, then check that the diagonalized matrix consists only of real (not complex) values.

Some subproducts could be reused further in other methods to avoid double calculations; by default (if clear_subproducts==True) they will be deleted.

Examples

>>> m = Matrix(3, 3, [1, 2, 0, 0, 3, 0, 2, -4, 2])
>>> m
Matrix([
[1,  2, 0],
[0,  3, 0],
[2, -4, 2]])
>>> m.is_diagonalizable()
True
>>> m = Matrix(2, 2, [0, 1, 0, 0])
>>> m
Matrix([
[0, 1],
[0, 0]])
>>> m.is_diagonalizable()
False
>>> m = Matrix(2, 2, [0, 1, -1, 0])
>>> m
Matrix([
[ 0, 1],
[-1, 0]])
>>> m.is_diagonalizable()
True
>>> m.is_diagonalizable(True)
False
property is_hermitian

Checks if the matrix is Hermitian.

In a Hermitian matrix element i,j is the complex conjugate of element j,i.

Examples

>>> a = Matrix([[1, I], [-I, 1]])
>>> a
Matrix([
[ 1, I],
[-I, 1]])
>>> a.is_hermitian
True
>>> a[0, 0] = 2*I
>>> a.is_hermitian
False
>>> a[0, 0] = x
>>> a.is_hermitian
>>> a[0, 1] = a[1, 0]*I
>>> a.is_hermitian
False
property is_lower

Check if matrix is a lower triangular matrix. True can be returned even if the matrix is not square.

Examples

>>> m = Matrix(2, 2, [1, 0, 0, 1])
>>> m
Matrix([
[1, 0],
[0, 1]])
>>> m.is_lower
True
>>> m = Matrix(4, 3, [0, 0, 0, 2, 0, 0, 1, 4, 0, 6, 6, 5])
>>> m
Matrix([
[0, 0, 0],
[2, 0, 0],
[1, 4, 0],
[6, 6, 5]])
>>> m.is_lower
True
>>> m = Matrix(2, 2, [x**2 + y, y**2 + x, 0, x + y])
>>> m
Matrix([
[x**2 + y, x + y**2],
[       0,    x + y]])
>>> m.is_lower
False
property is_lower_hessenberg

Checks if the matrix is in the lower-Hessenberg form.

A lower Hessenberg matrix has zero entries above the first superdiagonal.

Examples

>>> a = Matrix([[1, 2, 0, 0], [5, 2, 3, 0], [3, 4, 3, 7], [5, 6, 1, 1]])
>>> a
Matrix([
[1, 2, 0, 0],
[5, 2, 3, 0],
[3, 4, 3, 7],
[5, 6, 1, 1]])
>>> a.is_lower_hessenberg
True
is_nilpotent()[source]

Checks if a matrix is nilpotent.

A matrix B is nilpotent if for some integer k, B**k is a zero matrix.

Examples

>>> a = Matrix([[0, 0, 0], [1, 0, 0], [1, 1, 0]])
>>> a.is_nilpotent()
True
>>> a = Matrix([[1, 0, 1], [1, 0, 0], [1, 1, 0]])
>>> a.is_nilpotent()
False
property is_square

Checks if a matrix is square.

A matrix is square if the number of rows equals the number of columns. The empty matrix is square by definition, since the number of rows and the number of columns are both zero.

Examples

>>> a = Matrix([[1, 2, 3], [4, 5, 6]])
>>> b = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> c = Matrix([])
>>> a.is_square
False
>>> b.is_square
True
>>> c.is_square
True
is_symbolic()[source]

Checks if any elements contain Symbols.

Examples

>>> M = Matrix([[x, y], [1, 0]])
>>> M.is_symbolic()
True
is_symmetric(simplify=True)[source]

Check if the matrix is a symmetric matrix, that is, a square matrix that is equal to its transpose.

By default, simplifications occur before testing symmetry. They can be skipped using ‘simplify=False’; while this speeds things up a bit, it may induce false negatives.

Examples

>>> m = Matrix(2, 2, [0, 1, 1, 2])
>>> m
Matrix([
[0, 1],
[1, 2]])
>>> m.is_symmetric()
True
>>> m = Matrix(2, 2, [0, 1, 2, 0])
>>> m
Matrix([
[0, 1],
[2, 0]])
>>> m.is_symmetric()
False
>>> m = Matrix(2, 3, [0, 0, 0, 0, 0, 0])
>>> m
Matrix([
[0, 0, 0],
[0, 0, 0]])
>>> m.is_symmetric()
False
>>> m = Matrix(3, 3, [1, x**2 + 2*x + 1, y, (x + 1)**2, 2, 0, y, 0, 3])
>>> m
Matrix([
[         1, x**2 + 2*x + 1, y],
[(x + 1)**2,              2, 0],
[         y,              0, 3]])
>>> m.is_symmetric()
True

If the matrix is already simplified, you may speed up the is_symmetric() test by using ‘simplify=False’.

>>> m.is_symmetric(simplify=False)
False
>>> m1 = m.expand()
>>> m1.is_symmetric(simplify=False)
True
property is_upper

Check if matrix is an upper triangular matrix. True can be returned even if the matrix is not square.

Examples

>>> m = Matrix(2, 2, [1, 0, 0, 1])
>>> m
Matrix([
[1, 0],
[0, 1]])
>>> m.is_upper
True
>>> m = Matrix(4, 3, [5, 1, 9, 0, 4, 6, 0, 0, 5, 0, 0, 0])
>>> m
Matrix([
[5, 1, 9],
[0, 4, 6],
[0, 0, 5],
[0, 0, 0]])
>>> m.is_upper
True
>>> m = Matrix(2, 3, [4, 2, 5, 6, 1, 1])
>>> m
Matrix([
[4, 2, 5],
[6, 1, 1]])
>>> m.is_upper
False
property is_upper_hessenberg

Checks if the matrix is in the upper-Hessenberg form.

An upper Hessenberg matrix has zero entries below the first subdiagonal.

Examples

>>> a = Matrix([[1, 4, 2, 3], [3, 4, 1, 7], [0, 2, 3, 4], [0, 0, 1, 3]])
>>> a
Matrix([
[1, 4, 2, 3],
[3, 4, 1, 7],
[0, 2, 3, 4],
[0, 0, 1, 3]])
>>> a.is_upper_hessenberg
True
property is_zero

Checks if a matrix is a zero matrix.

A matrix is zero if every element is zero. A matrix need not be square to be considered zero. The empty matrix is zero by the principle of vacuous truth. For a matrix that may or may not be zero (e.g. one that contains a symbol), this will be None.

Examples

>>> a = Matrix([[0, 0], [0, 0]])
>>> b = zeros(3, 4)
>>> c = Matrix([[0, 1], [0, 0]])
>>> d = Matrix([])
>>> e = Matrix([[x, 0], [0, 0]])
>>> a.is_zero
True
>>> b.is_zero
True
>>> c.is_zero
False
>>> d.is_zero
True
>>> e.is_zero
jacobian(X)[source]

Calculates the Jacobian matrix (derivative of a vectorial function).

Parameters:
  • self (vector of expressions representing functions f_i(x_1, …, x_n))

  • X (set of x_i’s in order; it can be a list or a Matrix)

Both self and X can be a row or a column matrix in any order (i.e., jacobian() should always work).

Examples

>>> from diofant.abc import phi, rho
>>> X = Matrix([rho*cos(phi), rho*sin(phi), rho**2])
>>> Y = Matrix([rho, phi])
>>> X.jacobian(Y)
Matrix([
[cos(phi), -rho*sin(phi)],
[sin(phi),  rho*cos(phi)],
[   2*rho,             0]])
>>> X = Matrix([rho*cos(phi), rho*sin(phi)])
>>> X.jacobian(Y)
Matrix([
[cos(phi), -rho*sin(phi)],
[sin(phi),  rho*cos(phi)]])
jordan_cells(calc_transformation=True)[source]

Return a list of Jordan cells of the current matrix. These cells form the Jordan matrix J.

If calc_transformation=False, then transformation P such that

\[J = P^{-1} \cdot M \cdot P\]

will not be calculated.

Examples

>>> m = Matrix([[+6,  5, -2, -3],
...             [-3, -1,  3,  3],
...             [+2,  1, -2, -3],
...             [-1,  1,  5,  5]])
>>> Jcells, P = m.jordan_cells()
>>> Jcells[0]
Matrix([
[2, 1],
[0, 2]])
>>> Jcells[1]
Matrix([
[2, 1],
[0, 2]])

See also

jordan_form

jordan_form(calc_transformation=True)[source]

Return Jordan form J of current matrix.

Also (if calc_transformation=True) the transformation P such that

\[J = P^{-1} \cdot M \cdot P\]

and the jordan blocks forming J will be calculated.

Examples

>>> m = Matrix([[+6,  5, -2, -3],
...             [-3, -1,  3,  3],
...             [+2,  1, -2, -3],
...             [-1,  1,  5,  5]])
>>> J, P = m.jordan_form()
>>> J
Matrix([
[2, 1, 0, 0],
[0, 2, 0, 0],
[0, 0, 2, 1],
[0, 0, 0, 2]])

See also

jordan_cells

key2bounds(keys)[source]

Converts a key with potentially mixed types of keys (integer and slice) into a tuple of ranges and raises an error if any index is out of self’s range.

See also

key2ij

key2ij(key)[source]

Converts key into canonical form, converting integers or indexable items into valid integers for self’s range or returning slices unchanged.

See also

key2bounds

limit(*args)[source]

Calculate the limit of each element in the matrix.

Examples

>>> M = Matrix([[x, y], [1, 0]])
>>> M.limit(x, 2)
Matrix([
[2, y],
[1, 0]])

See also

integrate, diff

lower_triangular_solve(rhs)[source]

Solves Ax = B, where A is a lower triangular matrix.
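
Examples

A small sketch:

>>> A = Matrix([[1, 0], [2, 1]])
>>> A.lower_triangular_solve(Matrix([1, 4]))
Matrix([
[1],
[2]])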

minorEntry(i, j, method='berkowitz')[source]

Calculate the minor of an element.

minorMatrix(i, j)[source]

Creates the minor matrix of a given element.
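
Examples

A short sketch covering both minorEntry and minorMatrix:

>>> M = Matrix([[1, 2], [3, 4]])
>>> M.minorEntry(0, 0)
4
>>> M.minorMatrix(0, 1)
Matrix([[3]])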

multiply(b)[source]

Returns self*b

multiply_elementwise(b)[source]

Return the Hadamard product (elementwise product) of A and B

Examples

>>> A = Matrix([[0, 1, 2], [3, 4, 5]])
>>> B = Matrix([[1, 10, 100], [100, 10, 1]])
>>> A.multiply_elementwise(B)
Matrix([
[  0, 10, 200],
[300, 40,   5]])

See also

cross, dot, multiply

norm(ord=None)[source]

Return the norm of a Matrix or Vector. In the simplest case this is the geometric size of the vector. Other norms can be specified by the ord parameter.

ord    norm for matrices             norm for vectors
=====  ============================  ==========================
None   Frobenius norm                2-norm
'fro'  Frobenius norm                does not exist
inf    --                            max(abs(x))
-inf   --                            min(abs(x))
1      --                            as below
-1     --                            as below
2      2-norm (largest sing. value)  as below
-2     smallest singular value       as below
other  does not exist                sum(abs(x)**ord)**(1./ord)
=====  ============================  ==========================

Examples

>>> x = Symbol('x', real=True)
>>> v = Matrix([cos(x), sin(x)])
>>> trigsimp(v.norm())
1
>>> v.norm(10)
root(sin(x)**10 + cos(x)**10, 10)
>>> A = Matrix([[1, 1], [1, 1]])
>>> A.norm(2)  # Spectral norm (max of |Ax|/|x| under 2-vector-norm)
2
>>> A.norm(-2)  # Inverse spectral norm (smallest singular value)
0
>>> A.norm()  # Frobenius Norm
2
>>> Matrix([1, -2]).norm(oo)
2
>>> Matrix([-1, 2]).norm(-oo)
1

See also

normalized

normalized()[source]

Return the normalized version of self.
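
Examples

For instance (a sketch):

>>> Matrix([3, 4]).normalized()
Matrix([
[3/5],
[4/5]])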

See also

norm

nullspace(simplify=False, iszerofunc=<function _iszero>)[source]

Returns list of vectors (Matrix objects) that span nullspace of self.
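
Examples

A minimal sketch with a rank-deficient matrix:

>>> Matrix([[1, 2], [2, 4]]).nullspace()
[Matrix([
[-2],
[ 1]])]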

permuteBkwd(perm)[source]

Permute the rows of the matrix with the given permutation in reverse.

Examples

>>> M = eye(3)
>>> M.permuteBkwd([[0, 1], [0, 2]])
Matrix([
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]])

See also

permuteFwd

permuteFwd(perm)[source]

Permute the rows of the matrix with the given permutation.

Examples

>>> M = eye(3)
>>> M.permuteFwd([[0, 1], [0, 2]])
Matrix([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0]])

See also

permuteBkwd

pinv()[source]

Calculate the Moore-Penrose pseudoinverse of the matrix.

The Moore-Penrose pseudoinverse exists and is unique for any matrix. If the matrix is invertible, the pseudoinverse is the same as the inverse.

Examples

>>> Matrix([[1, 2, 3], [4, 5, 6]]).pinv()
Matrix([
[-17/18,  4/9],
[  -1/9,  1/9],
[ 13/18, -2/9]])

pinv_solve(B, arbitrary_matrix=None)[source]

Solve Ax = B using the Moore-Penrose pseudoinverse.

There may be zero, one, or infinite solutions. If one solution exists, it will be returned. If infinite solutions exist, one will be returned based on the value of arbitrary_matrix. If no solutions exist, the least-squares solution is returned.

Parameters:
  • B (Matrix) – The right hand side of the equation to be solved for. Must have the same number of rows as matrix A.

  • arbitrary_matrix (Matrix) – If the system is underdetermined (e.g. A has more columns than rows), infinite solutions are possible, in terms of an arbitrary matrix. This parameter may be set to a specific matrix to use for that purpose; if so, it must be the same shape as x, with as many rows as matrix A has columns, and as many columns as matrix B. If left as None, an appropriate matrix containing dummy symbols in the form of wn_m will be used, with n and m being row and column position of each symbol.

Returns:

x (Matrix) – The matrix that will satisfy Ax = B. Will have as many rows as matrix A has columns, and as many columns as matrix B.

Examples

>>> A = Matrix([[1, 2, 3], [4, 5, 6]])
>>> B = Matrix([7, 8])
>>> A.pinv_solve(B)
Matrix([
[ _w0_0/6 - _w1_0/3 + _w2_0/6 - 55/18],
[-_w0_0/3 + 2*_w1_0/3 - _w2_0/3 + 1/9],
[ _w0_0/6 - _w1_0/3 + _w2_0/6 + 59/18]])
>>> A.pinv_solve(B, arbitrary_matrix=Matrix([0, 0, 0]))
Matrix([
[-55/18],
[   1/9],
[ 59/18]])

Notes

This may return either exact solutions or least squares solutions. To determine which, check A * A.pinv() * B == B. It will be True if exact solutions exist, and False if only a least-squares solution exists. Be aware that the left hand side of that equation may need to be simplified to correctly compare to the right hand side.

print_nonzero(symb='X')[source]

Shows location of non-zero entries for fast shape lookup.

Examples

>>> m = Matrix(2, 3, lambda i, j: i*3+j)
>>> m
Matrix([
[0, 1, 2],
[3, 4, 5]])
>>> m.print_nonzero()
[ XX]
[XXX]
>>> m = eye(4)
>>> m.print_nonzero('x')
[x   ]
[ x  ]
[  x ]
[   x]
project(v)[source]

Return the projection of self onto the line containing v.

Examples

>>> V = Matrix([sqrt(3)/2, Rational(1, 2)])
>>> x = Matrix([[1, 0]])
>>> V.project(x)
Matrix([[sqrt(3)/2, 0]])
>>> V.project(-x)
Matrix([[sqrt(3)/2, 0]])
rank(iszerofunc=<function _iszero>, simplify=False)[source]

Returns the rank of a matrix

>>> m = Matrix([[1, 2], [x, 1 - 1/x]])
>>> m.rank()
2
>>> n = Matrix(3, 3, range(1, 10))
>>> n.rank()
2
replace(F, G, exact=False)[source]

Replaces Function F in Matrix entries with Function G.

Examples

>>> F, G = symbols('F, G', cls=Function)
>>> M = Matrix(2, 2, lambda i, j: F(i+j))
>>> M
Matrix([
[F(0), F(1)],
[F(1), F(2)]])
>>> N = M.replace(F, G)
>>> N
Matrix([
[G(0), G(1)],
[G(1), G(2)]])
row_insert(pos, mti)[source]

Insert one or more rows at the given row position.

Examples

>>> M = zeros(3)
>>> V = ones(1, 3)
>>> M.row_insert(1, V)
Matrix([
[0, 0, 0],
[1, 1, 1],
[0, 0, 0],
[0, 0, 0]])

See also

col_insert

row_join(rhs)[source]

Concatenates two matrices along self’s last and rhs’s first column

Examples

>>> M = zeros(3)
>>> V = ones(3, 1)
>>> M.row_join(V)
Matrix([
[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1]])

See also

col_join

rref(iszerofunc=<function _iszero>, simplify=False)[source]

Return reduced row-echelon form of matrix and indices of pivot vars.

To simplify elements before finding nonzero pivots set simplify=True (to use the default Diofant simplify function) or pass a custom simplify function.

Examples

>>> m = Matrix([[1, 2], [x, 1 - 1/x]])
>>> m.rref()
(Matrix([
[1, 0],
[0, 1]]), [0, 1])
property shape

The shape (dimensions) of the matrix as the 2-tuple (rows, cols).

Examples

>>> M = zeros(2, 3)
>>> M.shape
(2, 3)
>>> M.rows
2
>>> M.cols
3
simplify(ratio=1.7, measure=<function count_ops>)[source]

Apply simplify to each element of the matrix.

Examples

>>> SparseMatrix(1, 1, [x*sin(y)**2 + x*cos(y)**2])
Matrix([[x*sin(y)**2 + x*cos(y)**2]])
>>> _.simplify()
Matrix([[x]])
singular_values()[source]

Compute the singular values of a Matrix

Examples

>>> x = Symbol('x', real=True)
>>> A = Matrix([[0, 1, 0], [0, x, 0], [-1, 0, 0]])
>>> A.singular_values()
[sqrt(x**2 + 1), 1, 0]

See also

condition_number

solve_least_squares(rhs, method='CH')[source]

Return the least-square fit to the data.

By default the cholesky_solve routine is used (method=’CH’); other methods of matrix inversion can be used.

Examples

>>> A = Matrix([1, 2, 3])
>>> B = Matrix([2, 3, 4])
>>> S = Matrix(A.row_join(B))
>>> S
Matrix([
[1, 2],
[2, 3],
[3, 4]])

If each line of S represents the coefficients of Ax + By and x and y are [2, 3], then S*xy is:

>>> r = S*Matrix([2, 3])
>>> r
Matrix([
[ 8],
[13],
[18]])

But let’s add 1 to the middle value and then solve for the least-squares value of xy:

>>> xy = S.solve_least_squares(Matrix([8, 14, 18]))
>>> xy
Matrix([
[ 5/3],
[10/3]])

The error is given by S*xy - r:

>>> S*xy - r
Matrix([
[1/3],
[1/3],
[1/3]])
>>> _.norm().evalf(2)
0.58

If a different xy is used, the norm will be higher:

>>> xy += ones(2, 1)/10
>>> (S*xy - r).norm().evalf(2)
1.5

See also

inv

subs(*args, **kwargs)[source]

Return a new matrix with subs applied to each entry.

Examples

>>> SparseMatrix(1, 1, [x])
Matrix([[x]])
>>> _.subs({x: y})
Matrix([[y]])
>>> Matrix(_).subs({y: x})
Matrix([[x]])
table(printer, rowstart='[', rowend=']', rowsep='\n', colsep=', ', align='right')[source]

String form of Matrix as a table.

printer is the printer to use on the elements (generally something like StrPrinter()).

rowstart is the string used to start each row (by default ‘[‘).

rowend is the string used to end each row (by default ‘]’).

rowsep is the string used to separate rows (by default a newline).

colsep is the string used to separate columns (by default ‘, ‘).

align defines how the elements are aligned. Must be one of ‘left’, ‘right’, or ‘center’. You can also use ‘<’, ‘>’, and ‘^’ to mean the same thing, respectively.

This is used by the string printer for Matrix.

Examples

>>> from diofant.printing.str import StrPrinter
>>> M = Matrix([[1, 2], [-33, 4]])
>>> printer = StrPrinter()
>>> M.table(printer)
'[  1, 2]\n[-33, 4]'
>>> print(M.table(printer))
[  1, 2]
[-33, 4]
>>> print(M.table(printer, rowsep=',\n'))
[  1, 2],
[-33, 4]
>>> print(M.table(printer, colsep=' '))
[  1 2]
[-33 4]
>>> print(M.table(printer, align='center'))
[ 1 , 2]
[-33, 4]
>>> print(M.table(printer, rowstart='{', rowend='}'))
{  1, 2}
{-33, 4}
trace()[source]

Returns the trace of a matrix.
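
Examples

For instance (a sketch):

>>> Matrix([[1, 2], [3, 4]]).trace()
5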

transpose()[source]

Matrix transposition.
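
Examples

For instance (a sketch):

>>> Matrix([[1, 2], [3, 4]]).transpose()
Matrix([
[1, 3],
[2, 4]])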

upper_triangular_solve(rhs)[source]

Solves Ax = B, where A is an upper triangular matrix.

values()[source]

Return non-zero values of self.

vec()[source]

Return the Matrix converted into a one column matrix by stacking columns

Examples

>>> m = Matrix([[1, 3], [2, 4]])
>>> m
Matrix([
[1, 3],
[2, 4]])
>>> m.vec()
Matrix([
[1],
[2],
[3],
[4]])

See also

vech

vech(diagonal=True, check_symmetry=True)[source]

Return the unique elements of a symmetric Matrix as a one column matrix by stacking the elements in the lower triangle.

Arguments:
  diagonal – include the diagonal cells of self or not
  check_symmetry – checks symmetry of self but not completely reliably

Examples

>>> m = Matrix([[1, 2], [2, 3]])
>>> m
Matrix([
[1, 2],
[2, 3]])
>>> m.vech()
Matrix([
[1],
[2],
[3]])
>>> m.vech(diagonal=False)
Matrix([[2]])

See also

vec

classmethod vstack(*args)[source]

Return a matrix formed by joining args vertically (i.e. by repeated application of col_join).

Examples

>>> Matrix.vstack(eye(2), 2*eye(2))
Matrix([
[1, 0],
[0, 1],
[2, 0],
[0, 2]])
xreplace(rule)[source]

Return a new matrix with xreplace applied to each entry.

Examples

>>> SparseMatrix(1, 1, [x])
Matrix([[x]])
>>> _.xreplace({x: y})
Matrix([[y]])
>>> Matrix(_).xreplace({y: x})
Matrix([[x]])

Matrix Exceptions Reference

class diofant.matrices.matrices.MatrixError[source]

Generic matrix error.

class diofant.matrices.matrices.ShapeError[source]

Wrong matrix shape.

class diofant.matrices.matrices.NonSquareMatrixError[source]

Raised when a square matrix is expected.

Matrix Functions Reference

diofant.matrices.matrices.classof(A, B)[source]

Get the type of the result when combining matrices of different types.

Currently the strategy is that immutability is contagious.

Examples

>>> M = Matrix([[1, 2], [3, 4]])  # a Mutable Matrix
>>> IM = ImmutableMatrix([[1, 2], [3, 4]])
>>> classof(M, IM)
<class 'diofant.matrices.immutable.ImmutableMatrix'>
diofant.matrices.dense.matrix_multiply_elementwise(A, B)[source]

Return the Hadamard product (elementwise product) of A and B

>>> A = Matrix([[0, 1, 2], [3, 4, 5]])
>>> B = Matrix([[1, 10, 100], [100, 10, 1]])
>>> matrix_multiply_elementwise(A, B)
Matrix([
[  0, 10, 200],
[300, 40,   5]])
diofant.matrices.dense.zeros(r, c=None, cls=None)[source]

Returns a matrix of zeros with r rows and c columns; if c is omitted a square matrix will be returned.

diofant.matrices.dense.ones(r, c=None)[source]

Returns a matrix of ones with r rows and c columns; if c is omitted a square matrix will be returned.

diofant.matrices.dense.eye(n, cls=None)[source]

Create square identity matrix n x n

diofant.matrices.dense.diag(*values, **kwargs)[source]

Create a sparse, diagonal matrix from a list of diagonal values.

Notes

When arguments are matrices they are fitted in resultant matrix.

The returned matrix is a mutable, dense matrix. To make it a different type, send the desired class for keyword cls.

Examples

>>> diag(1, 2, 3)
Matrix([
[1, 0, 0],
[0, 2, 0],
[0, 0, 3]])
>>> diag(*[1, 2, 3])
Matrix([
[1, 0, 0],
[0, 2, 0],
[0, 0, 3]])

The diagonal elements can be matrices; diagonal filling will continue on the diagonal from the last element of the matrix:

>>> a = Matrix([x, y, z])
>>> b = Matrix([[1, 2], [3, 4]])
>>> c = Matrix([[5, 6]])
>>> diag(a, 7, b, c)
Matrix([
[x, 0, 0, 0, 0, 0],
[y, 0, 0, 0, 0, 0],
[z, 0, 0, 0, 0, 0],
[0, 7, 0, 0, 0, 0],
[0, 0, 1, 2, 0, 0],
[0, 0, 3, 4, 0, 0],
[0, 0, 0, 0, 5, 6]])

When diagonal elements are lists, they will be treated as arguments to Matrix:

>>> diag([1, 2, 3], 4)
Matrix([
[1, 0],
[2, 0],
[3, 0],
[0, 4]])
>>> diag([[1, 2, 3]], 4)
Matrix([
[1, 2, 3, 0],
[0, 0, 0, 4]])

A given band off the diagonal can be made by padding with a vertical or horizontal “kerning” vector:

>>> hpad = ones(0, 2)
>>> vpad = ones(2, 0)
>>> diag(vpad, 1, 2, 3, hpad) + diag(hpad, 4, 5, 6, vpad)
Matrix([
[0, 0, 4, 0, 0],
[0, 0, 0, 5, 0],
[1, 0, 0, 0, 6],
[0, 2, 0, 0, 0],
[0, 0, 3, 0, 0]])

The type is mutable by default but can be made immutable by setting the mutable flag to False:

>>> type(diag(1))
<class 'diofant.matrices.dense.MutableDenseMatrix'>
>>> type(diag(1, cls=ImmutableMatrix))
<class 'diofant.matrices.immutable.ImmutableMatrix'>
diofant.matrices.dense.jordan_cell(eigenval, n)[source]

Create a Jordan cell (Jordan block) matrix of size n with the given eigenvalue on the diagonal:

Examples

>>> jordan_cell(x, 4)
Matrix([
[x, 1, 0, 0],
[0, x, 1, 0],
[0, 0, x, 1],
[0, 0, 0, x]])
diofant.matrices.dense.hessian(f, varlist, constraints=[])[source]

Compute Hessian matrix for a function f wrt parameters in varlist which may be given as a sequence or a row/column vector. A list of constraints may optionally be given.

Examples

>>> f = Function('f')(x, y)
>>> g1 = Function('g')(x, y)
>>> g2 = x**2 + 3*y
>>> pprint(hessian(f, (x, y), [g1, g2]))
⎡                   ∂               ∂            ⎤
⎢     0        0    ──(g(x, y))     ──(g(x, y))  ⎥
⎢                   ∂x              ∂y           ⎥
⎢                                                ⎥
⎢     0        0        2⋅x              3       ⎥
⎢                                                ⎥
⎢                     2               2          ⎥
⎢∂                   ∂               ∂           ⎥
⎢──(g(x, y))  2⋅x   ───(f(x, y))   ─────(f(x, y))⎥
⎢∂x                   2            ∂y ∂x         ⎥
⎢                   ∂x                           ⎥
⎢                                                ⎥
⎢                     2               2          ⎥
⎢∂                   ∂               ∂           ⎥
⎢──(g(x, y))   3   ─────(f(x, y))   ───(f(x, y)) ⎥
⎢∂y                ∂y ∂x              2          ⎥
⎣                                   ∂y           ⎦

References

https://en.wikipedia.org/wiki/Hessian_matrix

diofant.matrices.dense.GramSchmidt(vlist, orthonormal=False)[source]

Apply the Gram-Schmidt process to a set of vectors.

see: https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process

diofant.matrices.dense.wronskian(functions, var, method='bareiss')[source]

Compute the Wronskian of a list of functions:

                 | f1       f2        ...   fn      |
                 | f1'      f2'       ...   fn'     |
                 |  .        .        .      .      |
W(f1, ..., fn) = |  .        .         .     .      |
                 |  .        .          .    .      |
                 |  (n)      (n)            (n)     |
                 | D   (f1) D   (f2)  ...  D   (fn) |

see: https://en.wikipedia.org/wiki/Wronskian
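
Examples

For example (a sketch, with x a symbol as used throughout this page):

>>> wronskian([x, x**2], x)
x**2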

diofant.matrices.dense.casoratian(seqs, n, zero=True)[source]

Given a linear difference operator L of order ‘k’ and a homogeneous equation Ly = 0, we want to compute the kernel of L, which is a set of ‘k’ sequences: a(n), b(n), …, z(n).

Solutions of L are linearly independent iff their Casoratian, denoted as C(a, b, …, z), does not vanish for n = 0.

Casoratian is defined by k x k determinant:

+  a(n)     b(n)     . . . z(n)     +
|  a(n+1)   b(n+1)   . . . z(n+1)   |
|    .         .     .        .     |
|    .         .       .      .     |
|    .         .         .    .     |
+  a(n+k-1) b(n+k-1) . . . z(n+k-1) +

It proves very useful in rsolve_hyper() where it is applied to a generating set of a recurrence to factor out linearly dependent solutions and return a basis:

>>> n = Symbol('n', integer=True)

Exponential and factorial are linearly independent:

>>> casoratian([2**n, factorial(n)], n) != 0
True
diofant.matrices.dense.randMatrix(r, c=None, min=0, max=99, seed=None, symmetric=False, percent=100)[source]

Create random matrix with dimensions r x c. If c is omitted the matrix will be square. If symmetric is True the matrix must be square. If percent is less than 100 then only approximately the given percentage of elements will be non-zero.

Examples

>>> randMatrix(3, seed=0)
Matrix([
[49, 97, 53],
[ 5, 33, 65],
[62, 51, 38]])
>>> randMatrix(3, 2, seed=0)
Matrix([
[49, 97],
[53,  5],
[33, 65]])
>>> randMatrix(3, 3, 0, 2, seed=0)
Matrix([
[1, 1, 0],
[1, 2, 1],
[1, 1, 1]])
>>> randMatrix(3, symmetric=True, seed=0)
Matrix([
[49, 97, 53],
[97,  5, 33],
[53, 33, 65]])
>>> A = randMatrix(3, seed=1)
>>> B = randMatrix(3, seed=2)
>>> A == B
False
>>> A == randMatrix(3, seed=1)
True
>>> randMatrix(3, symmetric=True, percent=50, seed=0)
Matrix([
[ 0,  0,  5],
[33,  0,  0],
[65, 53, 33]])

Numpy Utility Functions Reference

diofant.matrices.dense.list2numpy(l, dtype=<class 'object'>)[source]

Converts python list of Diofant expressions to a NumPy array.

diofant.matrices.dense.matrix2numpy(m, dtype=<class 'object'>)[source]

Converts Diofant’s matrix to a NumPy array.

diofant.matrices.dense.rot_axis1(theta)[source]

Returns a rotation matrix for a rotation of theta (in radians) about the 1-axis.

Examples

A rotation of pi/3 (60 degrees):

>>> theta = pi/3
>>> rot_axis1(theta)
Matrix([
[1,          0,         0],
[0,        1/2, sqrt(3)/2],
[0, -sqrt(3)/2,       1/2]])

If we rotate by pi/2 (90 degrees):

>>> rot_axis1(pi/2)
Matrix([
[1,  0, 0],
[0,  0, 1],
[0, -1, 0]])

See also

diofant.matrices.dense.rot_axis2

Returns a rotation matrix for a rotation of theta (in radians) about the 2-axis

diofant.matrices.dense.rot_axis3

Returns a rotation matrix for a rotation of theta (in radians) about the 3-axis

diofant.matrices.dense.rot_axis2(theta)[source]

Returns a rotation matrix for a rotation of theta (in radians) about the 2-axis.

Examples

A rotation of pi/3 (60 degrees):

>>> theta = pi/3
>>> rot_axis2(theta)
Matrix([
[      1/2, 0, -sqrt(3)/2],
[        0, 1,          0],
[sqrt(3)/2, 0,        1/2]])

If we rotate by pi/2 (90 degrees):

>>> rot_axis2(pi/2)
Matrix([
[0, 0, -1],
[0, 1,  0],
[1, 0,  0]])

See also

diofant.matrices.dense.rot_axis1

Returns a rotation matrix for a rotation of theta (in radians) about the 1-axis

diofant.matrices.dense.rot_axis3

Returns a rotation matrix for a rotation of theta (in radians) about the 3-axis

diofant.matrices.dense.rot_axis3(theta)[source]

Returns a rotation matrix for a rotation of theta (in radians) about the 3-axis.

Examples

A rotation of pi/3 (60 degrees):

>>> theta = pi/3
>>> rot_axis3(theta)
Matrix([
[       1/2, sqrt(3)/2, 0],
[-sqrt(3)/2,       1/2, 0],
[         0,         0, 1]])

If we rotate by pi/2 (90 degrees):

>>> rot_axis3(pi/2)
Matrix([
[ 0, 1, 0],
[-1, 0, 0],
[ 0, 0, 1]])

See also

diofant.matrices.dense.rot_axis1

Returns a rotation matrix for a rotation of theta (in radians) about the 1-axis

diofant.matrices.dense.rot_axis2

Returns a rotation matrix for a rotation of theta (in radians) about the 2-axis

diofant.matrices.matrices.a2idx(j, n=None)[source]

Return integer after making positive and validating against n.