from numpy.lib import triu
from numpy.linalg import lapack_lite
from numpy.matrixlib.defmatrix import matrix_power
class LinAlgError(Exception):

Generic Python-exception-derived object raised by linalg functions.

General purpose exception class, derived from Python's exception.Exception
class, programmatically raised in linalg functions when a Linear
Algebra-related condition would prevent further correct execution of the
function.

Examples
--------
>>> from numpy import linalg as LA
>>> LA.inv(np.zeros((2,2)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "...linalg.py", line 350,
    in inv return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
  File "...linalg.py", line 249,
    in solve
    raise LinAlgError, 'Singular matrix'
numpy.linalg.linalg.LinAlgError: Singular matrix
wrap = getattr(a, "__array_prepare__", new.__array_wrap__)

def isComplexType(t):
def tensorsolve(a, b, axes=None):

Solve the tensor equation ``a x = b`` for x.

It is assumed that all indices of `x` are summed over in the product,
together with the rightmost indices of `a`, as is done in, for example,
``tensordot(a, x, axes=x.ndim)``.

Parameters
----------
a : array_like
    Coefficient tensor, of shape ``b.shape + Q``. `Q`, a tuple, equals
    the shape of that sub-tensor of `a` consisting of the appropriate
    number of its rightmost indices, and must be such that
    ``prod(Q) == prod(b.shape)`` (in which sense `a` is said to be
    'square').
b : array_like
    Right-hand tensor, which can be of any shape.
axes : tuple of ints, optional
    Axes in `a` to reorder to the right, before inversion.
    If None (default), no reordering is done.

Raises
------
LinAlgError
    If `a` is singular or not 'square' (in the above sense).

Examples
--------
>>> a = np.eye(2*3*4)
>>> a.shape = (2*3, 4, 2, 3, 4)
>>> b = np.random.randn(2*3, 4)
>>> x = np.linalg.tensorsolve(a, b)
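The shape bookkeeping above can be checked end to end; a minimal sketch (the identity tensor makes the example exactly solvable, and `axes=3` is `x.ndim` here):

```python
import numpy as np

# 'Square' coefficient tensor: prod of the rightmost shape (2, 3, 4)
# equals prod(b.shape) == prod((6, 4)) == 24.
a = np.eye(2 * 3 * 4).reshape(2 * 3, 4, 2, 3, 4)
b = np.random.randn(2 * 3, 4)

x = np.linalg.tensorsolve(a, b)   # x takes the rightmost shape of a: (2, 3, 4)
# Contracting the last x.ndim axes of a against x reproduces b:
print(np.allclose(np.tensordot(a, x, axes=3), b))   # True
```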
Solve a linear matrix equation, or system of linear scalar equations.

Computes the "exact" solution, `x`, of the well-determined, i.e., full
rank, linear matrix equation `ax = b`.

Parameters
----------
a : array_like, shape (M, M)
    Input equation coefficients.
b : array_like, shape (M,) or (M, N)
    Ordinate or "dependent variable" values.

Returns
-------
x : ndarray, shape (M,) or (M, N) depending on b
    Solution to the system ``a x = b``.

Notes
-----
`solve` is a wrapper for the LAPACK routines `dgesv`_ and
`zgesv`_, the former being used if `a` is real-valued, the latter if
it is complex-valued. The solution to the system of linear equations
is computed using an LU decomposition [1]_ with partial pivoting and
row interchanges.

.. _dgesv: http://www.netlib.org/lapack/double/dgesv.f

.. _zgesv: http://www.netlib.org/lapack/complex16/zgesv.f

`a` must be square and of full-rank, i.e., all rows (or, equivalently,
columns) must be linearly independent; if either is not true, use
`lstsq` for the least-squares best "solution" of the
system/equation.

References
----------
.. [1] G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando,
       FL, Academic Press, Inc., 1980, pg. 22.

Examples
--------
Solve the system of equations ``3 * x0 + x1 = 9`` and ``x0 + 2 * x1 = 8``:
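This worked system can be run directly as a short sketch:

```python
import numpy as np

# 3*x0 + x1 = 9 and x0 + 2*x1 = 8, written as a x = b:
a = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(a, b)
print(x)                              # [2. 3.]
print(np.allclose(np.dot(a, x), b))   # True
```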
def tensorinv(a, ind=2):

Compute the 'inverse' of an N-dimensional array.

The result is an inverse for `a` relative to the tensordot operation
``tensordot(a, b, ind)``, i. e., up to floating-point accuracy,
``tensordot(tensorinv(a), a, ind)`` is the "identity" tensor for the
tensordot operation.

Parameters
----------
a : array_like
    Tensor to 'invert'. Its shape must be 'square', i. e.,
    ``prod(a.shape[:ind]) == prod(a.shape[ind:])``.
ind : int, optional
    Number of first indices that are involved in the inverse sum.
    Must be a positive integer, default is 2.

Returns
-------
b : ndarray
    `a`'s tensordot inverse, shape ``a.shape[ind:] + a.shape[:ind]``.

Raises
------
LinAlgError
    If `a` is singular or not 'square' (in the above sense).

See Also
--------
tensordot, tensorsolve

Examples
--------
>>> a = np.eye(4*6)
>>> a.shape = (4, 6, 8, 3)
>>> ainv = np.linalg.tensorinv(a, ind=2)
>>> b = np.random.randn(4, 6)
>>> np.allclose(np.tensordot(ainv, b), np.linalg.tensorsolve(a, b))
True

>>> a = np.eye(4*6)
>>> a.shape = (24, 8, 3)
>>> ainv = np.linalg.tensorinv(a, ind=1)
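The `ind=2` case above can be verified, including the result's transposed shape, with a short sketch:

```python
import numpy as np

a = np.eye(4 * 6).reshape(4, 6, 8, 3)   # 'square': prod((4, 6)) == prod((8, 3))
ainv = np.linalg.tensorinv(a, ind=2)
print(ainv.shape)                        # (8, 3, 4, 6)

# tensorinv agrees with tensorsolve for any right-hand side:
b = np.random.randn(4, 6)
print(np.allclose(np.tensordot(ainv, b), np.linalg.tensorsolve(a, b)))  # True
```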
Cholesky decomposition.

Return the Cholesky decomposition, `L * L.H`, of the square matrix `a`,
where `L` is lower-triangular and .H is the conjugate transpose operator
(which is the ordinary transpose if `a` is real-valued). `a` must be
Hermitian (symmetric if real-valued) and positive-definite. Only `L` is
actually returned.

Parameters
----------
a : array_like, shape (M, M)
    Hermitian (symmetric if all elements are real), positive-definite
    input matrix.

Returns
-------
L : ndarray, or matrix object if `a` is, shape (M, M)
    Lower-triangular Cholesky factor of a.

Raises
------
LinAlgError
    If the decomposition fails, for example, if `a` is not
    positive-definite.

Notes
-----
The Cholesky decomposition is often used as a fast way of solving

.. math:: A \\mathbf{x} = \\mathbf{b}

(when `A` is both Hermitian/symmetric and positive-definite).

First, we solve for :math:`\\mathbf{y}` in

.. math:: L \\mathbf{y} = \\mathbf{b},

and then for :math:`\\mathbf{x}` in

.. math:: L.H \\mathbf{x} = \\mathbf{y}.

Examples
--------
>>> A = np.array([[1,-2j],[2j,5]])
>>> A
array([[ 1.+0.j, 0.-2.j],
       [ 0.+2.j, 5.+0.j]])
>>> L = np.linalg.cholesky(A)
>>> L
array([[ 1.+0.j, 0.+0.j],
       [ 0.+2.j, 1.+0.j]])
>>> np.dot(L, L.T.conj()) # verify that L * L.H = A
array([[ 1.+0.j, 0.-2.j],
       [ 0.+2.j, 5.+0.j]])
>>> A = [[1,-2j],[2j,5]] # what happens if A is only array_like?
>>> np.linalg.cholesky(A) # an ndarray object is returned
array([[ 1.+0.j, 0.+0.j],
       [ 0.+2.j, 1.+0.j]])
>>> # But a matrix object is returned if A is a matrix object
>>> LA.cholesky(np.matrix(A))
matrix([[ 1.+0.j, 0.+0.j],
        [ 0.+2.j, 1.+0.j]])
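The two-triangular-solve recipe from the Notes can be sketched with a small real symmetric positive-definite matrix:

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 3.0]])   # symmetric positive-definite
L = np.linalg.cholesky(A)

print(np.allclose(L, np.tril(L)))        # L is lower-triangular
print(np.allclose(L @ L.T, A))           # L L.T reconstructs A

# Using the factor to solve A x = b via two triangular solves:
b = np.array([2.0, 1.0])
y = np.linalg.solve(L, b)                # L y = b
x = np.linalg.solve(L.T, y)              # L.T x = y
print(np.allclose(A @ x, b))             # True
```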
a, wrap = _makearray(a)
def qr(a, mode='full'):

Compute the qr factorization of a matrix.

Factor the matrix `a` as `qr`, where `q` is orthonormal
(:math:`dot( q_{:,i}, q_{:,j}) = \\delta_{ij}`, the Kronecker delta) and
`r` is upper-triangular.

Parameters
----------
a : array_like, shape (M, N)
    Matrix to be factored.
mode : {'full', 'r', 'economic'}
    Specifies the information to be returned. 'full' is the default.
    mode='r' returns a "true" `r`, while 'economic' returns a "polluted"
    `r` (albeit slightly faster; see Returns below).

Returns
-------
* If mode = 'full':

    * q : ndarray of float or complex, shape (M, K)
    * r : ndarray of float or complex, shape (K, N)

* If mode = 'r':

    * r : ndarray of float or complex, shape (K, N)

* If mode = 'economic':

    * a2 : ndarray of float or complex, shape (M, N)

      The diagonal and the upper triangle of a2 contains r,
      while the rest of the matrix is undefined.

Raises
------
LinAlgError
    If factoring fails.

Notes
-----
This is an interface to the LAPACK routines dgeqrf, zgeqrf,
dorgqr, and zungqr.

For more information on the qr factorization, see for example:
http://en.wikipedia.org/wiki/QR_factorization

Subclasses of `ndarray` are preserved, so if `a` is of type `matrix`,
all the return values will be matrices too.

Examples
--------
>>> a = np.random.randn(9, 6)
>>> q, r = np.linalg.qr(a)
>>> np.allclose(a, np.dot(q, r)) # a does equal qr
True
>>> r2 = np.linalg.qr(a, mode='r')
>>> r3 = np.linalg.qr(a, mode='economic')
>>> np.allclose(r, r2) # mode='r' returns the same r as mode='full'
True
>>> # But only triu parts are guaranteed equal when mode='economic'
>>> np.allclose(r, np.triu(r3[:6,:6], k=0))
True

Example illustrating a common use of `qr`: solving of least squares
problems.

What are the least-squares-best `m` and `y0` in ``y = y0 + mx`` for
the following data: {(0,1), (1,0), (1,2), (2,1)}. (Graph the points
and you'll see that it should be y0 = 1, m = 0.) The answer is provided
by solving the over-determined matrix equation ``Ax = b``, where::

  A = array([[0, 1], [1, 1], [1, 1], [2, 1]])
  x = array([[m], [y0]])
  b = array([[1], [0], [2], [1]])

If A = qr such that q is orthonormal (which is always possible via
Gram-Schmidt), then ``x = inv(r) * (q.T) * b``. (In numpy practice,
however, we simply use `lstsq`.)

>>> A = np.array([[0, 1], [1, 1], [1, 1], [2, 1]])
>>> b = np.array([1, 0, 2, 1])
>>> q, r = LA.qr(A)
>>> p = np.dot(q.T, b)
>>> np.dot(LA.inv(r), p)
array([ 1.1e-16, 1.0e+00])
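The defining properties of the factorization can be checked directly; a sketch assuming a modern NumPy, where the default mode returns the reduced factors with `q` of shape (M, K):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((9, 6))
q, r = np.linalg.qr(a)                     # reduced factors: q (9, 6), r (6, 6)

print(np.allclose(a, q @ r))               # a == q r
print(np.allclose(q.T @ q, np.eye(6)))     # columns of q are orthonormal
print(np.allclose(r, np.triu(r)))          # r is upper-triangular
```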
a, wrap = _makearray(a)
See Also
--------
eig : eigenvalues and right eigenvectors of general arrays
eigvalsh : eigenvalues of symmetric or Hermitian arrays.
eigh : eigenvalues and eigenvectors of symmetric/Hermitian arrays.

Notes
-----
This is a simple interface to the LAPACK routines dgeev and zgeev
that sets those routines' flags to return only the eigenvalues of
general real and complex arrays, respectively.

Examples
--------
Illustration, using the fact that the eigenvalues of a diagonal matrix
are its diagonal elements, that multiplying a matrix on the left
by an orthogonal matrix, `Q`, and on the right by `Q.T` (the transpose
of `Q`), preserves the eigenvalues of the "middle" matrix. In other words,
if `Q` is orthogonal, then ``Q * A * Q.T`` has the same eigenvalues as
``A``:

>>> from numpy import linalg as LA
>>> x = np.random.random()
>>> Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]])
>>> LA.norm(Q[0, :]), LA.norm(Q[1, :]), np.dot(Q[0, :],Q[1, :])
(1.0, 1.0, 0.0)

Now multiply a diagonal matrix by Q on one side and by Q.T on the other:

>>> D = np.diag((-1,1))
>>> A = np.dot(Q, D)
>>> A = np.dot(A, Q.T)
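The similarity-invariance argument above can be made concrete; a sketch with a fixed rotation angle instead of a random one:

```python
import numpy as np

x = 0.7   # any rotation angle works
Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]])
D = np.diag((-1.0, 1.0))
A = Q @ D @ Q.T                 # similar to D, so it has the same eigenvalues

# The eigenvalues here are real; take .real to drop any spurious
# zero imaginary parts before sorting.
w = np.sort(np.linalg.eigvals(A).real)
print(np.allclose(w, [-1.0, 1.0]))   # True
```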
a, wrap = _makearray(a)
def eigvalsh(a, UPLO='L'):

Compute the eigenvalues of a Hermitian or real symmetric matrix.

Main difference from eigh: the eigenvectors are not computed.

Parameters
----------
a : array_like, shape (M, M)
    A complex- or real-valued matrix whose eigenvalues are to be
    computed.
UPLO : {'L', 'U'}, optional
    Specifies whether the calculation is done with the lower triangular
    part of `a` ('L', default) or the upper triangular part ('U').

Returns
-------
w : ndarray, shape (M,)
    The eigenvalues, not necessarily ordered, each repeated according to
    its multiplicity.

See Also
--------
eigh : eigenvalues and eigenvectors of symmetric/Hermitian arrays.
eigvals : eigenvalues of general real or complex arrays.
eig : eigenvalues and right eigenvectors of general real or complex
      arrays.

Notes
-----
This is a simple interface to the LAPACK routines dsyevd and zheevd
that sets those routines' flags to return only the eigenvalues of
real symmetric and complex Hermitian arrays, respectively.

Examples
--------
>>> from numpy import linalg as LA
>>> a = np.array([[1, -2j], [2j, 5]])
>>> LA.eigvalsh(a)
array([ 0.17157288+0.j, 5.82842712+0.j])
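For this Hermitian matrix the eigenvalues are 3 ± 2√2 (trace 6, determinant 1), which can be checked directly; note that under a modern NumPy `eigvalsh` returns a real-valued array in ascending order:

```python
import numpy as np

a = np.array([[1, -2j], [2j, 5]])     # Hermitian: a == a.conj().T
w = np.linalg.eigvalsh(a)

print(np.allclose(w, [3 - 2 * np.sqrt(2), 3 + 2 * np.sqrt(2)]))   # True
```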
a, wrap = _makearray(a)
Compute the eigenvalues and right eigenvectors of a square array.

Parameters
----------
a : array_like, shape (M, M)
    A square array of real or complex elements.

Returns
-------
w : ndarray, shape (M,)
    The eigenvalues, each repeated according to its multiplicity.
    The eigenvalues are not necessarily ordered, nor are they
    necessarily real for real arrays (though for real arrays
    complex-valued eigenvalues should occur in conjugate pairs).

v : ndarray, shape (M, M)
    The normalized (unit "length") eigenvectors, such that the
    column ``v[:,i]`` is the eigenvector corresponding to the
    eigenvalue ``w[i]``.

See Also
--------
eigvalsh : eigenvalues of a symmetric or Hermitian (conjugate symmetric)
           array.
eigvals : eigenvalues of a non-symmetric array.

Notes
-----
This is a simple interface to the LAPACK routines dgeev and zgeev
which compute the eigenvalues and eigenvectors of, respectively,
general real- and complex-valued square arrays.

The number `w` is an eigenvalue of `a` if there exists a vector
`v` such that ``dot(a,v) = w * v``. Thus, the arrays `a`, `w`, and
`v` satisfy the equations ``dot(a[:,:], v[:,i]) = w[i] * v[:,i]``
for :math:`i \\in \\{0,...,M-1\\}`.

The array `v` of eigenvectors may not be of maximum rank, that is, some
of the columns may be linearly dependent, although round-off error may
obscure that fact. If the eigenvalues are all different, then theoretically
the eigenvectors are linearly independent. Likewise, the (complex-valued)
matrix of eigenvectors `v` is unitary if the matrix `a` is normal, i.e.,
if ``dot(a, a.H) = dot(a.H, a)``, where `a.H` denotes the conjugate
transpose of `a`.

Finally, it is emphasized that `v` consists of the *right* (as in
right-hand side) eigenvectors of `a`. A vector `y` satisfying
``dot(y.T, a) = z * y.T`` for some number `z` is called a *left*
eigenvector of `a`, and, in general, the left and right eigenvectors
of a matrix are not necessarily the (perhaps conjugate) transposes
of each other.

References
----------
G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, FL,
Academic Press, Inc., 1980, Various pp.

Examples
--------
>>> from numpy import linalg as LA

(Almost) trivial example with real e-values and e-vectors.

>>> w, v = LA.eig(np.diag((1, 2, 3)))
>>> w; v
array([ 1., 2., 3.])
array([[ 1., 0., 0.],
       [ 0., 1., 0.],
       [ 0., 0., 1.]])

Real matrix possessing complex e-values and e-vectors; note that the
e-values are complex conjugates of each other.

>>> w, v = LA.eig(np.array([[1, -1], [1, 1]]))
>>> w; v
array([ 1. + 1.j, 1. - 1.j])
array([[ 0.70710678+0.j , 0.70710678+0.j ],
       [ 0.00000000-0.70710678j, 0.00000000+0.70710678j]])

Complex-valued matrix with real e-values (but complex-valued e-vectors);
note that a.conj().T = a, i.e., a is Hermitian.

>>> a = np.array([[1, 1j], [-1j, 1]])
>>> w, v = LA.eig(a)
>>> w; v
array([ 2.00000000e+00+0.j, 5.98651912e-36+0.j]) # i.e., {2, 0}
array([[ 0.00000000+0.70710678j, 0.70710678+0.j ],
       [ 0.70710678+0.j , 0.00000000+0.70710678j]])

Be careful about round-off error!

>>> a = np.array([[1 + 1e-9, 0], [0, 1 - 1e-9]])
>>> # Theor. e-values are 1 +/- 1e-9
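The defining relation ``dot(a[:,:], v[:,i]) = w[i] * v[:,i]`` from the Notes can be verified column by column; a sketch using the 2 x 2 matrix from the conjugate-pair example:

```python
import numpy as np

a = np.array([[1.0, -1.0], [1.0, 1.0]])
w, v = np.linalg.eig(a)               # eigenvalues 1 + 1j and 1 - 1j

ok = all(np.allclose(a @ v[:, i], w[i] * v[:, i]) for i in range(2))
print(ok)                             # True
# Sort by imaginary part (complex numbers have no natural order):
print(np.allclose(sorted(w, key=lambda z: z.imag), [1 - 1j, 1 + 1j]))  # True
```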
a, wrap = _makearray(a)
def eigh(a, UPLO='L'):

Return the eigenvalues and eigenvectors of a Hermitian or symmetric matrix.

Returns two objects, a 1-D array containing the eigenvalues of `a`, and
a 2-D square array or matrix (depending on the input type) of the
corresponding eigenvectors (in columns).

Parameters
----------
a : array_like, shape (M, M)
    A complex Hermitian or real symmetric matrix.
UPLO : {'L', 'U'}, optional
    Specifies whether the calculation is done with the lower triangular
    part of `a` ('L', default) or the upper triangular part ('U').

Returns
-------
w : ndarray, shape (M,)
    The eigenvalues, not necessarily ordered.
v : ndarray, or matrix object if `a` is, shape (M, M)
    The column ``v[:, i]`` is the normalized eigenvector corresponding
    to the eigenvalue ``w[i]``.

See Also
--------
eigvalsh : eigenvalues of symmetric or Hermitian arrays.
eig : eigenvalues and right eigenvectors for non-symmetric arrays.
eigvals : eigenvalues of non-symmetric arrays.

Notes
-----
This is a simple interface to the LAPACK routines dsyevd and zheevd,
which compute the eigenvalues and eigenvectors of real symmetric and
complex Hermitian arrays, respectively.

The eigenvalues of real symmetric or complex Hermitian matrices are
always real. [1]_ The array `v` of (column) eigenvectors is unitary
and `a`, `w`, and `v` satisfy the equations
``dot(a, v[:, i]) = w[i] * v[:, i]``.

References
----------
.. [1] G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando,
       FL, Academic Press, Inc., 1980, pg. 222.

Examples
--------
>>> from numpy import linalg as LA
>>> a = np.array([[1, -2j], [2j, 5]])
>>> a
array([[ 1.+0.j, 0.-2.j],
       [ 0.+2.j, 5.+0.j]])
>>> w, v = LA.eigh(a)
>>> w; v
array([ 0.17157288, 5.82842712])
array([[-0.92387953+0.j , -0.38268343+0.j ],
       [ 0.00000000+0.38268343j, 0.00000000-0.92387953j]])

>>> np.dot(a, v[:, 0]) - w[0] * v[:, 0] # verify 1st e-val/vec pair
array([2.77555756e-17 + 0.j, 0. + 1.38777878e-16j])
>>> np.dot(a, v[:, 1]) - w[1] * v[:, 1] # verify 2nd e-val/vec pair
array([ 0.+0.j, 0.+0.j])

>>> A = np.matrix(a) # what happens if input is a matrix object
>>> A
matrix([[ 1.+0.j, 0.-2.j],
        [ 0.+2.j, 5.+0.j]])
>>> w, v = LA.eigh(A)
>>> w; v
array([ 0.17157288, 5.82842712])
matrix([[-0.92387953+0.j , -0.38268343+0.j ],
[ 0.00000000+0.38268343j, 0.00000000-0.92387953j]])
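The claims in the Notes (real eigenvalues, unitary `v`, and the defining equation) can all be checked at once; a short sketch:

```python
import numpy as np

a = np.array([[1, -2j], [2j, 5]])
w, v = np.linalg.eigh(a)

print(np.isrealobj(w))                           # eigenvalues are real
print(np.allclose(a @ v, v * w))                 # a v[:,i] == w[i] v[:,i]
print(np.allclose(v.conj().T @ v, np.eye(2)))    # v is unitary
```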
a, wrap = _makearray(a)
Singular Value Decomposition.

Factors the matrix `a` into ``u * np.diag(s) * v.H``, where `u` and `v`
are unitary (i.e., ``u.H = inv(u)`` and similarly for `v`), ``.H`` is the
conjugate transpose operator (which is the ordinary transpose for
real-valued matrices), and `s` is a 1-D array of `a`'s singular values.

Parameters
----------
a : array_like
    Matrix of shape ``(M, N)`` to decompose.
full_matrices : bool, optional
    If True (default), ``u`` and ``v.H`` have the shapes
    ``(M, M)`` and ``(N, N)``, respectively. Otherwise, the shapes
    are ``(M, K)`` and ``(K, N)``, resp., where ``K = min(M, N)``.
compute_uv : bool, optional
    Whether or not to compute ``u`` and ``v.H`` in addition to ``s``.
    True by default.

Returns
-------
u : ndarray
    Unitary matrix. The shape of `u` is ``(M, M)`` or ``(M, K)``
    depending on value of `full_matrices`.
s : ndarray
    The singular values, sorted so that ``s[i] >= s[i+1]``.
    `s` is a 1-D array of length ``min(M, N)``.
v.H : ndarray
    Unitary matrix of shape ``(N, N)`` or ``(K, N)``, depending
    on value of `full_matrices`.
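The reduced factorization and the ordering guarantee can be sketched in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((9, 6))

u, s, vh = np.linalg.svd(a, full_matrices=False)   # u (9, 6), s (6,), vh (6, 6)
print(np.allclose(a, (u * s) @ vh))                # u @ np.diag(s) @ vh == a
print(np.all(s[:-1] >= s[1:]))                     # s sorted descending
```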
Compute the condition number of a matrix.

This function is capable of returning the condition number using
one of seven different norms, depending on the value of `p` (see
Parameters below).

Parameters
----------
x : array_like, shape (M, N)
    The matrix whose condition number is sought.
p : {None, 1, -1, 2, -2, inf, -inf, 'fro'}, optional
    Order of the norm:

    =====  ============================
    p      norm for matrices
    =====  ============================
    None   2-norm, computed directly using the ``SVD``
    'fro'  Frobenius norm
    inf    max(sum(abs(x), axis=1))
    -inf   min(sum(abs(x), axis=1))
    1      max(sum(abs(x), axis=0))
    -1     min(sum(abs(x), axis=0))
    2      2-norm (largest sing. value)
    -2     smallest singular value
    =====  ============================

    inf means the numpy.inf object, and the Frobenius norm is
    the root-of-sum-of-squares norm.

Returns
-------
c : {float, inf}
    The condition number of the matrix. May be infinite.

See Also
--------
numpy.linalg.linalg.norm

Notes
-----
The condition number of `x` is defined as the norm of `x` times the
norm of the inverse of `x` [1]_; the norm can be the usual L2-norm
(root-of-sum-of-squares) or one of a number of other matrix norms.

References
----------
.. [1] G. Strang, *Linear Algebra and Its Applications*, Orlando, FL,
       Academic Press, Inc., 1980, pg. 285.

Examples
--------
>>> from numpy import linalg as LA
>>> a = np.array([[1, 0, -1], [0, 1, 0], [1, 0, 1]])
>>> LA.cond(a, 'fro')
>>> LA.cond(a, np.inf)
>>> LA.cond(a, -np.inf)
>>> min(LA.svd(a, compute_uv=0))*min(LA.svd(LA.inv(a), compute_uv=0))
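A diagonal matrix makes the norm table concrete, since its singular values are just the absolute diagonal entries; a quick sketch:

```python
import numpy as np

a = np.diag([1.0, 3.0])     # singular values are 1 and 3

print(np.isclose(np.linalg.cond(a), 3.0))            # default: s_max / s_min
print(np.isclose(np.linalg.cond(a, -2), 1.0 / 3.0))  # p=-2 uses smallest s.v.
```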
x = asarray(x) # in case we have a matrix
Calculate the generalized inverse of a matrix using its
singular-value decomposition (SVD) and including all
*large* singular values.

Parameters
----------
a : array_like, shape (M, N)
    Matrix to be pseudo-inverted.
rcond : float, optional
    Cutoff for small singular values.
    Singular values smaller (in modulus) than
    `rcond` * largest_singular_value (again, in modulus)
    are set to zero.

Returns
-------
B : ndarray, shape (N, M)
    The pseudo-inverse of `a`. If `a` is a `matrix` instance, then so
    is `B`.

Raises
------
LinAlgError
    If the SVD computation does not converge.

Notes
-----
The pseudo-inverse of a matrix A, denoted :math:`A^+`, is
defined as: "the matrix that 'solves' [the least-squares problem]
:math:`Ax = b`," i.e., if :math:`\\bar{x}` is said solution, then
:math:`A^+` is that matrix such that :math:`\\bar{x} = A^+b`.

It can be shown that if :math:`Q_1 \\Sigma Q_2^T = A` is the singular
value decomposition of A, then
:math:`A^+ = Q_2 \\Sigma^+ Q_1^T`, where :math:`Q_{1,2}` are
orthogonal matrices, :math:`\\Sigma` is a diagonal matrix consisting
of A's so-called singular values, (followed, typically, by
zeros), and then :math:`\\Sigma^+` is simply the diagonal matrix
consisting of the reciprocals of A's singular values
(again, followed by zeros). [1]_

References
----------
.. [1] G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando,
       FL, Academic Press, Inc., 1980, pp. 139-142.

Examples
--------
The following example checks that ``a * a+ * a == a`` and
``a+ * a * a+ == a+``:

>>> a = np.random.randn(9, 6)
>>> B = np.linalg.pinv(a)
>>> np.allclose(a, np.dot(a, np.dot(B, a)))
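Both Moore-Penrose identities named in the example can be checked together; a short sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((9, 6))
B = np.linalg.pinv(a)

print(np.allclose(a, a @ B @ a))   # a * a+ * a == a
print(np.allclose(B, B @ a @ B))   # a+ * a * a+ == a+
```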
def lstsq(a, b, rcond=-1):

Return the least-squares solution to a linear matrix equation.

Solves the equation `a x = b` by computing a vector `x` that minimizes
the norm `|| b - a x ||`. The equation may be under-, well-, or over-
determined (i.e., the number of linearly independent rows of `a` can be
less than, equal to, or greater than its number of linearly independent
columns). If `a` is square and of full rank, then `x` (but for round-off
error) is the "exact" solution of the equation.

Parameters
----------
a : array_like, shape (M, N)
    "Coefficient" matrix.
b : array_like, shape (M,) or (M, K)
    Ordinate or "dependent variable" values. If `b` is two-dimensional,
    the least-squares solution is calculated for each of the `K` columns
    of `b`.
rcond : float, optional
    Cut-off ratio for small singular values of `a`.
    Singular values are set to zero if they are smaller than `rcond`
    times the largest singular value of `a`.

Returns
-------
x : ndarray, shape (N,) or (N, K)
    Least-squares solution. The shape of `x` depends on the shape of
    `b`.
residues : ndarray, shape (), (1,), or (K,)
    Sums of residues; squared Euclidean norm for each column in
    ``b - a*x``.
    If the rank of `a` is < N or > M, this is an empty array.
    If `b` is 1-dimensional, this is a (1,) shape array.
    Otherwise the shape is (K,).
rank : int
    Rank of matrix `a`.
s : ndarray, shape (min(M,N),)
Singular values of `a`.
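A short sketch of a straight-line fit (the `rcond=None` argument follows the modern NumPy convention rather than this docstring's `rcond=-1` default):

```python
import numpy as np

# Fit y = m*x + y0 through (0, 1), (1, 0), (1, 2), (2, 1).
A = np.array([[0.0, 1.0], [1.0, 1.0], [1.0, 1.0], [2.0, 1.0]])  # columns: x, 1
b = np.array([1.0, 0.0, 2.0, 1.0])

x, residues, rank, s = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, [0.0, 1.0]))   # best fit is the horizontal line y = 1
print(rank)                          # 2
```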
Matrix or vector norm.

This function is able to return one of seven different matrix norms,
or one of an infinite number of vector norms (described below), depending
on the value of the ``ord`` parameter.

Parameters
----------
x : array_like, shape (M,) or (M, N)
    Input array.
ord : {non-zero int, inf, -inf, 'fro'}, optional
    Order of the norm (see table under ``Notes``). inf means numpy's
    `inf` object.

Returns
-------
n : float
    Norm of the matrix or vector.

Notes
-----
For values of ``ord <= 0``, the result is, strictly speaking, not a
mathematical 'norm', but it may still be useful for various numerical
purposes.

The following norms can be calculated:

=====  ============================  ==========================
ord    norm for matrices             norm for vectors
=====  ============================  ==========================
None   Frobenius norm                2-norm
'fro'  Frobenius norm                --
inf    max(sum(abs(x), axis=1))      max(abs(x))
-inf   min(sum(abs(x), axis=1))      min(abs(x))
0      --                            sum(x != 0)
1      max(sum(abs(x), axis=0))      as below
-1     min(sum(abs(x), axis=0))      as below
2      2-norm (largest sing. value)  as below
-2     smallest singular value       as below
other  --                            sum(abs(x)**ord)**(1./ord)
=====  ============================  ==========================

The Frobenius norm is given by [1]_:

:math:`||A||_F = [\\sum_{i,j} abs(a_{i,j})^2]^{1/2}`

References
----------
.. [1] G. H. Golub and C. F. Van Loan, *Matrix Computations*,
       Baltimore, MD, Johns Hopkins University Press, 1985, pg. 15

Examples
--------
>>> from numpy import linalg as LA
>>> a = np.arange(9) - 4
>>> a
array([-4, -3, -2, -1, 0, 1, 2, 3, 4])
>>> b = a.reshape((3, 3))
>>> b
array([[-4, -3, -2],
       [-1, 0, 1],
       [ 2, 3, 4]])

>>> LA.norm(b, 'fro')
7.745966692414834
>>> LA.norm(a, np.inf)
4.0
>>> LA.norm(b, np.inf)
9.0
>>> LA.norm(a, -np.inf)
0.0
>>> LA.norm(b, -np.inf)
2.0

-4.6566128774142013e-010

1.8570331885190563e-016
if ord is None: # check the default case first and handle it immediately
    return sqrt(add.reduce((x.conj() * x).ravel().real))

elif ord == Inf:
    return abs(x).max()
elif ord == -Inf:
    return abs(x).min()
elif ord == 0:
    return (x != 0).sum() # Zero norm
elif ord == 1:
    return abs(x).sum() # special case for speedup
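The special cases handled by the code above map directly onto the public `np.linalg.norm`; a quick sketch of the vector-norm rows of the table:

```python
import numpy as np

a = np.array([3.0, -4.0])
print(np.isclose(np.linalg.norm(a), 5.0))           # default: 2-norm
print(np.isclose(np.linalg.norm(a, np.inf), 4.0))   # max(abs(x))
print(np.linalg.norm(a, 0))                          # count of nonzeros: 2.0

b = np.eye(3)
print(np.isclose(np.linalg.norm(b, 'fro'), np.sqrt(3.0)))  # Frobenius norm
```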