Matrix inversion for matrices with large values in Python


I'm doing matrix inversion in Python, and I found it very strange that the result differs depending on the scale of the data.

In the code below, B is just A scaled by a factor of 10^k, so B_inv should be A_inv / 10^k and the elementwise ratios A_inv/B_inv and B/A should both equal 10^k. However, the difference between A_inv/B_inv and B/A grows larger and larger as the data scale increases... Is this because Python cannot compute the matrix inverse precisely for matrices with large values?

Also, I checked the condition number of B, which stays constant at ~3.016 regardless of the scale.
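That constant value makes sense, since the condition number is invariant under scaling by a nonzero constant (cond(c*B) = ||c*B|| * ||(c*B)^-1|| = cond(B)). A quick sketch to confirm, separate from the main script below:

import numpy as np

np.random.seed(10)
A = np.cov(np.random.sample([30, 300]))

# Scaling by a nonzero constant does not change the condition number.
print(np.linalg.cond(A), np.linalg.cond(1e6 * A))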

Thanks!!!

import numpy as np
from matplotlib import pyplot as plt

D = 30
N = 300

np.random.seed(10)
original_data = np.random.sample([D, N])  # D variables, N observations, values in [0, 1)
A = np.cov(original_data)                 # D x D covariance matrix of the rows
A_inv = np.linalg.inv(A)


B_cond = []
diff = []

for k in range(1, 10):                   # scale A by 10**1 up to 10**9
    B = A * np.power(10, k)
    B_cond.append(np.linalg.cond(B))     # condition number of the scaled matrix
    B_inv = np.linalg.inv(B)

    ### Two measurements of difference are used

    diff.append(np.log(np.linalg.norm(A_inv/B_inv - B/A)))
    #diff.append(np.max(np.abs(A_inv/B_inv - B/A)))



# print B_cond

plt.figure()
plt.plot(range(1, 10), diff)
plt.xlabel('data(B) / data(A)')
plt.ylabel('log(||A_inv/B_inv - B/A||)')
plt.savefig('Inversion for large matrix')

There are 2 answers below.

Answer 1:

I may be wrong, but I think it comes from how numbers are represented in the machine. When you are dealing with very large numbers, the entries of your inverse matrix become very small in magnitude (close to zero). And close to zero, the floating-point representation is not precise enough, I guess... https://en.wikipedia.org/wiki/Floating-point_arithmetic
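One way to see the finite-precision effect: the gap between adjacent representable float64 values grows with the magnitude of the number, so the absolute rounding error in quantities of size 10**k grows with k even though the relative error stays near machine epsilon (~2.2e-16). A small sketch:

import numpy as np

for k in range(0, 10, 3):
    x = 10.0 ** k
    # np.spacing(x) is the distance from x to the next representable
    # float64, i.e. the absolute rounding granularity at magnitude x.
    print(k, np.spacing(x), np.spacing(x) / x)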

Answer 2:

There is no reason to expect np.linalg.norm(A_inv/B_inv - B/A) to be equal to anything special. Instead, you can check the quality of the inverse calculation by multiplying the original matrix by its inverse and checking the determinant, np.linalg.det(A.dot(A_inv)), which should be equal to 1.
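A sketch of that check, reusing the setup from the question: the determinant should stay close to 1 at every scale, which suggests the inverses themselves are fine and the growing norm in the question comes from comparing numbers whose magnitude itself grows like 10**k.

import numpy as np

np.random.seed(10)
A = np.cov(np.random.sample([30, 300]))

for k in range(1, 10):
    B = A * 10.0 ** k
    B_inv = np.linalg.inv(B)
    # det(B.dot(B_inv)) should be ~1 if the inverse is computed accurately.
    print(k, np.linalg.det(B.dot(B_inv)))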