Is it possible to use orthogonal distance regression in a multiple regression fit where one of the independent variables is perfectly measured (i.e. has no uncertainty)?
Here's an example of what I mean: y is a function of x_0 and x_1. Both y and x_1 are measured with uncertainties, but x_0 has no uncertainty (e.g. date of measurement). To represent this, I set the uncertainty of x_0 to be an array of zeros. I use this code to evaluate my problem:
import numpy as np
from scipy import odr

def model_test(B, x):
    x_0, x_1 = x[0], x[1]
    return B[0]*x_0 + B[1]*x_1**2 + B[2]

N = 1000
x_0 = np.linspace(0, 100, N)
x_1 = np.linspace(0, 100, N)
x = np.row_stack([x_0, x_1])
sx = np.row_stack([np.full(N, 0.0), np.random.random(N)])
sy = np.random.random(N)
y = model_test([3.0, 4.0, 5.0], x)

model = odr.Model(model_test)
data = odr.RealData(x=x, y=y, sx=sx, sy=sy)
odr_test = odr.ODR(data=data, model=model, beta0=[1.0, 2.0, 3.0])
output = odr_test.run()
output.pprint()
When I run this, I get a divide-by-zero warning and the fit fails with NaN parameters:
RuntimeWarning: divide by zero encountered in true_divide
  return 1./numpy.power(sd, 2)
Beta: [nan nan nan]
Beta Std Error: [0. 0. 0.]
Beta Covariance: [[0. 0. 0.]
                  [0. 0. 0.]
                  [0. 0. 0.]]
Residual Variance: 0.0
Inverse Condition #: 0.001067101765072725
Reason(s) for Halting:
  Numerical error detected
Is there an issue with how I've defined my variables, or is this a fundamental limitation of ODR? If the latter, what other methods could be used for this purpose?
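For reference, here is a variant I've been experimenting with, based on the ifixx argument to odr.ODR (as I read the docs, a value of 0 marks an input dimension as fixed, i.e. treated as error-free). I give x_0 a positive placeholder sx purely so the weight computation 1/sx**2 doesn't divide by zero; the placeholder value shouldn't matter if that dimension is fixed. I'm not sure this is the intended way to handle an exact regressor:

```python
import numpy as np
from scipy import odr

def model_test(B, x):
    x_0, x_1 = x[0], x[1]
    return B[0]*x_0 + B[1]*x_1**2 + B[2]

N = 1000
x_0 = np.linspace(0, 100, N)
x_1 = np.linspace(0, 100, N)
x = np.vstack([x_0, x_1])

rng = np.random.default_rng(0)
# Placeholder uncertainty for x_0: any positive value, since ifixx
# (below) should make ODR ignore errors in that dimension (my assumption).
# Offsetting the random uncertainties by 0.1 keeps them safely positive.
sx = np.vstack([np.full(N, 1.0), rng.random(N) + 0.1])
sy = rng.random(N) + 0.1
y = model_test([3.0, 4.0, 5.0], x)

model = odr.Model(model_test)
data = odr.RealData(x=x, y=y, sx=sx, sy=sy)
# ifixx: 0 = treat this input dimension as fixed, 1 = free;
# a length-m sequence applies to all observations.
odr_test = odr.ODR(data=data, model=model, beta0=[1.0, 2.0, 3.0],
                   ifixx=np.array([0, 1]))
output = odr_test.run()
print(output.beta)
```

This at least runs without the warning and seems to recover the true parameters [3, 4, 5] on synthetic data, but I'd still like to know whether this is the sanctioned approach or whether ODR is simply the wrong tool here.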