How to mitigate floating point error in recursive DSP calculations


I have implemented a modified linear predictive coding (LPC) function like this:

import numpy
from numpy.typing import NDArray


def modified_lpc_process(s: NDArray, p: int, fs: int) -> NDArray:
    a = lpc_coefficients(s, p)              # order-p LPC coefficients (helper not shown)
    a_hat = modified_pole_coeffs(a, fs=fs)  # coefficients with modified poles (helper not shown)
    # a_hat = a  # debug line
    s_hat = numpy.zeros(len(s))

    for n, sn in enumerate(s):
        # analysis (FIR): residual u[n] = s[n] + sum_k a[k] * s[n - 1 - k]
        as_accum = 0.0
        for k in range(len(a)):
            if n - 1 - k < 0:
                break
            as_accum += a[k] * s[n - 1 - k]
        un = sn + as_accum

        # synthesis (recursive, IIR): s_hat[n] = u[n] - sum_k a_hat[k] * s_hat[n - 1 - k]
        as_hat_accum = 0.0
        for k in range(len(a_hat)):
            if n - 1 - k < 0:
                break
            as_hat_accum += a_hat[k] * s_hat[n - 1 - k]

        s_hat[n] = -as_hat_accum + un
        s_hat[n] = round(s_hat[n], 8)  # crude attempt to limit drift by rounding

    return s_hat

In the notation of the code above, this essentially implements the analysis/synthesis recursion

$$u[n] = s[n] + \sum_{k=1}^{p} a_k\, s[n-k], \qquad \hat{s}[n] = u[n] - \sum_{k=1}^{p} \hat{a}_k\, \hat{s}[n-k],$$

where $a_k$ are the LPC coefficients and $\hat{a}_k$ the modified-pole coefficients.

I have noticed that the floating point error in this operation quickly blows up, and I end up with sample values that are far too large. To debug this, I set the modified pole coefficients a_hat equal to a (the commented-out debug line above), expecting s_hat to come out identical to s; but the blow-up is still there. I have also just realised that in floating point, adding a number and then subtracting the same number does not necessarily give back the original value, and in a recursive computation like this those small differences can accumulate into large ones.
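A minimal illustration of that last point with ordinary Python floats (IEEE 754 doubles), unrelated to my code:

x, y = 0.1, 0.2
print((x + y) - y)        # 0.10000000000000003, not 0.1
print((x + y) - y == x)   # False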

What is the best practice in this case for mitigating this error accumulation?
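For example, would compensated (Kahan) summation of the inner accumulations be the right tool here, or is there a more standard approach for recursive filters? A rough sketch of what I mean (kahan_sum is just an illustrative helper I made up, not something from my code):

def kahan_sum(terms) -> float:
    # Compensated (Kahan) summation: carry a running correction term so that
    # low-order bits lost in each addition are re-added on later iterations.
    total = 0.0
    comp = 0.0
    for t in terms:
        y = t - comp
        tmp = total + y
        comp = (tmp - total) - y
        total = tmp
    return total

# The synthesis accumulation would then become something like:
# as_hat_accum = kahan_sum(a_hat[k] * s_hat[n - 1 - k] for k in range(min(len(a_hat), n)))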
