```python
import numpy as np
from fxpmath import Fxp as fxp

t1 = fxp(4, False, 9, 3)               # unsigned, 9-bit word, 3 fractional bits
t2 = fxp(5, False, 9, 3)               # unsigned, 9-bit word, 3 fractional bits
inter1 = fxp(0, True, 9, 3, raw=True)  # signed, same size
inter2 = fxp(0, True, 9, 3, raw=True)  # signed, same size

inter1(t1 - t2)                        # 0.0: the unsigned subtraction cannot go below zero
inter2(t1.get_val() - t2.get_val())    # -1.0: computed on the underlying numpy values
```
`inter1` is equal to 0.00 because the two numbers used in the calculation are unsigned. The only way I found to get the correct result is to use the numpy values (from `get_val()`). Is there a better way to do this? I don't want to make `t1` or `t2` signed.
The solution proposed by @jasonharper is right, but it might not be what you're trying to model.
When you do something like this (a sketch; the exact code may differ):
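```python
# Assumed reconstruction of that approach, continuing from the code in the
# question: rebuild both operands as signed Fxp objects of the same size
# before subtracting.
inter1(fxp(t1, True, 9, 3) - fxp(t2, True, 9, 3))   # inter1 becomes -1.0
```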
You are converting `t1` and `t2` to signed just like `inter1`, so you are performing a signed subtraction, and I think that's not what you want to model.

Casting an unsigned fxp to a signed one forces the msb (most significant bit) to 0 if you keep the size. That will generate a wrong calculation if `t1` and/or `t2` are equal to or bigger than the signed upper limit (31.875 for your example). For example:
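```python
# Sketch (with fxpmath's default 'saturate' overflow): an unsigned value above
# the signed upper limit cannot survive the cast, so the operand is already
# wrong before the subtraction happens.
t1 = fxp(40, False, 9, 3)               # 40.0 fits as unsigned 9.3 (max 63.875)
t2 = fxp(5, False, 9, 3)

t1_signed = fxp(t1, True, 9, 3)         # clips to 31.875, the signed 9.3 maximum
print(t1_signed)                        # 31.875
print(t1_signed - fxp(t2, True, 9, 3))  # 26.875 instead of the expected 35.0
```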
If you are trying to model an unsigned subtraction and then store that value in a signed fxp, you have to use wrap overflow. Additionally, if you want to keep the size in the subtraction (or any other operation), you have to set `op_sizing = 'same'`:
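```python
# Sketch: model the unsigned, wrapping subtraction and keep the 9.3 size
# (overflow and op_sizing are assumed to be accepted as Fxp constructor kwargs).
t1 = fxp(4, False, 9, 3, overflow='wrap', op_sizing='same')
t2 = fxp(5, False, 9, 3, overflow='wrap', op_sizing='same')

sub = t1 - t2   # still unsigned 9.3, wraps around: 4.0 - 5.0 -> 63.0
print(sub)      # 63.0
```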
If you check the binary format, you'll see that the subtraction is correct:
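```python
# Continuing from the previous snippet:
print(sub.bin(frac_dot=True))   # 111111.000 -> raw 504, i.e. 63.0 as unsigned
```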
Note that the msb is equal to 1, so when you cast this value as a signed fxp it will be negative:
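```python
# Sketch: a signed destination with wrap overflow keeps the raw bits, so the
# wrapped unsigned result is reinterpreted as a two's-complement value.
inter1 = fxp(0, True, 9, 3, overflow='wrap')
inter1(sub)                       # 63.0 wraps into the signed range
print(inter1)                     # -1.0
print(inter1.bin(frac_dot=True))  # 111111.000 (same bits, now read as signed)
```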
Now `inter1` holds -1.0, which is the result you were looking for.
Finally, let me suggest a more elegant way to keep the same fxp sizes throughout your code (the sketch below uses a shared template object):
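```python
# One possible pattern (an assumption, not necessarily the original answer's
# code): define a single reference Fxp that carries the size and behaviour,
# and build every variable from it with `like`.
from fxpmath import Fxp

ref = Fxp(None, False, 9, 3, overflow='wrap', op_sizing='same')  # unsigned 9.3 template

t1 = Fxp(4, like=ref)
t2 = Fxp(5, like=ref)
inter1 = Fxp(0, True, 9, 3, overflow='wrap')   # signed destination

inter1(t1 - t2)
print(inter1)   # -1.0
```
This way, changing the word or fractional size in one place (`ref`) propagates to every variable built from it.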