I have a simple Python test that sums the items of a list, which I also put into a torch Tensor:
import torch

lst = [0.0014, -0.0306, 0.0005, 0.0011, 0.0012, 0.0022, 0.0017, 0.0011,
0.0017, 0.0011, 0.0012, 0.0017, 0.0014, 0.0015, 0.0010, 0.0006,
0.0006, 0.0004, 0.0009, 0.0007, 0.0008, 0.0007, 0.0013, 0.0013,
0.0015, 0.0023, 0.0006]
LEN=27
trch = torch.Tensor([lst])
print('--------------------------------------------------------------')
print(trch.sum(1, keepdim=True))
print(sum(lst))
print(trch @ torch.ones((LEN,1)))
print(torch.mm( trch , torch.ones((LEN,1))))
trch_sum = 0
for num in lst:
    trch_sum += num
print(trch_sum)
and I get the following (reasonable) results:
tensor([[-0.0001]])
-9.999999999999912e-05
tensor([[-9.9999e-05]])
tensor([[-9.9999e-05]])
-9.999999999999972e-05
However, changing the last number of the list to 0.0007 (which makes the exact decimal sum zero) changes the results significantly:
tensor([[-9.3132e-10]])
9.215718466126788e-19
tensor([[1.1642e-09]])
tensor([[1.1642e-09]])
3.2526065174565133e-19
I appreciate that this is a floating point situation, but is there a way to improve it?
This is a numerics issue. When you create the torch tensor from lst with torch.Tensor, the values are automatically cast to fp32 (single precision), which loses precision relative to Python's built-in sum, which accumulates in fp64. You can improve precision by explicitly constructing the tensor as fp64, but some rounding error will remain, because these decimal values cannot be represented exactly in binary floating point.
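A minimal sketch of that approach, reusing the lst and LEN defined above (note that torch.tensor, lowercase, accepts an explicit dtype, whereas torch.Tensor always produces float32):

# Sketch: build the tensor in double precision and keep the ones() operand
# in the same dtype so the matrix products stay in float64.
trch64 = torch.tensor([lst], dtype=torch.float64)

print(trch64.dtype)                     # torch.float64
print(trch64.sum(1, keepdim=True))
print(trch64 @ torch.ones((LEN, 1), dtype=torch.float64))
print(torch.mm(trch64, torch.ones((LEN, 1), dtype=torch.float64)))

With float64 the tensor results should track Python's built-in sum much more closely, though the individual literals are still rounded to the nearest binary double, so tiny residuals can remain when the exact sum is zero.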