Why is Z3 "rounding" small reals to 1.0/0.0?

I'm using Z3's Python API to do some linear real arithmetic. I've encountered a situation where real coefficients very close to zero are somehow converted to 1.0/0.0, which in turn causes a floating point exception inside the C++ part of Z3.

For instance I have the following Python program:

from z3 import *
s = Solver()
s.add(0.00001 * Real("a") + 0.00001 * Real("b") > 0.0)
print(s.to_smt2())
result = s.check()
print(result)
print(s.model())

which produces the following output:

; benchmark generated from python API
(set-info :status unknown)
(declare-fun b () Real)
(declare-fun a () Real)
(assert
 (let ((?x10 (+ (* (/ 1.0 0.0) a) (* (/ 1.0 0.0) b))))
 (> ?x10 0.0)))
(check-sat)

Floating point exception (core dumped)

If I instead replace line 3 (the s.add call) with

s.add(Q(1, 100000) * Real("a") + Q(1, 100000) * Real("b") > 0.0)

I get my expected output.

Does anyone have an explanation, and a way to get this working with normal Python floats?

2 Answers

BEST ANSWER

So as it turns out, Z3 does not take the float as is, but instead converts it into a string and then tries to parse that string. For small numbers, Python's str() defaults to scientific notation, which Z3 apparently cannot parse properly. Using "{:.20f}".format(my_small_float) or a similar explicit conversion solved my problem.
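
As a quick illustration, here is a minimal sketch of both the symptom and the fix. Feeding the formatted string back in through RealVal is my assumption about how to apply the workaround in z3py; the answer only says an explicit conversion was used:

from z3 import Real, RealVal, Solver

x = 0.00001
print(str(x))               # '1e-05' -- scientific notation, which trips up Z3's parser
print("{:.20f}".format(x))  # '0.00001000000000000000' -- plain fixed-point notation

s = Solver()
# RealVal parses the fixed-point string into an exact rational (assumed workaround).
coeff = RealVal("{:.20f}".format(x))
s.add(coeff * Real("a") + coeff * Real("b") > 0.0)
print(s.check())  # sat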

ANSWER

I cannot replicate this. When I run your program, I get:

; benchmark generated from python API
(set-info :status unknown)
(declare-fun b () Real)
(declare-fun a () Real)
(assert
 (let ((?x10 (+ (* (/ 1.0 100000.0) a) (* (/ 1.0 100000.0) b))))
 (> ?x10 0.0)))
(check-sat)

sat
[b = 50000, a = 0]

which does not have the (/ 1.0 0.0) term you're talking about.

What version of z3 are you using? Perhaps you've got an old version with a bug that has since been fixed. I'm on 4.8.13 (from GitHub master), though you could also go with the latest official release, 4.8.12.

NB: Having said that, I think your "workaround" of using Q(1, 100000) is actually preferable, since it avoids all sorts of floating-point precision problems. Unless you're solving a floating-point logic problem, you should really never convert floats this way, as there will be gotchas about representation issues.
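
For reference, a minimal sketch of that rational-coefficient version; the printed model matches the output quoted above, though Z3 is free to return any satisfying assignment:

from z3 import Q, Real, Solver

s = Solver()
a, b = Real("a"), Real("b")
# Q(1, 100000) constructs the exact rational 1/100000 -- no float round trip involved.
s.add(Q(1, 100000) * a + Q(1, 100000) * b > 0)
print(s.check())  # sat
print(s.model())  # e.g. [b = 50000, a = 0]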