I've found a serious and (to me) unexplainable bug in the evalf function when evaluating a symbolic determinant of a matrix; see the MWE below:
import sympy

z1, z2, z3, mv1, mv2, mv3, R, T, D12, D23, D13, D14, D24, D34, x1, x2, x3 = sympy.symbols(
    'z1 z2 z3 mv1 mv2 mv3 R T D12 D23 D13 D14 D24 D34 x1 x2 x3')

matrix = sympy.Matrix([
    [x2/D12 + x3/D13 + 1/D14, -x2/D12, -x3/D13, mv1/R/T],
    [-x1/D12, x1/D12 + x3/D23 + 1/D24, -x3/D23, mv2/R/T],
    [-x1/D13, -x2/D23, x1/D13 + x2/D23 + 1/D34, mv3/R/T],
    [z1*x1, z2*x2, z3*x3, 0]])

vals = {z1: -1, z2: -1, z3: 1, mv1: 50E-6, mv2: 90E-6, mv3: 150E-6,
        R: 8.314, T: 298, D12: 1E-9, D23: 9E-9, D13: 3E-9,
        D14: 0.165171466666667, D24: 0.0917619259259259, D34: 0.0550571555555556,
        x1: 0.3, x2: 0.2, x3: 0.5}

det1 = sympy.det(matrix.evalf(subs=vals))  # method 1: evalf the matrix entries, then det
det2 = sympy.det(matrix).evalf(subs=vals)  # method 2: symbolic det, then evalf with subs
det3 = sympy.det(matrix).subs(vals)        # method 3: symbolic det, then plain subs

print(det1)
print(det2)
print(det3)
This produces:
det1: 3.05615930451006e-7
det2: -439498375.118201
det3: 0
showing that evalf() on the symbolic output of sympy.det() has a serious bug: all three answers should be near zero (to within ~1e-7 of numerical error). Methods 1 and 3 behave as expected, but method 2 (det2) is off by roughly fifteen orders of magnitude. Using higher precision with evalf doesn't help.
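To quantify how delicate this evaluation is, here is a quick diagnostic sketch (reusing the vals dict from the MWE above; sympy.Add.make_args just splits the expanded determinant into its additive terms):

det_sym = sympy.expand(sympy.det(matrix))
# substitute the values into each additive term separately
terms = [t.subs(vals) for t in sympy.Add.make_args(det_sym)]
print(max(abs(t) for t in terms))  # size of the largest individual term
print(sum(terms))                  # their nearly-cancelling sum

If the largest term is many orders of magnitude above the expected ~1e-7 result, the evaluation involves massive cancellation, which may be related to whatever det2 is tripping over.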
Grateful for any help: have I made a mistake here, or is this a genuine bug in sympy?
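For what it's worth, a workaround sketch that should at least give a trustworthy reference value (assuming the substitution values are meant as exact decimals; sympy.nsimplify with rational=True converts each float to the equivalent Rational, so the determinant is evaluated in exact arithmetic):

exact_vals = {s: sympy.nsimplify(v, rational=True) for s, v in vals.items()}
det_exact = sympy.det(matrix).subs(exact_vals)  # exact Rational arithmetic, no float rounding
print(det_exact.evalf())                        # convert to a float only at the very end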