Posted by todsacerdoti 4 days ago
/* Simplifications of operations with one constant operand and
simplifications to constants or single values. */
(for op (plus pointer_plus minus bit_ior bit_xor)
(simplify
(op @0 integer_zerop)
(non_lvalue @0)))
https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/match.pd
The floating-point case can be more complicated because of signed zeros and signaling NaNs; see fold_real_zero_addition_p.
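The signed-zero hazard is easy to demonstrate from Python (a sketch of why the match.pd rule above is restricted to integer_zerop):

```python
import math

# Folding x + 0.0 to x is unsound when x == -0.0: under the default
# round-to-nearest mode, -0.0 + 0.0 yields +0.0, not -0.0.
x = -0.0
print(math.copysign(1.0, x))         # -1.0: x carries a negative sign
print(math.copysign(1.0, x + 0.0))   # +1.0: adding +0.0 flipped the sign

# Adding -0.0, by contrast, preserves the sign of x; this is one of the
# distinctions fold_real_zero_addition_p has to make.
print(math.copysign(1.0, x + -0.0))  # -1.0
```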
And does the Z3 verification report differences in output caused by optimizations that minimize float-rounding error?
https://docs.python.org/3/library/math.html#math.fma :
> math.fma(x, y, z)
> Fused multiply-add operation. Return (x * y) + z, computed as though with infinite precision and range followed by a single round to the float format. This operation often provides better accuracy than the direct expression (x * y) + z.
> This function follows the specification of the fusedMultiplyAdd operation described in the IEEE 754 standard.
(But yes, turning the Python expression x*y+z into an fma call would not be a legal optimization for the JIT anyway, and Z3 would rightfully complain: the results must be bitwise identical before and after optimization.)
math.fma() would be a different function call with the same or similar output.
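The output really can differ bitwise. math.fma itself is only available from Python 3.13, but the single-rounding behaviour it specifies can be reproduced exactly with the fractions module (a sketch; the values 0.1, 10.0, -1.0 are just a convenient example):

```python
from fractions import Fraction

x, y, z = 0.1, 10.0, -1.0

# Direct expression: x*y rounds to exactly 1.0, so the sum is 0.0.
direct = x * y + z

# fma-style: compute x*y + z exactly, then round once at the end.
# Fraction(0.1) is the exact binary value of the double nearest 0.1,
# so this mirrors "infinite precision followed by a single round".
fused = float(Fraction(x) * Fraction(y) + Fraction(z))

print(direct)  # 0.0
print(fused)   # 5.551115123125783e-17, i.e. 2**-54
```

So replacing x*y+z with a fused version changes the observable result here, which is exactly why the JIT can't do it behind the program's back.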
(deal-solver is a tool for verifying formal implementations in Python with Z3.)
How does the DSL handle side effects? I'm assuming any term here could be a compound expression, so what part of the system prevents "0*callfn()" from reducing to "0"?
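To make the concern concrete, here is a minimal sketch (callfn is a hypothetical function, not from the original discussion) of the side effect that folding 0*callfn() to 0 would drop:

```python
calls = 0

def callfn():
    # Observable side effect: an optimizer that rewrites 0 * callfn()
    # to the constant 0 would silently skip this increment.
    global calls
    calls += 1
    return 7

result = 0 * callfn()
print(result, calls)  # 0 1 -- the value folds, but the call must still happen
```

Any sound rewrite system either restricts such rules to operands it can prove are pure, or keeps the discarded operand's evaluation for its effects.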
I suppose the notation of the rule heads makes it look like there actually are expression trees, which may be confusing.