
Why Allow Cast(float) If It Isn't Supposed To Work?

Answer:

The floating point rules are such that transforming `cast(real)cast(float)` into `cast(real)` is a valid transformation, because the rules are written with the following principle in mind:

An algorithm is invalid if it breaks when the floating point precision is increased. Floating point precision is always a minimum, not a maximum.

The only programs that legitimately depend on a maximum precision are:

  1. compiler/library validation test suites
  2. programs trying to programmatically test the precision

Neither is of value to user programming, and there are alternate ways to test the precision: D has floating point `.properties` (such as `.epsilon` and `.dig`) that take care of that.

Programs that rely on maximum accuracy need to be rethought and reengineered.



