Since the difference between `1.0 + 0.000001` and `1.0` is larger than `0.000000001`, I would expect `Num.isApproxEq (1.0 + 0.000001) 1.0 { atol: 0.000000001 }` to return `Bool.false`. It returns `Bool.true`. (I'm using a roc binary that I compiled myself on Feb 28, 2024.)
Adding a little bit of formatting, `Num.isApproxEq` is implemented as follows:

```roc
isApproxEq = \value, refValue, { rtol ? 0.00001, atol ? 0.00000001 } ->
    (value <= refValue && value >= refValue)
        || Num.absDiff value refValue <= atol + rtol * Num.abs refValue
```
I believe that `(value <= refValue && value >= refValue)` evaluates to `Bool.false` here, so we only have to look at `Num.absDiff value refValue <= atol + rtol * Num.abs refValue`. For the example given above, this expression becomes `0.000001 <= 0.000000001 + 0.00001 * 1.0`, which evaluates to `Bool.true`: the default relative tolerance dominates the specified absolute tolerance. Is this expected behaviour?
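The surprising result can be reproduced outside Roc. Here is a small Python sketch (the function name and keyword defaults are mine; the formula mirrors the Roc implementation above):

```python
# Python sketch of Roc's isApproxEq formula (names are mine, not Roc's):
# |value - refValue| <= atol + rtol * |refValue|, with an exact-equality short-circuit.
def is_approx_eq(value, ref_value, rtol=0.00001, atol=0.00000001):
    return value == ref_value or abs(value - ref_value) <= atol + rtol * abs(ref_value)

# Reproduces the report: atol is tightened to 1e-9, but the default rtol
# term (0.00001 * 1.0) still admits the difference of ~0.000001.
print(is_approx_eq(1.0 + 0.000001, 1.0, atol=0.000000001))  # True

# Zeroing rtol makes the tightened atol take effect.
print(is_approx_eq(1.0 + 0.000001, 1.0, rtol=0.0, atol=0.000000001))  # False
```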
Mixing default values with specified values feels a little odd to me. When both absolute and relative tolerance are specified, adding `atol` and `rtol * Num.abs refValue` is also somewhat unexpected: in that case, I would expect the result to be true if and only if both conditions hold individually. Maybe default values should only be applied if neither absolute nor relative tolerance is specified?
It is expected behaviour, though I do plan some minor changes to the function's implementation.
Will give a fuller reply when I have time tomorrow.
So this function matches the design and implementation of many functions of this format, the prime example probably being numpy.isclose.
Using both absolute and relative tolerance together is the main point of the function. Absolute tolerance checks whether the value is close when it is small. Relative tolerance checks whether the value is close when it is medium to large. Together, they check whether a value is close no matter its size.
In my experience, these kinds of functions are used most soundly when `atol` is kept as small as possible. In a perfect world, `atol` is just for checking whether a value is close to 0 or not, because every nonzero value has a 100% relative difference from zero. Relative tolerance is generally the better value to tune: it is essentially the number of significant figures you want your values to agree to. In the default case, the values must match to ~5 significant figures due to an `rtol` of `0.00001`.
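This division of labour can be seen with a quick Python sketch (a hand-written helper using the numpy.isclose-style formula; the name `is_close` is mine):

```python
# Sketch of the atol/rtol division of labour, using the numpy.isclose-style
# formula |a - b| <= atol + rtol * |b|, written out by hand.
def is_close(a, b, rtol=0.00001, atol=0.00000001):
    return abs(a - b) <= atol + rtol * abs(b)

# rtol does the work for medium/large values: agreement to ~5 significant figures.
print(is_close(123456.0, 123457.0))  # True  (relative difference ~8e-6)
print(is_close(123456.0, 123460.0))  # False (relative difference ~3e-5)

# atol does the work near zero: any nonzero value is 100% relatively
# different from 0.0, so only the absolute term can accept it.
print(is_close(1e-9, 0.0))           # True  (within atol)
print(is_close(1e-6, 0.0))           # False (exceeds atol; rtol term is 0)
```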
That is interesting. I had never seen this idea. Thank you very much for the explanation.
Michael Pfeifer has marked this topic as resolved.
Last updated: Jul 06 2025 at 12:14 UTC