regarding @Folkert de Vries's comment in the meetup about compile-time floating-point answers differing depending on what hardware they happened to be run on: here are a couple of C software floating-point emulation libraries that could be used for that
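(For context, here's a small self-contained C illustration of why the same source expression can give different bits on different hardware: whether `a*b + c` is evaluated with one rounding step, as a fused multiply-add, or with two separate roundings changes the answer. The file name and compile command are just examples.)

```c
/* Illustration: one rounding (fused multiply-add) vs. two roundings
   (separate multiply, then add) give different answers for the same inputs.
   Compile with e.g. `cc fma_demo.c -lm`. On targets where the compiler
   contracts a*b + c into an FMA instruction, the "separate" line may itself
   print 1 -- which is exactly the hardware/compiler dependence at issue. */
#include <math.h>
#include <stdio.h>

int main(void) {
    volatile double a = 134217729.0;          /* 2^27 + 1 */
    volatile double b = 134217729.0;          /* 2^27 + 1 */
    volatile double c = -18014398777917440.0; /* -(2^54 + 2^28) */

    double separate = a * b + c;    /* product rounded to double first, losing the low bit */
    double fused    = fma(a, b, c); /* exact product, rounded only once at the end */

    printf("separate: %g\n", separate); /* 0 when the multiply is rounded separately */
    printf("fused:    %g\n", fused);    /* 1, the exactly correct answer */
    return 0;
}
```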
The weird part about that is it may still lead to unexpected results for end users. Because even though you now have consistently defined results, you're no longer doing what the hardware would do. So a user who only runs on one kind of hardware and expects hardware results may not get what they expect.
On top of that, what do we do about the extra flags you can set around rounding modes and such?
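To make that concrete, here is a minimal C example of those rounding-mode flags: the same division yields a different last bit depending on which IEEE 754 rounding mode is in effect. It uses the standard `<fenv.h>` interface; compiler support and required flags vary.

```c
/* The same computation under different IEEE 754 rounding modes.
   Compile with e.g. `cc -frounding-math fenv_demo.c -lm` (some compilers
   ignore the FENV_ACCESS pragma, in which case -frounding-math helps). */
#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void) {
    volatile double one = 1.0, three = 3.0; /* volatile so the division isn't folded at compile time */
    int modes[] = { FE_TONEAREST, FE_UPWARD, FE_DOWNWARD, FE_TOWARDZERO };
    const char *names[] = { "to-nearest", "upward", "downward", "toward-zero" };

    for (int i = 0; i < 4; i++) {
        fesetround(modes[i]);
        printf("%-12s %a\n", names[i], one / three); /* %a shows the exact bits in hex */
    }
    fesetround(FE_TONEAREST); /* restore the default mode */
    return 0;
}
```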
so my overall thought is: given the same `.roc` files and the same compiler flags (e.g. `--target`), you should get the exact same bytes out the other side, maybe give or take metadata like the timestamp of when it was built or something like that.

so the problem I'm interested in solving is reproducibility - I wouldn't want compile-time floats to give a different binary on my local machine vs on CI (again, maybe give or take metadata like timestamps, but I think it's possible we end up wanting to eschew timestamps like that, because bit-for-bit reproducible binaries might be more valuable) even if my local machine has a different CPU architecture than my CI does
Sounds good. Just wanted to check whether the slowness of software floats is worth it. Maybe we can make it so software floats are only needed if you cross-compile. If you compile for the host target, you can just set the flags to the hardware mode we want and use hardware floats?
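A minimal sketch of that idea in C (the names and structure here are hypothetical, not actual Roc compiler code): fall back to software-emulated floats only when the target differs from the host, and use hardware floats otherwise.

```c
/* Hypothetical sketch: constant-fold with hardware floats when building for
   the host, and with software emulation when cross-compiling. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct { const char *triple; } Target;

static bool target_is_host(const Target *t, const Target *host) {
    return strcmp(t->triple, host->triple) == 0;
}

/* Stub only: a real implementation would call into a softfloat library
   (e.g. one of the C emulation libraries mentioned above) so the result
   never depends on the build machine's FPU. */
static double soft_f64_add(double a, double b) {
    return a + b;
}

static double const_fold_add(double a, double b, const Target *t, const Target *host) {
    if (target_is_host(t, host)) {
        return a + b;           /* hardware floats: fast, matches what the user will run */
    }
    return soft_f64_add(a, b);  /* cross-compiling: emulate for reproducible output */
}

int main(void) {
    Target host  = { "x86_64-linux" };
    Target cross = { "aarch64-linux" };
    printf("host build:  %f\n", const_fold_add(0.1, 0.2, &host, &host));
    printf("cross build: %f\n", const_fold_add(0.1, 0.2, &cross, &host));
    return 0;
}
```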
So it'd be consistent, and often fast, at least for targets with the same float unit (though I don't know all of the hardware intricacies)