splitting off from this:
Brendan Hansknecht said:
If you have the bytes
[ 0x00, 0x00, 0x00, 0x07 ]
as your 32-bit big-endian number input, you do:
when bytes is
    [b3, b2, b1, b0, ..] ->
        num = (b3 << 24) | (b2 << 16) | (b1 << 8) | b0
Now num is in whatever the native endianness happens to be
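A minimal sketch of that mapping as actual Roc, assuming the Num builtins shiftLeftBy, bitwiseOr, and toU32; the function name bytesToU32BE and the InvalidLength tag are made up here for illustration:

# Sketch: decode the first four bytes of a big-endian byte list into a U32.
bytesToU32BE : List U8 -> Result U32 [InvalidLength]
bytesToU32BE = \bytes ->
    when bytes is
        [b3, b2, b1, b0, ..] ->
            # b3 is the most significant byte of the big-endian input
            Num.shiftLeftBy (Num.toU32 b3) 24
            |> Num.bitwiseOr (Num.shiftLeftBy (Num.toU32 b2) 16)
            |> Num.bitwiseOr (Num.shiftLeftBy (Num.toU32 b1) 8)
            |> Num.bitwiseOr (Num.toU32 b0)
            |> Ok

        _ -> Err InvalidLength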
this is unfortunate because it breaks the design goal of “Roc code gives the same answer no matter what target it’s run on”
I wonder how we can fix this without hurting performance
:thinking: actually, is the bit shifting the problem? Or is it that integer byte order can be observed at all without specifying endianness?
I don't think bit shifting is giving a different answer depending on endianness here. num will be the same integer on any target, it's just that its unobservable in-memory representation will depend on the target
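For example, with the hypothetical bytesToU32BE sketch above, this expect should pass on every target, because the shifts operate on the value rather than on its in-memory byte order:

expect bytesToU32BE [0x00, 0x00, 0x00, 0x07] == Ok 7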
Oh, missed this thread, this is not an issue
I am explicitly receiving the bytes in big endian and then mapping them to a native-endian integer. It does not expose what the native endianness is
And this is how to map little to native:
when bytes is
    [b0, b1, b2, b3, ..] ->
        num = (b3 << 24) | (b2 << 16) | (b1 << 8) | b0
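Spelled out with the same assumed builtins, and again with a made-up name bytesToU32LE; here b0 is the least significant byte because the input is little-endian:

bytesToU32LE : List U8 -> Result U32 [InvalidLength]
bytesToU32LE = \bytes ->
    when bytes is
        [b0, b1, b2, b3, ..] ->
            # b3 is the most significant byte of the little-endian input
            Num.shiftLeftBy (Num.toU32 b3) 24
            |> Num.bitwiseOr (Num.shiftLeftBy (Num.toU32 b2) 16)
            |> Num.bitwiseOr (Num.shiftLeftBy (Num.toU32 b1) 8)
            |> Num.bitwiseOr (Num.toU32 b0)
            |> Ok

        _ -> Err InvalidLength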
oh interesting - so it’s a way to go to native but then once it’s in native, nothing about native can leak?
yeah