In working on #6258, getting the CI tests to pass and getting test results on my machine to match the CI results has been a challenge. Is there a doc with testing recommendations that I missed, perhaps? Or a script that runs all the same tests CI runs? If not, could we create some documentation to take some of the surprises out of testing?
Here's what I encountered: my test failed in gen_wasm, because I was expecting an I64 but wasm was generating an I32. Makes perfect sense. I just disabled the test for gen_wasm (figured that might be preferable to creating a separate test for wasm?).
The issue is that I didn't know I should run these tests before pushing. Maybe this is just a symptom of my inexperience with large projects, or with Rust. Either way, I feel a little lost.
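For concreteness, here is a minimal sketch of the kind of backend-gated test this describes. The assert_evals_to! macro and the feature names are written from memory and may not match the repo exactly:

```rust
// Sketch only: the macro and feature names are illustrative and may
// differ from the repo's. A codegen test gated on several backends.
#[cfg(any(feature = "gen_llvm", feature = "gen_wasm"))]
#[test]
fn abs_of_negative() {
    // The native backend produces an I64 here, so asserting against an
    // i64 passes. If the wasm backend produces an I32 for the same
    // expression, this assertion fails when run with the gen_wasm feature.
    assert_evals_to!("Num.abs -42", 42, i64);
}
```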
Ok, I see that CONTRIBUTING.md suggests testing with --release. My own fault for forgetting about this.
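As an aside on why the profile matters: debug and release builds can genuinely behave differently. A generic Rust illustration (not specific to this PR or to Roc): with Cargo's default profiles, integer overflow panics in debug builds but wraps silently in release builds, so a test can pass under one profile and fail under the other.

```rust
fn add_one(x: u8) -> u8 {
    x + 1 // panics on overflow in a debug build; wraps to 0 in release
}

#[test]
fn overflow_differs_by_profile() {
    // Passes under `cargo test --release` (255 + 1 wraps to 0);
    // panics under a plain `cargo test` debug build.
    assert_eq!(add_one(255), 0);
}
```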
I now have a test that is failing in CI on macOS ARM only, and I can't replicate it on my machine.
There are several flaky failures on the Apple silicon CI machine; I've now added this to my high-priority issues.
The issue is that I didn't know I should run these tests before pushing.
There are several groups of tests that both take a long time to run and rarely fail. I don't think we should recommend that contributors execute all of them before every PR.
Anton said:
I now have a test that is failing in CI on macOS ARM only, and I can't replicate it on my machine.
There are several flaky failures on the Apple silicon CI machine; I've now added this to my high-priority issues.
At least the flakes are gluten free!
Anton said:
The issue is that I didn't know I should run these tests before pushing.
There are several groups of tests that both take a long time to run and rarely fail. I don't think we should recommend that contributors execute all of them before every PR.
Hmm ok, makes sense. Would it make sense to test all code generation when your PR adds a builtin? Or would that not make sense for most builtin contributions?
Would it make sense to test all code generation when your PR adds a builtin?
I'm not sure; I think it's quite rare for the gen_wasm tests to fail, for example.
I think you could argue that I did this to myself by naively including feature = gen_wasm on my test. In hindsight it seems like a newbie mistake that probably isn't common.
Well it's nice to test with wasm for sure, it's just that you happened to test with a 64-bit number and wasm works with 32-bit.
Right. I should have written a separate test for wasm if I wanted to test it.
On that note, is it better to just write a nearly identical test for wasm in this case? I honestly wasn’t sure what was preferable.
Yeah, that's fine
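A sketch of that "nearly identical test" idea, with the same illustrative macro and feature names as above: the same expression tested once per backend, each asserting the integer width that backend was producing. (Note the follow-up below, which reframes the I32 result as a bug rather than expected wasm behavior.)

```rust
// Sketch only: one test per backend for the same source expression.
#[cfg(feature = "gen_llvm")]
#[test]
fn abs_of_negative() {
    assert_evals_to!("Num.abs -42", 42, i64);
}

#[cfg(feature = "gen_wasm")]
#[test]
fn abs_of_negative_wasm() {
    assert_evals_to!("Num.abs -42", 42, i32);
}
```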
you happened to test with a 64-bit number and wasm works with 32-bit.
This is a common misconception about Wasm!
Only Wasm's pointers are 32-bit. Integers can be 32 or 64.
The misunderstanding comes from thinking Wasm probably works like old 32-bit CPUs, but it doesn't work like that.
Wasm is designed to run on 64-bit machines while using small amounts of memory, so it has 64-bit math ops but only 32-bit pointer ops.
So this should not be happening and it is a bug.
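A self-contained Rust illustration of the point: built for a wasm32 target (e.g. --target wasm32-wasip1, run under a Wasm runtime), pointers are 4 bytes but 64-bit integer math is native.

```rust
fn main() {
    // Wasm has full 64-bit integer arithmetic (i64.add, i64.mul, ...),
    // so u64 math needs no emulation:
    let big: u64 = u32::MAX as u64 + 1; // 4_294_967_296
    assert_eq!(big * 2, 8_589_934_592);

    // Only addresses are 32-bit: on a wasm32 target, a pointer-sized
    // usize is 4 bytes while u64 is still 8 bytes.
    #[cfg(target_arch = "wasm32")]
    {
        assert_eq!(std::mem::size_of::<usize>(), 4);
        assert_eq!(std::mem::size_of::<u64>(), 8);
    }
}
```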
Interesting! With that in mind, what is the most helpful thing I can do with this new test?