Currently on main, test reports are as follows: `1 failed and 13 passed in 77 ms.` That 77 ms includes `load_and_monomorphize`, `expect_mono_module_to_dylib`, and executing the tests. Now in PR#6567 we want to report failures and passes per module, but that leaves us with a question: what should we do with the timing of `load_and_monomorphize` and `expect_mono_module_to_dylib`, given that we still only execute those once for all modules combined? I'm thinking we should report that before running the tests, as `Compiled in X ms.` What do you think?
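(For illustration, a minimal Rust sketch of that split — `ModuleResult`, `compile_all_modules`, and `run_tests` are hypothetical stand-ins, not the actual compiler APIs; in the real code the one-off compile step is `load_and_monomorphize` plus `expect_mono_module_to_dylib`.)

```rust
use std::time::Instant;

// Hypothetical per-module result; not the actual compiler types.
struct ModuleResult {
    module_name: String,
    failed: usize,
    passed: usize,
    test_time_ms: u128,
}

fn main() {
    // The compile steps run once for all modules, so time them separately
    // and report them up front.
    let compile_start = Instant::now();
    let modules = compile_all_modules(); // stand-in for the real compile pipeline
    println!("Compiled in {} ms.", compile_start.elapsed().as_millis());

    // Then report failures and passes per module.
    for r in run_tests(&modules) {
        println!(
            "{}: {} failed and {} passed in {} ms",
            r.module_name, r.failed, r.passed, r.test_time_ms
        );
    }
}

// Stand-ins so the sketch compiles on its own.
fn compile_all_modules() -> Vec<String> {
    vec!["Main".to_string(), "Parser".to_string()]
}

fn run_tests(modules: &[String]) -> Vec<ModuleResult> {
    modules
        .iter()
        .map(|name| ModuleResult {
            module_name: name.clone(),
            failed: 0,
            passed: 7,
            test_time_ms: 3,
        })
        .collect()
}
```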
personally what I care about by default is noticing when my tests are taking a long time, so the number I'm really interested in is the total end-to-end time
if I want to debug why tests are taking a long time, I want to pass a flag of some sort (e.g. `--verbose`) to get per-test timings, including for successes
and I think when a flag like that is enabled, it would make sense to distinguish between compile times and test times
but in the default case I like it being concise and focused on the number that matters most :smiley:
Yeah, makes sense :+1:
So the behavior here would be: a `--verbose` flag that would report, separately: a) compilation time for all of the tests, and b) the runtime and number of passed/failed tests per module
I like that -- keep the default case terse, but make it easy to drill down and find problematic tests (at least by module, for now). How does that sound?
I would still report passed/failed tests per module by default. You can argue that performance is a detail, but with multiple modules I would definitely want to see the fail/pass count per module.
I'm making the change like that now, will push in a bit...
hm, so personally I am a big proponent of "only show me what's relevant" for test output
and one of the things I dislike about test runners in big projects is that 99% of the output on my screen is "all the tests passed for this module I have never heard of and will never touch in my entire life"
I think the best way to combat that and keep the output relevant is to have the output be constant rather than linear in the number of modules, and only linear in the number of failed tests (because those are all actionable; I need to address all of those failures!)
at least by default
this is also why I don't like to print passed tests by default, except as a single number in the summary line
"all the tests passed for this module I have never heard of and will never touch in my entire life"
I get that, but I also value transparency: if you have 5+ modules, it can be easy for one to fly under the radar untested, because you never see its name when running the tests.
Do you know of any popular test tools for a popular language that don't print a module name if all tests passed for it?
hm, pretty much all of those have separate test files rather than inline tests in the modules where the code lives. (Rust would be a notable exception.)
so I don't think any popular test runners attempt to solve the problem of "module was untested by mistake" :big_smile:
Ok fair, so do we go with this:
- Keep the default behavior the same as it is now: only display the total number of tests that passed/failed, and the total runtime
- Add a `--verbose` flag that would report, separately: a) compilation time for all of the tests, and b) the runtime and number of passed/failed tests per module
sounds good to me! :100:
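(To make the agreed behavior concrete, a rough sketch — the `report` function, its parameters, and `ModuleResult` are hypothetical, not the actual implementation in the PR.)

```rust
// Hypothetical per-module result; not the actual types used in the PR.
struct ModuleResult {
    module_name: String,
    failed: usize,
    passed: usize,
    test_time_ms: u128,
}

// Terse by default; compile time and a per-module breakdown only with --verbose.
fn report(results: &[ModuleResult], compile_ms: u128, total_ms: u128, verbose: bool) {
    if verbose {
        println!("Compiled in {} ms.", compile_ms);
        for r in results {
            println!(
                "{}: {} failed and {} passed in {} ms",
                r.module_name, r.failed, r.passed, r.test_time_ms
            );
        }
    }

    // The default (and final) line stays constant in the number of modules:
    // overall counts plus the total end-to-end time.
    let failed: usize = results.iter().map(|r| r.failed).sum();
    let passed: usize = results.iter().map(|r| r.passed).sum();
    println!("{} failed and {} passed in {} ms.", failed, passed, total_ms);
}

fn main() {
    let results = vec![ModuleResult {
        module_name: "Main".to_string(),
        failed: 1,
        passed: 13,
        test_time_ms: 5,
    }];
    report(&results, 70, 77, false); // default: single summary line
    report(&results, 70, 77, true); // --verbose: compile time + per-module lines
}
```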