I'm curious what the platform paradigm will be as Roc gets more mature. Since you can only include a single platform per app, it seems like the main dev strategy will be to fork the platform that seems closest to your domain and extend it. e.g. I wanted to add GNU MPFR to a Math lib in the basic-cli platform, but it doesn't have that, so I forked basic-cli and added it. Is this congruent with what y'all see?
Yes, I think for niche use cases that will be common and encouraged. Common features for a particular domain will probably be included in a platform that targets that use case.
Also as the package ecosystem matures there will be more demand for writing things in pure Roc.
also, I think math is a specific case where doing it at the platform level won't have good long-term ergonomics because platforms can only provide effectful functions implemented in lower-level languages, not pure functions - so all the math operations would need to be effectful functions
so when it comes to math operations specifically, I'd be curious whether MPFR semantics could be implemented in pure Roc code with sufficient performance!
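To make that ergonomics point concrete, here's a rough sketch of the signature difference (hypothetical names; the effectful signature assumes the Task-based API that platforms like basic-cli expose):

```roc
# Hypothetical signatures only - `mpfrPi` and `purePi` are made-up names.

# Platform-provided via FFI to GNU MPFR: platforms can only expose effects,
# so the binding would have to be task-shaped and callers live in Task-land:
mpfrPi : U32 -> Task Dec [MpfrError]

# Implemented in pure Roc: an ordinary pure function, usable anywhere:
purePi : U32 -> Dec
```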
Richard Feldman said:
so when it comes to math operations specifically, I'd be curious whether MPFR semantics could be implemented in pure Roc code with sufficient performance!
I asked myself this question because, for that library in particular, it seems like languages use FFI mostly because it's a very specific implementation. Both Julia and Rust use the C lib: in Julia it's compiled into the runtime, and in Rust the rug crate wraps it via FFI.
yeah, same with BLAS and LAPACK
so far it seems like the main reason not to have these in pure Roc is expediency; I haven't heard a case made that they'd be relevantly faster if Roc had C FFI, and allowing FFI of pure functions has a lot of downsides
I'm curious what you use MPFR for!
Richard Feldman said:
I'm curious what you use MPFR for!
I'm a math nerd, so I use it for things like pi calculation formulas etc. Not really super pressing since I use Julia for most math stuff, but my attempts to work in that world in Roc made me start thinking about such things.
Ostensibly the Roc stdlib could just wrap the lib, or is the Roc stdlib purely Roc code?
it could, but we already have Dec - is that insufficient for pi calculation formulas?
Didn't know that type existed. Thanks
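For illustration (not from the thread), a minimal sketch of what a pure-Roc pi computation with Dec could look like, using the Nilakantha series; the function and variable names are made up, and it assumes the older backslash lambda syntax:

```roc
# Hypothetical sketch: approximating pi in pure Roc with Dec, Roc's built-in
# fixed-point decimal type (18 decimal places), via the Nilakantha series:
#   pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
approxPi : U64 -> Dec
approxPi = \terms ->
    init : Dec
    init = 3

    List.range { start: At 0, end: Before terms }
    |> List.walk init \acc, i ->
        n : Dec
        n = Num.toFrac (2 * i + 2)
        term = 4 / (n * (n + 1) * (n + 2))

        if Num.isEven i then
            acc + term
        else
            acc - term
```

Dec's 18 decimal places obviously won't replace MPFR's arbitrary precision, but it covers a lot of casual pi-formula experimentation.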
Richard Feldman said:
so far it seems like the main reason not to have these in pure Roc is expediency; I haven't heard a case made that they'd be relevantly faster if Roc had C FFI, and allowing FFI of pure functions has a lot of downsides
For some of those math libraries, raw assembly is needed for the speed. Next level down is just really good control of SIMD. Currently Roc has neither, so you would definitely hit a limit in pure Roc.
So I'm sure the limit will be found; the question is when, and whether it will matter for many.
Would be cool to see a supercomputer running Roc apps for the core logic someday.
Brendan Hansknecht said:
Richard Feldman said:
so far it seems like the main reason not to have these in pure Roc is expediency; I haven't heard a case made that they'd be relevantly faster if Roc had C FFI, and allowing FFI of pure functions has a lot of downsides
For some of those math libraries, raw assembly is needed for the speed.
is it though?
I mean it's true that they use it, but is that because they started with LLVM optimizations and found them inadequate, or because of some combination of:
basically I'm skeptical (but open to being convinced otherwise!) that there exist tons of heavy-duty numeric computation projects where the combination of Python with C + asm FFI is sufficiently fast, but optimized LLVM wouldn't be fast enough
like maybe that's true, but it certainly does not seem to be obviously true
and yes, Roc doesn't have SIMD support yet, but I want it to in the future... it's really unfortunate how fraught the API design space there is because of all the CPU arch differences :sweat_smile:
How much would need to change for future SIMD support in Roc? Is that all under the hood and not something that changes the language? I thought it was basically the Iter and fusion thing... but I might be misremembering
it's just a design question really
we could add some kind of support for it anytime, the question is just what the code would look like
and I don't think there's a clear answer to the question of what the code should look like :sweat_smile:
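To make the design question a bit more concrete, here's one purely hypothetical shape it could take (none of these names exist in Roc today; this is only to illustrate the kind of API that runs into the cross-architecture questions in this thread):

```roc
# Purely hypothetical API sketch - `F32x4` and the `Simd` module do not exist.
#
#   Simd.load4 : List F32, U64 -> F32x4           # read 4 lanes starting at an index
#   Simd.mulAdd : F32x4, F32x4, F32x4 -> F32x4    # per-lane fused multiply-add
#   Simd.sum : F32x4 -> F32                       # horizontal reduction to a scalar
#
# The immediate design problems: lane widths differ per architecture
# (128/256/512-bit), and some targets (e.g. wasm builds without SIMD support)
# would need a scalar fallback.
```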
Richard Feldman said:
is it though?
It's a solid question. As I learn more and more from folks working on kernels in Mojo (which are mostly GPU nowadays, but we have some solid CPU kernels), it is more and more about giving the programmer direct control rather than relying on the compiler.
That said, it is not a bunch of raw assembly. It is direct SIMD, or direct control of memory hierarchies, etc.
Though Mojo is unusual in that it lets you directly emit any MLIR, which means any LLVM intrinsics, which means you can get close to raw assembly if you need it.
But yeah, a Roc loop with a good SIMD abstraction should be able to optimize exceptionally well.
With Roc we just have to keep fighting any refcounting in tight loops that might ruin perf.
And without a good SIMD abstraction you'd definitely hit a wall way, way sooner, because LLVM only turns really clean loops into SIMD.
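For example, the kind of "really clean loop" LLVM's auto-vectorizer can handle is roughly this shape (a sketch; whether it actually vectorizes today depends on how Roc lowers the list operations and on keeping refcounting out of the hot path, which is exactly the point above):

```roc
# Sketch of a loop shape LLVM can typically auto-vectorize: a straight walk
# over flat arrays of unboxed floats, no data-dependent branching, no
# refcount traffic inside the loop body.
dotProduct : List F32, List F32 -> F32
dotProduct = \xs, ys ->
    List.map2 xs ys Num.mul
    |> List.sum
```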
Anyway, I overall agree with the general thesis... just less by compiler optimization and more by exposing control.
yeah and I'm fine in principle with exposing control, as long as it doesn't violate our invariants (e.g. not introducing memory unsafety!) or make the language overly complicated
of course that's the tricky API design part :smile:
oh also, sacrificing cross-platform support is something I'd really want to avoid
like for example, if we wanted to just expose SIMD intrinsics for a single CPU architecture, then maybe we could do something with var
because var already has the rules you'd want for representing a register - can't cross function boundaries, doesn't affect the function's purity, etc. - but then how do you deal with the architecture-specific stuff?
and wasm for that matter