I think we have a design for the platform to provide an instruction to roc so that it can build itself (all the object files etc) with the new module changes. However, until then we have hardcoded how to build zig platforms, and I wonder if we can find a quick and easy way to modify that to unlock some more functionality?
I was thinking about how we don't need that app module name any longer, and wondered if we could, in a similar vein, use the string in the platform name as an instruction to the roc CLI:
platform "zig build all" # use this as an intermediate solution until module changes??
    requires {} { main : _ }
This would mean I could write a zig platform using build.zig today that generates the pre-built platform objects for linux/mac/windows etc, and I am hoping --bundle can then package them up into a file we can host at a URL.
If you want something quick and super simple, I would just have roc check for a build.zig like we do with cargo.toml
I actually want to go in the other direction - I want to have roc no longer know about other programming languages
that was always intended to be a temporary let's-get-up-and-running thing
so instead I'd prefer if the build.zig became the way to build
I think I may want to work on this cause it is one of those old issues that is now starting to annoy me quite a bit.
fundamentally, we don't really want roc to control the full build process. We want it to really just point to a directory that has all the precompiled stuff we need. This is a package, just in an already extracted folder.
My thoughts on changes: I see two main options, one that is more convenient from a user perspective and another that is cleaner overall.
Option 1, more convenient for users:
Option 2, cleaner but less convenient:
Option 2 is cleaner and simpler in the compiler, but it means that new users who want to run examples in the examples folder will need to manually compile the platform. So we'd need more README docs, and things won't just work with roc example/....roc
Either of these options removes all of our special platform-building code and gives platforms direct control over their build script. We would still keep the legacy linker and its hacky library includes. (note: we could maybe let the platforms specify extra linker flags that are required for the legacy linker to work)
Thoughts?
With option 2, we probably also want to improve error messages to mention that the user needs to build the local platform. Then give a README in each local platform, and probably a build.sh
I think Richard mentioned he thought a build.roc script might be a good default way to do this.
I think it is useful for roc to call this script; maybe we change --prebuilt-platform to be --rebuild-platform or something, which just runs that roc script first to re-build the platform and then builds the app.
Is there a reason we need a special folder?
Can we have all the build artifacts alongside platform/main.roc like we currently do? I imagine most build scripts could manage this.
I'm against a build.roc script; it's extra complexity for no gain. No need for an extra platform. We have bash and other compilers' build systems. They can manage this.
As for the special folder: I'm tired of everything being dumped in the platform folder. Build directories and caches are standard because they avoid filling source folders with arbitrary files that need to be gitignored. I want to enable output to be in zig_out for a zig platform. I think everything being dumped in the platform folder is legacy convenience and it should go away.
I think option 2 is the way to go
part of the reason for that is that if we try option 2 and it turns out to be a major pain point, then we can always reconsider incorporating additional functionality into roc
but if we try the other way around, we'll never know if we could have gone with the simpler design
another reason is that I think it makes it clearer for end users where errors and/or build slowness are coming from
one more reason is that it can be faster end to end; if you want to rebuild the platform every single time, then the extra step is slightly annoying (although not much if you do like cargo build && roc run at the command line)
but if you only want to rebuild the local platform once and then run the app on it a bunch of times, then it's definitely faster end to end to have them split up
and not have roc even attempt to run anything, like it does today
Brendan Hansknecht said:
I'm tired of everything being dumped in the platform folder. Build directories and caches are standard because they avoid filling source folders with arbitrary files that need to be gitignored. I want to enable output to be in zig_out for a zig platform. I think everything being dumped in the platform folder is legacy convenience and it should go away.
oh yeah 100% agree, that was never supposed to be the long-term design :big_smile:
and we have an --output flag now, so it would be natural to respect that when doing roc build on a platform
this is really exciting! I'm pumped to see this happen :smiley:
the status quo design is a hack from like...2019?
also I agree that the examples/ folder should use basic-cli instead of locally built platforms, with the exception of the one set of examples dedicated to showing how building platforms locally works (and those should have instructions in the README explaining that in order to build those platforms, you need to use cargo or zig or perhaps even make for the C ones)
Cool I think I am fully on the same page. Sounds nice!
For local package dependencies, would it be preferable to just take a path to a folder or to have the platform main.roc specify the folder location? I currently assume path to folder.
Also, do we want a file:// prefix? I assume not, but just curious.
I think for local package dependencies it should be a path to main.roc
no file:// prefix
or rather, a path to whatever the .roc file is (whether it's named main.roc or not) that's the actual platform module in question
Ok, so then we would need the platform to specify where it stores the output linux_x64.o and friends if we want to avoid them still all needing to be in the platform folder. That, or we hardcode some other subfolder.
So I guess this would still require a change like:
platform "zig_out/"
    requires {} { main : _ }
:thinking: actually, what if we used caching for this?
like we want to be taking the BLAKE3 hash of source files anyway, and then storing intermediate compiled values in a ~/.cache/roc (or similar) directory
wouldn't that be the natural place to put these?
Not if zig or cmake or etc. is controlling the build
ah true
Also, I was assuming for local build we would just ignore the hash part
ok then yeah I think we should specify it in the platform module header
if desired, we could go straight to the design I think we want to eventually have, which is that the platform module header specifies on a per-target basis where to find its precompiled files
which specifies not only output directory, but also filename, and then also (perhaps most importantly) which targets it actually supports
which would address current problems like basic-cli doesn't support wasm, but there's no way to know that - let alone to get a helpful compiler error if you try to build it with --target wasm32
Yeah, I guess it depends on whether we want to make that explicit or implicit. My thought was that it would specify an artifact folder. We would then check that folder for our expected files like linux_x86.o. If our expected file doesn't exist, we would print an error about needing to read the platform README and compile it for linux x86.
concretely, I'm talking about something like
platform …
    targets {
        linux-x64 => "zig_out/linux-x64.rh",
        wasm32 => "zig_out/wasm32.rh",
    }
I think it's nicer for this to be explicit
Fair.
though what if the platform supports both the legacy and surgical linkers?
Like legacy as a fallback, or to get better debug info.
I guess we could also just make that explicit: linux-x64-legacy or something.
hm, what would the distinction be there? :thinking:
like you mean they'd build two different .rh files? for the same target
one .rh and one .a or .o. That is what you will see in basic-cli for linux_x64 currently, for example.
hm but why would they need to build different ones?
because it would have something that breaks the surgical linker but works for the legacy linker?
two main reasons currently:
Long term, everything should be fine to only be the surgical-linker version, but I don't think we're fully there yet.
hm I'd say let's not introduce the complexity and revisit if it actually comes up in practice
like if it's a problem they can (for example) always build for legacy linker and tell people to use that
but I really would like 2023 to be the last year we have --linker=legacy if possible :big_smile:
It does. I use --linker=legacy all the time with gdb when debugging platform glue issues and similar.
oh sure, but if you're the platform author you can just build for that locally right?
I'd think the inconvenience of only having one would mainly apply to people getting the platform from a URL
I guess you'd just have to update a single line from linux-x64 => "zig_out/linux-x64.rh" to linux-x64 => "zig_out/linux-x64.o"
That, or compile the roc app to a lib and avoid the roc linking step altogether, instead letting the host compiler deal with that linking as well.
yeah, either seems fine!
Ok. Sounds good.
as an aside, we should really spec out what adding debug info would look like - seems like something where, if we got it started, it could be a cool first contribution to that part of the code base for someone looking to get involved!
For sure. If debug info is concatenative, it may be possible that the surgical linker just needs to also copy it over and then update some offsets. No idea.
that would be sweet if so!
One more question on the targets syntax: should it support host or current-system or whatever we want to name it?
I assume that would be useful in platforms that don't know how to cross-compile. Instead they just build for the host system when built locally.
Oh, another question I just realized: should packages now also be expected to follow the structure specified in the targets syntax? Should a zig platform potentially extract to zig-out/linux-x64.o? That way we can still read the main.roc file to understand where to find these files, with no concerns about the platform following our exact naming conventions.
Actually, as I am thinking about this, I guess that roc build --bundle (which should probably change to roc bundle because it won't build anymore) can read the platform/main.roc and just copy all of the platform-specified files to exactly where we want them in the final bundle.
yeah that makes sense!
(regarding roc bundle)
I think the keys of the mapping should follow our --target syntax, but the values can be whatever the author wants
like I could do linux-x64 => zig_out/x86_64-musl-linux.rh if I wanted to
Ok. So I guess the scope of this work is:
1) Update all platforms in the roc repo to have their own build scripts and READMEs (later, update basic-cli and basic-webserver)
2) Add the targets syntax to enable platforms to specify where they store artifacts:
    targets {
        linux-x64 => "zig_out/linux-x64.rh",
        wasm32 => "zig_out/wasm32.rh",
    }
3) Update roc build with a local platform to just grab the specified file instead of rebuilding the platform.
4) Update roc build --bundle to grab the specified file for each target instead of just assuming they will be in the platform directory as linux_x64.o, etc. It can build the exact expected directory in ~/.cache/roc if it doesn't already exist.
5) Extract roc build --bundle into a standalone roc bundle.
6) Add nicer error messages about unsupported targets and unbuilt targets for platforms.
Brendan Hansknecht said:
One more question on the targets syntax: should it support host or current-system or whatever we want to name it? I assume that would be useful in platforms that don't know how to cross-compile. Instead they just build for the host system when built locally.
hm, I'm not sure if it would matter in practice, because I need to produce a .rh file for each of those targets one way or another, and from roc's perspective, it doesn't really matter whether I need to use multiple operating systems to generate them
that plan sounds great! :heart_eyes:
Aside: we may eventually want a roc bundle --merge command that would enable a platform author to generate bundles for each OS (if they can't cross-compile), then merge all of the OS-specific bundles into one generic bundle.
interesting!
it just sounds more convenient than wrangling all of the .rh and .rm and .o etc. files and dumping them into the same directory on a single machine. Instead, I just run GitHub CI on all of my target systems; they all generate a bundle, and I have a follow-up step that runs merge and publishes.
hm, but a bundle is just a compressed tarball plus hashing the final thing
so I'm not sure that saves any effort compared to just gathering all the binaries and running bundle afterwards :big_smile:
as in, gathering bundles and gathering binaries seems like the same amount of effort
Hmm, yeah, probably. I guess it's just a little more folder structure and maybe two files instead of one. Yeah, probably pretty minor work. nvm
Filed tracking issue #6037
what are thoughts on using zig as the compiler for the example C platform, such that it gets super simple cross-compilation?
Though, in general, I guess the example platforms don't really need to be able to cross-compile. They can just be compiled for the current host machine because they will never be distributed.
There's some benefit in having a C platform with a Make or CMake build, because it shows Roc can be embedded into your current project that uses that build system.
That's fair. Though I'm not sure I want to go through the hassle of CMake instead of just having a build.sh. Though CMake also deals with Windows, so maybe that is better.
Last updated: Jul 06 2025 at 12:14 UTC