For an upcoming project, I am interested in the memory use of web servers. We think we can do much better in this area with Roc.
We want to build a Roc platform for writing web servers. We'll use the platform's ability to decide how allocation works to provide extra guarantees about memory usage. Each request gets a fixed amount of memory to produce its response; if it exceeds this limit, that specific request returns a 500 error, and all others are unaffected. The per-request memory limit is combined with a total memory limit, which caps the number of concurrent requests. That means you also know peak memory usage statically, because it is a constant: the per-request limit times the maximum number of concurrent requests (under full load, memory usage is flat, so peak equals average). You lose flexibility, but gain reliability.
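Since Roc platform hosts are often written in Rust, here is a minimal Rust sketch of the allocation policy such a host could implement. Everything in it is hypothetical: `RequestArena`, `handle_request`, and the 64 KiB / 256 figures are made up for illustration and are not an actual platform API.

```rust
// Sketch (assumptions, not a real implementation): each request gets a
// fixed-size bump arena, and any allocation that would exceed the budget
// fails that request with a 500 while other requests are unaffected.

use std::cell::Cell;

const PER_REQUEST_LIMIT: usize = 64 * 1024; // 64 KiB per request (made-up figure)
const MAX_CONCURRENT: usize = 256;          // caps total memory at 16 MiB

/// One request's fixed memory budget, handed out bump-allocator style.
struct RequestArena {
    buf: Vec<u8>,
    used: Cell<usize>,
}

impl RequestArena {
    fn new() -> Self {
        RequestArena { buf: vec![0; PER_REQUEST_LIMIT], used: Cell::new(0) }
    }

    /// Reserve `n` bytes, or None if this request's budget is exhausted.
    /// (A real host would hand out writable memory; shared slices keep
    /// this sketch simple.)
    fn alloc(&self, n: usize) -> Option<&[u8]> {
        let start = self.used.get();
        let end = start.checked_add(n)?;
        if end > self.buf.len() {
            return None; // over budget: the caller turns this into a 500
        }
        self.used.set(end);
        Some(&self.buf[start..end])
    }
}

/// Hypothetical handler: a failed allocation becomes a 500 for this
/// request only; every other request keeps its own separate arena.
fn handle_request(arena: &RequestArena, body_len: usize) -> u16 {
    match arena.alloc(body_len) {
        Some(_response_buf) => 200,
        None => 500,
    }
}

fn main() {
    let arena = RequestArena::new();
    println!("small response: {}", handle_request(&arena, 1024));             // 200
    println!("huge response:  {}", handle_request(&arena, PER_REQUEST_LIMIT)); // 500
    // Peak memory usage is a compile-time constant:
    println!("peak = {} bytes", PER_REQUEST_LIMIT * MAX_CONCURRENT);
}
```

Because every request reserves its full budget up front and concurrency is capped, peak memory is exactly `PER_REQUEST_LIMIT * MAX_CONCURRENT`, which is the static guarantee described above.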
But I don't run any production web servers, so it would be very helpful if anyone here who does could provide some statistics, like …
FWIW, I think this is basically how Erlang works, with memory limits defined per process, so maybe that could be used as a reference for what is used by default or recommended by frameworks like Phoenix.