I wonder to what extent we can optimize parser combinators long-term - given enough inlining of closures, for example, could we theoretically get them to be approximately as fast as a parser written without any closures at all?
this is not so much about the performance ceiling of Roc itself but rather whether we could get to a world where people can use parser combinators without having to think "I am sacrificing performance for code that's easier to work with"
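To make the question concrete, here is a minimal sketch (in Rust, purely as a stand-in; the names are made up and none of this is Roc code or Roc's actual codegen). `digits_combinator` is built from nested closure-returning combinators, and `digits_by_hand` is roughly the loop we'd hope it collapses to once every closure is inlined:

```rust
// A parser here is a closure from the remaining input to an optional
// (parsed value, bytes consumed) pair.
type ParseResult<T> = Option<(T, usize)>;

// Combinator: parse one character that satisfies a predicate.
fn satisfy(pred: impl Fn(char) -> bool) -> impl Fn(&str) -> ParseResult<char> {
    move |input: &str| match input.chars().next() {
        Some(c) if pred(c) => Some((c, c.len_utf8())),
        _ => None,
    }
}

// Combinator: apply a parser zero or more times, collecting the results.
fn many<T>(p: impl Fn(&str) -> ParseResult<T>) -> impl Fn(&str) -> ParseResult<Vec<T>> {
    move |input: &str| {
        let (mut out, mut used) = (Vec::new(), 0);
        while let Some((v, n)) = p(&input[used..]) {
            out.push(v);
            used += n;
        }
        Some((out, used))
    }
}

// Combinator version: built entirely out of closures.
fn digits_combinator(input: &str) -> ParseResult<Vec<char>> {
    many(satisfy(|c| c.is_ascii_digit()))(input)
}

// Hand-written version: roughly what the combinator version should
// collapse to once every closure has been inlined.
fn digits_by_hand(input: &str) -> ParseResult<Vec<char>> {
    let (mut out, mut used) = (Vec::new(), 0);
    for c in input.chars() {
        if !c.is_ascii_digit() {
            break;
        }
        out.push(c);
        used += c.len_utf8();
    }
    Some((out, used))
}

fn main() {
    assert_eq!(digits_combinator("123ab"), digits_by_hand("123ab"));
}
```

In a small case like this, an optimizing backend can usually inline the closures away so the two versions end up with essentially the same machine code; the open question is whether that still holds once the combinators are deeply nested and the grammar is realistic.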
https://www.sciencedirect.com/science/article/pii/S0167642317302654 seems potentially relevant
I definitely would need to dive into the internals to understand what it would generate if we inline everything, because it may be a case where there's a lot of work that LLVM won't be able to optimize away due to memory interactions.
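One hedged illustration of the kind of memory interaction that can get in the way (again in Rust as a stand-in, not Roc): once a closure sits behind a heap allocation and dynamic dispatch, the calls become indirect and the allocation is opaque to the optimizer, so the work generally can't be inlined or optimized away, whereas the statically dispatched equivalent can:

```rust
// Statically dispatched: `pred`'s concrete type is known, so the compiler
// can monomorphize this function and inline the closure into the loop.
fn count_matching_static(input: &str, pred: impl Fn(char) -> bool) -> usize {
    input.chars().take_while(|&c| pred(c)).count()
}

// Dynamically dispatched: the closure lives behind a heap allocation and a
// vtable, so every call is indirect and the allocation is opaque to the
// optimizer; inlining across this boundary generally doesn't happen.
fn count_matching_boxed(input: &str, pred: Box<dyn Fn(char) -> bool>) -> usize {
    input.chars().take_while(|&c| pred(c)).count()
}

fn main() {
    let s = "123ab";
    assert_eq!(
        count_matching_static(s, |c| c.is_ascii_digit()),
        count_matching_boxed(s, Box::new(|c: char| c.is_ascii_digit())),
    );
}
```

If lowering closures introduces this kind of indirection, or extra captures that have to live in memory, the backend has far less room to recover the hand-written code than when everything is statically known.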
I'm assuming we're doing our own inlining of closures, which is something I think we'll definitely want to do in the future
yeah, but there are still some important questions about what is actually generated and how deep the nesting is. We have to be careful not to inline too much, because it might apply in the wrong places.