Ray tracing is an embarrassingly parallel problem, so I created a platform in Rust that uses Rayon to call the Roc-implemented render function in parallel. That alone gave me a 10x speedup on my M1 Max.
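A minimal sketch of what that host side could look like, assuming a hypothetical `roc_render` entry point (the real symbol name and calling convention come from Roc's generated platform glue, not from this sketch):

```rust
use rayon::prelude::*;

// Hypothetical FFI into the Roc app; the actual exported symbol and
// signature are produced by the Roc platform glue.
extern "C" {
    fn roc_render(x: u32, y: u32, width: u32, height: u32) -> u32; // packed RGB
}

fn render(width: u32, height: u32) -> Vec<u32> {
    // Rayon splits the pixel range across cores; every pixel is
    // independent, so no synchronization is needed beyond collecting.
    (0..width * height)
        .into_par_iter()
        .map(|i| {
            let (x, y) = (i % width, i / width);
            unsafe { roc_render(x, y, width, height) }
        })
        .collect()
}
```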
The book uses two different object-oriented interfaces, one for Ray-Object intersection and one for Materials. I first tried modeling them the same way with Abilities. It was a fun exercise and I got far, but the codegen would panic once things got more complex. I switched over to simple tagged unions, which I think is a much better solution anyway.
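The actual code is Roc, but in Rust terms the tag-union approach has roughly the shape of this enum (variant names follow the book; the fields here are illustrative):

```rust
// A closed set of variants plus exhaustive matching replaces the
// book's open-ended class hierarchy.
enum Material {
    Lambertian { albedo: [f64; 3] },
    Metal { albedo: [f64; 3], fuzz: f64 },
    Dielectric { refraction_index: f64 },
}

fn albedo(material: &Material) -> [f64; 3] {
    match material {
        Material::Lambertian { albedo } | Material::Metal { albedo, .. } => *albedo,
        Material::Dielectric { .. } => [1.0, 1.0, 1.0], // glass is untinted
    }
}
```

The nice property is that adding a variant forces every `match` (or `when` in Roc) to be revisited, instead of scattering behavior across subclasses.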
Currently I take 500 samples per pixel and generate a final .ppm image file. I will soon modify the host so the multisampling happens inside a GUI "in real time", and maybe even port this to wasm on the web. I also intend to implement an elm-style {init, update, render} platform to clean things up and make the platform reusable.
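For reference, the sample-and-average step plus the plain-text P3 PPM output look roughly like this; `sample` is a stand-in for the call into the Roc render function:

```rust
use std::io::{self, Write};

const SAMPLES: u32 = 500;

/// Average `SAMPLES` color estimates per pixel and write a P3 PPM.
/// `sample` returns linear RGB in 0.0..=1.0 for a given pixel.
fn write_ppm(
    width: u32,
    height: u32,
    sample: impl Fn(u32, u32) -> [f64; 3],
    out: &mut impl Write,
) -> io::Result<()> {
    writeln!(out, "P3\n{width} {height}\n255")?;
    for y in 0..height {
        for x in 0..width {
            let mut acc = [0.0f64; 3];
            for _ in 0..SAMPLES {
                let c = sample(x, y);
                for i in 0..3 {
                    acc[i] += c[i];
                }
            }
            // Average, then map to 0..=255 (a real build would also
            // gamma-correct here).
            let [r, g, b] = acc.map(|v| ((v / SAMPLES as f64) * 255.999) as u8);
            writeln!(out, "{r} {g} {b}")?;
        }
    }
    Ok(())
}
```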
I have now implemented an elm-style {init, update, render} platform. init is called with the host-provided {width, height, x, y} values for every pixel in parallel (using Rust's Rayon). Then the update and render functions are repeatedly called (again in parallel) to update the state and draw the screen. https://github.com/shritesh/raytrace.roc/blob/main/platform/main.roc
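A sketch of that host loop, with hypothetical `roc_init`/`roc_update`/`roc_render` stubs standing in for the real generated glue and an opaque per-pixel `State`:

```rust
use rayon::prelude::*;

#[derive(Clone, Copy, Default)]
struct State; // opaque per-pixel state owned by the Roc app

// Stubs for the Roc entry points; the real symbols come from the
// platform glue in platform/main.roc.
fn roc_init(_width: u32, _height: u32, _x: u32, _y: u32) -> State { State }
fn roc_update(state: State) -> State { state }
fn roc_render(_state: &State) -> u32 { 0 } // packed RGB

fn run(width: u32, height: u32, frames: u32) -> Vec<u32> {
    // init once per pixel, in parallel
    let mut states: Vec<State> = (0..width * height)
        .into_par_iter()
        .map(|i| roc_init(width, height, i % width, i / width))
        .collect();

    let mut framebuffer = vec![0u32; (width * height) as usize];
    for _ in 0..frames {
        // update and render every pixel in parallel, then present
        states.par_iter_mut().for_each(|s| *s = roc_update(*s));
        framebuffer = states.par_iter().map(roc_render).collect();
    }
    framebuffer
}
```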
I am calling this learning exercise "complete". It was really fun and helped me understand both where Roc excels and its current limitations. The clean separation of platform and application is a huge effing deal and changes the way you think about programming. I'm doubling down on Roc+Rust as my default stack.