I'm trying to isolate a bug that is causing the Roc app to crash using my experimental tui platform. It's tricky to isolate the bug as I haven't found a good way to capture information when Roc panics. Is there a good way to capture the information or debug something like this? I guess it may be complicated by the fact that I am capturing stdout to render the terminal UI. I feel like a debugger would assist, but I'm not sure that will work when I am also rendering text to the terminal.
Any assistance here would be most appreciated. I'm really close to having a working Todo app using Roc tui!
Model : {
    text : Str,
    todos : List Str,
    scroll : U16,
    bounds : { height : U16, width : U16 },
}
# Handle events from the platform
update : Model, Event -> Model
update = \model, event ->
    # newTodos = List.append model.todos model.text
    when event is
        KeyPressed code ->
            when code is
                Scalar char -> { model & text: Str.concat model.text char } # adding too many characters in the model's text also crashes
                Delete | Backspace -> { model & text: removeLastCharacter model.text }
                Left -> Model.updateScroll model Left
                Right -> Model.updateScroll model Right
                Up -> Model.updateScroll model Up
                Down -> Model.updateScroll model Down
                Enter ->
                    { model & text: "", todos: ["newTodos", "asd"] } # appending anything to todos crashes
                _ -> model
        Resize newBounds ->
            { model & bounds: newBounds }
        _ ->
            model
(Two screenshots attached: Screen-Shot-2022-11-28-at-21.28.06.png and Screen-Shot-2022-11-28-at-21.26.18.png.)
can you try the RUST_BACKTRACE=1 suggestion?
that should give a better stack trace for the first panic
most likely some invalid string makes it into the rust code
but it's useful to know where it comes from
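Since stdout is busy drawing the TUI, one way to actually see the panic (just a sketch; the hook, the panic.log file name, and where you install it are assumptions, not something the platform does today) is to have the Rust host register a panic hook that appends the message and a backtrace to a log file you can read from another terminal:

use std::fs::OpenOptions;
use std::io::Write;

// Hypothetical host-side setup: append panic info to a file instead of the
// terminal, since stdout is being used to render the TUI.
fn install_panic_logger() {
    std::panic::set_hook(Box::new(|info| {
        if let Ok(mut file) = OpenOptions::new()
            .create(true)
            .append(true)
            .open("panic.log")
        {
            // `info` carries the panic payload and source location.
            let _ = writeln!(file, "{info}");
            // force_capture() collects a backtrace even without RUST_BACKTRACE=1 set.
            let _ = writeln!(file, "{}", std::backtrace::Backtrace::force_capture());
        }
    }));
}

Calling install_panic_logger() early in the host's main and tailing panic.log from a second terminal keeps the panic output from fighting with the rendered UI.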
the second one is trickier, certainly on mac
but they might be the same issue in reality
otherwise I can try it on linux if you have a branch (and this is supposed to work on linux)
It should work on linux :grinning_face_with_smiling_eyes: I should definitely try that next time. RUST_BACKTRACE didn't give me much; it's just a segfault now. My current theory is that I have changed the Model without completely regenerating glue, and because my workaround for glue has the Model in the platform API, that may have changed the memory layout. I'll try regenerating it again tomorrow, and I'll have a go with my linux machine. :+1:
on linux, valgrind is a good way to figure out if something is wrong
It's on the tui branch. I will likely follow basic-cli's example and pull out into its own repo soon.
I think I am getting closer to finding this bug. Can someone more experienced with Rust have a look at the code below and let me know if there is anything wrong here? Rust analyzer is happy, but I think there must be something going on with the KeyCodes I'm getting from crossterm and then translating to RocStrs.
crossterm::event::KeyCode::Char(ch) => {
    let string = String::from(ch);
    let roc_string = roc_std::RocStr::from(&string[..]);
    crate::glue::KeyCode::Scalar(roc_string)
}
Scalar char -> { model & text: Str.concat model.text char } # if I remove the Str.concat then this works properly
It's strange behaviour, because it works well for 23 characters, and then crashes on the 24th. Without the Str.concat in the model you can type characters indefinitely. The other thing I was thinking is that maybe the text RocStr in the Model is only a fixed size and doesn't expand to fit enough characters in it.
that makes sense in a way: we store strings from 24 characters and up in a different way
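For illustration only, here is a sketch of that idea (not roc_std's actual code; the 23-byte limit is an assumption based on the behaviour described above, i.e. the small-string capacity on a 64-bit target): text that still fits inline needs no heap allocation or refcount, while anything longer moves to a refcounted heap allocation, which is exactly where a refcounting bug would start to bite.

// Sketch of the small-string idea only -- not the real roc_std::RocStr layout.
const SMALL_STR_CAPACITY: usize = 23; // assumed inline capacity on 64-bit

fn is_stored_inline(text: &str) -> bool {
    // Short strings live inside the value itself: no heap, no refcount.
    // Longer strings are heap-allocated and refcounted.
    text.len() <= SMALL_STR_CAPACITY
}

fn main() {
    assert!(is_stored_inline("12345678901234567890123")); // 23 bytes: stored inline
    assert!(!is_stored_inline("123456789012345678901234")); // 24 bytes: heap + refcount
}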
Cool, I'll dig into glue a bit more and see if I can figure it out.
I pulled your branch and am trying to take a look, but immediately, I hit a crash when trying to compile. Do you see this on your system?
cargo run -- examples/tui/hello.roc
Finished dev [unoptimized + debuginfo] target(s) in 0.49s
Running `target/debug/roc examples/tui/hello.roc`
thread '<unnamed>' panicked at '`pf.Elem.IdentId(109)` is inserted as a partial proc twice: that's a bug!', crates/compiler/mono/src/ir.rs:197:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Update: no idea why the compiler is panicking here. I just commented out the debug asserts and it compiles fine :shrug: (guess I could have just built in release mode)
As for the crash: it is a use-after-free bug. The text is being freed and then rendered. This is probably something that glue generation should more properly fix: before render, you want to increment the refcount of the entire Model. Currently the Model is passed into render and essentially consumed, so the next time it is used, the string is pointing to freed memory and hits bugs quickly.
With current roc, I think there are two simple fixes: change render to also return the model, or manually bump the refcount in the host before calling render:
unsafe {
    // Cloning bumps the refcount; forgetting the clone skips the matching
    // decrement, so the model's heap data stays alive across the render call.
    std::mem::forget((*model).text.clone());
    std::mem::forget((*model).todos.clone());
};
This is essentially a no-op that just increments the refcount and should otherwise have no major cost.
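As an analogy only (using std::rc::Rc in place of Roc's internal refcounting), this shows why forgetting a clone amounts to a refcount increment: clone() bumps the count, and mem::forget skips the decrement that dropping the clone would normally perform, so one extra reference keeps the data alive:

use std::rc::Rc;

fn main() {
    // Rc's strong count stands in for Roc's internal refcount here.
    let text = Rc::new(String::from("model text"));
    assert_eq!(Rc::strong_count(&text), 1);

    // clone() increments the count; forget() skips the matching decrement.
    std::mem::forget(text.clone());
    assert_eq!(Rc::strong_count(&text), 2);

    // The data now survives one extra "consuming" use, such as a render call
    // that takes ownership and frees its copy.
}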
Oh, I guess the third option would be to expose a function that, given a model, returns the model twice in a record. That would increment the refcount of all fields when called. Then just std::mem::forget one of the models and keep the other.
I think changing render to return the model is the way to go, because it means morphic can optimize away unnecessary refcounts better
but importantly, I don't think you want to change the user-facing API at all
just have an internal-only wrapper around render which calls the application-provided render, and then returns to the host a record containing both what was rendered as well as the original (unchanged) model it was given
that way it shouldn't try to decrement the refcount!
I don't think that would change anything. In both cases, when passing the model into the render function, we are holding a reference, which means we will increment the refcount of everything it contains. I don't think morphic would have any wins here. Am I missing something it can do?
Thank you for looking into this. I really appreciate it. I've got some things to work on now. For reference, I should have mentioned that I've moved it into a new repo, roc tui, but it's still the same issue on that branch.