Hello!
I'm working on adding an alternate record update syntax (issue #7097) and had a few questions.
The new syntax is `{ field: newVal, ..oldRecord }`. Will it replace the `{ oldRecord & field: newVal }` syntax, or is the new syntax considered an experiment? Is `{ ..oldRecord, field: newVal }` valid? Or even `{ field1: val1, ..oldRecord, field2: val2 }`?
Also, I wanted to state my understanding, based on the conversation from the #ideas > new dot-dot syntax for list interpolation, open types, etc. thread, that the new syntax is semantically equivalent to the previous `&` syntax.
And yeah, exact same semantics
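i.e. these two would mean the same thing (a quick sketch):

```roc
person = { name: "Ada", age: 36 }

# current record update syntax
a = { person & age: 37 }

# proposed syntax, same meaning
b = { age: 37, ..person }

# a == b == { name: "Ada", age: 37 }
```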
I think this could work like in JS, i.e. all combinations are valid:
- `{ field: newVal, ..oldRecord }` (`oldRecord` would override `field` if it has one)
- `{ ..oldRecord, field: newVal }` (`newVal` will always be assigned to `field`)
- `{ ..evenOlderRecord, ..oldRecord, field: newVal }`
- `{ ..evenOlderRecord, field: newVal, ..oldRecord }`

Also combinations of all sorts, similar to JS spread syntax: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax
I don't think we planned to follow JS here. That would be a whole other discussion on flexibility. The original discussion and plan was for it to work exactly like the current record update syntax. So `{ field: newVal, ..oldRecord }` will take `oldRecord` and overwrite `field` with `newVal`.
Roc currently is not flexible enough to support all of the JS record syntax above (I don't think we plan to make it that flexible, but it totally could be done). The big thing is that Roc does not allow expanding records (and expanding records is bad for perf), so I don't think we should support that.
I guess we could make it order-dependent overwriting as long as all of the records used have the exact same fields.
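e.g. (hypothetical; multiple spreads are not supported today, and all records share the same fields):

```roc
a = { x: 1, y: 2 }
b = { x: 3, y: 4 }

# under order-dependent semantics, later entries would win:
# { ..a, ..b }          # would be { x: 3, y: 4 }
# { ..a, y: 9, ..b }    # would be { x: 3, y: 4 }, since b's y lands last
```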
Would it be reasonable to only allow a single spread record in the initial version of this syntax, with order-independent semantics? If we format the AST so that the spread `..record` appears at the end of the expression, I don't think there will be any ambiguity over the effects of `{ field: newVal, ..record }`. But if updates are order-dependent, then `{ field: _, ..record }` now evaluates to `record`, which may be unexpected.
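To illustrate (a sketch; `record` here is hypothetical):

```roc
record = { field: 1 }

# order-independent semantics: explicit fields always win
# { field: 5, ..record }   # => { field: 5 }

# order-dependent semantics: later entries win, so once the formatter
# moves the spread to the end, it clobbers the update
# { field: 5, ..record }   # => { field: 1 }, which may surprise people
```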
Yeah, all sounds good
I ran into a situation where I want to change the type of a record, and the size and alignment of the new record are the same as the old record's.
I think it would be possible to implement a record update that is not bad for performance: if the size and alignment are the same, then Roc could manipulate the record in place.
If an in-place mutation is not possible, then I think the argument that a copy is so bad is a weak one. If it were true, then Roc should also not support `List.map`. If I see it correctly, `List.map` has the same performance problem: it has to do a copy if the old element size is not the same as the new one. For example in `0u8 |> List.repeat 100 |> List.map Num.toU64`.
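Spelled out (a sketch):

```roc
# each U8 is 1 byte and each U64 is 8 bytes, so the result cannot
# reuse the original allocation — the map has to build a fresh list
bytes : List U8
bytes = List.repeat 0u8 100

words : List U64
words = List.map bytes Num.toU64
```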
On the other hand, the ergonomic win of an update syntax is massive. If you have a record with many fields and want to change the type of one field, it is very annoying to write out all 20 fields.
The performance of writing out all fields is probably worse than an update syntax, because Roc probably cannot do the in-place mutation, even if the new size of the record is the same.
Maybe you could reconsider your argument and think about adding support for changing the type of a record with the update syntax?
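For example (a sketch, with three fields standing in for the twenty; the update line is hypothetical):

```roc
old = { name: "a", count: 3u8, flag: Bool.true }

# today: changing count's type means writing out every field
new = { name: old.name, count: Num.toU64 old.count, flag: old.flag }

# with a type-changing update it would shrink to (hypothetical):
# new = { count: Num.toU64 old.count, ..old }
```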
Oskar Hahn said:
> The performance of writing out all fields is probably worse than an update syntax, because Roc probably cannot do the in-place mutation, even if the new size of the record is the same.
You should consider this and the record update syntax the exact same thing. They both clone the entire record. They will only become in-place mutations if LLVM can recognize it is safe via pointer alias analysis.
If the type is changed, most likely the struct will be laid out differently in memory. I am fairly certain that will guarantee a copy and will never see reuse.
Personally, I think we should only allow updating the type if we also allow expanding records when doing updates. They are equivalent in my brain: they don't change just the data, they also change the type, which may require larger relayouts and such.
I assumed that was the discussion at hand as well
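e.g. (hypothetical; neither line is allowed today):

```roc
r = { x: 1u32 }

# expanding: adds a field, so the result is a different type and layout
# { y: 2u32, ..r }            # hypothetical

# type-changing update: same field set, different field type
# { x: Num.toU64 r.x, ..r }   # hypothetical
```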
in fairness, recreating individual records is usually cheap... it's only when doing it repeatedly that it adds up
Yeah, perf is generally not an issue until either the record gets big, the updates happen a ton, or the minor size paper cuts lead to more stack overflows. Some of these issues were seen with Task and closure captures (which are essentially records and get relaid out a lot).
yeah, we can probably relax some of these restrictions, but now seems like the wrong time to prioritize it
it requires significant changes to both type checking and code gen
I agree that this is not a priority. It is an annoyance, but not a blocker at all. With the current syntax you have to type more, but you can do anything.