Could you please check that the following would be an idiomatic way of formulating the hello world exercise in exercism? They require the tests to be separated from the code, so I put the code in a module, and the test in an app:
# HelloWorld.roc
module [hello]
hello = "Hello, World!"
and here's the corresponding test script:
# hello-world-test.roc
app [main] { pf: platform "https://github.com/roc-lang/basic-cli/releases/download/0.12.0/Lb8EgiejTUzbggO2HVVuPJFkwvvsfW6LojkLR20kTVE.tar.br" }
import pf.Task exposing [Task]
import HelloWorld exposing [hello]
expect hello == "Hello, World!"
main =
    Task.ok {}
They would just need to run roc test hello-world-test.roc, and off we go?
A message was moved here from #ideas > Learning Roc: with an AI mentor, and/or with exercism.org by Anton.
Could you please check that the following would be an idiomatic way
Looks good @Aurélien Geron :)
How are we tracking with the Exercism language track? I think this will be really nice to have, and am interested to know what the plan/way forward is.
@Aurélien Geron do you need help with this? Is it something we could split up into smaller tasks? Do the Exercism team want to see a minimum number of problems or something before they add Roc as a language track?
The first two exercises are almost ready.
@Luke Boswell , I was busy the last couple of weeks, but I found a bit of time to finally get the ball rolling. The hardest part was understanding how an Exercism track is structured and what's required to get it to work. Now that the structure is there, things should be much faster.
I've got two exercises ready, and the second one took me maybe 10 minutes to add.
Anyone can help by adding an exercise using the same structure as the other ones and then submitting a PR. The more exercises we have, the better.
That said, I still have one important task left to do: finish the roc-test-runner, basically a Docker container that the website can use to test the user's code. I've started it but it's not quite done yet.
Also, it would be nice to write a generator: that's a tool which can read the exercise specs (written in JSON) and convert them into Roc tests for each exercise. This will simplify writing the tests for each exercise.
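Such a generator can be quite small. Here's a rough Python sketch (the field names `cases`, `input`, and `expected` are assumptions about Exercism's canonical-data JSON, and `generate_roc_tests` is a hypothetical helper, not the real generator):

```python
import json

def generate_roc_tests(canonical_json, func_name):
    """Turn Exercism-style canonical test data into Roc `expect` blocks.
    Sketch only: assumes each case has `input` and `expected` fields,
    and that inputs/outputs are simple scalars."""
    spec = json.loads(canonical_json)
    tests = []
    for case in spec["cases"]:
        # json.dumps happens to produce valid Roc literals for strings/numbers
        args = " ".join(json.dumps(v) for v in case["input"].values())
        expected = json.dumps(case["expected"])
        tests.append(
            "expect\n"
            f"    result = {func_name} {args}\n"
            f"    result == {expected}"
        )
    return "\n\n".join(tests)

data = '{"cases": [{"input": {"phrase": "Portable Network Graphics"}, "expected": "PNG"}]}'
print(generate_roc_tests(data, "abbreviate"))
```

A real generator would also need to handle lists, records, and error cases, but the core transformation is just this mapping from JSON cases to `expect` blocks.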
@Anton , thanks for all your help. Could you please Approve this review so I can merge the PR?
https://github.com/exercism/roc/pull/3#pullrequestreview-2245342898
basically a Docker container that the website can use
In case you were not aware, these should be a good starting point.
@Anton, yes these are the ones I used. :+1:
I'm running into a little issue with the roc-test-runner: the Docker container does not have network access, so it cannot download the basic-cli platform. The only solution I can see is to add it to the Roc cache when building the Docker image. This involves downloading the .tar.br, checking its hash, and uncompressing it. I wish there was a roc download <url> -o /some/path command to make this easier. Is there?
There is no command like that, but you can wget the hello world example, do roc helloWorld.roc, and then all those steps are done as well.
Oh good point, thanks
I implemented a basic roc-test-runner in https://github.com/exercism/roc-test-runner/pull/4
Yikes, the tests passed on my machine, but they seem to be failing with this error on github:
thread 'main' panicked at crates/cli/src/lib.rs:571:10:
called `Result::unwrap()` on an `Err` value: DlOpen { desc: \"/tmp/.tmpKLdU2k/app.so: failed to map segment from shared object\" }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
That's really odd. Any idea what could cause this?
The test is compiled into a shared library in a temp folder and then opened by the roc compiler and executed
Not sure why that is failing, but that is the error source here
According to this SO answer, it might be a permissions issue: perhaps /tmp is not mounted properly. It should be mounted with sudo mount /tmp -o remount,exec.
I checked, and it looks like /tmp is writeable after all, so unless Roc is trying to write somewhere else, this should not be the problem.
I'm stumped, some help would be really appreciated. Here's the full RUST_BACKTRACE. It's not very helpful...
thread 'main' panicked at crates/cli/src/lib.rs:571:10:
called `Result::unwrap()` on an `Err` value: DlOpen { desc: \"/tmp/.tmp3qpj0O/app.so: failed to map segment from shared object\" }
stack backtrace:
0: 0x5600dfb5049b - <unknown>
1: 0x5600dedd53f0 - <unknown>
2: 0x5600dfb4bd03 - <unknown>
3: 0x5600dfb50234 - <unknown>
4: 0x5600dfb51d90 - <unknown>
5: 0x5600dfb51aaf - <unknown>
6: 0x5600dfb522ae - <unknown>
7: 0x5600dfb521b2 - <unknown>
8: 0x5600dfb50996 - <unknown>
9: 0x5600dfb51f14 - <unknown>
10: 0x5600deccae75 - <unknown>
11: 0x5600deccb383 - <unknown>
12: 0x5600df10c296 - <unknown>
13: 0x5600defaf330 - <unknown>
14: 0x5600defa2543 - <unknown>
15: 0x5600defa2563 - <unknown>
16: 0x5600dfb41a1a - <unknown>
17: 0x5600defb3595 - <unknown>
18: 0x7f31500221ca - <unknown>
19: 0x7f315002228b - __libc_start_main
20: 0x5600ded68a6e - <unknown>
21: 0x0 - <unknown>
I'm not sure how to reproduce this. Is this just spool up the Docker container? I can have a look later tonight on my dev machine.
Thanks for your help!
I'm not sure I understand what you mean by "Is this just spool up the Docker container"?
Basically, everything works fine when I run bin/run-tests-in-docker.sh on my machine: it builds the Docker image and runs the bin/run.sh script inside the container, and all the tests pass.
However, when Github Actions runs this same script (when you push anything to the main branch), the container seems to be built fine, but I get the error above while bin/run-tests-in-docker.sh runs, every time it calls roc test ...
I've checked a few things on Github Actions:
- uname -a = Linux 72fc66ec05dd 6.8.0-1012-azure #14-Ubuntu SMP Mon Jul 29 21:12:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
- whoami = root
- /tmp has read-write access.
- TMPDIR is undefined (setting it to /tmp fixes nothing).
A couple ideas:
- /tmp is writeable but there are some limitations, for example perhaps you can't delete anything in /tmp/foo?
- I tried adding --profiling to roc test to get more debug info in the RUST_BACKTRACE, but I saw no difference.
to get more info in the backtrace, you would need a debug build of the roc compiler
Thanks @Brendan Hansknecht. Do you have a URL I can download it from?
Or do I need to build Roc from source?
I think you would have to build from source
I don't think we publish it anywhere
Ok thanks, I'll try that.
Another possible difference between my computer and the Github Actions host is perhaps the amount of available RAM, but I doubt that running roc test would use up too much RAM.
I've just tried creating a directory in /tmp, adding a file, appending to it, deleting the file, then deleting the directory, and everything worked fine, so it doesn't look like a permission issue on /tmp after all.
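For anyone who wants to repeat that check, here's a small Python sketch of the same probe. Note that plain file I/O won't catch an exec restriction such as a noexec mount, which is what the SO answer was hinting at, and which file reads and writes cannot detect:

```python
import os
import tempfile

# Probe /tmp the same way: create a directory, write a file,
# append to it, read it back, then delete both.
probe_dir = tempfile.mkdtemp(dir="/tmp")
probe_file = os.path.join(probe_dir, "probe.txt")

with open(probe_file, "w") as f:
    f.write("hello ")
with open(probe_file, "a") as f:
    f.write("world")
with open(probe_file) as f:
    content = f.read()

os.remove(probe_file)
os.rmdir(probe_dir)
print(content)
```

If this passes but dlopen still fails to map a segment, the mount's exec flag (visible in /proc/mounts on Linux) is the next thing to check.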
Oh wow, I compiled Roc from source using the latest code on the main branch, uploaded the roc binary here, and made the test runner use that instead of the binaries on github.com/roc-lang/roc. I was hoping to see a detailed stacktrace, but instead... the test runner works just fine now!
I'm not sure I'm happy that it works now, because I'm no closer to understanding what's wrong with the official release. :sweat_smile: But at least this unblocks the roc-test-runner with a temporary hack.
As a side-note, the debug version prints deprecation warnings about backpassing because <- is used in Task.roc, so I need to either figure out how to fix that or filter out deprecation warnings. Perhaps I should upgrade to basic-cli 0.13?
@Brendan Hansknecht, what could cause the debug version to work and the release version to fail? Perhaps they're not exactly based on the same code? Or perhaps they were built on different platforms against different library versions? I built the debug version on Debian bookworm in a Docker container on my Macbook.
Honestly not sure. Maybe @Anton would have a guess
I wouldn't expect source or debug to make a difference for roc test. And I don't think the code in that part of the compiler has changed any time recently
@Aurélien Geron we just fixed the bug that has been holding the nightly releases up for a while now, and is why we've been talking about TESTING releases.
I think that means we should be good to put together a new nightly for both roc and basic-cli and that will include a lot of fixes too, which aren't in the current releases.
I imagine this will happen when Anton is back online sometime tomorrow evening our time, or maybe some time Monday/Tuesday and he can upload the binaries etc.
I would recommend just working with your release for testing (for now) and we should be able to switch to a nightly soon.
I can't see any obvious issues with the test runner -- it is strange that the latest nightly isn't working. But it sounds like the later versions of the compiler are ok.
Thanks @Luke Boswell , that's great news, I was going completely nuts! :sweat_smile:
The current test runner is really brittle, as any little change in the test output will cause it to break: for example, any deprecation warning or any formatting difference. It will be hard to maintain as Roc evolves. In the long run, it would be great to have a kind of plugin system as Richard suggested, as I expect this will be easier to maintain. But in the short term, is there a way I can exclude warnings from the test output? If not, it would be nice to be able to set the desired level of verbosity (the --verbose option is not that granular).
Great, the roc-test-runner tests in PR #4 finally pass using the latest Roc code (I compiled Roc from source and uploaded the debug and release versions here).
I just submitted PR #6 which adds a test generator (adapted from Python's generator). Now that we have a roc-test-runner, a test generator, and a couple working exercises, I think we're ready to start adding many exercises.
Note: the test generator is not user-facing, so I don't think it's an issue that it's written in Python; it was much faster to adapt the Python one than to write it from scratch in Roc. We can always port it to Roc in the future if we want.
The docker host is Ubuntu 24.04
We don't test yet with 24.04 so I recommend sticking to 22.04 for now. I know Luke tried it sometime and could not get llvm 16 set up with it.
Perhaps I should upgrade to basic-cli 0.13?
0.14 will be out soon with the backpassing warnings resolved
what could cause the debug version to work and the release version to fail?
Are you sure both were built using the same commit?
I think that means we should be good to put together a new nightly for both roc and basic-cli and that will include a lot of fixes too, which aren't in the current releases.
:check:
Thanks @Anton . Indeed both debug and release built from the latest source worked.
I just added three easy exercises: bob, darts, and difference-of-squares. If anyone wants to help me add more exercises (it's fun!), here's how:
- pip3 install -r requirements-generator.txt
- pick an exercise, e.g. all-your-base
- run bin/add-exercise all-your-base and follow the instructions
Btw, my solutions to the three exercises may not be idiomatic or optimal, please feel free to suggest improvements.
Heads up, you can use 'a' as a char literal that translates to its respective ASCII code. So you can implement isLower and whatnot more readably:
isLower = \c ->
    c >= 'a' && c <= 'z'
I don't think that's in the tutorial yet, it should probably get added
Cool, thanks Sam
I'm working on the reverse-string exercise. If it's an ASCII string, then I can just convert to a UTF8 List and reverse it, but I'd like the code to handle any Unicode strings, so I'm using roc-lang/unicode.
I'm using Grapheme.split, reversing the list, then using Str.joinWith "". Does this seem like the appropriate approach to you?
Side-note: there's a bug with Grapheme.split: it fails on empty strings (I filed issue #15).
Oh boy...this is where I have to remember my Unicode...I think reversing graphemes is incorrect. Cause sometimes multiple graphemes merge together to create a single printed character
Grapheme clusters...yay
I thought a grapheme was a single printed character, and multiple codepoints (aka unicode scalars) can combine to form one grapheme?
Personally, I would limit to ASCII for the simplicity of the exercise and teaching
Ah ok, it would definitely make my life simpler. You're probably right. I think Exercism lets you offer multiple solutions, so perhaps I could have a simple ASCII one and a more complex Unicode one?
Also, for our Unicode library, I'm not sure if a grapheme is the same as an extended grapheme cluster as defined here: https://www.unicode.org/reports/tr29/
If it is then reversing it would be reasonable.
But I think graphemes can merge to form a grapheme cluster. And a grapheme cluster is the closest thing to a printable unit (which is roughly what humans think of as a character)
Ah ok, I thought graphemes and grapheme clusters were the same thing. My understanding was:
- 101 = code point for the letter e
- e = a grapheme, composed of a single code point 101
- é = a grapheme, which can be represented either by a single code point 233, or by a couple 101 (e) + 769 ( ́ )
Perhaps the term "grapheme cluster" specifically refers to the graphemes that are actually composed of two or more code points. I'll look into this.
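That breakdown can be checked mechanically. A quick Python sketch (Python's string length counts code points, not graphemes, which makes the distinction easy to see):

```python
import unicodedata

e_precomposed = "\u00e9"  # 'é' as the single code point 233
e_combining = "e\u0301"   # 'é' as 101 ('e') + 769 (combining acute accent)

print(len(e_precomposed))  # 1 code point
print(len(e_combining))    # 2 code points, but one grapheme cluster
# NFC normalization maps the decomposed form to the precomposed one:
print(unicodedata.normalize("NFC", e_combining) == e_precomposed)  # True
# Naive code-point reversal splits the cluster: the accent ends up
# detached from its base letter.
print(repr(("ab" + e_combining)[::-1]))
```

This is exactly why reversing a string code point by code point breaks on non-ASCII text.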
"A grapheme cluster represents a horizontally segmentable unit of text, consisting of some grapheme base (which may consist of a Korean syllable) together with any number of nonspacing marks applied to it."
https://unicode.org/glossary/#grapheme_cluster
But I don't really understand
My mental model is still not fully clear here
Thanks. I think that in my example, 'e' is a grapheme base, and 'é' is a grapheme cluster if it is represented as 101 + 769
So I guess the question is, is a grapheme the same thing as a grapheme cluster then? Just different terms that map to the same thing, kinda, one being more specific about what it's composed of.
That's my understanding, yes.
Ok. Then yeah, probably good.
To be precise, I'd say:
That makes sense.
Thanks for pointing this difference out, I thought they were synonyms. :+1:
I guess I had initially thought of it as: a grapheme cluster is composed of graphemes, and a grapheme could be a base or an extra mark... but that is wrong... the naming just makes it confusing.
Your description looks to be correct from what I can tell.
The docs of roc-lang/unicode seem to define Grapheme as "Extended Grapheme Clusters". What's that "Extended" qualifier now!? :sweat_smile:
It just distinguishes it from legacy grapheme clusters
Not that I know the underlying details of the difference
ChatGPT to the rescue! :grinning_face_with_smiling_eyes:
image.png
It looks like Extended Grapheme Clusters (EGCs) are just an extension of the concept to things like flags, emojis, and more.
So back to the original problem of reversing a unicode string, it looks like Grapheme.split is the correct way to go (if I'm going to offer a Unicode solution on top of the basic ASCII solution)
Thanks Brendan! :thank_you:
Yes
Also, as I read more of the Unicode docs, I am realizing more and more how the terminology/definitions we are using (and ChatGPT is using) are subtly off or wrong. Nothing that matters for this example, but lots of underlying nuance and history.
A very long time ago (around 2008), a friend of mine wrote a massive Unicode book, and I helped him review it. There were thousands of important details everywhere. Language turns out to be incredibly complex and varied, and Unicode is a massive beast that is really hard to tame (sorting, right-to-left, capitalisation, marks, spacing, normalisation, and so much more).
One interesting note: a lot of Asian scripts, like those used in Thailand and India, require extended grapheme clusters to be parsed, so they aren't something special to emoji. In fact, the Unicode standard actually mentions that extended grapheme clusters should really just be called grapheme clusters, but they added "extended" to distinguish from old versions of the standard. Also, if you reverse Hindi text with grapheme clusters, I'm pretty sure it would be reversing syllables, which is kinda intriguing. To be fair, if you reversed the "characters" it would lead to a really strange mess of likely invalid sounds being chained together.
Anyway, I can stop side tangenting now.
Good to know, thanks!
I would like to contribute thoughts on this; I'm interested to know the answer. My mental model is that it's fine to just reverse, because the extended grapheme cluster includes all the relevant parts. But I'd definitely need to read up on it again.
Yeah, that's the conclusion we came to
I've submitted the exercise in PR #11, your feedback is most welcome.
Note that Exercism's test cases for this exercise do check unicode strings, so I had to use the unicode package. However, I included the reverseAscii function as well, for information.
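To illustrate what the split-reverse-join approach buys you, here's a rough Python analogue. It only handles combining marks, not full extended-grapheme-cluster segmentation like roc-lang/unicode does, and `reverse_preserving_marks` is a hypothetical helper, not code from any of the PRs:

```python
import unicodedata

def reverse_preserving_marks(s):
    # Group each base code point with the combining marks that follow it,
    # then reverse the groups: a simplified analogue of
    # Grapheme.split |> List.reverse |> Str.joinWith "".
    clusters = []
    for ch in s:
        if clusters and unicodedata.combining(ch):
            clusters[-1] += ch  # keep the mark attached to its base
        else:
            clusters.append(ch)
    return "".join(reversed(clusters))

decomposed = "abe\u0301"  # "abé" with a combining accent
print(reverse_preserving_marks(decomposed) == "e\u0301ba")  # True: accent stays on the e
print(decomposed[::-1] == reverse_preserving_marks(decomposed))  # False: naive reversal differs
```

The naive `[::-1]` reversal moves the combining accent onto the wrong base letter, which is the bug the grapheme-aware version avoids.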
Very roughly:
Graphemes are visual units that generally would be considered "characters" by people. Extended grapheme clusters are just an implementation detail of how graphemes are formed and extracted from a Unicode string.
I got the full unicode test suite passing, but I am very sure there are bugs in there. I left a crash so it's easy to find them. I was going to fuzz it but got distracted. It'd be great to write a Python script or something to throw a large body of unicode things at it and reduce it down to any errors. Shouldn't be too hard to resolve, but I'm pretty sure there's a few in there. Just a lot of edge cases to handle, and it's pretty mechanical.
Grapheme.split fails on empty strings for some reason. I can't figure out why by looking at the source code; what it does on empty strings looks perfectly fine, so perhaps it's something to do with Lists with capacity 0 or something. I filed an issue in unicode.
The good news is that all the tests for this exercise pass (except for the empty string, which I had to handle separately), so the unicode package seems pretty healthy overall.
Haha, that error is literally Grapheme.split just missing the base case. It always assumes there is at least one code point to be splitting out. (Next, [], []) is the state it reaches, but the state machine expects (Next, [cp], _). I think we just need to add a case something like:
(Next, [], _) -> acc
Yeah, that seems to fix it. Just the one extra line.
https://github.com/roc-lang/unicode/pull/16
Note: someone should probably go through and remove all the backpassing to get rid of warnings
Sure, here you go: https://github.com/roc-lang/unicode/pull/18 :smile:
Working on the gigasecond exercise, which involves manipulating dates, datetimes, and iso formatting. Is there a recommended package for this? I found https://github.com/Hasnep/roc-datetimes and https://github.com/imclerran/Roc-IsoDate.
Any recommendations?
Ok I've used Roc-IsoDate, which had all that was needed for the exercise. :+1:
We're up to 10 exercises now! :grinning:
@Aurélien Geron new release of https://github.com/roc-lang/unicode/releases/tag/0.1.2 for you
With that fix
Thanks Luke! :+1:
And we're up to 20 exercises now! :grinning:
https://github.com/exercism/roc/tree/main/exercises/practice
You'll find my solutions in the .meta/Example.roc file in each exercise folder. I'd appreciate it if you could take a look and give me some feedback on what could be improved to make it more idiomatic. Feel free to submit PRs too. Thanks! :thank_you:
Awesome work @Aurélien Geron ! I'm taking a look at adding the acronym exercise
Exercism was the way I originally learned functional programming so it is very cool to have a track in progress for Roc :smiley:
PR for Acronym: https://github.com/exercism/roc/pull/34/files
@Aurélien Geron it would be good to update the existing tests to start declaring the result as an intermediate definition, so that when there are failures they get printed out by the test runner for the user:
expect
    result = abbreviate "Portable Network Graphics"
    result == "PNG"
@Isaac Van Doren , I updated all the exercises to define the result on a separate line in the expect statement, the output is much more useful, thanks for the tip!
Awesome!
Status update
I've added support for Roc in:
module and import statements.
I've also implemented the test.yml workflow in exercism/roc, to automatically test all exercises when someone submits a PR.
Now I'm requesting a track logo.
After that, we should be ready to go live! :tada:
We now have 28 exercises, thanks to @Isaac Van Doren for contributing several of them! :+1:
For the track logo, I provided a link to the official logo, but they ask me to list the attribution requirements and the rights given for use of that logo. Does someone know the answer to this?
I think it was just made by @Richard Feldman
It would probably be a good idea to add the logo to https://github.com/roc-lang/design-assets and to add a LICENSE file to that repo. The roc-lang/roc repo uses the UPL license, but for assets like logos, it might be preferable to use something else, since you might want to avoid people just (ab)using your brand. I'm not a lawyer though.
Yeah, I've brought this up before, the logo is trademarked, so I'm not sure what the license should be
The Roc track is enabled for testing!
If you'd like to take a look and help me test before we launch it publicly, please let me know, I just need to know your github login and I'll ask the Exercism admins to add you to the testers.
Happy to test, I'm smores56
I can test also! isaacvando
Thanks Sam & Isaac, much appreciated!
I'm not sure how long it will take them to add you to the testers, but once they've done so you will have access to the Roc track at: https://exercism.org/tracks/roc
(until then you'll get a 404 error)
The online editor works nicely, and the test results look good:
image.png
It lacks syntax highlighting, however. I'll work on that. I've already added syntax highlighting for the static code (simply using Haskell syntax highlighting), but dynamic code is a different beast.
It would also be nice to have a few of the usual editor shortcuts, such as Ctrl-/ or Cmd-/ to toggle commenting.
And this is what a solved exercise looks like:
image.png
Happy to test: bsassoli
Thanks @bernardino!
@Aurélien Geron I'm curious if you have a perspective on how many exercises should be added to the Roc track right now. From a user's perspective, having as many as possible seems nice, but that also increases the maintenance burden when breaking changes are made in Roc. That being said, maybe the number of breaking changes will be low since none of the exercises run effects.
@Isaac Van Doren , I would just add as many as possible; as you said, I don't expect that many breaking changes (except perhaps in exercises that use external libraries like roc-lang/unicode or imclerran/isodate).
Plus, once you've upgraded a few exercises, you get the hang of it and it becomes much faster. For example, I recently upgraded roc-lang/unicode to use ? instead of <- for backpassing, and that took me a while, but after that I upgraded imclerran/isodate and it was much faster.
Just to be sure, which of the following is considered more idiomatic in Roc?
- Err "Input must be positive"
- Err InputMustBePositive
- Err NonPositiveInput
- Err NonPositive
Exercism's exercise specifications are represented in JSON, and the errors are specified like this:
{"error": "The input must be positive"}
and right now the Roc test generator converts this object to:
Err TheInputMustBePositive
Wdyt?
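For reference, the conversion the generator does can be sketched in a few lines of Python (`error_to_tag` is a hypothetical helper name; the real generator may differ):

```python
import re

def error_to_tag(message):
    # Split the error string on non-alphanumerics and PascalCase the
    # words to get a Roc-style tag name.
    words = re.findall(r"[A-Za-z0-9]+", message)
    return "".join(word.capitalize() for word in words)

print(error_to_tag("The input must be positive"))  # TheInputMustBePositive
```

One caveat of this mechanical approach: messages containing apostrophes or numbers can produce awkward tags, so a human pass over the generated names is still worthwhile.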
Sounds good!
Err InputMustBePositive is great
There’s definitely a preference for using tags rather than strings for errors, but I don’t think there’s much of a convention about how the tag names are determined
Screenshot-2024-08-29-alle-16.43.54.png
Awesome @bernardino, good job! :grinning:
@Sam Mohr , @Isaac Van Doren , @bernardino , I heard back from Erik at Exercism.org: he said that in order to add you to the testers, you need to create an Exercism account and link it to your github account. You also need to agree to be added to the Exercism organisation on github (you will get notifications by default, but you can silence them if you want).
Could you please do that if you're still interested in testing the Roc track, and let me know your exercism login name when it's done? Thanks! :thank_you:
I'm still interested! My exercism handle is @smores56, and my GH account is said to be linked in my account settings on their site. I don't see an invite to their GH org...
My exercism handle is isaacvando and it is linked with GitHub also. I also don't see an invite for the GH org, but if you mean I just need to acknowledge that I agree to being added to the org before they add me, I agree to be added :check:
Same here, my account is bsassoli and it's already linked with GH.
We now have beautiful syntax highlighting in the online editor! :heart_eyes:
image.png
The editor is CodeMirror 6, and we just reused the Elm syntax highlighter for now, to get started, which means that a few things are not highlighted properly (see the screenshot). The main issue is comments: Elm uses -- instead of #, so comments look bad.
I looked into creating a Roc plugin for CodeMirror, but it looks a bit tricky. Perhaps the simplest option would be to adapt the Elm plugin, but it was designed for CodeMirror 5, and things look quite different in CodeMirror 6. That said, Elm's plugin works, so there must be a way. Would anyone like to help with this?
Amazing progress :heart_eyes: thank you very much @Aurélien Geron and everyone else involved!
There’s definitely a preference for using tags rather than strings for errors
I think we use only a tag too often in Roc. In a case like this:
aliquotSum = \number ->
    if number <= 0 then
        Err OnlyPositiveIntegersAreAllowed
    ...
I think this is better:
aliquotSum = \number ->
    if number <= 0 then
        Err (BadNumber "Argument number needs to be larger than 0, but it was $(Num.toStr number).")
    ...
Often including some context makes it a lot easier to figure out the cause. You can't include context without a Str.
Yes that’s a good point
Not that it is too important to this discussion, but I think the most correct thing to push for is a tag with extra data and a function to convert it to a string
BadNumber WasNotPositive number
Then an error to string method
Avoids perf costs of converting to string if a user doesn't need it
Is descriptive
Can create the pretty error string if needed
It's also less effort at the error creation site
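Brendan's suggestion, sketched in Python for illustration (the names `BadNumber`, `reason`, and `error_to_str` are made up for this example):

```python
from dataclasses import dataclass

# Carry structured data in the error value, and only build the
# human-readable string on demand.
@dataclass
class BadNumber:
    reason: str  # e.g. "WasNotPositive"
    value: int

def error_to_str(err):
    # The string is produced here, at the boundary, not at the error site.
    if err.reason == "WasNotPositive":
        return f"Argument needs to be larger than 0, but it was {err.value}."
    return f"Bad number: {err.value}"

err = BadNumber("WasNotPositive", -3)
print(error_to_str(err))  # Argument needs to be larger than 0, but it was -3.
```

The structured value stays cheap to create and easy to test, and callers that never display the error skip the string formatting entirely.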
Hmm, I do like to have the full error message (in the code) right where it happened. That avoids needing to jump around between the two.
Avoids perf costs of converting to string if a user doesn't need it
Is perf so important in this branch? I could see you'd want to avoid it during a DDOS but on the other hand I would also like to log errors that happen on my webserver with a nice message.
I think it's a good technique in part because it makes the errors more testable
you have a semantic record of all the relevant info about what went wrong, which isn't brittle to changes in user-facing wording! :smiley:
Anton said:
Is perf so important in this branch?
I think it heavily depends. If the error is just expected to bubble to the top of the program, no big deal. If the error is in utf8 conversion from list to string and is likely to be used for more decisions, it definitely matters. Also, in the utf8 case, the string error would almost certainly be the wrong error message for the end user.
I think for me, it is more about the mindset of avoiding paper cuts all over the codebase. But I also value tags for the better UX they give a programmer interacting with the value.
I feel like BadNumber WasNotPositive number is equally readable to a programmer as the full error string
I got it...best error :rolling_on_the_floor_laughing:
Err (BadNumber Argument Number Needs To Be Larger Than 0 But It Was number)
Haha
Though in all reality, you could technically do this and it would be fast (not saying to actually do it):
Err (BadNumber "Argument number needs to be larger than 0, but it was" number)
That first one might even compile down to U64
I see no problems here
How about this?
For perf sensitive things:
Err (BadNumber number \nr -> "Argument number needs to be larger than 0, but it was $(Num.toStr nr)")
Otherwise:
Err (BadNumber number "Argument number needs to be larger than 0, but it was $(Num.toStr number).")
Seems to tick all the boxes
Not a style I would personally use, but it would be functional.
With only a small perf cost due to lambdasets not boxing
It's good at localizing the error message to the code, but passing around handlers for the code you're running is oddly coupling.
This isn't great if you have multiple different locations where the error is returned.
How so?
If I do this for missing keys/values in a dict, for example, then why the missing value is an error will change with context
Yeah, you can discard the original error message if you want and print or pass your own
ensureLoggedIn = \user ->
    if isLoggedIn user then
        Ok {}
    else
        errorCode = 403
        Err (NotLoggedIn errorCode \code -> "User was not authorized: $(Num.toStr code)")
If I give that to my users, they're probably not gonna be in the practice of writing a better error message, they'll say "good enough"
It's why Roc doesn't have an Option type, right? Force people to come up with a descriptive reason failure occurred
Though sometimes it doesn't matter, I just want something I can throw ? at
I just want to give code examples that imply that we should default to good "code health" practices
Yours is good, but I think it's more effort but better to expect the user to contextualize errors
I think it's more effort but better to expect the user to contextualize errors
I don't know, people are lazy :p
I've regularly written Stdout.line (Inspect.toStr err) when writing Roc code just for me, and I think that could be a very common approach.
I have a suspicion that keeping the error message close to the place where the error happened could turn out important in practice. Like keeping docs close to the code they're about.
I'm down to try it! Roc makes it much easier than usual to keep the two things together
Currently, basic-cli's Task {} [Exit I32]_ type incentivizes set-and-forget propagation; if this works out, maybe we change it to Task {} [Exit I32, StdoutErr Stdout.Err]
Interesting, yeah incentives are important :)
I definitely prefer just using tags. They play nicely with LSP when I'm handling them later: I can just hover and see the full error. If I've wrapped it at each level where I didn't handle it, it's easy to see the flow.
I also lean towards long and descriptive tag names too.
Yeah, there's not really a cost to long names, we have auto-complete
Sam Mohr said:
Currently, basic-cli's Task {} [Exit I32]_ type incentivizes set-and-forget propagation, if this works out maybe we change it to Task {} [Exit I32, StdoutErr Stdout.Err]
we actually changed to this to facilitate this workflow:
- main = ... and not annotate anything: write a script, and errors will automatically crash and print what the error was
- main : Task {} [Exit I32] - and now all errors have to be handled
and that one type annotation is something you'd normally add anyway when making a program robust
Oh, cool!
Is there somewhere that this behavior could be added for discoverability?
The basic-cli README is basically just a version summary list
I think it is really important to remember that this and most roc code today lives in the world of small scripts. Lazy errors are often what is wanted for small scripts with quick iteration cycles. I think this is correct and just printing the error tag is totally reasonable for that.
I agree
For more robust code, I think more information tags and enabling users to contextualize is important. I think that is what many libraries will do as they get more robust.
I'm just thinking about all of the Python tools out there that started as small projects, and in growth from rising popularity, never shook off their scriptness
I'm wondering if there's a way to prevent Roc programs from running into that by incentivizing app authors to not avoid handling errors
Very true, but I think it is at least somewhat less likely in Roc. I think there will be stronger cultural norms around error handling and propagation due to it being explicitly in the types and more common in general in functional languages.
Also, I don't think it is about the app authors. More the library authors for setting this norm
Yes, at least in Roc this is mostly an issue at the main function level; within the app, stuff should get handled.
Sam Mohr said:
Is there somewhere that this behavior could be added for discoverability?
We have a roc-lang/examples article on error handling. This could be discussed there.
It probably needs a bit of polish as the world has changed a bit since we wrote that.
Maybe we could make an issue and link to Richard's explanation as a TODO for including in the next revision of that example.
yeah my general thinking is:
Interesting discussion!
For what it's worth, I really like Err WasNotPositive number because:
I'm not a big fan of passing an English string or a function to convert the error to an English string. It's too verbose to be sprinkled everywhere and it's English-only: what about other languages?
More generally, English text seems like something you want to handle at the program boundary, much like converting a UTC date to a localized string for presentation to the user. Returning an error with an English string feels very much like returning a localized date from a library function: it should really return a UTC date and let the code at the user boundary handle the localization. If that makes sense...
Localization is another super important note for why tags over strings
Good extra context
I'd love your feedback on the solution to the minesweeper exercise. I'm not super satisfied with it, I'd love to see what more experienced Roc developers would do. Thanks!
https://github.com/exercism/roc/pull/62/files#diff-6880507fc33caf686b54948260b85fae05addd72ead91541dd73ed3d86988e9a
Hi @Aurélien Geron Just FYI I'm still getting a 404 on https://exercism.org/tracks/roc (but Erik did add me on gh)
For what it's worth, I really like Err WasNotPositive number because:
I'd like to be a little more descriptive: Err NumberArgWasNotPositive number. I think if we don't have a Str message we should encourage a proper description of what went wrong. This should make it easier for users to work around an error if the programmer did not convert the tags into nice error messages.
Aurélien Geron said:
I'd love your feedback on the solution to the minesweeper exercise. I'm not super satisfied with it, I'd love to see what more experienced Roc developers would do. Thanks!
https://github.com/exercism/roc/pull/62/files#diff-6880507fc33caf686b54948260b85fae05addd72ead91541dd73ed3d86988e9a
I'm no expert, but my approach was to find all the mines up front
sweep : List (List U8) -> Set (I32, I32)
sweep = \minefield ->
    List.walkWithIndex minefield (Set.empty {}) \mines, row, y ->
        List.walkWithIndex row (Set.empty {}) \rowMines, cell, x ->
            when cell is
                '*' -> Set.insert rowMines (Num.toI32 x, Num.toI32 y)
                _ -> rowMines
        |> Set.union mines

countNeighbourMines : Set (I32, I32), (I32, I32) -> U64
countNeighbourMines = \mines, (x, y) ->
    List.countIf
        [
            (x, y - 1),
            (x + 1, y - 1),
            (x + 1, y),
            (x + 1, y + 1),
            (x, y + 1),
            (x - 1, y + 1),
            (x - 1, y),
            (x - 1, y - 1),
        ]
        \pos -> Set.contains mines pos

annotate : Str -> Str
annotate = \str ->
    minefield = Str.split str "\n" |> List.map Str.toUtf8
    mines = sweep minefield
    List.mapWithIndex minefield \row, y ->
        List.mapWithIndex row \cell, x ->
            if cell == '*' then
                '*'
            else
                pos = (Num.toI32 x, Num.toI32 y)
                when countNeighbourMines mines pos is
                    0 -> ' '
                    bombs -> bombs + '0' |> Num.toU8
    |> List.keepOks Str.fromUtf8
    |> Str.joinWith "\n"
@Anton , there are currently 7 exercises that have test cases for errors. Here are the errors I propose to use for each of them:
- Err ValueWasNotFound (or just Err NotFound like List.findFirst?)
- Err NumberWasNotPositive
- Err SquareWasNotBetween1And64
- Err StrandsWereNotOfEqualLength
- Err NumberWasNotPositive
- Err InputWasNotAPlanet
- Err QuestionHadASyntaxError and Err QuestionHadAnUnknownOperation
We could add the input arg's value as an extra payload, but I think that in this context it will just complicate things for the users. Perhaps we could add an example or two in a few exercises just to show that it's possible (e.g., for QuestionHadAnUnknownOperation we could add the unknown operation).
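For example (a hypothetical sketch, not actual exercise code: parseOperation and the Plus/Minus tags are made up for illustration), attaching the unknown operation as a payload could look like:

parseOperation : Str -> Result [Plus, Minus] [QuestionHadAnUnknownOperation Str]
parseOperation = \word ->
    when word is
        "plus" -> Ok Plus
        "minus" -> Ok Minus
        # Attach the offending word so the caller can report it.
        unknown -> Err (QuestionHadAnUnknownOperation unknown)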
I could also add Arg after the arg name in each case except ValueWasNotFound, for example Err NumberArgWasNotPositive or Err SquareArgWasNotBetween1And64.
So just to be clear how do you want us to test the existing exercises? Once we pass the tests how should we update ? Or is it a fallback kind of thing where we signal only when we think something is off?
I like it with Arg :)
but I think that in this context it will just complicate things for the users
Can you share how it will complicate things?
When the inputs & expected output are short, then it's no big deal, but when it's strings (especially long or multiline strings), it will make the tests a bit hard to read. That said, the only exercises with strings and error test cases so far are hamming, space-age, and wordy, and none of them have very long strings for the error cases, so we're probably fine.
But suppose that the minesweeper exercise had an error test case. Right now, here's what a test case looks like, just imagine if the errors also carried a copy of the input:
# large minefield
expect
    minefield =
        """
        ·*··*·
        ··*···
        ····*·
        ···*·*
        ·*··*·
        ······
        """
        |> Str.replaceEach "·" " "
    result = annotate minefield
    expected =
        """
        1*22*1
        12*322
        ·123*2
        112*4*
        1*22*2
        111111
        """
        |> Str.replaceEach "·" " "
    result == expected
Oh actually it wouldn't be much different... so you're right! :smile:
At what point in the exercism tracks are the examples shown to the user? Is it after submission?
I assumed that they were available in the "community solutions" section after you submit a correct solution, but I checked and it doesn't seem to be there. In fact I haven't found the example solutions anywhere on the site, so I guess they're just used to test the tests (i.e., ensure that the tests can pass). I'll double-check with the exercism team.
bernardino said:
Hi Aurélien Geron Just FYI I'm still getting a 404 on https://exercism.org/tracks/roc (but Erik did add me on gh)
Erik from Exercism said: "All three are invited to the GitHub team. Once accepted, they should be able to access the track."
Can you please ensure you've accepted the GH invitation and try again?
It looks to me like we all are part of the GitHub team https://github.com/orgs/exercism/teams/roc
I also don't see any option to further confirm anything in GitHub
Can you please ensure you've accepted the GH invitation and try again?
I am pretty sure i did
Thanks for your feedback, I've asked Erik from Exercism (here: https://forum.exercism.org/t/enabling-the-roc-track-for-testing/12726/11)
@bernardino , @Isaac Van Doren , @Sam Mohr , @Anton : could you please review the pending exercism/roc PRs when you have a minute? There are 3 new exercises, and 1 PR to make the error handling more idiomatic: as discussed above, the errors now look like Err (NumberArgWasNotPositive -123) rather than Err OnlyPositiveIntegersAreAllowed or Err "Only positive integers are allowed".
Reviewed! :smiley:
Let me give a look as soon as I get chance :)
Hi everyone,
Earlier in this thread, we had a discussion about errors. @Anton advocated for errors like Err (NumberArgWasNotPositive -123):
I really like that so I implemented it for all exercises in PR #64.
However, @Isaac Van Doren argues (in this PR's comments) that a different approach to error handling by the user's solution should not count as the solution being incorrect.
I agree that some proportion of users will be frustrated by the fact that their solution fails because their error is named Err InvalidNumber and the expected error is Err (NumberArgWasNotPositive -123).
I personally love Roc's error management, IMO it's one of the great features of the language. I think it's great to show it off, and encourage users to return descriptive errors with all the payload an error handler may need.
But I'd like to know what others think! Here's a little poll:
/poll What should we test for in case of errors?
Minimal: expect result |> Result.isErr
Detailed: expect result == Err (NumberArgWasNotPositive -123)
Something else (please comment below)
I also noticed that some test cases can be removed using better type annotations. For example, there's no need to test whether a number is negative if it's U64. I'm submitting a PR now to add annotations to all exercises.
Note that in solution 1 (minimal) the users will start with annotations like this:
answer : Str -> Result I64 _
answer = \question ->
    crash "Please implement the 'answer' function"
while in solution 2 (detailed) they will start with this:
answer : Str -> Result I64 [UnknownOperation Str, SyntaxError Str]
answer = \question ->
    crash "Please implement the 'answer' function"
I like the more descriptive errors, but I feel like it may be too much for some people depending on what stage they're at. So my vote is keeping the expectations easier, but in the answer we show the gold standard.
That makes sense. Perhaps our approach could change depending on the exercise difficulty: for easy exercises, be lenient, but for medium or hard exercises, be demanding. Wdyt?
I added an option in the poll for this mixed approach.
Please review PR #65 which adds annotations to all exercises.
Yeah, I think we shouldn't push any specific error handling on the user, even for harder exercises.
This has one core caveat. If the interface needs to distinguish specific types of errors to be usable, we need to enforce the error union types
I think error handling is pretty subjective. For example, I don't think I would ever write a NumberArgWasNotPositiveError tag. It is way too verbose in my opinion.
I think it would be bad style to enforce that exact error
On the other hand, for something with actionable errors that the caller is expected to respond to, like Dict.get, error specification may be required.
That said, as long as you can be explicit in the problem definition about the exact errors the user is required to return, it isn't terrible either.
The poll looks like a tie, but there seem to be stronger feelings against the detailed approach than against the minimal approach, so unless the poll changes over the next few hours, I'll update the PRs to go for the minimal approach. Vox populi, vox dei! :grinning_face_with_smiling_eyes:
In fact I haven't found the example solutions anywhere on the site, so I guess they're just used to test the tests (i.e., ensure that the tests can pass). I'll double-check with the exercism team
Did they get back to you about this @Aurélien Geron?
I don't think I would ever write a NumberArgWasNotPositiveError tag. It is way too verbose in my opinion.
My main justification for this verbosity is that there is a significant chance that users of Roc software (in general) would only get a tag as error info, because the author did not want to spend time on nice error messages and just did Stderr.line (Inspect.toStr err).
@Isaac Van Doren , @Sam Mohr , @bernardino , @Anton : good news, Erik just fixed the issue that was preventing you from testing the Roc track, you should now be able to visit https://exercism.org/tracks/roc without getting a 404.
Anton said:
Did they get back to you about this Aurélien Geron?
Errrr... I had forgotten to ask, I just did, sorry about that.
No problem
Aurélien Geron wrote:
Isaac Van Doren , Sam Mohr , bernardino , Anton : good news, Erik just fixed the issue that was preventing you from testing the Roc track, you should now be able to visit https://exercism.org/tracks/roc without getting a 404.
Yup! Working! Thanks @Aurélien Geron
Screenshot-2024-09-03-alle-13.21.53.png
I’m in! :smiley:
Anton said:
I don't think I would ever write a NumberArgWasNotPositiveError tag. It is way too verbose in my opinion. My main justification for this verbosity is that there is a significant chance that users of Roc software (in general) would only get a tag as error info, because the author did not want to spend time on nice error messages and just did Stderr.line (Inspect.toStr err).
I think that is more a byproduct of the fact that Roc is mostly used for small apps and quick scripting today. As Roc apps grow this will become less and less common. I think we can in general agree that Stderr.line (Inspect.toStr err) is bad design for any program that wants to be robust and user friendly. I think as apps grow, no matter how specific the error for a single function, it will be too small in scope to be the right level of information to expose to the end user. As such, I don't prefer making it verbose. I think less verbosity hopefully pushes people in the right direction of needing to add context and color for their specific app.
These are all super short examples/challenge problems. I'm not sure what we are optimizing for, but I think either is more or less equivalent as long as there are solid doc comments for the user to go off of.
@Anton , Erik just confirmed that the .meta/Example.roc files are only used to ensure the test cases work well; they don't appear anywhere on the website.
Not necessarily a big app yet, but in my webserver I've been finding that having multiple layers of nesting can be really helpful. I've wrapped the error with a tag each time, so I can quickly narrow down what was happening that caused it.
Interesting. Are you saying that you wrap errors inside errors? For example, suppose an Err (A 123) is handled by some function, it could return Err (B (Err (A 123), 456))? And the next error handler might return Err (C (Err (B (Err (A 123), 456)), 789))?
Just on my phone rn, but I can grab an example later.
Ok, here is an example from https://github.com/lukewilliamboswell/roc-htmx-tailwindcss-demo
If I delete the SQL db so it doesn't exist and then startup the server and hit an endpoint. The server responds with a http 500, and prints the following to Stderr.
2024-09-03T23:13:00Z Get /
500 Server Error (SqlErrGettingSession (SQLError ERROR "no such table: sessions"))
Love it! Nested errors FTW!
Yeah, they can be great for layering on more context. Though have to be careful not to nest too deep
Also, SQLError is a slightly special case because it depends on the SQLite C++ code to get the actual message.
Would be nice if it could have a tag union in the inner most case, but we don't have proper wrapping currently. So just the c++ generated string.
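The wrapping itself is just a mapErr at each layer. A rough sketch of the pattern (the function names and the inner query are made up; Task.mapErr is assumed to be available, as in basic-cli):

# Each layer wraps the error it received in a new tag, so an
# unhandled failure prints the whole chain, e.g.
# (SqlErrGettingSession (SQLError ...)).
getSession = \sessionId ->
    querySessionRow sessionId # hypothetical lower-level Task
    |> Task.mapErr \err -> SqlErrGettingSession err

handleRequest = \req ->
    getSession req.sessionId
    |> Task.mapErr \err -> ErrHandlingRequest err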
I don't have a good example, but sometimes I'll add a new feature and forget to handle an error or something and it bubbles all the way up to main. It's super convenient to see the chain.
In fact I find I'll sometimes run the app just to see the error and remind myself what the next thing I need to implement is. I'll often just leave a Task.err TODO and it's pretty obvious when that happens.
I did some testing of the track and it is working very smoothly! I submitted exercises via the CLI and online editor. I also requested mentoring for one of the exercises to test that flow out. @Aurélien Geron Do you know what we need to set up to do mentoring?
Oh great, I hadn't thought of trying out mentoring. I'll give it a shot and get more info from the Exercism team if needed.
It looks like you need to sign up for mentoring: https://exercism.org/mentoring
I'll try that now
Oh... Roc's not there yet, that's what you meant!
I actually hadn't looked at the mentoring page yet, I just tried requesting mentoring on an exercise. Looks like I'm already signed up to be a mentor. Maybe we can't test that part until the track goes live
BNAndras from Exercism answered this here:
Until the track goes live, you can't sign up as a mentor for the public queue since there isn't one really. I'd try making direct mentoring request links and sending them to one another. Those are automatically generated under the code review tab of the exercise's page (Exercism as an example of where to look). The URLs look like https://exercism.org/mentoring/external_requests/<long_uuid>. You send the link, the other party opens it, and they can accept your request. That starts a mentoring session with you and from there on, it's no different from the usual mentoring workflow.
So here's a mentoring request from me for the collatz-conjecture exercise. :smile:
We've reached 40 exercises! :tada:
roc.svg
image.png
And we now have a Roc track icon. That's the last item on the To-Do list before launch, so I think we're very close!
Just tried out that mentoring link and it worked well!
Yes, thanks, I saw your feedback, it's great! I updated my solution, and commented back, could you please take a look?
It seems to me that everything is running smoothly, if you all agree, I think I'll ask Erik whether we can launch the track now. Unless perhaps we want to prepare a little bit of publicity around this? E.g., prepare a post for social media, ensure that roc-lang.org points to exercism.org, etc.
Yep, I just replied to your comment. Exercism is such a nice platform :smiley:
The track seems ready to launch to me! It would be good to have some publicity around it. Perhaps a tweet from the roc lang twitter account? Maybe someone should write a blog post?
@Richard Feldman
sure! any ideas for what I should post? (I haven't used exercism so I'm not sure what would be best to share!)
We could make an appeal for those who like learning by doing
anyone want to write up a draft?
Do we want to do a tweet or a blog post or both?
if anyone wants to write a blog post I'd be happy to share it!
I was thinking I might upload a short YouTube video. However I'm not sure I'll have time until the end of next week.
Synchronizing our posts will have much more impact, but I don't want to hold you up. WDYT?
I think it's helpful to wait and take our time. There's no rush, and it gives us time to resolve any issues that may pop up.
I think we could launch the track though. Like a soft launch I guess.
I can write a blog post!
I also like the idea of soft launching the track now and then coordinating posts next week
Sounds good. I confirmed with Erik, it's just a matter of flipping a switch in the config.json. I'll submit a PR now.
https://github.com/exercism/roc/pull/77
Approved!
It's live! Woohoo!
Everyone should be able to see the track now: https://exercism.org/tracks/roc
Awesome! :star_struck:
Looks great :)
Not fully relevant but felt worth sharing: https://exercism.org/blog/september-2024-restructure
Looks like exercism is hitting tough times financially.
Yeah I saw that yesterday, it is very sad. It seems like they’ve had financial difficulties for the past two years or so
Which is a good reminder to donate if you like the platform and are able!
Using the unit tests makes me wonder: are there any plans to add titles/descriptions to expects? Are there any plans to provide a way to run single tests without commenting out code?
I don't think there is anything concrete currently around that
I know that expects print the comment before them
That kinda works for at least identifying the tests better
Alex Nuttall said:
any plans to add titles/descriptions to expects? Are there any plans to provide a way to run single tests without commenting out code?
for the titles/descriptions, just an optional comment right before the test that shows up in the test failure.
for the run single test, we haven't really talked about it but I think we should do that. Maybe expect.only or something like that?
expect.only would be for a test that runs once for a suite, I presume? I think that the desire to "run single tests without commenting out code" implies they want to filter on the name of the test, like cargo test foo
Which we could enable by allowing substring matching on the comments above tests
so I like being able to say in the actual code "only run these tests" and/or "skip these tests" and then have the test runner say "test run incomplete" when they're all passing, and then list the ones that were skipped and exit with a nonzero exit code
that way you get whatever level of control you want, but also you can't accidentally check in a bunch of skips because it'll fail the build
Okay, sure. You usually want to run:
- a single test, which expect.only would allow in the way you're describing
- roc test path/to/Module.roc
- roc test main.roc
So I think adding what you're describing would be a good addition. Maybe only allowed for top-level expects?
I think separately it is really useful to be able to select/skip tests via the cli. Can really speed up the development loop to minimize the tests that run.
As much as doing the testing via a CLI is fast, I think doing it in the editor is almost as fast nowadays
But yes, doing it via file editing is more money out of the weirdness budget
oh I think both are separately reasonable
Comments as descriptions works well enough, but I feel it's almost like introducing a decorator when (optional) descriptions should just be first class citizens in the test framework
At work I use Jest, so .only and xtest(... are second nature to me. Substring matching is good, but presumably wouldn't work well for tests without descriptions, and there is a bit of mental overhead in selecting and typing the right substring.
It seems like a string argument a la Zig would be what you're looking for, then?
expect "two plus two equals four"
    sum = 2 + 2
    sum == 4
It doesn't seem distracting or out of place to me. Not sure if we should require it, but requiring it would incentivize giving nominal context to tests.
I feel like I remember Richard commenting that he didn't like required names because oftentimes names are redundant with the test content and just extra noise.
Since Roc loves tags, why not use optional expect tags instead of strings?
expect [Math, Integers]
    sum = 2 + 2
    sum == 4
then:
roc test --only Math
And we could probably easily add a roc test --list-features that would let users discover what features to filter on
There are also no spaces in tag names, which makes them a bit more cleanly searchable from a CLI perspective
I've submitted PRs for 4 new exercises: if someone can review them, we'll reach 50 exercises on exercism.org! :big_smile:
General problem design question: do we have the ability to choose the input type for a Roc exercise on Exercism? As a direct example: https://exercism.org/tracks/roc/exercises/resistor-color
It has a Str as an input. In Roc, I think a tag would be the most reasonable input for that problem. It would be the best practice suggestion.
Yep we can! You can use any type as long as you can transform the json that describes the test cases into the correct type in the tests. I agree that that exercise should use tags
Well... I think tags don't currently have encoding support
We use jinja templates to generate the tests so that doesn’t matter here
Okay great
Yes, in retrospect I should have used tags, good catch. I'm still quite new to Roc, so I hope I didn't make too many mistakes like this one.
One or two exercises using strings isn't that bad, I feel like. It teaches people to parse. But the function should have to return a Result in this case.
I'm still quite new to Roc, so I hope I didn't make too many mistakes like this one.
Nothing to worry about! Plus, you pretty much willed the track into existence overnight which is fantastic :big_smile:
Hi Roc'ers!
I just submitted a draft PR for a new exercise about error handling. For this exercise, there was practically no guidance from Exercism, it's basically up to the track authors to come up with the idea, as long as it covers error handling.
Could you please take a look and tell me what you think?
Thanks! :thank_you:
I'll check it out :)
Apparently it's currently not possible to use the try operator ? inside an expect statement; it crashes the compiler. I filed issue #7081. If anyone can take a look at it, I would really appreciate it, as it would make the Exercism test cases much easier to write and nicer to read. Thanks! :thank_you:
For example, I'd like to write:
expect
    white = create? "B4"
    black = create? "F6"
    result = white |> queenCanAttack black
    result == Bool.false
But I have to write:
expect
    maybeWhite = create "B4"
    maybeBlack = create "F6"
    result =
        when (maybeWhite, maybeBlack) is
            (Ok white, Ok black) ->
                white |> queenCanAttack black
            _ -> crash "Unreachable: B4 and F6 are both valid squares"
    result == Bool.false
As you can see, it makes the test case quite long and hard to read. It's unclear that the tested function is queenCanAttack.
Isaac and I are pushing new exercises like there's no tomorrow! :grinning_face_with_smiling_eyes:
We've just overtaken Perl, haha!
Adding exercises is definitely addictive :big_smile:
We discussed a blog post or article to launch the Exercism track. Just wondering how that plan is progressing? @Isaac Van Doren I'm guessing you're having fun smashing out exercises... :smiley:
I'm about 70% done with the writing itself, but I have procrastinated on it. Once I finish that I need to turn it into HTML. Hoping to finish it by this weekend.
@Aurélien Geron are you still thinking of recording a YouTube video?
Sadly I'm not sure I'll be able to do this right now, but perhaps in a few weeks.
Alright, no worries either way!
Here's my Exercism blog post :smiley:
https://isaacvando.com/roc-exercism-forth
@Richard Feldman are you still up for sharing the blog post out?
for sure! :smiley:
I'll tweet about it today
Sweet!
Great post Isaac, super clear and convincing. And thanks for the kind shout-out! :folded_hands::blush:
Shared on Reddit roc_lang
And also r/programming
Aurélien Geron said:
Great post Isaac, super clear and convincing. And thanks for the kind shout-out! :folded_hands::blush:
I agree, it's a nicely written article. I like how @Isaac Van Doren you have highlighted the error handling in a really concrete way. Not just thrown a bunch of terminology out there, but really walked the reader through why it's useful.
Thanks folks! Glad to hear it turned out well :grinning_face_with_smiling_eyes:
Wow, 3.4k views already apparently from r/programming
I'm guessing that's how many times it's appeared in someone's feed and not clicks.
Yeah there have definitely not been that many page views yet haha
Next time I’ll have to create some kind of image for the link preview so it pops more
I could add something simple in an hour or so but Reddit probably wouldn’t regenerate the link preview either way
I wonder if there's some kind of service or webpage that evaluates your page's SEO and gives it a score or makes recommendations, like adding an image or something. :thinking:
There probably is
Lighthouse gave me a 100 on the page for SEO so there actually isn’t anything that can be improved :laughing:
Screenshot 2024-09-28 at 8.43.06PM.png
Now with a preview :smile:
Isaac Van Doren said:
Here's my Exercism blog post :smiley:
https://isaacvando.com/roc-exercism-forth
I came here to comment that Forth is not quite that simple to parse by splitting on spaces; you might be better off preparing a function named nexttoken which accepts a delimiter as a parameter. For example, in Forth the word S" needs to pair with ", and between these symbols there can be spaces, as expected.
I think the Exercism problems are greatly simplified. That would definitely be important for a robust Forth interpreter though.
Right, the parser in my solution is definitely not robust at all, I just went with a naive approach that is enough to solve the exercise
We're up to 75 exercises. There are 7 more exercises waiting to be reviewed, if anyone has a minute or 2. :thank_you:
One of them (pov) is blocked by issue 7108.
Another (robot-name) is blocked by a core dump in GitHub Actions just after downloading roc-random. Everything works well on my machine, and I'm not sure how to debug this:
[...]
double free or corruption (!prev)
bin/verify-exercises: line 31: 3695 Aborted (core dumped) roc test "${test_file}"
Error: Process completed with exit code 134.
<strike>It's trying to run roc test download-dependencies.roc.</strike>
Oh no it's not, it's actually trying to run ./bin/verify-exercises robot-name (which runs roc test exercises/practice/robot-name/robot-name-test.roc after replacing RobotName.roc with .meta/Example.roc). This works fine on my machine (both on my MacBook directly, and inside a Docker container), so I'm not sure how to debug this.
Oh actually it works well on my MacBook directly, but it does not work inside the Docker container (I was running the wrong test). I'm getting this error:
Verifying robot-name exercise...
corrupted double-linked list
./bin/verify-exercises: line 31: 23 Aborted (core dumped) roc test "${test_file}"
New release of roc-random, I upgraded the examples to basic-cli 0.15.0 and merged a PR I hadn't noticed from @Fabian Schmalzried (thank you and sorry I missed it).
I did the List Ops exercise. I understand why this is a good exercise for most functional languages, but I think it is not a good fit for Roc.
The assignment is to write List functions like concat, filter, map, fold etc., but you are only allowed to use List.prepend from the List module.
This makes sense for a language where a List is a linked list. By writing these functions, you learn how the List functions work under the hood.
But in Roc, Lists are not linked lists. They are arrays/vectors (or however you want to call them). When you call List.prepend, every element in the list has to be copied. This is not what the List functions do in Roc under the hood.
I would propose that the exercise is changed so that only List.append is allowed. The students should also be recommended to use List.withCapacity.
What do you think?
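For illustration, a map implemented in the proposed style might look something like this sketch (myMap and mapHelp are made-up names; it assumes List.withCapacity, List.append, List.len, and list patterns with `.. as` are all available, as in current Roc):

myMap : List a, (a -> b) -> List b
myMap = \list, fn ->
    # Pre-allocate the result so the appends never reallocate.
    mapHelp list fn (List.withCapacity (List.len list))

mapHelp : List a, (a -> b), List b -> List b
mapHelp = \list, fn, acc ->
    when list is
        [] -> acc
        [first, .. as rest] -> mapHelp rest fn (List.append acc (fn first))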
Great suggestion, IMO!
@Oskar Hahn , good point! Would you like to submit a PR to update .meta/Example.roc
and .docs/instructions.append.md
for the Roc-specific instructions? If you don't have time I'm happy to do it, no pb.
I created the PR: #128
A message was moved from this topic to #ideas > Exercism: Learning Track by Luke Boswell.
All open PRs are now approved and/or merged :smiley:
I also opened a PR to bump the roc-random version in the test runner
It looks like we've implemented a few deprecated exercises: binary, octal, hexadecimal, and accumulate. The first three have been replaced by all-your-base and the last was replaced by list-ops. I've submitted PR #138 to mark these exercises as deprecated: this will remove them from the UI except for people who have already done these exercises. I've also submitted PR #139 to output a warning when trying to add a deprecated exercise.
So the bad news is that we're losing 4 exercises, but the good news is that there are 8 new exercises ready to be reviewed by anyone who has a few minutes to spare! :sweat_smile:
Status update:
image.png
I just realized there's a build page with some stats: it looks like there are 125 Roc students so far.
And we reached 100 exercises, woohoo!!! Almost as many as students, haha! :sweat_smile:
STOP WRITING, WE'VE PEAKED
4005 submissions, wow!
Yes, that's impressive, over 30 per user on average. I'm not sure what this counts: if you try and fail, is that a submission? If so, it's less impressive, but it's still nice to know people are playing with Roc.
Aurélien Geron said:
if you try and fail, is that a submission?
I think that would count as a submission. I’ve submitted multiple times on some exercises, as I make iterative improvements.
But yeah still cool!
I've submitted PR #163 to add the zebra-puzzle exercise. My solution works, but I'm not too happy with it: it feels really long and boring. It would be great if other people gave this exercise a shot; perhaps there's a more elegant way to solve it that I missed. Here are the instructions:
Your task is to solve the Zebra Puzzle to find the answer to these two questions:
- Which of the residents drinks water?
- Who owns the zebra?
## Puzzle
The following 15 statements are all known to be true:
1. There are five houses.
2. The Englishman lives in the red house.
3. The Spaniard owns the dog.
4. The person in the green house drinks coffee.
5. The Ukrainian drinks tea.
6. The green house is immediately to the right of the ivory house.
7. The snail owner likes to go dancing.
8. The person in the yellow house is a painter.
9. The person in the middle house drinks milk.
10. The Norwegian lives in the first house.
11. The person who enjoys reading lives in the house next to the person with the fox.
12. The painter's house is next to the house with the horse.
13. The person who plays football drinks orange juice.
14. The Japanese person plays chess.
15. The Norwegian lives next to the blue house.
Additionally, each of the five houses is painted a different color, and their inhabitants are of different national extractions, own different pets, drink different beverages and engage in different hobbies.
To ensure your solution is correct, check that the Norwegian drinks water, and the Japanese owns a Zebra.
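For anyone wanting to see how small the search actually is once the constraints prune it, here is a minimal brute-force sketch of the puzzle in Python (chosen purely for brevity here, not as a suggestion for the track). The attribute names, loop ordering, and early `continue` checks are my own choices, not taken from the PR:

```python
from itertools import permutations

def solve():
    """Return (water drinker, zebra owner) for the classic Zebra Puzzle."""
    orderings = list(permutations(range(5)))  # house positions 0 (leftmost) to 4
    for (red, green, ivory, yellow, blue) in orderings:
        if green != ivory + 1:                      # 6. green immediately right of ivory
            continue
        for (english, spaniard, ukrainian, norwegian, japanese) in orderings:
            if english != red:                      # 2. Englishman in red house
                continue
            if norwegian != 0:                      # 10. Norwegian in first house
                continue
            if abs(norwegian - blue) != 1:          # 15. Norwegian next to blue house
                continue
            for (coffee, tea, milk, oj, water) in orderings:
                if coffee != green:                 # 4. coffee in green house
                    continue
                if ukrainian != tea:                # 5. Ukrainian drinks tea
                    continue
                if milk != 2:                       # 9. milk in middle house
                    continue
                for (dog, snails, fox, horse, zebra) in orderings:
                    if spaniard != dog:             # 3. Spaniard owns the dog
                        continue
                    for (dancing, painting, reading, football, chess) in orderings:
                        if snails != dancing:       # 7. snail owner dances
                            continue
                        if yellow != painting:      # 8. painter in yellow house
                            continue
                        if abs(reading - fox) != 1:     # 11. reader next to fox
                            continue
                        if abs(painting - horse) != 1:  # 12. painter next to horse
                            continue
                        if football != oj:          # 13. football player drinks OJ
                            continue
                        if japanese != chess:       # 14. Japanese plays chess
                            continue
                        who = {english: "Englishman", spaniard: "Spaniard",
                               ukrainian: "Ukrainian", norwegian: "Norwegian",
                               japanese: "Japanese"}
                        return who[water], who[zebra]

print(solve())  # → ('Norwegian', 'Japanese')
```

Because each constraint is checked as early as possible, only a handful of the 120⁵ raw combinations are ever visited, and it finishes in milliseconds. The same search structure should translate to Roc with nested `List.walkUntil` or recursion over permutations, though the result may still be long-winded, which seems to be the heart of the complaint above.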
Reading the instructions, I felt like I had to define tags for each field (`Drink`, `Animal`, `Color`, `Activity`, and `Nationality`), and I'd still love to see a solution that does this elegantly, but it made my code ugly, so I dropped the idea.
Thanks for your help!
This feels like a problem meant for Prolog
I'm updating the exercism repo to use the new splitting functions now, mentioning so that someone else doesn't do it at the same time.
I've done a LOT of exercism: Elixir, Rust, Smalltalk, Zig, and more. I don't know if Exercism is a great experience for a language like Roc that is still SO in flux.
It really has not been a problem so far. It helps that the exercises are all pure so the amount they are impacted by language changes is smaller. The biggest downside I see is that public solutions may become out of date quickly. It seems like a lot of people have enjoyed using the Exercism track and it has been a great resource to point people to, so I’m glad that we have it.
Last updated: Jul 05 2025 at 12:14 UTC