Precedence inside a function call - raku

Using the defined-or operator ( // ) in a function call produces the result I'd expect:
say( 'nan'.Int // 42); # OUTPUT: «42»
However, using the lower-precedence orelse operator instead throws an error:
say( 'nan'.Int orelse 42);
# OUTPUT: «Error: Unable to parse expression in argument list;
# couldn't find final ')'
# (corresponding starter was at line 1)»
What am I missing about how precedence works?
(Or is the error a bug and I'm just overthinking this?)

I'd say, it's a grammar bug, as
say ("nan".Int orelse 42); # 42
works.

TL;DR My super useful naanswer (not-an-answer / non-authoritative answer / food for thought) is it might be a bug or it might not. :)
Other examples:
say(42 and 42);
say(42 ==> 99);
yield the same error.
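A workaround sketch (the outputs are what I'd expect given that the parenthesized form above works; not something I'd call authoritative): give the loose operator its own expression parens inside the call, or use the tighter // instead:
say((42 and 42));            # OUTPUT: «42»
say(('nan'.Int orelse 42));  # OUTPUT: «42»
say('nan'.Int // 42);        # OUTPUT: «42»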
What am I missing about how precedence works?
Perhaps nothing. Perhaps it will be desirable and possible to fix the grammar so these function-call-arg-list-signifying parens determine precedence just like plain expression parens do.
If so, perhaps fixing it would best wait, or perhaps realistically must wait, until when or after RakuAST lands (6.e?). Or perhaps even later, if/when grammar cleanup/slangs lands (6.f?).
Or perhaps it's going to always stay as it is for reasons such as good usability (despite the initial "huh?") and/or expediency and/or single-pass parsing and/or whatever.
I've dug a little to see if I could find relevant commentary. Here are some (juicy?) bits:
the OPP is a bit more complex than a standard binary-operator OPP
(from a comment on #perl6)
If you scroll backwards from Larry's comment you'll see he said this in the context of Raku's extraordinary seamless parsing (no delimiters introduced) in a single pass of nested sub-languages that each can have arbitrary grammars.
(Btw, one thought I had: did std parse say(42 and 42) fine? I'm not sure if there's a running std anywhere these days.)
While we do have complete control of stock Raku, I'm not convinced there's anything compelling about bending over backwards to fix every wrinkle of this sort (foo(... op ...) in this case) when the general case (..... where the middle ... inside the outer pair of .s has arbitrary syntax) means we'll be hitting limits in how "perfect" it can all be when there's a huge amount of anarchic language / syntax mixing going on in userland/module space, as I anticipate will emerge in years to come.
So, imo, if it's reasonably easy to fix, without unduly cramping or burdening user slang freedom, great. If not, I think the current situation is fair enough (though perhaps it'll be desirable, viable and reasonable to improve the error message).
Perhaps consider the foregoing in combination with:
Raku borrows many concepts from human language ...
(from the doc)
in combination with:
☞ Self-clocking code produces better syntax error messages
(from Seeing Wrong Right)
in combination with:
Break that clock and your error messages will turn to mush
(from a mailing list comment)
But then again:
Please don't assume that rakudo's idiosyncrasies and design fossils are canonical.

Do you mean this, maybe...?
> say ( NaN.Int orelse 42 )
42
since
> say( NaN.Int orelse 42 )
===SORRY!=== Error while compiling:
Unable to parse expression in argument list; couldn't find final ')' (corresponding starter was at line 1)
------> say( NaN.Int⏏ orelse 42 )
    expecting any of:
        infix
        infix stopper
I would tend to agree with @lizmat that there is a grammar bug in the compiler.

Related

Raku: how do I make an argument optional, have a default, with a where test?

Can't find a way to get this to work:
sub triple(Str:D $mod where * ~~ any @modifiers = 'command' ) { }
If I don't pass in an argument, I get an error:
Too few positionals passed; expected 1 argument but got 0
With a question mark after $mod:
sub triple(Str:D $mod? where * ~~ any @modifiers = 'command' ) { }
I get:
Constraint type check failed in binding to parameter '$mod'; expected anonymous constraint to be met but got Str (Str)
Looks like it may have been a precedence problem. This works:
sub triple(Str:D $mod? where (* ~~ any @modifiers) = 'command' ) {}
TL;DR You've identified the problem in your answer -- precedence -- and provided a solution. This answer covers what happened; why the precedence issue arises; why Raku's grammar/parser didn't just get it right; and lists some solutions, a couple of which I'll start with.
Instead of:
sub triple(Str:D $mod? where * ~~ any @modifiers = 'command' ) { }
I suggest moving the any and writing one of these:
sub triple(Str:D $mod? where * ~~ @modifiers.any = 'command' ) { }
sub triple(Str:D $mod? where @modifiers.any = 'command' ) { }
What happened
The = ... at the end of the where clause is parsed as an assignment (to @modifiers) instead of as a default value (for $mod):
@modifiers = 'command' is evaluated, overwriting whatever values @modifiers had.
The any creates a junction with one element ('command').
Now the only argument that triple will accept is 'command'.
Why the precedence issue arises
Raku's grammar is designed to have nice ergonomics. This includes design details that reduce the need for parens and braces. Overall these design details yield a big net win. But there are wrinkles, and you've encountered one.
Raku lets one write where ... to specify a where clause without requiring that one uses an explicit braced lambda ({...}) for the ... bit. One can even create a lambda using just a *. Nice! But where does the lambda end? If you use explicit braces, it's clear. If not, what determines the end of the lambda?
More generally, shouldn't the parser just know that the = in = 'command' is not part of any lambda? That it should instead just automatically finish a where clause if there is one before parsing the = ... part? That the = ... should always be parsed as a default value for a parameter?
One can easily see the ambiguity (once one's attention is drawn to it), and so does/could Raku's grammar/parser. It just needs to resolve that ambiguity either by rejecting such syntax, demanding the coder explicitly disambiguates (eg with parens, as you've done), or by choosing which way to parse.
What Raku's grammar/parser does in the face of the ambiguity is choose. And it chooses wrong. (Unless of course one wanted it to be an assignment of some value on the ='s left, not a default for a parameter, though that's gotta be pretty unlikely.)
Why Raku's grammar/parser didn't just get it right
Why doesn't the parser reject this code as too ambiguous, or prove smart enough to choose the "it's a default" interpretation? It certainly could -- the Raku grammar/parser feature is Turing complete, equivalent in power to the unrestricted grammars category of the Chomsky hierarchy -- so why doesn't it just get it right?
In a nutshell, it gets it right the right amount, at least imo. But that's subjective, oddly worded, and vague, so it's probably not a satisfactory summary. So I'll try to provide a bit more detail in the hope it's more informative.
Every Raku design decision is discussed openly, and there are searchable public records of essentially all of it. To dig into these discussions I recommend starting out with Liz++'s awesome IRC log service, and within the numerous channels listed, focusing on the #perl6 logs that ran from 2005 thru 2019 or so.
Although I've been around for many of the Raku design discussions of the last 20 years, I don't have a good recollection of all the discussions surrounding this decision about the ambiguity of meaning of a = ... at the end of a where clause, and what to do about it. And I haven't myself recently done the digging I suggest; for now I'll leave that for any interested readers. Instead I will outline what I think will have been some contributing factors:
Single pass parsing
Raku's "braided" approach to language design requires single pass parsing.
Longest parsing orientation
Longest Token Matching is all but essential for user definable braiding (see link in previous point) to be truly viable. LTM reflects a general principle that humans naturally tend toward recognizing the longest "token" (within reason of course). That is to say, if one sees $100, it strikes one cognitively as a hundred dollars, not as a dollar sign, a 1, a 0, and then another 0.
A similar deal applies to parsing of a string of tokens (again, within reason); if it weren't for the fact one learns to think of = ... as specifying a parameter's default, the @modifiers = 'command' would naturally be read as being an assignment into @modifiers.
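A tiny sketch of LTM in action (my own illustrative grammar, not from the design discussions):
grammar G {
    token TOP    { <number> | <word> }
    token number { \d+ }
    token word   { \w+ }
}
say G.subparse('42abc');  # 「42abc」 word => 「42abc」 -- <word>'s declarative prefix matches 5 chars, <number>'s only 2, so <word> wins despite being listed second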
Limited backtracking
Backtracking is slow, and pathological backtracking is utterly evil. So Raku's grammar/parser avoids potential backtracking in all but three cases for which it really is the right solution, and entirely avoids any risk of pathological backtracking.
Handling ambiguity
While artificial languages can aim to expunge all ambiguity, the closer one gets to eliminating all ambiguity, the greater the amount of extraneous and distracting syntax one requires, such as frequent required use of delimiters (parens, braces, square brackets, etc) to ensure disambiguation. That makes a language increasingly unfriendly and verbose for that reason instead. Raku culture avoids ideological "foolish consistency" extremes.
Raku's designers (principally Larry Wall) considered all these factors and many more and arrived at Raku's solution:
Be rationally predictable
A sufficiently simple and predictable approach to parsing, and requisite sensitivity to the likelihood and costs of any surprises a user may encounter, goes a long way, and the design relative to where clauses is a case in point.
While the precedence issue may have been a surprise, and the error message unhelpful, I, er, predict you'll find your ERN signal regarding this will tune up quite nicely in fairly short order, just as it will for most of the things that might trip you up as you learn Raku.
Use predictive parsing
While there are several ways to accommodate all of the above, predictive parsing1 is a great choice, and -- not coincidentally! -- the one most naturally written using Raku grammars, and the one used for Raku's own grammar/parser.
Some other solutions
Here's what does not work as expected:
sub triple(Str:D $mod? where * ~~ any @modifiers = 'command' ) { }
                                                ^ Needs to be end of `where` clause
You've suggested a solution, and I suggested a couple at the start. Some more follow.
You used parens. Here are some other ways to use parens:
sub triple(Str:D $mod? where * ~~ any(@modifiers) = 'command' ) { }
sub triple(Str:D $mod? where * ~~ (any @modifiers) = 'command' ) { }
Or, switch to use of $_ (aka "it") instead of * (aka "whatever") inside braces:
sub triple(Str:D $mod? where { $_ ~~ any @modifiers } = 'command' ) { }
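A quick sketch of the braced form in use (the @modifiers contents here are just illustrative, and I haven't verified this exact snippet):
my @modifiers = <command option shift>;
sub triple(Str:D $mod? where { $_ ~~ any @modifiers } = 'command') { say $mod }
triple();         # command -- the braces end the where clause, so = 'command' is the default
triple('shift');  # shift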
Footnotes
1 The Wikipedia page discusses "grammars" and "ambiguity" in a manner that may be confusing given that they are not used in the same way those words are used in the context of Raku and of this answer. But discussing that would be a rabbit hole inappropriate for this SO.

Regex/token/rule to match nested curly braces?

I need to match the values of key = value pairs in BibTeX files, which can contain arbitrarily nested braces, delimited by braces. I've got as far as matching at most two deep nested curly braces, like {some {stuff} like {this}} with the kludgey:
token brace-value {
    '{' <-[{}]>* ['{' <-[}]>* '}' <-[{}]>* ]* '}'
}
I shudder at the idea of going one level further down... but proper parsing of my BibTeX stuff needs at least three levels deep.
Yes, I know there are BibTeX parsers around, but I need to grab the complete entry for further processing, and peek at a few keys meanwhile. My *.bib files are rather tame (and I wouldn't mind to handle a few stray entries by hand), the problem is that I have a lot of them, with much overlap. But some of the "same" entries have different keys, or extra data. I want to consolidate them into a few master files (the whole idea behind BibTeX, right?). Not fun by hand if bibtool gives a file with no duplicates (ha!) of some 20 thousand lines...
After perusing Lenz' "Parsing with Perl 6 Regexes and Grammars" (Apress, 2017), I realized the "regex" machinery (based on backtracking) might actually be a lot more capable than officially admitted, as a regex can call another, and nowhere do I see a prohibition on recursive calls.
Before digging in, a bit of context free grammars: A way to describing nested braces (and nothing else) is with the grammar:
S -> { S } S | <nothing>
I.e., nested braces are either an opening brace, nested braces, a closing brace, more nested braces; or nothing whatsoever. This translates more or less directly to Raku (there is no empty regex, fake it by making the construction optional):
my regex nb {
    [ '{' <nb> '}' <nb> ]?
}
Lo and behold, this works. Need to fix up to avoid captures, kill backtracking (if it doesn't match on the first try, it won't ever match), and decorate with "anything else" fillers.
my regex nested-braces {
    :ratchet
    <-[{}]>*
    [ '{' <.nested-braces> '}' <.nested-braces> ]?
    <-[{}]>*
};
This checks out with my test cases.
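For instance (a quick check; note the <&...> form needed to call a lexically declared regex):
say so '{some {stuff} like {this {nested}}}' ~~ / ^ <&nested-braces> $ /;  # True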
For not-so-adventurous souls, there is the Text::Balanced module for Perl (formerly Perl 5, callable from Raku using Inline::Perl5). Not directly useful to me inside a grammar, unfortunately.
Solution
A way to describe nested braces (and nothing else)
Presuming a rule named &R, I'd likely write the following pattern if I was writing a quick small one-off script:
\{ <&R>* \}
If I were writing a larger program that should be maintainable, I'd likely be writing a grammar and, using a rule named R, the pattern would be:
'{' ~ '}' <R>*
This latter avoids leaning toothpick syndrome and uses the regex ~ operator.
These will both parse arbitrarily deeply nested paired braces, eg:
say '{{{{}}}}' ~~ token { \{ <&?ROUTINE>* \} } # 「{{{{}}}}」
(&?ROUTINE refers to the routine in which it appears. A regex is a routine. Though you can't use <&?ROUTINE> in a regex declared with / ... / syntax.)
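For the grammar form, a minimal sketch (the grammar and rule names are mine, purely for illustration):
grammar Braces {
    token TOP { <R> }
    token R   { '{' ~ '}' <R>* }
}
say Braces.parse('{{}{{}}}');  # 「{{}{{}}}」 (plus the nested R captures)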
regex vs token
kill backtracking
my regex nested-braces {
    :ratchet
The only difference between patterns declared with regex and token is that the former turns ratcheting off. So using it and then immediately turning ratcheting on is notably unidiomatic. Instead:
my token nested-braces {
Backtracking
the "regex" machinery (based on backtracking)
The grammar/regex engine does include backtracking as an optional feature because that's occasionally exactly what one wants.
But the engine is not "based on backtracking", and many grammars/parsers make little or no use of backtracking.
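A small sketch of the difference (my own example): the only change between the two lines below is the declarator, and with it whether a+ is allowed to give back a character:
say so 'aaab' ~~ token { a+ ab };  # False -- a+ ratchets, keeping the third 'a'
say so 'aaab' ~~ regex { a+ ab };  # True  -- the regex backtracks a+ by one 'a'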
Recursion
a regex can call another, and nowhere do I see a prohibition on recursive calls.
This alone is nothing special for contemporary regex engines.
PCRE has supported recursion since 2000, and named regexes since 2003. Perl's default regex engine has supported both since 2007.
Their support for deeper levels of recursion and more named regexes being stored at once has been increasing over time.
Damian Conway's PPR uses these features of regexes to build non-trivial (but still small) parse trees.
Capabilities
a lot more capable
Raku "regexes" can be viewed as a cleaned up take on the unfolding regex evolution. To the degree this helps someone understand them, great.
But really, it's a whole new deal. For example, they're Turing complete, in a sensible way, and thus able to parse anything.
than officially admitted
Well that's an odd thing to say! Raku's Grammars are frequently touted as one of Raku's most innovative features.
There are three major caveats:
Performance: The primary current caveat is that a well written C parser will blow the socks off a well written Raku Grammar based parser.
Payoff: It's often not worth the effort it takes to write a fully correct parser for a non-trivial format if there's an existing parser.
Left recursion: Raku does not automatically rewrite left recursion (infinite loops).
Using existing parsers
I know there are BibTeX parsers around, but I need to grab the complete entry for further processing, and peek at a few keys meanwhile.
Using a foreign module in Raku can be a bit of a revelation. It is not necessarily like anything you'll have experienced before. Raku's foreign language adaptors can do smart marshaling for you so it can be like you're using native Raku features.
Two of the available foreign language adaptors are already sufficiently polished to be amazing -- the ones for Perl and for C.
I'm pretty sure there's a BibTeX package for Perl that wraps a C BibTeX parser. If you used that you'd hopefully get parsing results all nicely wrapped up into Raku objects as if it was all Raku in the first place, but retaining much of the high performance of the C code.
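A sketch of what that might look like (unverified; Text::BibTeX is a Perl wrapper around the btparse C library, and I haven't run this, so treat the details as assumptions):
use Text::BibTeX:from<Perl5>;   # needs Inline::Perl5 plus the Perl module installed

my $bibfile = Text::BibTeX::File.new('refs.bib');   # 'refs.bib' is a placeholder
while Text::BibTeX::Entry.new($bibfile) -> $entry {
    say $entry.key if $entry.parse_ok;
}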
A Raku BibTeX Grammar?
Perhaps your needs do call for creating and using a small Raku Grammar.
(Maybe you're doing this partly as an exercise to familiarize yourself with Raku, or the regex/grammar aspect of Raku. For that it sounds pretty ideal.)
As soon as you begin to use multiple regexes together -- even just two -- you are closing in on grammar territory. After all, they're just an easy-to-use construct for using multiple regexes together.
So if you decide you want to stick with writing parsing code in Raku, expect to write it something like this:
grammar BiBTeX {
    token TOP { ... }
    token ...
    token ...
}
BiBTeX.parse: my-bib-file
For more details, see the official doc's Grammar tutorial or read Moritz's book.
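To make that skeleton slightly more concrete, here's a rough sketch (the rule names and the shape of a BibTeX entry are simplifications of mine, not a validated grammar):
grammar BibTeX {
    token TOP        { [ <entry> || . ]* }
    token entry      { '@' <entry-type> \s* '{' ~ '}' <body> }
    token entry-type { \w+ }
    token body       { <-[{}]>* [ '{' ~ '}' <body> <-[{}]>* ]* }
}
my $m = BibTeX.parse('@article{key, title = {Some {nested} title}}');
say $m<entry>[0]<entry-type>;  # 「article」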
OK, just (re) checked. The documentation of '{' ~ '}' leaves a whole lot to be desired; it is not at all clear it is meant to handle balanced, correctly nested delimiters.
So my final solution is really just along the lines of:
my regex nested-braces {
    :ratchet
    '{' ~ '}' .*
}
Thanks everyone! Learned quite a bit today.

SonarLint - questions about some of the rules for VB.NET

The large majority of SonarLint rules that I've come across in Java seemed plausible and justified. However, since I started using SonarLint for VB.NET, I've come across several rules that left me questioning their usefulness, or even whether they are working correctly.
I'd like to know if this is simply a problem of me using some VB.NET constructs in a suboptimal way or whether the rule really is flawed.
(Apologies if this question is a little long. I didn't know whether I should create a separate question for each individual rule.)
I found that the following rules leave some cases unconsidered, which then turn up as false positives:
S1871: Two branches in the same conditional structure should not have exactly the same implementation
I found this one to bring up a lot of false-positives for me, because sometimes the order in which the conditions are checked actually does matter. Take the following pseudo code as example:
If conditionA() Then
    doSomething()
ElseIf conditionB() AndAlso conditionC() Then
    doSomethingElse()
ElseIf conditionD() OrElse conditionE() Then
    doYetAnotherThing()
    '... feel free to have even more cases in between here
Else
    doSomething() 'Non-compliant
End If
If I wanted to follow this Sonar rule and still make the code behave the same way, I'd have to add the negated version of each ElseIf-condition to the first If-condition.
Another example would be the following switch:
Select Case i
    Case 0 To 40
        value = 0
    Case 41 To 60
        value = 1
    Case 61 To 80
        value = 3
    Case 81 To 100
        value = 5
    Case Else
        value = 0 'Non-compliant
End Select
There shouldn't be anything wrong with having that last case in a switch. True, I could have initialized value beforehand to 0 and ignored that last case, but then I'd have one more assignment operation than necessary. And the Java ruleset has conditioned me to always put a default case in every switch.
S1764: Identical expressions should not be used on both sides of a binary operator
This rule does not seem to take into account that some functions may return different values every time you call them, for instance collections where accessing an element removes it from the collection:
stack.Push(stack.Pop() / stack.Pop()) 'Non-compliant
I understand if this is too much of an edge case to make special exceptions for it, though.
The following rules I am not actually sure about:
S3385: "Exit" statements should not be used
While I agree that Return is more readable than Exit Sub, is it really bad to use a single Exit For to break out of a For or a For Each loop? The SonarLint rule for Java permits the use of a single break; in a loop before flagging it as an issue. Is there a reason why the default in VB.NET is more strict in that regard? Or is the rule built on the assumption that you can solve nearly all your loop problems with LINQ extension methods and lambdas?
S2374: Signed types should be preferred to unsigned ones
This rule basically states that unsigned types should not be used at all because they "have different arithmetic operators than signed ones - operators that few developers understand". In my code I am only using UInteger for ID values (because I don't need negative values and a Long would be a waste of memory in my case). They are stored in List(Of UInteger) and only ever compared to other UIntegers. Is this rule even relevant to my case (are comparisons part of these "arithmetic operators" mentioned by the rule) and what exactly would be the pitfall? And if not, wouldn't it be better to make that rule apply to arithmetic operations involving unsigned types, rather than their declaration?
S2355: Array literals should be used instead of array creation expressions
Maybe I don't know VB.NET well enough, but how exactly would I satisfy this rule in the following case where I want to create a fixed-size array where the initialization length is only known at runtime? Is this a false-positive?
Dim myObjects As Object() = New Object(someOtherList.Count - 3) {} 'Non-compliant
Sure, I could probably just use a List(Of Object). But I am curious anyway.
Thanks for raising these points. Note that not all rules apply every time. There are cases where we need to balance false positives, false negatives, and real issues. Take the rule about identical expressions on both sides of an operator, for example. Is it a bug to have the same operands? No, it's not; if it were, the compiler would report it. Is it a bad smell, is it usually a mistake? Yes, in many cases. See this example in Roslyn. Should we tune this rule to exclude some cases? Yes we should; there's nothing wrong with 2 << 2. So there's a lot of balancing that needs to happen, and we try to settle on an implementation that brings the most value to users.
For the points you raised:
Two branches in the same conditional structure should not have exactly the same implementation
This rule generally states that having two blocks of code match exactly is a bad sign. Copy-pasted code should be avoided for many reasons; for example, if you need to fix the code in one place, you'll need to fix it in the other too. You're right that adding negated conditions would be a mess, but if you extract each condition into its own well-named method (and call the negated methods where needed), it would probably improve the readability of your code.
For the Select Case, again, copy pasted code is always a bad sign. In this case you could do this:
Select Case i
    ...
    Case 0 To 40
    Case Else
        value = 0 ' Compliant
End Select
Or simply remove the 0-40 case.
Identical expressions should not be used on both sides of a binary operator
I think this is a corner case. See the first paragraph of the answer.
"Exit" statements should not be used
It's almost always true that by choosing another type of loop, or changing the stop condition, you can get away without using any "Exit" statements. It's good practice to have a single exit point from loops.
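For example (a sketch; the items collection, Item type and Matches method are illustrative), a search loop written with Exit For can usually be replaced by a LINQ query:
' With an Exit:
Dim found As Item = Nothing
For Each candidate In items
    If candidate.Matches(target) Then
        found = candidate
        Exit For
    End If
Next
' Without an Exit, using LINQ (Imports System.Linq):
Dim firstMatch = items.FirstOrDefault(Function(candidate) candidate.Matches(target))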
Signed types should be preferred to unsigned ones
This is a legacy rule from SonarQube VB.NET, and I agree with you that it shouldn't be enabled by default in SonarLint. I created the following ticket in our JIRA: https://jira.sonarsource.com/browse/SLVS-1074
Array literals should be used instead of array creation expressions
Yes, it seems to be a false positive, we shouldn't report on array creations when the size is explicitly specified. https://jira.sonarsource.com/browse/SLVS-1075

Why not have operators as both keywords and functions?

I saw this question and it got me wondering.
Ignoring the fact that pretty much all languages have to be backwards compatible, is there any reason we cannot use operators as both keywords and functions, depending on whether they're immediately followed by a parenthesis? Would it make the grammar harder?
I'm thinking mostly of python, but also C-like languages.
Perl does something very similar to this, and the results are sometimes surprising. You'll find warnings about this in many Perl texts; for example, this one comes from the standard distributed Perl documentation (man perlfunc):
Any function in the list below may be used either with or without parentheses around its arguments. (The syntax descriptions omit the parentheses.) If you use parentheses, the simple but occasionally surprising rule is this: It looks like a function, therefore it is a function, and precedence doesn't matter. Otherwise it's a list operator or unary operator, and precedence does matter. Whitespace between the function and left parenthesis doesn't count, so sometimes you need to be careful:
print 1+2+4; # Prints 7.
print(1+2) + 4; # Prints 3.
print (1+2)+4; # Also prints 3!
print +(1+2)+4; # Prints 7.
print ((1+2)+4); # Prints 7.
An even more surprising case, which often bites newcomers:
print
    ($a % 7 == 0 || $a % 7 == 1) ? "good" : "bad";
will print the value of the parenthesized comparison (1 or the empty string) rather than "good" or "bad".
In short, it depends on your theory of parsing. Many people believe that parsing should be precise and predictable, even when that results in surprising parses (as in the Python example in the linked question, or even more famously, C++'s most vexing parse). Others lean towards Perl's "Do What I Mean" philosophy, even though the result -- as above -- is sometimes rather different from what the programmer actually meant.
C, C++ and Python all tend towards the "precise and predictable" philosophy, and they are unlikely to change now.
Whether not() is defined depends on the language. If not() is not defined in a given language, you cannot use it. Why would a language not define it? Probably because its creator saw no need for that kind of construct, and because leaving it out keeps things simpler.

Right way to forcibly convert Maybe a to a in Elm, failing clearly for Nothings

Okay, what I really wanted to do is, I have an Array and I want to choose a random element from it. The obvious thing to do is get an integer from a random number generator between 0 and the length minus 1, which I have working already, and then applying Array.get, but that returns a Maybe a. (It appears there's also a package function that does the same thing.) Coming from Haskell, I get the type significance that it's protecting me from the case where my index was out of range, but I have control over the index and don't expect that to happen, so I'd just like to assume I got a Just something and somewhat forcibly convert to a. In Haskell this would be fromJust or, if I was feeling verbose, fromMaybe (error "some message"). How should I do this in Elm?
I found a discussion on the mailing list that seems to be discussing this, but it's been a while and I don't see the function I want in the standard library where the discussion suggests it would be.
Here are some pretty unsatisfying potential solutions I found so far:
Just use withDefault. I do have a default value of a available, but I don't like this as it gives the completely wrong meaning to my code and will probably make debugging harder down the road.
Do some fiddling with ports to interface with Javascript and get an exception thrown there if it's Nothing. I haven't carefully investigated how this works yet, but apparently it's possible. But this just seems to mix up too many dependencies for what would otherwise be simple pure Elm.
(answering my own question)
I found two more-satisfying solutions:
Roll my own partially defined function, which was referenced elsewhere in the linked discussion. But the code kind of feels incomplete this way (I'd hope the compiler would warn me about incomplete pattern matches some day) and the error message is still unclear.
Pattern-match and use Debug.crash if it's a Nothing. This appears similar to Haskell's error and is the solution I'm leaning towards right now.
import Debug

fromJust : Maybe a -> a
fromJust x = case x of
    Just y -> y
    Nothing -> Debug.crash "error: fromJust Nothing"
(Still, the module name and description also make me hesitate because it doesn't seem like the "right" method intended for my purposes; I want to indicate true programmer error instead of mere debugging.)
Solution
The existence or use of a fromJust or equivalent function is actually a code smell and tells you that the API has not been designed correctly. The problem is that you're attempting to make a decision on what to do before you have the information to do it. You can think of this in two cases:
If you know what you're supposed to do with Nothing, then the solution is simple: use withDefault. This will become obvious when you're looking at the right point in your code.
If you don't know what you're supposed to do in the case where you have Nothing, but you still want to make a change, then you need a different way of doing so. Instead of pulling the value out of the Maybe, use Maybe.map to change the value while keeping the Maybe. As an example, let's say you're doing the following:
foo : Maybe Int -> Int
foo maybeVal =
    let
        innerVal = fromJust maybeVal
    in
        innerVal + 2
Instead, you'll want this:
foo : Maybe Int -> Maybe Int
foo maybeVal =
    Maybe.map (\innerVal -> innerVal + 2) maybeVal
Notice that the change you wanted is still made in this case; you've simply not handled the case where you have a Nothing. You can now pass this value up and down the call chain until you hit a place where it's natural to use withDefault to get rid of the Maybe.
What's happened is that we've separated the concerns of "How do I change this value" and "What do I do when it doesn't exist?". We deal with the former using Maybe.map and the latter with Maybe.withDefault.
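Applied to the original random-element question, a minimal sketch (the function name and the fallback string are just illustrative):
import Array exposing (Array)

describePick : Int -> Array String -> String
describePick index items =
    Array.get index items                         -- Maybe String
        |> Maybe.map (\name -> "Picked " ++ name) -- change the value, still a Maybe
        |> Maybe.withDefault "index out of range" -- decide what Nothing means, once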
Caveat
There are a small number of cases where you simply know that you have a Just value and need to eliminate it using fromJust as you described, but those cases should be few and far between. Quite a few of them actually have a simpler alternative.
Example: Attempting to filter a list and get the value out.
Let's say you have a list of Maybes that you want the values of. A common strategy might be:
foo : List (Maybe a) -> List a
foo hasAnything =
    let
        onlyHasJustValues = List.filter Maybe.isJust hasAnything
        onlyHasRealValues = List.map fromJust onlyHasJustValues
    in
        onlyHasRealValues
It turns out that even in this case there are clean ways to avoid fromJust. Most languages whose collections have map and filter also provide a built-in way to filter through a Maybe. Haskell has mapMaybe (in Data.Maybe), Scala has flatMap, and Elm has List.filterMap. This transforms your code into:
foo : List (Maybe a) -> List a
foo hasAnything =
    let
        onlyHasRealValues = List.filterMap (\x -> x) hasAnything
    in
        onlyHasRealValues