How can I QuickCheck a value wrapped by an effect in PureScript?

I have a shuffle function for Array:
shuffle :: forall a e. Array a -> Eff (random :: RANDOM | e) (Array a)
It shuffles an array inside the Control.Monad.Eff.Random monad and returns the wrapped result. I want to test that the array is shuffled, e.g. by comparing the result with the input, so I would like to write QuickCheck code like:
quickCheck \arr -> isShuffled (shuffle arr)
However, I'm not sure how to write isShuffled so that the types match, since:
There is no unwrapping function like fromJust for Maybe, so isShuffled would have to accept the effect-wrapped Array and return an effect-wrapped Boolean, with the checking code living inside the monadic expression.
Therefore, the result of isShuffled will not be a plain Boolean, but something like m Boolean.
There is no suitable Testable instance in purescript-quickcheck for m Boolean, so I may need to create one, but the comment in QuickCheck says:
A testable property is a function of zero or more Arbitrary arguments, returning a Boolean or Result. (code)
However, again, I cannot extract/unwrap a value from the Random monad, so I don't know how to access the Boolean inside it and implement something like testableRandomArray that produces a Boolean or Result from a wrapped Boolean, unless I use some unsafe features.
I think I should "embed" the quickCheck call inside the Random monad so I can access the pure Array it shuffled. However, since it is quickCheck that generates the test fixtures, this feels backwards, and I don't see a way to do it.
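For reference, in Haskell's QuickCheck the analogous problem is handled with Test.QuickCheck.Monadic, which lets a property run effectful code and then assert on the result. Below is a minimal sketch of that pattern; it assumes a monadic shuffle such as shuffleM from the random-shuffle package, and it checks that the elements are preserved rather than that the order changed (a shuffle may legitimately return the original order):

import Data.List (sort)
import Test.QuickCheck (Property, quickCheck)
import Test.QuickCheck.Monadic (assert, monadicIO, run)
import System.Random.Shuffle (shuffleM) -- assumed: shuffleM :: MonadRandom m => [a] -> m [a]

-- Shuffling must preserve the multiset of elements.
prop_shufflePreservesElements :: [Int] -> Property
prop_shufflePreservesElements xs = monadicIO $ do
  ys <- run (shuffleM xs)     -- run the effectful shuffle inside the property monad
  assert (sort ys == sort xs) -- compare against the original input

main :: IO ()
main = quickCheck prop_shufflePreservesElements

Something equivalent for PureScript's Eff is what I am after.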

Related

How do I read / interpret this Kotlin code effectively?

I know how to read/interpret Java code and I can write it. However, being new to Kotlin, I find code like the snippet below hard to read. Perhaps I am missing key concepts in the language.
But how would you go about interpreting this code? Where do you propose one start reading in order to understand it quickly and efficiently? Left to right? Right to left? Break down parameters first? Look at the return values?
inline fun <T : Any, R> ifNotNull(input: T?, callback: (T) -> R): R? {
    return input?.let(callback)
}
So, like in Java, this is a generic function. It has two type parameters: T, which is bounded by Any (Any is like Object in Java), and R. The input parameter is a nullable T, as denoted by the question mark. Nullable types mean that the value can be null. The other function parameter is a function that takes in a T (a non-nullable type) and returns an R. The return type of the function is a nullable R. The body of the function says that if input is not null, call the callback with it and return that value. If input is null, then null is what gets returned.
Let's dissect the function definition piece by piece:
inline: Indicates that the code of the function will be copied directly to the call site, rather than being called like a normal function.
fun: We're defining a function.
<T : Any, R>: The function takes two generic type parameters, T and R. The T type is restricted to the Any type (which is Kotlin's Object-type). That might seem redundant, but what it actually says is that T cannot be a nullable type (Any?).
ifNotNull: The name of the function.
input: T?: The first parameter of type T?. We can put the ? on the T type here because we restricted it to non-nullable types in the type declaration.
callback: (T) -> R: The second parameter is of type (T) -> R, which is a function type. It's the type of a function that takes a T as input and returns an R.
: R?: The function returns a value of type R or null.
return input?.let(callback): The function body. The let function takes a function parameter, calls it with its receiver (input), and then returns the result of the function. The ? after input says that let will be called only if input is not null. If input is null, then the expression will return null.
The function is equivalent to this Java method (except for the inlining and nullable types):
public <T, R> R ifNotNull(final T input, final Function<T, R> callback) {
    if (input == null) {
        return null;
    }
    return callback.apply(input);
}
Matt's answer explains everything well in one go; I'll try to look at how you might go about reading such code.
Skipping over the first word for now, the most important thing is the second word: fun.  So the whole thing is defining a function.  That tells you what to expect from the rest.
The braces tell you that it's a block function, not a one-liner, so the basic structure you're expecting is: fun name(params): returnType { code }.  The rest is filling in the blanks!  (This fits the general pattern of Kotlin declarations, where the type comes second, after a colon.  The Java equivalent would of course be more like returnType name(params) { code }.)
As with Java, the stuff in angle brackets is giving generic parameters, so we can skip that for now and go straight to the next most important bit, which is the name of the function being defined: ifNotNull.
Armed with those, we can read the rest.  inline is a simple modifier, telling you that the function will be inlined by the compiler.  (That enables a few things and restricts a few others, but I wouldn't worry about that now.)
The <T : Any, R> gives the generic parameter types that the function uses.  The first is T, which must be Any or a subtype; the second is R, which is unrestricted.
(Any is like Java's Object, but can't be null; the topmost type is the related Any?, which also allows null.  So except for the nullability, that's equivalent to the Java <T extends Object, R>.)
Going on, we have the function parameters in parentheses.  Again, there are two: the first is called input, and it's of type T?, which means it accepts any value of type T, and also accepts null.  The second parameter is called callback, and has a more complicated type, (T) -> R: it's a function which takes a T as its parameter, and returns an R.  (Java doesn't have function types as such, so that probably looks strangest.  Java's nearest equivalent is Function<T, R>.)
After the parentheses comes the return type of this function itself, R?, which means it can return either an R or null.
Finally, in braces is the actual code of the function.  That has one line, which returns the value of an expression.  (Its effect is to check whether the value of input is null: if so, it returns the null directly.  Otherwise, it calls the callback function given in the parameter, passing input as its parameter, and returns its result.)
Although that's a short declaration, it's quite abstract and packs a lot in, so it's no wonder you're finding it hard going!  (The format is similar to a Java method declaration — but Kotlin's quite expressive, so equivalent code tends to be quite a bit shorter than Java.  And the generics make it more complex.)  If you're just starting to learn Kotlin, I'd suggest something a bit easier :-)
(The good news is that, as in Java, you don't often need to read the stdlib code.  Although Kotlin's doc comments are rarely up to the exemplary level of Java's, they're still usually enough.)

How to construct a Complex from a String using Python's C-API?

How to use the Python C-API for the Complex class (documented here) to:
convert a general PyObject (which might be a String, Long, Float, Complex) into a Complex PyObject?
convert a Complex PyObject into String PyObject?
Python has a complex() function (documented here):
Return a complex number with the value real + imag*j or convert a string or number to a complex number. If the first parameter is a string, it will be interpreted as a complex number and the function must be called without a second parameter. The second parameter can never be a string. Each argument may be any numeric type (including complex). If imag is omitted, it defaults to zero and the function serves as a numeric conversion function like int(), long() and float(). If both arguments are omitted, returns 0j.
However, it isn't obvious which API function (if any) is backing it.
It would appear none of them, as the above paragraph talks about two PyObject* parameters, and none of the API functions listed match that signature.
When in doubt, do what Python does: call the constructor.
PyObject *c1 = PyObject_CallFunction((PyObject *)&PyComplex_Type, "s", "1+2j");
if (!c1)
    return NULL;

Does Fortran permit inline operations on the return value of a function?

I am trying to design a data structure composed of objects which contain, as instance variables, objects of another type.
I'd like to be able to do something like this:
CALL type1_object%get_nested_type2_object()%some_type2_method()
Notice I am trying to immediately use the getter, get_nested_type2_object() and then act on its return value to call a method in the returned type2 object.
As it stands, gfortran v4.8.2 does not accept this syntax and thinks get_nested_type2_object() is an array reference, not a function call. Is there any syntax that I can use to clarify this or does the standard not allow this?
To give a more concrete example, here is some code illustrating this:
furniture_class.F95:
MODULE furniture_class
    IMPLICIT NONE
    TYPE furniture_object
        INTEGER :: length
        INTEGER :: width
        INTEGER :: height
    CONTAINS
        PROCEDURE :: get_length
    END TYPE furniture_object
CONTAINS
    FUNCTION get_length(self)
        IMPLICIT NONE
        CLASS(furniture_object) :: self
        INTEGER :: get_length
        get_length = self%length
    END FUNCTION
END MODULE furniture_class
Now a room object may contain one or more furniture objects.
room_class.F95:
MODULE room_class
    USE furniture_class
    IMPLICIT NONE
    TYPE :: room_object
        CLASS(furniture_object), POINTER :: furniture
    CONTAINS
        PROCEDURE :: get_furniture
    END TYPE room_object
CONTAINS
    FUNCTION get_furniture(self)
        USE furniture_class
        IMPLICIT NONE
        CLASS(room_object) :: self
        CLASS(furniture_object), POINTER :: get_furniture
        get_furniture => self%furniture
    END FUNCTION get_furniture
END MODULE room_class
Finally, here is a program where I attempt to access the furniture object inside the room (but the compiler won't let me):
room_test.F95
PROGRAM room_test
    USE room_class
    USE furniture_class
    IMPLICIT NONE
    CLASS(room_object), POINTER :: room_pointer
    CLASS(furniture_object), POINTER :: furniture_pointer
    ALLOCATE(room_pointer)
    ALLOCATE(furniture_pointer)
    room_pointer%furniture => furniture_pointer
    furniture_pointer%length = 10
    ! WRITE(*,*) 'The length of furniture in the room is', room_pointer%furniture%get_length() ! This works.
    WRITE(*,*) 'The length of furniture in the room is', room_pointer%get_furniture()%get_length() ! This line fails to compile
END PROGRAM room_test
I can of course directly access the furniture object if I don't use a getter to return the nested object, but this ruins the encapsulation and can become problematic in production code that is much more complex than what I show here.
Is what I am trying to do not supported by the Fortran standard or do I just need a more compliant compiler?
What you want to do is not supported by the syntax of the standard language.
(Variations on the general syntax (not necessarily this specific case) that might apply for "dereferencing" a function result could be ambiguous - consider things like substrings, whole array references, array sections, etc.)
Typically you [pointer] assign the result of the first function call to a [pointer] variable of the appropriate type, and then apply the binding for the second function to that variable.
Alternatively, if you want to apply an operation to a primary in an expression (such as a function reference) to give another value, then you could use an operator.
Some, perhaps rather subjective, comments:
Your room object doesn't really contain a furniture object - it holds a reference to a furniture object. Perhaps you use that reference in a manner that implies the parent object "containing" it, but that's not what the component definition naturally suggests.
(Use of a pointer component suggests that you want the room to point at (i.e. reference) some furniture. In terms of the language, the object referenced by a pointer component is not usually considered part of the value of the parent object of the component - consider how intrinsic assignment works, restrictions around modifying INTENT(IN) arguments, etc.
A non-pointer component suggests to me that the furniture is part of the room. In a Fortran language sense, an object that is a non-pointer component is always part of the value of the parent object of the component.
To highlight - pointer components in different rooms could potentially point at the same piece of furniture; a non-pointer furniture object is only ever directly part of one room.)
You need to be very careful using functions with pointer results. In the general case, is it:
p = some_ptr_function(args)
(and perhaps I accidentally leak memory) or
p => some_ptr_function(args)
Only one little character difference, both valid syntax, quite different semantics. If the second case is what is intended, then why not just pass the pointer back via a subroutine argument? An inconsequential difference in typing and it is much safer.
A general reminder applicable to some of the above - in the context of an expression, evaluation of a function reference yields a value. Values are not variables and hence you are not permitted to vary [modify] them.

Erlang Looping through a list (or set) to process files

I want to create 16 directories in Erlang.
Something like: for each A in the list [0, 1, ..., f] (sixteen numbers in hex notation), call create_dir("work/p" ++ A).
I could of course write sixteen lines like: mkdir("work/p0"), mkdir("work/p1"), etc.
I have looked at lists:foreach. In the examples an anonymous fun is used; is it possible to define a function outside the loop and call it instead?
I am new to Erlang and used to C++ etc.
Yes, it's possible to define a (named) function outside the call to lists:foreach/2. Why would you, though? This is a case when an anonymous function is incredibly handy:
lists:foreach(fun(N) ->
                  file:make_dir(
                      filename:join("work", "p" ++ integer_to_list(N, 16)))
              end, lists:seq(0, 15)).
The filename:join/2 call will use the appropriate directory separator to construct the string work/pN, where N is an integer in hex representation constructed using integer_to_list/2, which converts an integer to a string (list) in a given base (16).
lists:seq/2 is a friendly little function that returns the list [A,A+1,A+2,...,B-1,B] given A and B.
Note that you could just as well have used the list comprehension syntax here, but since we're applying functions to a list for the side-effects alone, I chose to stick with a foreach.
If you really want to define a separate function -- let's call it foo and assume it takes 42 arguments -- you can refer to it as fun foo/42 in your code. This expression evaluates to a function object that, like an anonymous function defined inline, can be passed to lists:foreach/2.

Monad in plain English? (For the OOP programmer with no FP background)

In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
What problem does it solve and what are the most common places it's used?
Update
To clarify the kind of understanding I was looking for, let’s say you were converting an FP application that had monads into an OOP application. What would you do to port the responsibilities of the monads to the OOP app?
UPDATE: This question was the subject of an immensely long blog series, which you can read at Monads — thanks for the great question!
In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
A monad is an "amplifier" of types that obeys certain rules and which has certain operations provided.
First, what is an "amplifier of types"? By that I mean some system which lets you take a type and turn it into a more special type. For example, in C# consider Nullable<T>. This is an amplifier of types. It lets you take a type, say int, and add a new capability to that type, namely, that now it can be null when it couldn't before.
As a second example, consider IEnumerable<T>. It is an amplifier of types. It lets you take a type, say, string, and add a new capability to that type, namely, that you can now make a sequence of strings out of any number of single strings.
What are the "certain rules"? Briefly, that there is a sensible way for functions on the underlying type to work on the amplified type such that they follow the normal rules of functional composition. For example, if you have a function on integers, say
int M(int x) { return x + N(x * 2); }
then the corresponding function on Nullable<int> can make all the operators and calls in there work together "in the same way" that they did before.
(That is incredibly vague and imprecise; you asked for an explanation that didn't assume anything about knowledge of functional composition.)
What are the "operations"?
There is a "unit" operation (confusingly sometimes called the "return" operation) that takes a value from a plain type and creates the equivalent monadic value. This, in essence, provides a way to take a value of an unamplified type and turn it into a value of the amplified type. It could be implemented as a constructor in an OO language.
There is a "bind" operation that takes a monadic value and a function that can transform the value, and returns a new monadic value. Bind is the key operation that defines the semantics of the monad. It lets us transform operations on the unamplified type into operations on the amplified type, that obeys the rules of functional composition mentioned before.
There is often a way to get the unamplified type back out of the amplified type. Strictly speaking this operation is not required to have a monad. (Though it is necessary if you want to have a comonad. We won't consider those further in this article.)
Again, take Nullable<T> as an example. You can turn an int into a Nullable<int> with the constructor. The C# compiler takes care of most nullable "lifting" for you, but if it didn't, the lifting transformation is straightforward: an operation, say,
int M(int x) { whatever }
is transformed into
Nullable<int> M(Nullable<int> x)
{
    if (x == null)
        return null;
    else
        return new Nullable<int>(whatever);
}
And turning a Nullable<int> back into an int is done with the Value property.
It's the function transformation that is the key bit. Notice how the actual semantics of the nullable operation — that an operation on a null propagates the null — is captured in the transformation. We can generalize this.
Suppose you have a function from int to int, like our original M. You can easily make that into a function that takes an int and returns a Nullable<int> because you can just run the result through the nullable constructor. Now suppose you have this higher-order method:
static Nullable<T> Bind<T>(Nullable<T> amplified, Func<T, Nullable<T>> func)
{
    if (amplified == null)
        return null;
    else
        return func(amplified.Value);
}
See what you can do with that? Any method that takes an int and returns an int, or takes an int and returns a Nullable<int> can now have the nullable semantics applied to it.
Furthermore: suppose you have two methods
Nullable<int> X(int q) { ... }
Nullable<int> Y(int r) { ... }
and you want to compose them:
Nullable<int> Z(int s) { return X(Y(s)); }
That is, Z is the composition of X and Y. But you cannot do that because X takes an int, and Y returns a Nullable<int>. But since you have the "bind" operation, you can make this work:
Nullable<int> Z(int s) { return Bind(Y(s), X); }
The bind operation on a monad is what makes composition of functions on amplified types work. The "rules" I handwaved about above are that the monad preserves the rules of normal function composition; that composing with identity functions results in the original function, that composition is associative, and so on.
In C#, "Bind" is called "SelectMany". Take a look at how it works on the sequence monad. We need to have two things: turn a value into a sequence and bind operations on sequences. As a bonus, we also have "turn a sequence back into a value". Those operations are:
static IEnumerable<T> MakeSequence<T>(T item)
{
    yield return item;
}

// Extract a value
static T First<T>(IEnumerable<T> sequence)
{
    // let's just take the first one
    foreach (T item in sequence) return item;
    throw new Exception("No first item");
}

// "Bind" is called "SelectMany"
static IEnumerable<T> SelectMany<T>(IEnumerable<T> seq, Func<T, IEnumerable<T>> func)
{
    foreach (T item in seq)
        foreach (T result in func(item))
            yield return result;
}
The nullable monad rule was "to combine two functions that produce nullables together, check to see if the inner one results in null; if it does, produce null, if it does not, then call the outer one with the result". That's the desired semantics of nullable.
The sequence monad rule is "to combine two functions that produce sequences together, apply the outer function to every element produced by the inner function, and then concatenate all the resulting sequences together". The fundamental semantics of the monads are captured in the Bind/SelectMany methods; this is the method that tells you what the monad really means.
We can do even better. Suppose you have a sequences of ints, and a method that takes ints and results in sequences of strings. We could generalize the binding operation to allow composition of functions that take and return different amplified types, so long as the inputs of one match the outputs of the other:
static IEnumerable<U> SelectMany<T, U>(IEnumerable<T> seq, Func<T, IEnumerable<U>> func)
{
    foreach (T item in seq)
        foreach (U result in func(item))
            yield return result;
}
So now we can say "amplify this bunch of individual integers into a sequence of integers. Transform this particular integer into a bunch of strings, amplified to a sequence of strings. Now put both operations together: amplify this bunch of integers into the concatenation of all the sequences of strings." Monads allow you to compose your amplifications.
What problem does it solve and what are the most common places it's used?
That's rather like asking "what problems does the singleton pattern solve?", but I'll give it a shot.
Monads are typically used to solve problems like:
I need to make new capabilities for this type and still combine old functions on this type to use the new capabilities.
I need to capture a bunch of operations on types and represent those operations as composable objects, building up larger and larger compositions until I have just the right series of operations represented, and then I need to start getting results out of the thing
I need to represent side-effecting operations cleanly in a language that hates side effects
C# uses monads in its design. As already mentioned, the nullable pattern is highly akin to the "maybe monad". LINQ is entirely built out of monads; the SelectMany method is what does the semantic work of composition of operations. (Erik Meijer is fond of pointing out that every LINQ function could actually be implemented by SelectMany; everything else is just a convenience.)
To clarify the kind of understanding I was looking for, let's say you were converting an FP application that had monads into an OOP application. What would you do to port the responsibilities of the monads into the OOP app?
Most OOP languages do not have a rich enough type system to represent the monad pattern itself directly; you need a type system that supports types that are higher types than generic types. So I wouldn't try to do that. Rather, I would implement generic types that represent each monad, and implement methods that represent the three operations you need: turning a value into an amplified value, (maybe) turning an amplified value into a value, and transforming a function on unamplified values into a function on amplified values.
A good place to start is how we implemented LINQ in C#. Study the SelectMany method; it is the key to understanding how the sequence monad works in C#. It is a very simple method, but very powerful!
Suggested further reading:
For a more in-depth and theoretically sound explanation of monads in C#, I highly recommend my (Eric Lippert's) colleague Wes Dyer's article on the subject. This article is what made monads finally "click" for me.
The Marvels of Monads
A good illustration of why you might want a monad around (uses Haskell in its examples).
You Could Have Invented Monads! (And Maybe You Already Have.) by Dan Piponi
Sort of, "translation" of the previous article to JavaScript.
Translation from Haskell to JavaScript of selected portions of the best introduction to monads I’ve ever read by James Coglan
Why do we need monads?
We want to program only using functions ("functional programming", FP, after all).
Then, we have a first big problem. This is a program:
f(x) = 2 * x
g(x,y) = x / y
How can we say what is to be executed first? How can we form an ordered sequence of functions (i.e. a program) using no more than functions?
Solution: compose functions. If you want first g and then f, just write f(g(x,y)). OK, but ...
More problems: some functions might fail (e.g. g(2,0): divide by 0). We have no "exceptions" in FP. How do we solve it?
Solution: Let's allow functions to return two kind of things: instead of having g : Real,Real -> Real (function from two reals into a real), let's allow g : Real,Real -> Real | Nothing (function from two reals into (real or nothing)).
But functions should (to be simpler) return only one thing.
Solution: let's create a new type of data to be returned, a "boxing type" that encloses either a real or simply nothing. Hence, we can have g : Real,Real -> Maybe Real. OK, but ...
What happens now to f(g(x,y))? f is not ready to consume a Maybe Real. And, we don't want to change every function we could connect with g to consume a Maybe Real.
Solution: let's have a special function to "connect"/"compose"/"link" functions. That way, we can, behind the scenes, adapt the output of one function to feed the following one.
In our case: g >>= f (connect/compose g to f). We want >>= to take g's output, inspect it and, in case it is Nothing, not call f at all and just return Nothing; or, on the contrary, extract the boxed Real and feed f with it. (This algorithm is just the implementation of >>= for the Maybe type.)
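As a sketch, here is that connecting step written out in Haskell for the Maybe type; this is essentially what the standard Monad instance for Maybe does (the helper names below are made up for the example):

-- The ">>=" rule described above, written as a standalone function.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing   -- the first step failed: skip f, keep Nothing
bindMaybe (Just x) f = f x       -- the first step succeeded: unbox the value and feed f

-- Our g: may fail (division by zero).
g :: Double -> Double -> Maybe Double
g _ 0 = Nothing
g x y = Just (x / y)

-- Our f, lifted so that it also returns a Maybe.
f :: Double -> Maybe Double
f x = Just (2 * x)

main :: IO ()
main = do
  print (g 6 3 `bindMaybe` f)   -- Just 4.0
  print (g 6 0 `bindMaybe` f)   -- Nothing, no exception needed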
Many other problems arise which can be solved using this same pattern:
1. Use a "box" to codify/store different meanings/values, and have functions like g that return those "boxed values".
2. Have composers/linkers g >>= f to help connect g's output to f's input, so we don't have to change f at all.
Remarkable problems that can be solved using this technique are:
having a global state that every function in the sequence of functions ("the program") can share: solution, the State monad.
We don't like "impure functions" (functions that yield different output for the same input), so let's mark those functions by making them return a tagged/boxed value: the IO monad.
Total happiness !!!!
I would say the closest OO analogy to monads is the "command pattern".
In the command pattern you wrap an ordinary statement or expression in a command object. The command object exposes an execute method which executes the wrapped statement. So statements are turned into first-class objects which can be passed around and executed at will. Commands can be composed, so you can create a program-object by chaining and nesting command objects.
The commands are executed by a separate object, the invoker. The benefit of using the command pattern (rather than just execute a series of ordinary statements) is that different invokers can apply different logic to how the commands should be executed.
The command pattern can be used to add (or remove) language features which are not supported by the host language. For example, in a hypothetical OO language without exceptions, you could add exception semantics by exposing "try" and "throw" methods to the commands. When a command calls throw, the invoker backtracks through the list (or tree) of commands until the last "try" call. Conversely, you could remove exception semantics from a language (if you believe exceptions are bad) by catching all exceptions thrown by each individual command, and turning them into error codes which are then passed to the next command.
Even more fancy execution semantics like transactions, non-deterministic execution or continuations can be implemented like this in a language which doesn't support it natively. It is a pretty powerful pattern if you think about it.
Now in reality the command pattern is not used as a general language feature like this. The overhead of turning each statement into a separate class would lead to an unbearable amount of boilerplate code. But in principle it can be used to solve the same problems as monads are used to solve in FP.
In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
What problem does it solve and what are the most common places it's used?
In terms of OO programming, a monad is an interface (or more likely a mixin), parameterized by a type, with two methods, return and bind, that describe:
How to inject a value to get a monadic value of that injected value type;
How to use a function that makes a monadic value from a non-monadic one, on a monadic value.
The problem it solves is the same type of problem you'd expect from any interface, namely,
"I have a bunch of different classes that do different things, but seem to do those different things in a way that has an underlying similarity. How can I describe that similarity between them, even if the classes themselves aren't really subtypes of anything closer than 'the Object' class itself?"
More specifically, the Monad "interface" is similar to IEnumerator or IIterator in that it takes a type that itself takes a type. The main "point" of Monad though is being able to connect operations based on the interior type, even to the point of having a new "internal type", while keeping - or even enhancing - the information structure of the main class.
There is a recent presentation, "Monadologie -- professional help on type anxiety" by Christopher League (July 12th, 2010), which is quite interesting on the topics of continuations and monads.
The video accompanying this (slideshare) presentation is available on vimeo.
The monad part starts around 37 minutes into this one-hour video, at slide 42 of the 58-slide presentation.
It is presented as "the leading design pattern for functional programming", but the language used in the examples is Scala, which is both OOP and functional.
You can read more on Monad in Scala in the blog post "Monads - Another way to abstract computations in Scala", from Debasish Ghosh (March 27, 2008).
A type constructor M is a monad if it supports these operations:
// the "return" function
def unit[A] (x: A): M[A]

// called "bind" in Haskell
def flatMap[A,B] (m: M[A]) (f: A => M[B]): M[B]

// The other two can be written in terms of the first two:
def map[A,B] (m: M[A]) (f: A => B): M[B] =
  flatMap(m){ x => unit(f(x)) }

def andThen[A,B] (ma: M[A]) (mb: M[B]): M[B] =
  flatMap(ma){ x => mb }
So for instance (in Scala):
Option is a monad
def unit[A] (x: A): Option[A] = Some(x)

def flatMap[A,B](m: Option[A])(f: A => Option[B]): Option[B] =
  m match {
    case None => None
    case Some(x) => f(x)
  }
List is a monad
def unit[A] (x: A): List[A] = List(x)

def flatMap[A,B](m: List[A])(f: A => List[B]): List[B] =
  m match {
    case Nil => Nil
    case x::xs => f(x) ::: flatMap(xs)(f)
  }
Monads are a big deal in Scala because of the convenient syntax built to take advantage of monadic structures: the for comprehension in Scala:
for {
  i <- 1 to 4
  j <- 1 to i
  k <- 1 to j
} yield i*j*k
is translated by the compiler to:
(1 to 4).flatMap { i =>
  (1 to i).flatMap { j =>
    (1 to j).map { k =>
      i*j*k }}}
The key abstraction is the flatMap, which binds the computation through chaining.
Each invocation of flatMap returns the same data structure type (but of different value), that serves as the input to the next command in chain.
In the above snippet, flatMap takes as input a closure (SomeType) => List[AnotherType] and returns a List[AnotherType]. The important point to note is that all flatMaps take the same closure type as input and return the same type as output.
This is what "binds" the computation thread - every item of the sequence in the for-comprehension has to honor this same type constraint.
If you take two operations (that may fail) and pass the result to the third, like:
lookupVenue: String => Option[Venue]
getLoggedInUser: SessionID => Option[User]
reserveTable: (Venue, User) => Option[ConfNo]
but without taking advantage of monads, you get convoluted OOP code like:
val user = getLoggedInUser(session)
val confirm =
  if (!user.isDefined) None
  else lookupVenue(name) match {
    case None => None
    case Some(venue) =>
      val confno = reserveTable(venue, user.get)
      if (confno.isDefined)
        mailTo(confno.get, user.get)
      confno
  }
whereas with monads, you can work with the actual types (Venue, User) as if all the operations just work, and keep the Option verification hidden, all thanks to the flatMaps behind the for syntax:
val confirm = for {
  venue <- lookupVenue(name)
  user <- getLoggedInUser(session)
  confno <- reserveTable(venue, user)
} yield {
  mailTo(confno, user)
  confno
}
The yield part will only be executed if all three functions return Some; any None is returned directly as confirm.
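For what it's worth, the same flow reads almost identically in Haskell's do notation over Maybe. The types and stub implementations below are invented placeholders that just mirror the Scala signatures above:

data Venue = Venue deriving Show
data User  = User  deriving Show
type SessionID = String
type ConfNo    = Int

-- Placeholder implementations; each operation may fail with Nothing.
lookupVenue :: String -> Maybe Venue
lookupVenue _ = Just Venue

getLoggedInUser :: SessionID -> Maybe User
getLoggedInUser _ = Just User

reserveTable :: Venue -> User -> Maybe ConfNo
reserveTable _ _ = Just 42

mailTo :: ConfNo -> User -> Maybe ()
mailTo _ _ = Just ()

confirm :: String -> SessionID -> Maybe ConfNo
confirm name session = do
  venue  <- lookupVenue name          -- any Nothing short-circuits the whole block
  user   <- getLoggedInUser session
  confno <- reserveTable venue user
  _      <- mailTo confno user
  pure confno

main :: IO ()
main = print (confirm "Ristorante" "session-1")   -- Just 42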
So:
Monads allow ordered computation within functional programming, which lets us model the sequencing of actions in a nicely structured form, somewhat like a DSL.
And the greatest power comes with the ability to compose monads that serve different purposes, into extensible abstractions within an application.
This sequencing and threading of actions by a monad is done by the language compiler that does the transformation through the magic of closures.
By the way, the monad is not the only model of computation used in FP:
Category theory proposes many models of computation. Among them
the Arrow model of computations
the Monad model of computations
the Applicative model of computations
To respect fast readers, I start with a precise definition, continue with a quicker, more "plain English" explanation, and then move to examples.
Here is a both concise and precise definition slightly reworded:
A monad (in computer science) is formally a map that:
sends every type X of some given programming language to a new type T(X) (called the "type of T-computations with values in X");
equipped with a rule for composing two functions of the form
f:X->T(Y) and g:Y->T(Z) to a function g∘f:X->T(Z);
in a way that is associative in the evident sense and unital with respect to a given unit function called pure_X:X->T(X), to be thought of as taking a value to the pure computation that simply returns that value.
So in simple words, a monad is a rule to pass from any type X to another type T(X), and a rule to pass from two functions f:X->T(Y) and g:Y->T(Z) (that you would like to compose but can't) to a new function h:X->T(Z). Which, however, is not the composition in the strict mathematical sense. We are basically "bending" function composition, or re-defining how functions are composed.
Plus, we require the monad's rule of composing to satisfy the "obvious" mathematical axioms:
Associativity: Composing f with g and then with h (from outside) should be the same as composing g with h and then with f (from inside).
Unital property: Composing f with the identity function on either side should yield f.
Again, in simple words, we can't just go crazy re-defining our function composition as we like:
We first need associativity to be able to compose several functions in a row, e.g. f(g(h(k(x)))), without worrying about the order in which the pairs are composed. As the monad rule only prescribes how to compose a pair of functions, without that axiom we would need to know which pair is composed first, and so on. (Note that this is different from commutativity, i.e. that f composed with g be the same as g composed with f, which is not required.)
And second, we need the unital property, which simply says that identities compose trivially, the way we expect them to. So we can safely refactor functions whenever those identities can be extracted.
So again in brief: A monad is the rule of type extension and composing functions satisfying the two axioms -- associativity and unital property.
In practical terms, you want the monad to be implemented for you by the language, compiler or framework that would take care of composing functions for you. So you can focus on writing your function's logic rather than worrying how their execution is implemented.
That is essentially it, in a nutshell.
Being a professional mathematician, I prefer to avoid calling h the "composition" of f and g. Because mathematically, it isn't. Calling it the "composition" incorrectly presumes that h is the true mathematical composition, which it is not; it is not even uniquely determined by f and g. Instead, it is the result of our monad's new "rule of composing" the functions. Which can be totally different from the actual mathematical composition even if the latter exists!
To make it less dry, let me try to illustrate it by example
that I am annotating with small sections, so you can skip right to the point.
Exception throwing as a monad example
Suppose we want to compose two functions:
f: x -> 1 / x
g: y -> 2 * y
But f(0) is not defined, so an exception e is thrown. Then how can you define the compositional value g(f(0))? Throw an exception again, of course! Maybe the same e. Maybe a new updated exception e1.
What precisely happens here? First, we need new exception value(s) (different or the same). You can call them nothing or null or whatever, but the essence remains the same: they should be new values, e.g. not a number in our example here. I prefer not to call them null to avoid confusion with how null can be implemented in any specific language. Equally I prefer to avoid nothing because it is often associated with null, which, in principle, is what null should do; however, that principle often gets bent for whatever practical reasons.
What exactly is an exception?
This is a trivial matter for any experienced programmer, but I'd like to drop a few words just to extinguish any worm of confusion:
An exception is an object encapsulating information about how an invalid result of execution occurred.
This can range from throwing away any details and returning a single global value (like NaN or null), to generating a long log of what exactly happened, sending it to a database, and replicating it all over the distributed data storage layer ;)
The important difference between these two extreme examples of exception is that in the first case there are no side-effects. In the second there are. Which brings us to the (thousand-dollar) question:
Are exceptions allowed in pure functions?
Shorter answer: Yes, but only when they don't lead to side-effects.
Longer answer. To be pure, your function's output must be uniquely determined by its input. So we amend our function f by sending 0 to the new abstract value e that we call exception. We make sure that value e contains no outside information that is not uniquely determined by our input, which is x. So here is an example of exception without side-effect:
e = {
type: error,
message: 'I got error trying to divide 1 by 0'
}
And here is one with side-effect:
e = {
type: error,
message: 'Our committee to decide what is 1/0 is currently away'
}
Actually, it only has side-effects if that message can possibly change in the future. But if it is guaranteed to never change, that value becomes uniquely predictable, and so there is no side-effect.
To make it even sillier: a function that always returns 42 is clearly pure. But if someone decides to make 42 a variable whose value might change, the very same function stops being pure under the new conditions.
Note that I am using the object literal notation for simplicity to demonstrate the essence. Unfortunately things are messed up in languages like JavaScript, where error is not a type that behaves the way we want here with respect to function composition, whereas actual types like null or NaN do not behave this way either, but rather go through some artificial and not always intuitive type conversions.
Type extension
As we want to vary the message inside our exception, we are really declaring a new type E for the whole exception object, and then extending our function's return type to include it.
That is what the maybe number does, apart from its confusing name: it is to be either of type number or of the new exception type E, so it is really the union number | E of number and E. In particular, it depends on how we want to construct E, which is neither suggested nor reflected in the name maybe number.
What is functional composition?
It is the mathematical operation taking functions f: X -> Y and g: Y -> Z and constructing their composition as the function h: X -> Z satisfying h(x) = g(f(x)).
The problem with this definition occurs when the result f(x) is not allowed as argument of g.
In mathematics those functions cannot be composed without extra work.
The strictly mathematical solution for our above example of f and g is to remove 0 from the domain of definition of f. With that new domain (a new, more restrictive type for x), f becomes composable with g.
However, it is not very practical in programming to restrict the domain of f like that. Instead, exceptions can be used.
Or, as another approach, artificial values are created, like NaN, undefined, null, Infinity etc. So you evaluate 1/0 to Infinity and 1/-0 to -Infinity. And then you force the new value back into your expression instead of throwing an exception, leading to results you may or may not find predictable:
1/0 // => Infinity
parseInt(Infinity) // => NaN
NaN < 0 // => false
false + 1 // => 1
And we are back to regular numbers ready to move on ;)
JavaScript allows us to keep executing numerical expressions at any cost, without throwing errors, as in the above example. That means it also lets us compose functions. Which is exactly what a monad is about: it is a rule to compose functions satisfying the axioms defined at the beginning of this answer.
But is the rule of composing functions that arises from JavaScript's way of dealing with numerical errors a monad?
To answer this question, all you need to do is check the axioms (left as an exercise, as it is not part of the question here ;)
Can throwing exception be used to construct a monad?
Indeed, a more useful monad would instead be the rule prescribing that if f throws an exception for some x, so does its composition with any g. Plus, make the exception E globally unique, with only one possible value ever (a terminal object in category theory). Now the two axioms are instantly checkable, and we get a very useful monad. The result is what is well known as the maybe monad.
A monad is a data type that encapsulates a value, and to which, essentially, two operations can be applied:
return x creates a value of the monad type that encapsulates x
m >>= f (read it as "the bind operator") applies the function f to the value in the monad m
That's what a monad is. There are a few more technicalities, but basically those two operations define a monad. The real question is, "What does a monad do?", and that depends on the monad: lists are monads, Maybes are monads, IO operations are monads. All it means when we say those things are monads is that they have the monad interface of return and >>=.
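In Haskell notation those two operations have the types return :: Monad m => a -> m a and (>>=) :: Monad m => m a -> (a -> m b) -> m b; a minimal sketch of chaining them in the Maybe monad:

-- A tiny chain in the Maybe monad using only return and (>>=).
example :: Maybe Int
example = return 20 >>= \x -> Just (x + 1) >>= \y -> return (y * 2)

main :: IO ()
main = print example   -- Just 42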
From wikipedia:
In functional programming, a monad is a kind of abstract data type used to represent computations (instead of data in the domain model). Monads allow the programmer to chain actions together to build a pipeline, in which each action is decorated with additional processing rules provided by the monad. Programs written in functional style can make use of monads to structure procedures that include sequenced operations,[1][2] or to define arbitrary control flows (like handling concurrency, continuations, or exceptions).

Formally, a monad is constructed by defining two operations (bind and return) and a type constructor M that must fulfill several properties to allow the correct composition of monadic functions (i.e. functions that use values from the monad as their arguments). The return operation takes a value from a plain type and puts it into a monadic container of type M. The bind operation performs the reverse process, extracting the original value from the container and passing it to the associated next function in the pipeline.

A programmer will compose monadic functions to define a data-processing pipeline. The monad acts as a framework, as it's a reusable behavior that decides the order in which the specific monadic functions in the pipeline are called, and manages all the undercover work required by the computation.[3] The bind and return operators interleaved in the pipeline will be executed after each monadic function returns control, and will take care of the particular aspects handled by the monad.
I believe it explains it very well.
I'll try to make the shortest definition I can manage using OOP terms:
A generic class CMonadic<T> is a monad if it defines at least the following methods:
class CMonadic<T> {
    static CMonadic<T> create(T t);                        // a.k.a. "return" in Haskell
    public CMonadic<U> flatMap<U>(Func<T, CMonadic<U>> f); // a.k.a. "bind" in Haskell
}
and if the following laws apply for all types T and their possible values t
left identity:
CMonadic<T>.create(t).flatMap(f) == f(t)
right identity:
instance.flatMap(CMonadic<T>.create) == instance
associativity:
instance.flatMap(f).flatMap(g) == instance.flatMap(t => f(t).flatMap(g))
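Read back in Haskell terms (create is return, flatMap is >>=), the three laws can be spot-checked directly; here is a minimal sketch for the Maybe monad with arbitrarily chosen f, g and m:

f, g :: Int -> Maybe Int
f x = Just (x + 1)
g x = if x > 0 then Just (x * 2) else Nothing

m :: Maybe Int
m = Just 3

main :: IO ()
main = do
  print (return 3 >>= f, f 3)                        -- left identity: both Just 4
  print (m >>= return, m)                            -- right identity: both Just 3
  print ((m >>= f) >>= g, m >>= (\x -> f x >>= g))   -- associativity: both Just 8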
Examples:
A List monad may have:
List<int>.create(1) --> [1]
And flatMap on the list [1,2,3] could work like so:
intList.flatMap(x => List<int>.makeFromTwoItems(x, x*10)) --> [1,10,2,20,3,30]
Iterables and Observables can also be made monadic, as well as Promises and Tasks.
Commentary:
Monads are not that complicated. The flatMap function is a lot like the more commonly encountered map. It receives a function argument (also known as delegate), which it may call (immediately or later, zero or more times) with a value coming from the generic class. It expects that passed function to also wrap its return value in the same kind of generic class. To help with that, it provides create, a constructor that can create an instance of that generic class from a value. The return result of flatMap is also a generic class of the same type, often packing the same values that were contained in the return results of one or more applications of flatMap to the previously contained values. This allows you to chain flatMap as much as you want:
intList.flatMap(x => List<int>.makeFromTwoItems(x, x*10))
       .flatMap(x => x % 3 == 0
           ? List<string>.create("x = " + x.toString())
           : List<string>.empty())
It just so happens that this kind of generic class is useful as a base model for a huge number of things. This (together with the category theory jargonisms) is the reason why Monads seem so hard to understand or explain. They're a very abstract thing and only become obviously useful once they're specialized.
For example, you can model exceptions using monadic containers. Each container will either contain the result of the operation or the error that has occurred. The next function (delegate) in the chain of flatMap callbacks will only be called if the previous one packed a value in the container. Otherwise, if an error was packed, the error will continue to propagate through the chained containers until a container is found that has an error handler function attached via a method called .orElse() (such a method would be an allowed extension).
Notes: Functional languages allow you to write functions that can operate on any kind of monadic generic class. For this to work, one would have to write a generic interface for monads. I don't know if it's possible to write such an interface in C#, but as far as I know it isn't:
interface IMonad<T> {
    static IMonad<T> create(T t); // not allowed
    public IMonad<U> flatMap<U>(Func<T, IMonad<U>> f); // not specific enough,
    // because the function must return the same kind of monad, not just any monad
}
Whether a monad has a "natural" interpretation in OO depends on the monad. In a language like Java, you can translate the maybe monad to the language of checking for null pointers, so that computations that fail (i.e., produce Nothing in Haskell) emit null pointers as results. You can translate the state monad into the language generated by creating a mutable variable and methods to change its state.
A monad is a monoid in the category of endofunctors.
The information that sentence puts together is very deep. And you work in a monad with any imperative language. A monad is a "sequenced" domain specific language. It satisfies certain interesting properties, which taken together make a monad a mathematical model of "imperative programming". Haskell makes it easy to define small (or large) imperative languages, which can be combined in a variety of ways.
As an OO programmer, you use your language's class hierarchy to organize the kinds of functions or procedures that can be called in a context, what you call an object. A monad is also an abstraction on this idea, insofar as different monads can be combined in arbitrary ways, effectively "importing" all of the sub-monad's methods into the scope.
Architecturally, one then uses type signatures to explicitly express which contexts may be used for computing a value.
One can use monad transformers for this purpose, and there is a high quality collection of all of the "standard" monads:
Lists (non-deterministic computations, by treating a list as a domain)
Maybe (computations that can fail, but for which reporting is unimportant)
Error (computations that can fail and require exception handling)
Reader (computations that can be represented by compositions of plain Haskell functions)
Writer (computations with sequential "rendering"/"logging" (to strings, html, etc.))
Cont (continuations)
IO (computations that depend on the underlying computer system)
State (computations whose context contains a modifiable value)
with corresponding monad transformers and type classes. Type classes allow a complementary approach to combining monads by unifying their interfaces, so that concrete monads can implement a standard interface for the monad "kind". For example, the module Control.Monad.State contains a class MonadState s m, and (State s) is an instance of the form
instance MonadState s (State s) where
    put = ...
    get = ...
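A small usage sketch of that get/put interface with the State monad from Control.Monad.State (mtl): a counter threaded through a computation as modifiable context:

import Control.Monad.State

tick :: State Int Int
tick = do
  n <- get        -- read the current state
  put (n + 1)     -- store the new state
  return n        -- the action's value is the old counter

threeTicks :: State Int [Int]
threeTicks = do
  a <- tick
  b <- tick
  c <- tick
  return [a, b, c]

main :: IO ()
main = print (runState threeTicks 0)   -- ([0,1,2],3)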
The long story is that a monad is a functor which attaches "context" to a value, which has a way to inject a value into the monad, and which has a way to evaluate values with respect to the context attached to it, at least in a restricted way.
So:
return :: a -> m a
is a function which injects a value of type a into a monad "action" of type m a.
(>>=) :: m a -> (a -> m b) -> m b
is a function which takes a monad action, evaluates its result, and applies a function to the result. The neat thing about (>>=) is that the result is in the same monad. In other words, in m >>= f, (>>=) pulls the result out of m, and binds it to f, so that the result is in the monad. (Alternatively, we can say that (>>=) pulls f into m and applies it to the result.) As a consequence, if we have f :: a -> m b, and g :: b -> m c, we can "sequence" actions:
m >>= f >>= g
Or, using "do notation"
do x <- m
   y <- f x
   g y
The type for (>>) might be illuminating. It is
(>>) :: m a -> m b -> m b
It corresponds to the (;) operator in procedural languages like C. It allows do notation like:
m = do x <- someQuery
       someAction x
       theNextAction
       andSoOn
In mathematical and philosophical logic, we have frames and models, which are "naturally" modelled with monadism. An interpretation is a function which looks into the model's domain and computes the truth value (or generalizations) of a proposition (or formula, under generalizations). In a modal logic for necessity, we might say that a proposition is necessary if it is true in "every possible world" -- if it is true with respect to every admissible domain. This means that a model in a language for a proposition can be reified as a model whose domain consists of a collection of distinct models (one corresponding to each possible world). Every monad has a method named "join" which flattens layers, which implies that every monad action whose result is a monad action can be embedded in the monad.
join :: m (m a) -> m a
More importantly, it means that the monad is closed under the "layer stacking" operation. This is how monad transformers work: they combine monads by providing "join-like" methods for types like
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
so that we can transform an action in (MaybeT m) into an action in m, effectively collapsing layers. In this case, runMaybeT :: MaybeT m a -> m (Maybe a) is our join-like method. (MaybeT m) is a monad, and MaybeT :: m (Maybe a) -> MaybeT m a is effectively a constructor for a new type of monad action in m.
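As a small usage sketch (the concrete program is made up for illustration): stack Maybe on top of IO with MaybeT, then collapse the layers again with runMaybeT:

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..))
import Text.Read (readMaybe)

-- IO does the prompting, Maybe models failure; MaybeT stacks the two.
readPositive :: MaybeT IO Int
readPositive = do
  lift (putStrLn "Enter a positive number:")
  n <- MaybeT (fmap readMaybe getLine)       -- Nothing if the input is not a number
  if n > 0 then return n else MaybeT (return Nothing)

main :: IO ()
main = do
  result <- runMaybeT readPositive           -- :: IO (Maybe Int)
  print result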
A free monad for a functor is the monad generated by stacking f, with the implication that every sequence of constructors for f is an element of the free monad (or, more exactly, something with the same shape as the tree of sequences of constructors for f). Free monads are a useful technique for constructing flexible monads with a minimal amount of boiler-plate. In a Haskell program, I might use free monads to define simple monads for "high level system programming" to help maintain type safety (I'm just using types and their declarations. Implementations are straight-forward with the use of combinators):
data RandomF r a = GetRandom (r -> a) deriving Functor
type Random r a = Free (RandomF r) a
type RandomT m a = Random (m a) (m a) -- model randomness in a monad by computing random monad elements

getRandom :: Random r r
runRandomIO :: Random r a -> IO a     -- use some kind of IO-based backend to run
runRandomIO' :: Random r a -> IO a    -- use some other kind of IO-based backend
runRandomList :: Random r a -> [a]    -- some kind of list-based backend (for pseudo-randoms)
Monadism is the underlying architecture for what you might call the "interpreter" or "command" pattern, abstracted to its clearest form, since every monadic computation must be "run", at least trivially. (The runtime system runs the IO monad for us, and is the entry point to any Haskell program. IO "drives" the rest of the computations, by running IO actions in order).
The type for join is also where we get the statement that a monad is a monoid in the category of endofunctors. Join is typically more important for theoretical purposes, in virtue of its type. But understanding the type means understanding monads. Join and monad transformer's join-like types are effectively compositions of endofunctors, in the sense of function composition. To put it in a Haskell-like pseudo-language,
Foo :: m (m a) <-> (m . m) a
Quick explanation:
Monads (in functional programming) are functions with context-dependent behaviour.
The context is passed as argument, being returned from a previous call of that monad. It makes it look like the same argument produces a different return value on subsequent calls.
Equivalent:
Monads are functions whose actual arguments depend on past calls of a call chain.
Typical example: Stateful functions.
FAQ
Wait, what do you mean with "behaviour"?
Behaviour means the return value and side effects that you get for specific inputs.
But what's so special about them?
In procedural semantics: nothing. But they are modelled solely using pure functions. It's because pure functional programming languages like Haskell only use pure functions which are not stateful by themselves.
But then, where comes the state from?
The statefulness comes from the sequencing of function-call execution. It allows nested functions to drag certain arguments around through multiple function calls. This simulates state. The monad is just a software pattern to hide these additional arguments behind the return values of shiny functions, often called return and bind.
Why is input/output a monad in Haskell?
Because displayed text is state in your operating system. If you read or write the same text multiple times, the state of the operating system will not be equal after each call; instead, your output device will show the text output three times. For proper interaction with the OS, Haskell needs to model the OS state for itself as a monad.
Technically, you don't need the monad definition. Purely functional languages can use the idea of "uniqueness types" for the same purpose.
Do monads exist in non-functional languages?
Yes, basically an interpreter is a complex monad, interpreting each instruction and mapping it to a new state in the OS.
Long explanation:
A monad (in functional programming) is a pure-functional software pattern. A monad is an automatically maintained environment (an object) in which a chain of pure function calls can be executed. The function results modify or interact with that environment.
In other words, a monad is a "function-repeater" or "function-chainer" that chains and evaluates argument values within an automatically maintained environment. Often the chained argument values are "update functions", but they could actually be any objects (objects with methods, or container elements which make up a container). The monad is the "glue code" executed before and after each evaluated argument. This glue function, "bind", is supposed to integrate each argument's environment output into the original environment.
Thus, the monad concatenates the results of all arguments in a way that is specific to a particular monad's implementation. Whether and how control and data flow between the arguments is also implementation-specific.
This intertwined execution makes it possible to model complete imperative control flow (as in a GOTO program) or parallel execution using only pure functions, as well as side effects, temporary state, or exception handling between the function calls, even though the applied functions know nothing about the external environment.
EDIT: Note that monads can evaluate the function chain in any kind of control-flow graph, even in a non-deterministic, NFA-like manner, because the remaining chain is evaluated lazily and can be evaluated multiple times at each point of the chain, which allows for backtracking.
The reason to use the monad concept is the pure-functional paradigm, which needs a tool to simulate behaviour that is typically modelled impurely in a pure way; it is not that monads do something special.
Monads for OOP people
In OOP a monad is a typical object with
a constructor often called return that turns a value into an initial instance of the environment
a chainable argument-application method, often called bind, which updates the object's state with the environment returned by a function passed as an argument.
Some people also mention a third function, join, which is part of bind. Because the "argument functions" are evaluated within the environment, their result is nested in the environment itself. join is the last step, which "un-nests" (flattens) the result so that the environment can be replaced with a new one.
A monad can implement the Builder pattern but allows for much more general use.
Example (Python)
I think the most intuitive example of a monad is the chained relational operators in Python:
result = 0 <= x == y < 3
You see that it is a monad because it has to carry along some boolean state which is not known by individual relational operator calls.
If you think about how to implement it at a low level, without short-circuiting behaviour, then you will get exactly a monad-like implementation:
# result = ret(0)
result = [0, True]  # a list, since we mutate it in place below
# result = result.bind(lambda v: (x, v <= x))
result[1] = result[1] and result[0] <= x
result[0] = x
# result = result.bind(lambda v: (y, v == y))
result[1] = result[1] and result[0] == y
result[0] = y
# result = result.bind(lambda v: (3, v < 3))
result[1] = result[1] and result[0] < 3
result[0] = 3
result = result[1]  # not explicitly part of the monad
A real monad would compute every argument at most once.
Now think away the "result" variable and you get this chain:
ret(0) .bind (lambda v: (x, v <= x)) .bind (lambda v: (y, v == y)) .bind (lambda v: (3, v < 3))
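For comparison, here is a rough Haskell transliteration of the chain above: a sketch of the ret/bind pattern over a (last value, still true) pair. It is not a lawful Monad instance (the type is not even parameterised), just the same idea made explicit:
type Chain = (Int, Bool)

ret :: Int -> Chain
ret v = (v, True)

-- "bind": run the next step on the last value and accumulate the boolean.
bind :: Chain -> (Int -> Chain) -> Chain
bind (v, ok) k = let (v', ok') = k v in (v', ok && ok')

-- result = 0 <= x == y < 3, without short-circuiting
chained :: Int -> Int -> Bool
chained x y = snd (ret 0 `bind` (\v -> (x, v <= x))
                         `bind` (\v -> (y, v == y))
                         `bind` (\v -> (3, v <  3)))

main :: IO ()
main = print (chained 2 2)   -- True, since 0 <= 2 == 2 < 3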
Monads in typical usage are the functional equivalent of procedural programming's exception handling mechanisms.
In modern procedural languages, you put an exception handler around a sequence of statements, any of which may throw an exception. If any of the statements throws an exception, normal execution of the sequence of statements halts and transfers to an exception handler.
Functional programming languages, however, have philosophically avoided exception-handling features because of their "goto"-like nature. The functional programming perspective is that functions should not have "side effects" like exceptions that disrupt program flow.
In reality, side-effects cannot be ruled out in the real world due primarily to I/O. Monads in functional programming are used to handle this by taking a set of chained function calls (any of which might produce an unexpected result) and turning any unexpected result into encapsulated data that can still flow safely through the remaining function calls.
The flow of control is preserved but the unexpected event is safely encapsulated and handled.
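A minimal Haskell sketch of that idea, using Either as the carrier for the unexpected result (the function names and error text are invented for illustration):
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv a b = Right (a `div` b)

-- Each step may fail; a failure is encapsulated and flows past the rest.
pipeline :: Int -> Either String Int
pipeline n = safeDiv 100 n >>= safeDiv 500 >>= safeDiv 1000

main :: IO ()
main = do
  print (pipeline 5)   -- Right 40
  print (pipeline 0)   -- Left "division by zero"; the later steps are skipped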
In OO terms, a monad is a fluent container.
The minimum requirement is a definition of class Something<A> that supports a constructor Something(A a) and at least one method Something<B> flatMap(Function<A, Something<B>>).
Arguably, it also counts if your monad class has any method with a signature like Something<B> work() that preserves the class's rules; in effect, the compiler bakes flatMap in at compile time.
Why is a monad useful? Because it is a container that allows chainable operations that preserve semantics. For example, Optional<?> preserves the semantics of isPresent for Optional<String>, Optional<Integer>, Optional<MyClass>, and so on.
As a rough example,
Something<Integer> i = new Something<String>("a")
    .flatMap(doOneThing)
    .flatMap(doAnother)
    .flatMap(toInt);
Note we start with a string and end with an integer. Pretty cool.
In OO, it might take a little hand-waving, but any method on Something that returns another subclass of Something meets the criterion of a container function that returns a container of the original type.
That's how you preserve semantics -- i.e. the container's meaning and operations don't change, they just wrap and enhance the object inside the container.
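As a rough sketch of the same chain in Haskell terms (the step names mirror the pseudo-Java above and are invented here), Maybe plays the role of the container and >>= plays the role of flatMap:
doOneThing :: String -> Maybe String
doOneThing s = Just (s ++ "b")

doAnother :: String -> Maybe String
doAnother s = Just (s ++ "c")

toInt :: String -> Maybe Int
toInt s = Just (length s)

chain :: Maybe Int
chain = Just "a" >>= doOneThing >>= doAnother >>= toInt

main :: IO ()
main = print chain   -- Just 3: started with a string, ended with an integer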
I am sharing my understanding of monads, which may not be theoretically perfect. Monads are about context propagation: you define some context for some data (or data type), and then define how that context will be carried along with the data throughout its processing pipeline. Defining context propagation is mostly about defining how to merge multiple contexts (of the same type). Using monads also means ensuring these contexts are not accidentally stripped off the data. On the other hand, context-less data can be brought into a new or existing context. This simple concept can then be used to ensure compile-time correctness of a program.
A monad is an array of functions
(Psst: an array of functions is just a computation.)
Actually, instead of a true array (one function per array cell) you have those functions chained by another function, >>=. The >>= allows you to adapt the result of function i to feed it to function i+1, to perform calculations between them,
or even not to call function i+1 at all.
The types used here are "types with context", that is, a value with a "tag".
The functions being chained must take a "naked value" and return a tagged result.
One of the duties of >>= is to extract a naked value out of its context.
There is also the function "return", that takes a naked value and puts it with a tag.
An example with Maybe. Let's use it to store a simple integer on which to make calculations.
-- a * b
multiply :: Int -> Int -> Maybe Int
multiply a b = return (a*b)
-- divideBy 5 100 = 100 / 5
divideBy :: Int -> Int -> Maybe Int
divideBy 0 _ = Nothing -- dividing by 0 gives NOTHING
divideBy denom num = return (quot num denom) -- quotient of num / denom
-- tagged value
val1 = Just 160
-- array of functions fed with val1
array1 = val1 >>= divideBy 2 >>= multiply 3 >>= divideBy 4 >>= multiply 3
-- array of functions created with the do notation
-- equivalent to array1, except that the starting value is passed in as an argument
array2 :: Int -> Maybe Int
array2 n = do
v <- divideBy 2 n
v <- multiply 3 v
v <- divideBy 4 v
v <- multiply 3 v
return v
-- array of functions,
-- the first >>= performs 160 / 0, returning Nothing
-- the second >>= has to perform Nothing >>= multiply 3 ....
-- and simply returns Nothing without calling multiply 3 ....
array3 = val1 >>= divideBy 0 >>= multiply 3 >>= divideBy 4 >>= multiply 3
main = do
print array1
print (array2 160)
print array3
Just to show that monads are arrays of functions with helper operations, consider the equivalent of the above example, using a real array of functions:
type MyMonad = [Int -> Maybe Int] -- my monad as a real array of functions
myArray1 = [divideBy 2, multiply 3, divideBy 4, multiply 3]
-- function for the machinery of executing each function i with the result provided by function i-1
runMyMonad :: Maybe Int -> MyMonad -> Maybe Int
runMyMonad val [] = val
runMyMonad Nothing _ = Nothing
runMyMonad (Just val) (f:fs) = runMyMonad (f val) fs
And it would be used like this:
print (runMyMonad (Just 160) myArray1)
If you've ever used PowerShell, the patterns Eric described should sound familiar. PowerShell cmdlets are monads; functional composition is represented by a pipeline.
Jeffrey Snover's interview with Erik Meijer goes into more detail.
From a practical point of view (summarizing what has been said in many previous answers and related articles), it seems to me that one fundamental "purpose" (or usefulness) of the monad is to leverage the dependencies implicit in recursive method invocations, a.k.a. function composition (when f1 calls f2, which calls f3, f3 needs to be evaluated before f2, which needs to be evaluated before f1), in order to represent sequential composition in a natural way, especially under a lazy evaluation model. That is, sequential composition as a plain sequence, e.g. "f3(); f2(); f1();" in C. The trick is especially obvious if you consider a case where f3, f2 and f1 actually return nothing: their chaining as f1(f2(f3())) is artificial, intended purely to create a sequence.
This is especially relevant when side effects are involved, i.e. when some state is altered. (If f1, f2 and f3 had no side effects, it wouldn't matter in what order they were evaluated, which is a great property of pure functional languages: it allows, for example, those computations to be parallelized.) The more pure functions, the better.
I think from that narrow point of view, monads could be seen as syntactic sugar for languages that favor lazy evaluation (that evaluate things only when absolutely necessary, following an order that does not rely on the presentation of the code), and that have no other means of representing sequential composition. The net result is that sections of code that are "impure" (i.e. that do have side-effects) can be presented naturally, in an imperative manner, yet are cleanly separated from pure functions (with no side-effects), which can be evaluated lazily.
This is only one aspect though, as warned here.
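As a small illustration (a sketch, not from the original answer), an imperative-looking sequence in Haskell is literally a chain of binds, with each step's "result" feeding the next:
step :: String -> IO ()
step name = putStrLn ("running " ++ name)

-- The do-block below is sugar for:
--   step "f3" >>= \_ -> step "f2" >>= \_ -> step "f1"
main :: IO ()
main = do
  step "f3"
  step "f2"
  step "f1"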
A simple explanation of monads, with a Marvel case study, is here.
Monads are abstractions used to sequence dependent functions that are effectful. Effectful here means they return a type of the form F[A], for example Option[A], where Option is the F, called the type constructor. Let's see this in two simple steps.
Function composition is transitive. So, to go from A to C, I can compose A => B and B => C:
A => C = A => B andThen B => C
However, if a function returns an effect type like Option[A], i.e. it is A => F[B], the composition doesn't work: to go on to B we need an A => B, but we have an A => F[B].
We need a special operator, "bind" that knows how to fuse these functions that return F[A].
A => F[C] = A => F[B] bind B => F[C]
The "bind" function is defined for the specific F.
There is also "return", of type A => F[A] for any A, defined for that specific F also. To be a Monad, F must have these two functions defined for it.
Thus we can construct an effectful function A => F[B] from any pure function A => B,
A => F[B] = A => B andThen return
but a given F can also define its own opaque "built-in" special functions of such types, functions that a user can't define themselves (in a pure language), like
"random" (Range => Random[Int])
"print" (String => IO[ () ])
"try ... catch", etc.
The simplest explanation I can think of is that monads are a way of composing functions with embellished results (a.k.a. Kleisli composition). An "embellished" function has the signature a -> (b, smth), where a and b are types (think Int, Bool) that might differ from each other, but not necessarily, and smth is the "context" or the "embellishment".
This type of function can also be written a -> m b, where m stands for the "embellishment" smth. These are functions that return values in a context (think of functions that log their actions, where smth is the logging message, or functions that perform input/output and whose results depend on the result of the IO action).
A monad is an interface ("typeclass") that makes the implementer tell it how to compose such functions. The implementer needs to define a composition function (a -> m b) -> (b -> m c) -> (a -> m c) for any type m that wants to implement the interface (this is the Kleisli composition).
So, suppose we have a tuple type (Int, String) representing the results of computations on Ints that also log their actions, with (_, String) being the "embellishment" (the log of the action), and two functions increment :: Int -> (Int, String) and twoTimes :: Int -> (Int, String). We want to obtain a function incrementThenDouble :: Int -> (Int, String) which is the composition of the two functions and which also takes the logs into account.
In the given example, applying the monadic composition of the two functions to the integer value 2, incrementThenDouble 2 (which corresponds to running increment 2 and then twoTimes on its result), returns (6, " Adding 1. Doubling 3."), the intermediary results being increment 2 equal to (3, " Adding 1.") and twoTimes 3 equal to (6, " Doubling 3.").
From this Kleisli composition function one can derive the usual monadic functions.
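A small Haskell sketch of exactly that example, with the Kleisli composition written out by hand over plain (value, log) pairs:
increment :: Int -> (Int, String)
increment n = (n + 1, " Adding 1.")

twoTimes :: Int -> (Int, String)
twoTimes n = (n * 2, " Doubling " ++ show n ++ ".")

-- Kleisli composition for the (_, String) embellishment:
-- run both steps and concatenate their logs.
compose :: (a -> (b, String)) -> (b -> (c, String)) -> a -> (c, String)
compose f g a = let (b, log1) = f a
                    (c, log2) = g b
                in (c, log1 ++ log2)

incrementThenDouble :: Int -> (Int, String)
incrementThenDouble = increment `compose` twoTimes

main :: IO ()
main = print (incrementThenDouble 2)   -- (6," Adding 1. Doubling 3.")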
See my answer to "What is a monad?"
It begins with a motivating example, works through the example, derives an example of a monad, and formally defines "monad".
It assumes no knowledge of functional programming and it uses pseudocode with function(argument) := expression syntax with the simplest possible expressions.
This C++ program is an implementation of the pseudocode monad. (For reference: M is the type constructor, feed is the "bind" operation, and wrap is the "return" operation.)
#include <iostream>
#include <string>
template <class A> class M
{
public:
A val;
std::string messages;
};
template <class A, class B>
M<B> feed(M<B> (*f)(A), M<A> x)
{
M<B> m = f(x.val);
m.messages = x.messages + m.messages;
return m;
}
template <class A>
M<A> wrap(A x)
{
M<A> m;
m.val = x;
m.messages = "";
return m;
}
class T {};
class U {};
class V {};
M<U> g(V x)
{
M<U> m;
m.messages = "called g.\n";
return m;
}
M<T> f(U x)
{
M<T> m;
m.messages = "called f.\n";
return m;
}
int main()
{
V x;
M<T> m = feed(f, feed(g, wrap(x)));
std::cout << m.messages;
}
optional/maybe is the most fundamental monadic type
Monads are about function composition. If you have functions f: optional<A> -> optional<B>, g: optional<B> -> optional<C>, and h: optional<C> -> optional<D>, then you could compose them:
optional<A> opt;
h(g(f(opt)));
The benefit of monad types is that you can instead compose f: A -> optional<B>, g: B -> optional<C>, and h: C -> optional<D>. They can do this because the monadic interface provides the bind operator
auto optional<A>::bind(A->optional<B>)->optional<B>
and the composition could be written
optional<A> opt
opt.bind(f)
.bind(g)
.bind(h)
The benefit of monads is that we no longer have to handle the logic of if(!opt) return nullopt; in each of f,g,h because this logic is moved into the bind operator.
ranges/lists/iterables are the second most fundamental monad type.
The monadic feature of ranges is that we can transform and then flatten. For example, starting with a sentence encoded as a range of integers [36, 98],
we can transform it to [['m','a','c','h','i','n','e',' '], ['l','e','a','r','n','i','n','g', '.']]
and then flatten it to ['m','a','c','h','i','n','e', ' ', 'l','e','a','r','n','i','n','g','.']
Instead of writing this code
vector<string> lookup_table;   // e.g. lookup_table[36] == "machine ", lookup_table[98] == "learning."

auto stringify(vector<unsigned> rng) -> vector<char>
{
    vector<char> result;
    for(unsigned key : rng)
        for(char ch : lookup_table[key])
            result.push_back(ch);
    return result;
}
we could write this
auto f(unsigned key) -> vector<char>
{
    vector<char> result;
    for(char ch : lookup_table[key])
        result.push_back(ch);
    return result;
}

auto stringify(vector<unsigned> rng) -> vector<char>
{
    return rng.bind(f);
}
The monad pushes the for loop for(unsigned key : rng) up into the bind function, allowing for code that is, at least in theory, easier to reason about. Pythagorean triples can be generated in range-v3 with nested binds (rather than the chained binds we saw with optional):
auto triples =
for_each(ints(1), [](int z) {
return for_each(ints(1, z), [=](int x) {
return for_each(ints(x, z), [=](int y) {
return yield_if(x*x + y*y == z*z, std::make_tuple(x, y, z));
});
});
});
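For comparison (a sketch, not part of the original answer), the same nested-bind structure is what Haskell's list monad gives you with do-notation and guard:
import Control.Monad (guard)

triples :: [(Int, Int, Int)]
triples = do
  z <- [1 ..]
  x <- [1 .. z]
  y <- [x .. z]
  guard (x*x + y*y == z*z)
  return (x, y, z)

main :: IO ()
main = print (take 5 triples)   -- [(3,4,5),(6,8,10),(5,12,13),(9,12,15),(8,15,17)]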