How to mock for testing in Haskell?

Suppose I am defining a Haskell function f (either pure or an action) and somewhere within f I call function g. For example:
f = ...
    g someParms
    ...
How do I replace function g with a mock version for unit testing?
If I were working in Java, g would be a method on class SomeServiceImpl that implements interface SomeService. Then, I'd use dependency injection to tell f to either use SomeServiceImpl or MockSomeServiceImpl. I'm not sure how to do this in Haskell.
Is the best way to do it to introduce a type class SomeService:
class SomeService a where
    g :: a -> typeOfSomeParms -> gReturnType

data SomeServiceImpl = SomeServiceImpl
data MockSomeServiceImpl = MockSomeServiceImpl

instance SomeService SomeServiceImpl where
    g _ someParms = ... -- real implementation of g

instance SomeService MockSomeServiceImpl where
    g _ someParms = ... -- mock implementation of g
Then, redefine f as follows:
f someService ... = ...
    g someService someParms
    ...
It seems like this would work, but I'm just learning Haskell and wondering if this is the best way to do this? More generally, I like the idea of dependency injection not just for mocking, but also to make code more customizable and reusable. Generally, I like the idea of not being locked into a single implementation for any of the services that a piece of code uses. Would it be considered a good idea to use the above trick extensively in code to get the benefits of dependency injection?
EDIT:
Let's take this one step further. Suppose I have a series of functions a, b, c, d, e, and f in a module that all need to be able to reference functions g, h, i, and j from a different module. And suppose I want to be able to mock functions g, h, i, and j. I could clearly pass the 4 functions in as parameters to a-f, but that's a bit of a pain to add the 4 parameters to all the functions. Plus, if I ever needed to change the implementation of any of a-f to call yet another method, I'd need to change its signature, which could create a nasty refactoring exercise.
Any tricks to making this type of situation work easily? For example, in Java, I could construct an object with all of its external services. The constructor would store the services off in member variables. Then, any of the methods could access those services via the member variables. So, as methods are added to services, none of the method signatures change. And if new services are needed, only the constructor method signature changes.

Why use unit testing when you can have Automated Specification-Based Testing? The QuickCheck library does this for you. It can generate arbitrary (mock) functions and data using the Arbitrary type-class.
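For instance, a minimal property-based test looks like this (a sketch; prop_reverseTwice is a hypothetical example property, not from the question):

import Test.QuickCheck

-- A property: reversing a list twice yields the original list.
-- QuickCheck checks it against 100 randomly generated inputs.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice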
"Dependency Injection" is a degenerate form of implicit parameter passing. In Haskell, you can use Reader, or Free to achieve the same thing in a more Haskelly way.

Another alternative:
{-# LANGUAGE FlexibleContexts, RankNTypes #-}
import Control.Monad.RWS
-- (the deprecated datatype context "(Monad m) =>" is omitted; it is not needed)
data ServiceImplementation m = ServiceImplementation
    { serviceHello :: m ()
    , serviceGetLine :: m String
    , servicePutLine :: String -> m ()
    }

serviceHelloBase :: (Monad m) => ServiceImplementation m -> m ()
serviceHelloBase impl = do
    name <- serviceGetLine impl
    servicePutLine impl $ "Hello, " ++ name

realImpl :: ServiceImplementation IO
realImpl = ServiceImplementation
    { serviceHello = serviceHelloBase realImpl
    , serviceGetLine = getLine
    , servicePutLine = putStrLn
    }

mockImpl :: (Monad m, MonadReader String m, MonadWriter String m) =>
    ServiceImplementation m
mockImpl = ServiceImplementation
    { serviceHello = serviceHelloBase mockImpl
    , serviceGetLine = ask
    , servicePutLine = tell
    }

main :: IO ()
main = serviceHello realImpl

test :: Bool
test = case runRWS (serviceHello mockImpl) "Dave" () of
    (_, _, "Hello, Dave") -> True
    _ -> False
This is actually one of the many ways to create OO-styled code in Haskell.

To follow up on the edit asking about multiple functions, one option is to just put them in a record type and pass the record in. Then you can add new ones just by updating the record type. For example:
data FunctionGroup t = FunctionGroup { g :: Int -> Int, h :: t -> Int }
a grp ... = ... g grp someThing ... h grp someThingElse ...
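To make that concrete, here is a minimal runnable sketch (instantiating t to String; realGroup and mockGroup are hypothetical names):

data FunctionGroup = FunctionGroup
    { g :: Int -> Int
    , h :: String -> Int
    }

realGroup, mockGroup :: FunctionGroup
realGroup = FunctionGroup { g = (* 2), h = length }    -- real implementations
mockGroup = FunctionGroup { g = const 0, h = const 0 } -- canned answers for tests

-- Each of a..f takes the group; adding a function to the record
-- changes no signatures.
a :: FunctionGroup -> Int -> Int
a grp x = g grp x + h grp "something"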
Another option that might be viable in some cases is to use type classes. For example:
class HasFunctionGroup t where
    g :: Int -> t
    h :: t -> Int

a :: HasFunctionGroup t => <some type involving t>
a ... = ... g someThing ... h someThingElse
This only works if you can find a type (or multiple types if you use multi-parameter type classes) that the functions have in common, but in cases where it is appropriate it will give you nice idiomatic Haskell.

Couldn't you just pass a function named g to f? As long as g satisfies the interface typeOfSomeParms -> gReturnType, you should be able to pass in either the real function or a mock function, e.g.:
f g = do
    ...
    g someParams
    ...
I have not used dependency injection in Java myself, but the texts I have read made it sound a lot like passing higher-order functions, so maybe this will do what you want.
Response to edit: ephemient's answer is better if you need to solve the problem in an enterprisey way, because you define a type containing multiple functions. The prototyping way I propose would just pass a tuple of functions without defining a containing type. But since I hardly ever write type annotations, refactoring that is not very hard.

A simple solution would be to change your
f x = ...
to
f2 g x = ...
and then
f = f2 g
ftest = f2 gtest
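Spelled out with concrete (hypothetical) implementations, the same idea reads:

g, gtest :: Int -> Int
g = (* 2)        -- real implementation
gtest = const 0  -- mock implementation

f2 :: (Int -> Int) -> Int -> Int
f2 g' x = g' x + 1  -- the logic under test, parameterized by its dependency

f, ftest :: Int -> Int
f = f2 g          -- production version
ftest = f2 gtest  -- test version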

If the functions you depend on are in another module then you could play games with visible module configurations so that either the real module or a mock module is imported.
However I'd like to ask why you feel the need to use mock functions for unit testing anyway. You simply want to demonstrate that the module you are working on does its job. So first prove that your lower level module (the one you want to mock) works, and then build your new module on top of it and demonstrate that works too.
Of course this assumes that you aren't working with monadic values, so it doesn't matter what gets called or with what parameters. In that case you probably need to demonstrate that the right side effects are being invoked at the right time, so monitoring what gets called when is necessary.
Or are you just working to a corporate standard that demands that unit tests only exercise a single module, with the whole rest of the system being mocked? That is a very poor way of testing. It is far better to build your modules from the bottom up, demonstrating at each level that they meet their specs before proceeding to the next level. QuickCheck is your friend here.

You could just have your two function implementations with different names, and g would be a variable that is either defined to be one or the other as you need.
g :: typeOfSomeParms -> gReturnType
g = g_mock -- change this to "g_real" when you need to
g_mock someParms = ... -- mock implementation of g
g_real someParms = ... -- real implementation of g
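If editing the source by hand feels fragile, a hedged alternative is GHC's CPP extension, selecting the implementation with a compile-time flag (a sketch; TESTING is a hypothetical flag you would pass with -DTESTING):

{-# LANGUAGE CPP #-}

g :: typeOfSomeParms -> gReturnType
#ifdef TESTING
g = g_mock
#else
g = g_real
#endif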

Related

F# equivalent to Kotlin's ?. operator

I just started my first F# project and, coming from the JVM world, I really like Kotlin's nullability syntax and was wondering how I could achieve similarly compact syntax in F#.
Here's an example:
class MyClass {
    fun doSomething() {
        // ...
    }
}

// At some other place in the code:
val myNullableValue: MyClass? = null
myNullableValue?.doSomething()
What this does:
If myNullableValue is not null, i.e. there is some data, doSomething() is called on that object.
If myNullableValue is null (like in the code above), nothing happens.
As far as I see, the F# equivalent would be:
type MyClass =
    member this.doSomething() = ()

type CallingCode() =
    let callingCode() =
        let myOptionalValue: MyClass option = None
        match myOptionalValue with
        | Some(x) -> x.doSomething()
        | None -> ()
A statement that is 1 line long in Kotlin is 3 lines long in F#. My question is therefore whether there's a shorter syntax that accomplishes the same thing.
There is no built-in operator for doing this in F# at the moment. I suspect that the reason is that working with undefined values is just less frequent in F#. For example, you would never define a variable, initialize it to null and then have some code that may or may not set it to a value in F#, so the usual way of writing F# eliminates many of the needs for such operator.
You still need to do this sometimes, for example when using option to represent something that can legitimately be missing, but I think this is less frequent than in other languages. You also may need something like this when interacting with .NET, but then it's probably good practice to handle nulls first, before doing anything else.
Aside from pattern matching, you can use Option.map or an F# computation expression (there is no standard one, but it's easy to use a library or define your own - see the definition below). Then you can write:
let myOptionalValue: MyClass option = None
// Option #1: Using the `opt` computation expression
opt { let! v = myOptionalValue
      return v.doSomething() }
// Option #2: Using the `Option.map` function
myOptionalValue |> Option.map (fun v -> v.doSomething() )
For reference, my definition of opt is:
type OptionBuilder() =
    member x.Bind(v, f) = Option.bind f v
    member x.Return v = Some v
    member x.ReturnFrom o = o
    member x.Zero () = None

let opt = OptionBuilder()
The ?. operator has been suggested as an addition to F#:
https://github.com/fsharp/fslang-suggestions/issues/14
Some day it will be added, I hope soon.

F# and ILNumerics

I have just downloaded the last version of ILNumerics, to be used in my F# project. Is it possible to leverage on this library in F#? I have tried simple computations and it seems very cumbersome (in F#).
I would like to set up a constrained (or even unconstrained) optimization problem. The usual Rosenbrock function would do, and then I would substitute my own function. I am having a hard time even getting an Array defined. The only kind of array I could define was a RetArray, for example with this code:
let vector = ILMath.vector<float>(1.0, 2.0)
The compiler reports that vector is a RetArray; I think this is because it is the return value of a function (i.e. ILMath.vector). If I define another similar vector, I can, e.g., sum vectors simply by writing
let a = ILMath.vector<float>(1.0, 2.0)
let b = ILMath.vector<float>(3.2,2.2)
let c = a + b
and I get
RetArray<float> = seq [4.2; 4.2]
but if I try to retrieve the value of c again, writing, for example in FSI,
c;;
I get
Error: Object reference not set to an instance of an object.
What is the suggested way of using ILNumerics in F#? Is it possible to use the library natively in F#, or am I forced to call my F# code from a C# library to use the whole ILNumerics library? Beyond the problem cited, I have trouble understanding the basic logic of ILNumerics when used from F#.
For example, what would be the F# equivalent of the C# using scope as in the example code, as in:
using (ILScope.Enter(inData)) {
    ...
}
Just to elaborate a bit on brianberns' answer, there are a couple of things you could do to make it easier for yourself.
I would personally not go the route of defining a custom operator - especially one that overrides an existing one. Instead, perhaps you should consider using a computation expression to work with the ILMath types. This will allow you to hide a lot of the ugliness that comes with libraries that rely on non-F# idioms (e.g. implicit type conversions).
I don't have access to ILMath, so I have just implemented these dummy alternatives in order to get my code to compile. You should be able to simply leave this part out, and the rest of the code will work as intended:
module ILMath =
    type RetArray<'t> = { Values: 't seq }
    and Array<'t> = { OtherValues: 't seq } with
        static member op_Implicit(x: RetArray<_>) = { OtherValues = x.Values }
        static member inline (+) (x1, x2) =
            { Values = (x1.OtherValues, x2.OtherValues) ||> Seq.map2 (+) }

type ILMath =
    static member vector<'t>([<ParamArray>] vs : 't []) = { ILMath.Values = vs }
If you have never seen or implemented a computation expression before, you should check the documentation I referenced. Basically, it adds some nice syntactic sugar on top of some ugliness, in a way that you decide. My sample implementation adds just the let! (desugars to Bind) and return (desugars to Return) keywords.
type ILMathBuilder() =
    member __.Bind(x: ILMath.RetArray<'t>, f) =
        f (ILMath.Array<'t>.op_Implicit(x))
    member __.Return(x: ILMath.RetArray<'t>) =
        ILMath.Array<'t>.op_Implicit(x)

let ilmath = ILMathBuilder()
This should be defined and instantiated (the ilmath variable) at the top level. This allows you to write
let c = ilmath {
    let! a = vector(1.0, 2.0)
    let! b = vector(3.2, 2.2)
    return a + b
}
Of course, this implementation adds only support for very few things, and requires, for instance, that a value of type RetArray<'t> is always returned. Extending the ILMathBuilder type according to the documentation is the way to go from here.
The reason that the second access of c fails is that ILNumerics is doing some very unusual memory management, which automatically releases the vector's memory when you might not expect it. In C#, this is managed via implicit conversion from vector to Array:
// C#
var A = vector<int>(1, 2, 3); // bad!
Array<int> A = vector<int>(1, 2, 3); // good
F# doesn't have implicit type conversions, but you can invoke the op_Implicit member manually, like this:
open ILNumerics
open type ILMath // open static class - new feature in F# 5
let inline (!) (x : RetArray<'t>) =
    Array<'t>.op_Implicit(x)

[<EntryPoint>]
let main argv =
    let a = !vector<float>(1.0, 2.0)
    let b = !vector<float>(3.2, 2.2)
    let c = !(a + b)
    printfn "%A" c
    printfn "%A" c
    0
Note that I've created an inline helper function called ! to make this easier. Every time you create an ILNumerics vector in F#, you must call this function to convert it to an array. (It's ugly, I know, but I don't see an easier alternative.)
To answer your last question, the equivalent F# code is:
use _scope = Scope.Enter(inData)
...

Why does smallCheck's `Series` class have two types in the constructor?

This question is related to my other question about smallCheck's Test.SmallCheck.Series class. When I try to define an instance of the class Serial in the following natural way (suggested to me by an answer from @tel to the above question), I get compiler errors:
data Person = SnowWhite | Dwarf Int
instance Serial Person where ...
It turns out that Serial wants to have two arguments. This, in turn, necessitates some compiler flags. The following works:
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}
import Test.SmallCheck
import Test.SmallCheck.Series
import Control.Monad.Identity
data Person = SnowWhite | Dwarf Int
instance Serial Identity Person where
    series = generate (\d -> SnowWhite : take (d-1) (map Dwarf [1..7]))
My question is:
Was putting that Identity there the "right thing to do"? I was inspired by the type of the Test.Series.list function (which I also found extremely bizarre when I first saw it):
list :: Depth -> Series Identity a -> [a]
What is the right thing to do? Will I be OK if I just blindly put Identity in whenever I see it? Should I have put something like Serial m Integer => Serial m Person instead (that necessitates some more scary-looking compiler flags: FlexibleContexts and UndecidableInstances at least)?
What is that first parameter (the m in Serial m n) for?
Thank you!
I'm just a user of smallCheck and not a developer, but I think the answer is:
1) Not really. You should leave it polymorphic, which you can do without the said extensions:
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}
import Test.SmallCheck
import Test.SmallCheck.Series
import Control.Monad.Identity
data Person = SnowWhite | Dwarf Int deriving (Show)

instance (Monad m) => Serial m Person where
    series = generate (\d -> SnowWhite : take (d-1) (map Dwarf [1..7]))
2) Series is currently defined as
newtype Series m a = Series (ReaderT Depth (LogicT m) a)
which means that m is the base monad for LogicT, which is used to generate the values in the series. For example, writing IO in place of m would allow IO actions to happen while generating the series.
In SmallCheck, m appears also in the Testable instance declarations, such as instance (Serial m a, Show a, Testable m b) => Testable m (a->b). This has the concrete effect that the pre-existing driver functions such as smallCheck :: Testable IO a => Depth -> a -> IO () cannot be used if you only have instances for Identity.
In practice, you could make use of this fact by writing a custom driver function which interleaves some monadic effect, like logging of the generated values, into the generation process.
It might also be useful for other things which I'm not aware of.

Using function like OOP method in Haskell

I'm studying Haskell these days, and got an idea: using functions like OOP methods.
First, define an operator as below:
(//) :: a -> (a -> b) -> b
x // f = f x
As you can see, this operator reverses the order of function f and argument x then applies it. For example, equals can be defined as:
equals :: Eq a => a -> a -> Bool
equals = \x -> \y -> x == y
comparison = (1 + 2) // equals 3 -- True
Now here is my question: is it a good approach to use Haskell functions like OOP methods? That is, is this inversion of function and (first) argument good style or not?
UPDATE
Here is a case where backticks (`) are not available.
data Person = Person { name :: String }
myself = Person "leafriend"
then
> name myself
"leafriend"
> myself // name
"lefirend"
> myself `name`
ERROR - Syntax error in expression (unexpected end of input)
What you have written is merely a matter of style. (It seems rather cruel to C/Java programmers to use // as an operator, though.) I would discourage using this for a few reasons, mainly because:
It's not idiomatic Haskell, and will confuse any Haskellers trying to read your code
You shouldn't think of Haskell as you would think of an OOP language
If you're trying to make Haskell look more like your favorite OOP language, you're doing it wrong. Haskell is meant to look like math: kind of lisp-y, with prefix function application, but also with infix functions and operators available.
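For what it's worth, an operator with exactly this meaning does ship in the standard library: Data.Function exports (&), defined as x & f = f x (since base 4.8), so the example above can be written without defining anything:

import Data.Function ((&))

comparison :: Bool
comparison = (1 + 2) & (== 3)  -- True, same as (1 + 2) // equals 3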

Monad in plain English? (For the OOP programmer with no FP background)

In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
What problem does it solve and what are the most common places it's used?
Update
To clarify the kind of understanding I was looking for, let’s say you were converting an FP application that had monads into an OOP application. What would you do to port the responsibilities of the monads to the OOP app?
UPDATE: This question was the subject of an immensely long blog series, which you can read at Monads — thanks for the great question!
In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
A monad is an "amplifier" of types that obeys certain rules and which has certain operations provided.
First, what is an "amplifier of types"? By that I mean some system which lets you take a type and turn it into a more special type. For example, in C# consider Nullable<T>. This is an amplifier of types. It lets you take a type, say int, and add a new capability to that type, namely, that now it can be null when it couldn't before.
As a second example, consider IEnumerable<T>. It is an amplifier of types. It lets you take a type, say, string, and add a new capability to that type, namely, that you can now make a sequence of strings out of any number of single strings.
What are the "certain rules"? Briefly, that there is a sensible way for functions on the underlying type to work on the amplified type such that they follow the normal rules of functional composition. For example, if you have a function on integers, say
int M(int x) { return x + N(x * 2); }
then the corresponding function on Nullable<int> can make all the operators and calls in there work together "in the same way" that they did before.
(That is incredibly vague and imprecise; you asked for an explanation that didn't assume anything about knowledge of functional composition.)
What are the "operations"?
There is a "unit" operation (confusingly sometimes called the "return" operation) that takes a value from a plain type and creates the equivalent monadic value. This, in essence, provides a way to take a value of an unamplified type and turn it into a value of the amplified type. It could be implemented as a constructor in an OO language.
There is a "bind" operation that takes a monadic value and a function that can transform the value, and returns a new monadic value. Bind is the key operation that defines the semantics of the monad. It lets us transform operations on the unamplified type into operations on the amplified type, that obeys the rules of functional composition mentioned before.
There is often a way to get the unamplified type back out of the amplified type. Strictly speaking this operation is not required to have a monad. (Though it is necessary if you want to have a comonad. We won't consider those further in this article.)
Again, take Nullable<T> as an example. You can turn an int into a Nullable<int> with the constructor. The C# compiler takes care of most nullable "lifting" for you, but if it didn't, the lifting transformation is straightforward: an operation, say,
int M(int x) { whatever }
is transformed into
Nullable<int> M(Nullable<int> x)
{
    if (x == null)
        return null;
    else
        return new Nullable<int>(whatever);
}
And turning a Nullable<int> back into an int is done with the Value property.
It's the function transformation that is the key bit. Notice how the actual semantics of the nullable operation — that an operation on a null propagates the null — is captured in the transformation. We can generalize this.
Suppose you have a function from int to int, like our original M. You can easily make that into a function that takes an int and returns a Nullable<int> because you can just run the result through the nullable constructor. Now suppose you have this higher-order method:
static Nullable<T> Bind<T>(Nullable<T> amplified, Func<T, Nullable<T>> func)
{
    if (amplified == null)
        return null;
    else
        return func(amplified.Value);
}
See what you can do with that? Any method that takes an int and returns an int, or takes an int and returns a Nullable<int> can now have the nullable semantics applied to it.
Furthermore: suppose you have two methods
Nullable<int> X(int q) { ... }
Nullable<int> Y(int r) { ... }
and you want to compose them:
Nullable<int> Z(int s) { return X(Y(s)); }
That is, Z is the composition of X and Y. But you cannot do that because X takes an int, and Y returns a Nullable<int>. But since you have the "bind" operation, you can make this work:
Nullable<int> Z(int s) { return Bind(Y(s), X); }
The bind operation on a monad is what makes composition of functions on amplified types work. The "rules" I handwaved about above are that the monad preserves the rules of normal function composition; that composing with identity functions results in the original function, that composition is associative, and so on.
In C#, "Bind" is called "SelectMany". Take a look at how it works on the sequence monad. We need to have two things: turn a value into a sequence and bind operations on sequences. As a bonus, we also have "turn a sequence back into a value". Those operations are:
static IEnumerable<T> MakeSequence<T>(T item)
{
    yield return item;
}

// Extract a value
static T First<T>(IEnumerable<T> sequence)
{
    // let's just take the first one
    foreach(T item in sequence) return item;
    throw new Exception("No first item");
}

// "Bind" is called "SelectMany"
static IEnumerable<T> SelectMany<T>(IEnumerable<T> seq, Func<T, IEnumerable<T>> func)
{
    foreach(T item in seq)
        foreach(T result in func(item))
            yield return result;
}
The nullable monad rule was "to combine two functions that produce nullables together, check to see if the inner one results in null; if it does, produce null, if it does not, then call the outer one with the result". That's the desired semantics of nullable.
The sequence monad rule is "to combine two functions that produce sequences together, apply the outer function to every element produced by the inner function, and then concatenate all the resulting sequences together". The fundamental semantics of the monads are captured in the Bind/SelectMany methods; this is the method that tells you what the monad really means.
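For comparison, a sketch of the same rule in Haskell, where bind for lists amounts to concatMap:

-- Bind for lists: apply f to every element, then concatenate the results.
bindList :: [a] -> (a -> [b]) -> [b]
bindList xs f = concat (map f xs)

example :: [Int]
example = bindList [1, 2, 3] (\x -> [x, x * 10])  -- [1,10,2,20,3,30]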
We can do even better. Suppose you have a sequence of ints, and a method that takes ints and results in sequences of strings. We could generalize the binding operation to allow composition of functions that take and return different amplified types, so long as the inputs of one match the outputs of the other:
static IEnumerable<U> SelectMany<T,U>(IEnumerable<T> seq, Func<T, IEnumerable<U>> func)
{
    foreach(T item in seq)
        foreach(U result in func(item))
            yield return result;
}
So now we can say "amplify this bunch of individual integers into a sequence of integers. Transform this particular integer into a bunch of strings, amplified to a sequence of strings. Now put both operations together: amplify this bunch of integers into the concatenation of all the sequences of strings." Monads allow you to compose your amplifications.
What problem does it solve and what are the most common places it's used?
That's rather like asking "what problems does the singleton pattern solve?", but I'll give it a shot.
Monads are typically used to solve problems like:
I need to make new capabilities for this type and still combine old functions on this type to use the new capabilities.
I need to capture a bunch of operations on types and represent those operations as composable objects, building up larger and larger compositions until I have just the right series of operations represented, and then I need to start getting results out of the thing
I need to represent side-effecting operations cleanly in a language that hates side effects
C# uses monads in its design. As already mentioned, the nullable pattern is highly akin to the "maybe monad". LINQ is entirely built out of monads; the SelectMany method is what does the semantic work of composition of operations. (Erik Meijer is fond of pointing out that every LINQ function could actually be implemented by SelectMany; everything else is just a convenience.)
To clarify the kind of understanding I was looking for, let's say you were converting an FP application that had monads into an OOP application. What would you do to port the responsibilities of the monads into the OOP app?
Most OOP languages do not have a rich enough type system to represent the monad pattern itself directly; you need a type system that supports higher-kinded types, i.e. types that are parameterized by other parameterized types. So I wouldn't try to do that. Rather, I would implement generic types that represent each monad, and implement methods that represent the three operations you need: turning a value into an amplified value, (maybe) turning an amplified value into a value, and transforming a function on unamplified values into a function on amplified values.
A good place to start is how we implemented LINQ in C#. Study the SelectMany method; it is the key to understanding how the sequence monad works in C#. It is a very simple method, but very powerful!
Suggested, further reading:
For a more in-depth and theoretically sound explanation of monads in C#, I highly recommend my (Eric Lippert's) colleague Wes Dyer's article on the subject. This article is what explained monads to me when they finally "clicked" for me.
The Marvels of Monads
A good illustration of why you might want a monad around (uses Haskell in its examples).
You Could Have Invented Monads! (And Maybe You Already Have.) by Dan Piponi
Sort of, "translation" of the previous article to JavaScript.
Translation from Haskell to JavaScript of selected portions of the best introduction to monads I’ve ever read by James Coglan
Why do we need monads?
We want to program using only functions ("functional programming", after all).
Then, we have a first big problem. This is a program:
f(x) = 2 * x
g(x,y) = x / y
How can we say what is to be executed first? How can we form an ordered sequence of functions (i.e. a program) using no more than functions?
Solution: compose functions. If you want first g and then f, just write f(g(x,y)). OK, but ...
More problems: some functions might fail (e.g. g(2,0): division by 0). We have no "exceptions" in FP. How do we solve this?
Solution: Let's allow functions to return two kinds of things: instead of having g : Real,Real -> Real (a function from two reals to a real), let's allow g : Real,Real -> Real | Nothing (a function from two reals to (a real or nothing)).
But functions should (to be simpler) return only one thing.
Solution: let's create a new type of data to be returned, a "boxing type" that encloses maybe a real or be simply nothing. Hence, we can have g : Real,Real -> Maybe Real. OK, but ...
What happens now to f(g(x,y))? f is not ready to consume a Maybe Real. And, we don't want to change every function we could connect with g to consume a Maybe Real.
Solution: let's have a special function to "connect"/"compose"/"link" functions. That way, we can, behind the scenes, adapt the output of one function to feed the following one.
In our case: g >>= f (connect/compose g to f). We want >>= to get g's output, inspect it and, in case it is Nothing just don't call f and return Nothing; or on the contrary, extract the boxed Real and feed f with it. (This algorithm is just the implementation of >>= for the Maybe type).
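Here is that algorithm sketched in Haskell (bindMaybe, safeDiv and double are hypothetical names; the standard library's >>= for Maybe works exactly like bindMaybe):

-- The connector: inspect g's output; on Nothing, short-circuit and skip f.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x

safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing          -- g(2,0): no exception, just Nothing
safeDiv x y = Just (x / y)

double :: Double -> Maybe Double
double x = Just (2 * x)

example :: Maybe Double
example = safeDiv 2 0 `bindMaybe` double  -- Nothing: double is never called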
Many other problems arise which can be solved using this same pattern:

1. Use a "box" to codify/store different meanings/values, and have functions like g that return those "boxed values".
2. Have composers/linkers g >>= f to help connect g's output to f's input, so we don't have to change f at all.
Remarkable problems that can be solved using this technique are:
having a global state that every function in the sequence of functions ("the program") can share: solution StateMonad.
We don't like "impure functions": functions that yield different output for the same input. Therefore, let's mark those functions, making them return a tagged/boxed value: the IO monad.
Total happiness !!!!
I would say the closest OO analogy to monads is the "command pattern".
In the command pattern you wrap an ordinary statement or expression in a command object. The command object exposes an execute method which executes the wrapped statement. So statements are turned into first-class objects which can be passed around and executed at will. Commands can be composed, so you can create a program-object by chaining and nesting command-objects.
The commands are executed by a separate object, the invoker. The benefit of using the command pattern (rather than just execute a series of ordinary statements) is that different invokers can apply different logic to how the commands should be executed.
The command pattern can be used to add (or remove) language features which are not supported by the host language. For example, in a hypothetical OO language without exceptions, you could add exception semantics by exposing "try" and "throw" methods to the commands. When a command calls throw, the invoker backtracks through the list (or tree) of commands until the last "try" call. Conversely, you could remove exception semantics from a language (if you believe exceptions are bad) by catching all exceptions thrown by each individual command, and turning them into error codes which are then passed to the next command.
Even more fancy execution semantics like transactions, non-deterministic execution or continuations can be implemented like this in a language which doesn't support it natively. It is a pretty powerful pattern if you think about it.
Now in reality the command pattern is not used as a general language feature like this. The overhead of turning each statement into a separate class would lead to an unbearable amount of boilerplate code. But in principle it can be used to solve the same problems that monads are used to solve in FP.
In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
What problem does it solve and what are the most common places it's used?
In terms of OO programming, a monad is an interface (or more likely a mixin), parameterized by a type, with two methods, return and bind, that describe:

How to inject a value to get a monadic value of that injected value type;
How to use a function that makes a monadic value from a non-monadic one, on a monadic value.
The problem it solves is the same type of problem you'd expect from any interface, namely,
"I have a bunch of different classes that do different things, but seem to do those different things in a way that has an underlying similarity. How can I describe that similarity between them, even if the classes themselves aren't really subtypes of anything closer than 'the Object' class itself?"
More specifically, the Monad "interface" is similar to IEnumerator or IIterator in that it takes a type that itself takes a type. The main "point" of Monad though is being able to connect operations based on the interior type, even to the point of having a new "internal type", while keeping - or even enhancing - the information structure of the main class.
There is a recent presentation, "Monadologie -- professional help on type anxiety" by Christopher League (July 12th, 2010), which is quite interesting on the topics of continuations and monads.
The video going with this (slideshare) presentation is actually available at vimeo.
The monad part starts around 37 minutes into this one-hour video, with slide 42 of its 58-slide deck.
It is presented as "the leading design pattern for functional programming", but the language used in the examples is Scala, which is both OOP and functional.
You can read more on Monad in Scala in the blog post "Monads - Another way to abstract computations in Scala", from Debasish Ghosh (March 27, 2008).
A type constructor M is a monad if it supports these operations:
// the "return" function
def unit[A] (x: A): M[A]

// called "bind" in Haskell
def flatMap[A,B] (m: M[A]) (f: A => M[B]): M[B]

// The other two can be written in terms of the first two:
def map[A,B] (m: M[A]) (f: A => B): M[B] =
    flatMap(m){ x => unit(f(x)) }

def andThen[A,B] (ma: M[A]) (mb: M[B]): M[B] =
    flatMap(ma){ x => mb }
So for instance (in Scala):
Option is a monad
def unit[A] (x: A): Option[A] = Some(x)

def flatMap[A,B] (m: Option[A]) (f: A => Option[B]): Option[B] =
    m match {
        case None => None
        case Some(x) => f(x)
    }
List is a monad:
def unit[A] (x: A): List[A] = List(x)

def flatMap[A,B] (m: List[A]) (f: A => List[B]): List[B] =
    m match {
        case Nil => Nil
        case x::xs => f(x) ::: flatMap(xs)(f)
    }
Monads are a big deal in Scala because of the convenient syntax built to take advantage of monad structures:
for comprehension in Scala:
for {
    i <- 1 to 4
    j <- 1 to i
    k <- 1 to j
} yield i*j*k
is translated by the compiler to:
(1 to 4).flatMap { i =>
    (1 to i).flatMap { j =>
        (1 to j).map { k =>
            i*j*k }}}
The key abstraction is the flatMap, which binds the computation through chaining.
Each invocation of flatMap returns the same data structure type (but of different value), that serves as the input to the next command in chain.
In the above snippet, flatMap takes as input a closure (SomeType) => List[AnotherType] and returns a List[AnotherType]. The important point to note is that all flatMaps take the same closure type as input and return the same type as output.
This is what "binds" the computation thread - every item of the sequence in the for-comprehension has to honor this same type constraint.
If you take two operations (that may fail) and pass the result to the third, like:
lookupVenue: String => Option[Venue]
getLoggedInUser: SessionID => Option[User]
reserveTable: (Venue, User) => Option[ConfNo]
but without taking advantage of Monad, you get convoluted OOP-code like:
val user = getLoggedInUser(session)
val confirm =
    if (!user.isDefined) None
    else lookupVenue(name) match {
        case None => None
        case Some(venue) =>
            val confno = reserveTable(venue, user.get)
            if (confno.isDefined)
                mailTo(confno.get, user.get)
            confno
    }
whereas with Monad, you can work with the actual types (Venue, User) like all the operations work, and keep the Option verification stuff hidden, all because of the flatmaps of the for syntax:
val confirm = for {
    venue <- lookupVenue(name)
    user <- getLoggedInUser(session)
    confno <- reserveTable(venue, user)
} yield {
    mailTo(confno, user)
    confno
}
The yield part will only be executed if all three functions return Some[X]; any None is returned directly as confirm.
So:
Monads allow ordered computation within Functional Programing, that allows us to model sequencing of actions in a nice structured form, somewhat like a DSL.
And the greatest power comes with the ability to compose monads that serve different purposes, into extensible abstractions within an application.
This sequencing and threading of actions by a monad is done by the language compiler that does the transformation through the magic of closures.
By the way, Monad is not only model of computation used in FP:
Category theory proposes many models of computation. Among them
the Arrow model of computations
the Monad model of computations
the Applicative model of computations
To respect fast readers, I start with a precise definition, continue with a quicker, more "plain English" explanation, and then move to examples.
Here is a definition that is both concise and precise, slightly reworded:
A monad (in computer science) is formally a map that:
sends every type X of some given programming language to a new type T(X) (called the "type of T-computations with values in X");
equipped with a rule for composing two functions of the form
f:X->T(Y) and g:Y->T(Z) to a function g∘f:X->T(Z);
in a way that is associative in the evident sense and unital with respect to a given unit function called pure_X:X->T(X), to be thought of as taking a value to the pure computation that simply returns that value.
So in simple words, a monad is a rule to pass from any type X to another type T(X), and a rule to pass from two functions f:X->T(Y) and g:Y->T(Z) (that you would like to compose but can't) to a new function h:X->T(Z). Which, however, is not the composition in strict mathematical sense. We are basically "bending" function's composition or re-defining how functions are composed.
Plus, we require the monad's rule of composing to satisfy the "obvious" mathematical axioms:
Associativity: Composing f with g and then with h (from outside) should be the same as composing g with h and then with f (from inside).
Unital property: Composing f with the identity function on either side should yield f.
Again, in simple words, we can't just go crazy re-defining our function composition as we like:
We first need associativity to be able to compose several functions in a row, e.g. f(g(h(k(x)))), without worrying about the order in which function pairs are composed. As the monad rule only prescribes how to compose a pair of functions, without that axiom we would need to know which pair is composed first, and so on. (Note that this is different from the commutativity property, that f composed with g were the same as g composed with f, which is not required.)
And second, we need the unital property, which is simply to say that identities compose trivially the way we expect them. So we can safely refactor functions whenever those identities can be extracted.
So again in brief: A monad is the rule of type extension and composing functions satisfying the two axioms -- associativity and unital property.
In practical terms, you want the monad to be implemented for you by the language, compiler or framework that would take care of composing functions for you. So you can focus on writing your function's logic rather than worrying how their execution is implemented.
That is essentially it, in a nutshell.
Being professional mathematician, I prefer to avoid calling h the "composition" of f and g. Because mathematically, it isn't. Calling it the "composition" incorrectly presumes that h is the true mathematical composition, which it isn't. It is not even uniquely determined by f and g. Instead, it is the result of our monad's new "rule of composing" the functions. Which can be totally different from the actual mathematical composition even if the latter exists!
To make it less dry, let me try to illustrate it by example
that I am annotating with small sections, so you can skip right to the point.
Exception throwing as Monad examples
Suppose we want to compose two functions:
f: x -> 1 / x
g: y -> 2 * y
But f(0) is not defined, so an exception e is thrown. Then how can you define the compositional value g(f(0))? Throw an exception again, of course! Maybe the same e. Maybe a new updated exception e1.
What precisely happens here? First, we need new exception value(s) (different or same). You can call them nothing or null or whatever, but the essence remains the same -- they should be new values, e.g. not a number in our example here. I prefer not to call them null to avoid confusion with how null can be implemented in any specific language. Equally I prefer to avoid nothing because it is often associated with null, which, in principle, is what null should do; however, that principle often gets bent for whatever practical reasons.
What is exception exactly?
This is a trivial matter for any experienced programmer, but I'd like to drop a few words just to extinguish any worm of confusion:
Exception is an object encapsulating information about how the invalid result of execution occurred.
This can range from throwing away any details and returning a single global value (like NaN or null) to generating a long log of what exactly happened, sending it to a database, and replicating it all over the distributed data storage layer ;)
The important difference between these two extreme examples of exception is that in the first case there are no side-effects. In the second there are. Which brings us to the (thousand-dollar) question:
Are exceptions allowed in pure functions?
Shorter answer: Yes, but only when they don't lead to side-effects.
Longer answer. To be pure, your function's output must be uniquely determined by its input. So we amend our function f by sending 0 to the new abstract value e that we call exception. We make sure that value e contains no outside information that is not uniquely determined by our input, which is x. So here is an example of exception without side-effect:
e = {
type: error,
message: 'I got error trying to divide 1 by 0'
}
And here is one with side-effect:
e = {
type: error,
message: 'Our committee to decide what is 1/0 is currently away'
}
Actually, it only has side-effects if that message can possibly change in the future. But if it is guaranteed to never change, that value becomes uniquely predictable, and so there is no side-effect.
To make it even sillier: a function that always returns 42 is clearly pure. But if someone crazy decides to make 42 a variable whose value might change, the very same function stops being pure under the new conditions.
Note that I am using the object literal notation for simplicity to demonstrate the essence. Unfortunately things are messed up in languages like JavaScript, where error is not a type that behaves the way we want here with respect to function composition, whereas actual types like null or NaN do not behave this way but rather go through some artificial and not always intuitive type conversions.
Type extension
As we want to vary the message inside our exception, we are really declaring a new type E for the whole exception object, and then extending our function's return type to include it.
That is what the maybe number does, apart from its confusing name: it is either of type number or of the new exception type E, so it is really the union number | E of number and E. In particular, it depends on how we want to construct E, which is neither suggested nor reflected in the name maybe number.
What is functional composition?
It is the mathematical operation taking functions f: X -> Y and g: Y -> Z and constructing their composition as the function h: X -> Z satisfying h(x) = g(f(x)).
The problem with this definition occurs when the result f(x) is not allowed as argument of g.
In mathematics those functions cannot be composed without extra work.
The strictly mathematical solution for our above example of f and g is to remove 0 from the set of definition of f. With that new set of definition (new more restrictive type of x), f becomes composable with g.
However, it is not very practical in programming to restrict the set of definition of f like that. Instead, exceptions can be used.
Or as another approach, artificial values are created like NaN, undefined, null, Infinity etc. So you evaluate 1/0 to Infinity and 1/-0 to -Infinity. And then force the new value back into your expression instead of throwing exception. Leading to results you may or may not find predictable:
1/0 // => Infinity
parseInt(Infinity) // => NaN
NaN < 0 // => false
false + 1 // => 1
And we are back to regular numbers ready to move on ;)
JavaScript allows us to keep executing numerical expressions at any cost, without throwing errors, as in the above example. That means it also allows us to compose functions. Which is exactly what a monad is about - it is a rule to compose functions satisfying the axioms as defined at the beginning of this answer.
But is the rule of composing functions, arising from JavaScript's implementation for dealing with numerical errors, a monad?
To answer this question, all you need to do is check the axioms (left as an exercise, as it is not part of the question here ;).
Can throwing exception be used to construct a monad?
Indeed, a more useful monad would instead be the rule prescribing that if f throws an exception for some x, so does its composition with any g. Plus, make the exception E globally unique, with only one possible value ever (a terminal object in category theory). Now the two axioms are instantly checkable and we get a very useful monad. The result is what is well known as the maybe monad.
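In Haskell terms, the "rule of composing" described in this answer is Kleisli composition, written (>=>) in Control.Monad; a sketch using the f and g from the example above:

import Control.Monad ((>=>))

f :: Double -> Maybe Double   -- x -> 1/x, with 0 sent to the exception value Nothing
f 0 = Nothing
f x = Just (1 / x)

g :: Double -> Maybe Double   -- y -> 2*y, lifted into Maybe
g y = Just (2 * y)

h :: Double -> Maybe Double   -- the monad's "composition" of f and g
h = f >=> g                   -- h 0 == Nothing, h 4 == Just 0.5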
A monad is a data type that encapsulates a value, and to which, essentially, two operations can be applied:
return x creates a value of the monad type that encapsulates x
m >>= f (read it as "the bind operator") applies the function f to the value in the monad m
That's what a monad is. There are a few more technicalities, but basically those two operations define a monad. The real question is, "What does a monad do?", and that depends on the monad — lists are monads, Maybes are monads, IO operations are monads. All that it means when we say those things are monads is that they have the monad interface of return and >>=.
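In Haskell, that interface reads as follows (slightly simplified; the real class has an Applicative superclass and a couple of extra methods):

class Monad m where
    return :: a -> m a                  -- wrap a plain value
    (>>=)  :: m a -> (a -> m b) -> m b  -- "bind": feed the wrapped value to f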
From wikipedia:
In functional programming, a monad is a kind of abstract data type used to represent computations (instead of data in the domain model). Monads allow the programmer to chain actions together to build a pipeline, in which each action is decorated with additional processing rules provided by the monad. Programs written in functional style can make use of monads to structure procedures that include sequenced operations,[1][2] or to define arbitrary control flows (like handling concurrency, continuations, or exceptions).

Formally, a monad is constructed by defining two operations (bind and return) and a type constructor M that must fulfill several properties to allow the correct composition of monadic functions (i.e. functions that use values from the monad as their arguments). The return operation takes a value from a plain type and puts it into a monadic container of type M. The bind operation performs the reverse process, extracting the original value from the container and passing it to the associated next function in the pipeline.

A programmer will compose monadic functions to define a data-processing pipeline. The monad acts as a framework, as it's a reusable behavior that decides the order in which the specific monadic functions in the pipeline are called, and manages all the undercover work required by the computation.[3] The bind and return operators interleaved in the pipeline will be executed after each monadic function returns control, and will take care of the particular aspects handled by the monad.
I believe it explains it very well.
I'll try to make the shortest definition I can manage using OOP terms:
A generic class CMonadic<T> is a monad if it defines at least the following methods:
class CMonadic<T> {
    static CMonadic<T> create(T t);                        // a.k.a. "return" in Haskell
    public CMonadic<U> flatMap<U>(Func<T, CMonadic<U>> f); // a.k.a. "bind" in Haskell
}
and if the following laws apply for all types T and their possible values t
left identity:
CMonadic<T>.create(t).flatMap(f) == f(t)
right identity:
instance.flatMap(CMonadic<T>.create) == instance
associativity:
instance.flatMap(f).flatMap(g) == instance.flatMap(t => f(t).flatMap(g))
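For reference, the same three laws stated in Haskell, as executable equalities over any monad with an Eq instance:

leftIdentity :: (Monad m, Eq (m b)) => (a -> m b) -> a -> Bool
leftIdentity f t = (return t >>= f) == f t

rightIdentity :: (Monad m, Eq (m a)) => m a -> Bool
rightIdentity m = (m >>= return) == m

associativity :: (Monad m, Eq (m c)) => m a -> (a -> m b) -> (b -> m c) -> Bool
associativity m f g = ((m >>= f) >>= g) == (m >>= (\t -> f t >>= g))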
Examples:
A List monad may have:
List<int>.create(1) --> [1]
And flatMap on the list [1,2,3] could work like so:
intList.flatMap(x => List<int>.makeFromTwoItems(x, x*10)) --> [1,10,2,20,3,30]
Iterables and Observables can also be made monadic, as well as Promises and Tasks.
Commentary:
Monads are not that complicated. The flatMap function is a lot like the more commonly encountered map. It receives a function argument (also known as delegate), which it may call (immediately or later, zero or more times) with a value coming from the generic class. It expects that passed function to also wrap its return value in the same kind of generic class. To help with that, it provides create, a constructor that can create an instance of that generic class from a value. The return result of flatMap is also a generic class of the same type, often packing the same values that were contained in the return results of one or more applications of flatMap to the previously contained values. This allows you to chain flatMap as much as you want:
intList.flatMap(x => List<int>.makeFromTwo(x, x*10))
       .flatMap(x => x % 3 == 0
           ? List<string>.create("x = " + x.toString())
           : List<string>.empty())
It just so happens that this kind of generic class is useful as a base model for a huge number of things. This (together with the category theory jargonisms) is the reason why Monads seem so hard to understand or explain. They're a very abstract thing and only become obviously useful once they're specialized.
For example, you can model exceptions using monadic containers. Each container will either contain the result of the operation or the error that has occurred. The next function (delegate) in the chain of flatMap callbacks will only be called if the previous one packed a value in the container. Otherwise, if an error was packed, the error will continue to propagate through the chained containers until a container is found that has an error handler function attached via a method called .orElse() (such a method would be an allowed extension).
Notes: Functional languages allow you to write functions that can operate on any kind of monadic generic class. For this to work, one would have to write a generic interface for monads. I don't know if it's possible to write such an interface in C#, but as far as I know it isn't:
interface IMonad<T> {
    static IMonad<T> create(T t); // not allowed
    public IMonad<U> flatMap<U>(Func<T, IMonad<U>> f);
    // not specific enough, because the function must return
    // the same kind of monad, not just any monad
}
Whether a monad has a "natural" interpretation in OO depends on the monad. In a language like Java, you can translate the maybe monad to the language of checking for null pointers, so that computations that fail (i.e., produce Nothing in Haskell) emit null pointers as results. You can translate the state monad into the language generated by creating a mutable variable and methods to change its state.
A monad is a monoid in the category of endofunctors.
The information that sentence puts together is very deep. And you work in a monad with any imperative language. A monad is a "sequenced" domain specific language. It satisfies certain interesting properties, which taken together make a monad a mathematical model of "imperative programming". Haskell makes it easy to define small (or large) imperative languages, which can be combined in a variety of ways.
As an OO programmer, you use your language's class hierarchy to organize the kinds of functions or procedures that can be called in a context, what you call an object. A monad is also an abstraction on this idea, insofar as different monads can be combined in arbitrary ways, effectively "importing" all of the sub-monad's methods into the scope.
Architecturally, one then uses type signatures to explicitly express which contexts may be used for computing a value.
One can use monad transformers for this purpose, and there is a high quality collection of all of the "standard" monads:
Lists (non-deterministic computations, by treating a list as a domain)
Maybe (computations that can fail, but for which reporting is unimportant)
Error (computations that can fail and require exception handling)
Reader (computations that can be represented by compositions of plain Haskell functions)
Writer (computations with sequential "rendering"/"logging" (to strings, HTML, etc.))
Cont (continuations)
IO (computations that depend on the underlying computer system)
State (computations whose context contains a modifiable value)
with corresponding monad transformers and type classes. Type classes allow a complementary approach to combining monads by unifying their interfaces, so that concrete monads can implement a standard interface for the monad "kind". For example, the module Control.Monad.State contains a class MonadState s m, and (State s) is an instance of the form
instance MonadState s (State s) where
    put = ...
    get = ...
The long story is that a monad is a functor which attaches "context" to a value, which has a way to inject a value into the monad, and which has a way to evaluate values with respect to the context attached to it, at least in a restricted way.
So:
return :: a -> m a
is a function which injects a value of type a into a monad "action" of type m a.
(>>=) :: m a -> (a -> m b) -> m b
is a function which takes a monad action, evaluates its result, and applies a function to the result. The neat thing about (>>=) is that the result is in the same monad. In other words, in m >>= f, (>>=) pulls the result out of m, and binds it to f, so that the result is in the monad. (Alternatively, we can say that (>>=) pulls f into m and applies it to the result.) As a consequence, if we have f :: a -> m b, and g :: b -> m c, we can "sequence" actions:
m >>= f >>= g
Or, using "do notation"
do x <- m
   y <- f x
   g y
The type for (>>) might be illuminating. It is
(>>) :: m a -> m b -> m b
It corresponds to the (;) operator in procedural languages like C. It allows do notation like:
m = do x <- someQuery
       someAction x
       theNextAction
       andSoOn
In mathematical and philosophical logic, we have frames and models, which are "naturally" modelled with monadism. An interpretation is a function which looks into the model's domain and computes the truth value (or generalizations) of a proposition (or formula, under generalizations). In a modal logic for necessity, we might say that a proposition is necessary if it is true in "every possible world" -- if it is true with respect to every admissible domain. This means that a model in a language for a proposition can be reified as a model whose domain consists of a collection of distinct models (one corresponding to each possible world). Every monad has a method named "join" which flattens layers, which implies that every monad action whose result is a monad action can be embedded in the monad.
join :: m (m a) -> m a
More importantly, it means that the monad is closed under the "layer stacking" operation. This is how monad transformers work: they combine monads by providing "join-like" methods for types like
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
so that we can transform an action in (MaybeT m) into an action in m, effectively collapsing layers. In this case, runMaybeT :: MaybeT m a -> m (Maybe a) is our join-like method. (MaybeT m) is a monad, and MaybeT :: m (Maybe a) -> MaybeT m a is effectively a constructor for a new type of monad action in m.
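A sketch of how join relates to (>>=): each is definable from the other, which is why either can be taken as the monad's fundamental operation (primed names to avoid clashing with the Prelude):

-- join flattens one monadic layer; definable from (>>=) with the identity.
join' :: Monad m => m (m a) -> m a
join' mma = mma >>= id

-- Conversely, (>>=) is definable from join and fmap.
bind' :: Monad m => m a -> (a -> m b) -> m b
bind' m f = join' (fmap f m)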
A free monad for a functor is the monad generated by stacking f, with the implication that every sequence of constructors for f is an element of the free monad (or, more exactly, something with the same shape as the tree of sequences of constructors for f). Free monads are a useful technique for constructing flexible monads with a minimal amount of boiler-plate. In a Haskell program, I might use free monads to define simple monads for "high level system programming" to help maintain type safety (I'm just using types and their declarations. Implementations are straight-forward with the use of combinators):
data RandomF r a = GetRandom (r -> a) deriving Functor
type Random r a = Free (RandomF r) a
type RandomT m a = Random (m a) (m a) -- model randomness in a monad by computing random monad elements.
getRandom :: Random r r
runRandomIO :: Random r a -> IO a    -- run with some kind of IO-based backend
runRandomIO' :: Random r a -> IO a   -- run with some other kind of IO-based backend
runRandomList :: Random r a -> [a]   -- run with some kind of list-based backend (for pseudo-randoms)
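As a sketch of how straightforward those combinators make the implementations, assuming the free package's Control.Monad.Free and, for one hypothetical backend, System.Random:
import Control.Monad.Free (foldFree, liftF)
import qualified System.Random as R

getRandom = liftF (GetRandom id)

-- one possible IO-based backend, specialized to Int for simplicity,
-- drawing values from the global generator
runRandomIO :: Random Int a -> IO a
runRandomIO = foldFree (\(GetRandom k) -> k <$> R.randomIO)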
Monadism is the underlying architecture for what you might call the "interpreter" or "command" pattern, abstracted to its clearest form, since every monadic computation must be "run", at least trivially. (The runtime system runs the IO monad for us, and is the entry point to any Haskell program. IO "drives" the rest of the computations, by running IO actions in order).
The type for join is also where we get the statement that a monad is a monoid in the category of endofunctors. Join is typically more important for theoretical purposes, by virtue of its type, but understanding that type means understanding monads. Join and monad transformers' join-like methods are effectively compositions of endofunctors, in the sense of function composition. To put it in a Haskell-like pseudo-language,
Foo :: m (m a) <-> (m . m) a
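To see how join carries the whole monad structure, note that join and (>>=) are interdefinable; a small sketch:
import Control.Monad (join)

joinViaBind :: Monad m => m (m a) -> m a
joinViaBind mm = mm >>= id

bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin m f = join (fmap f m)

-- e.g. join (Just (Just 3)) == Just 3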
Quick explanation:
Monads (in functional programming) are functions with context-dependent behaviour.
The context is passed as an argument, having been returned from a previous call in the chain. This makes it look as if the same argument produces a different return value on subsequent calls.
Equivalent:
Monads are functions whose actual arguments depend on past calls of a call chain.
Typical example: Stateful functions.
FAQ
Wait, what do you mean with "behaviour"?
Behaviour means the return value and side effects that you get for specific inputs.
But what's so special about them?
In procedural semantics: nothing. But they are modelled solely with pure functions, because pure functional programming languages like Haskell only have pure functions, which are not stateful by themselves.
But then, where comes the state from?
The statefulness comes from the sequencing of function calls: nested functions drag certain arguments along through multiple calls, which simulates state. The monad is just a software pattern that hides these additional arguments behind the return values of two shiny functions, often called return and bind; see the sketch below.
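A minimal sketch of that hidden-argument idea, with hypothetical names; the State monad merely automates this plumbing:
-- explicit threading: each call receives the current counter
-- and returns its value alongside the successor state
fresh :: Int -> (Int, Int)
fresh n = (n, n + 1)

twoLabels :: Int -> ((Int, Int), Int)
twoLabels s0 =
  let (a, s1) = fresh s0
      (b, s2) = fresh s1
  in ((a, b), s2)

-- twoLabels 0 == ((0, 1), 2)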
Why is input/output a monad in Haskell?
Because displayed text is state in your operating system. If you write the same text three times, the state of the operating system is not the same after each call: the output device now shows the text three times. To react properly to the OS, Haskell has to model the OS state for itself, as a monad.
Technically, you don't need the monad definition; purely functional languages can use the idea of "uniqueness types" for the same purpose.
Do monads exist in non-functional languages?
Yes; an interpreter, for example, is essentially a complex monad, interpreting each instruction and mapping it to a new state in the OS.
Long explanation:
A monad (in functional programming) is a pure-functional software pattern. A monad is an automatically maintained environment (an object) in which a chain of pure function calls can be executed. The function results modify or interact with that environment.
In other words, a monad is a "function-repeater" or "function-chainer": it chains and evaluates argument values within an automatically maintained environment. Often the chained argument values are "update functions", but they could actually be any objects (with methods, or container elements which make up a container). The monad is the "glue code" executed before and after each evaluated argument. This glue-code function, bind, is supposed to integrate each argument's environment output into the original environment.
Thus, the monad concatenates the results of all arguments in a way that is implementation-specific to a particular monad. Whether and how control and data flow between the arguments is also implementation-specific.
This intertwined execution makes it possible to model complete imperative control flow (as in a GOTO program) or parallel execution with only pure functions, as well as side effects, temporary state or exception handling between the function calls, even though the applied functions don't know about the external environment.
EDIT: Note that monads can evaluate the function chain following any kind of control-flow graph, even in a non-deterministic, NFA-like manner, because the remaining chain is evaluated lazily and can be evaluated multiple times at each point of the chain, which allows for backtracking in the chain.
The reason to use the monad concept is the pure-functional paradigm which needs a tool to simulate typically impurely modelled behaviour in a pure way, not because they do something special.
Monads for OOP people
In OOP a monad is a typical object with
a constructor often called return that turns a value into an initial instance of the environment
a chainable argument application method often called bind which maintains the object's state with the returned environment of a function passed as argument.
Some people also mention a third function, join, which is part of bind. Because the "argument functions" are evaluated within the environment, their result is nested in the environment itself. join is the last step: it "un-nests" (flattens) the result, replacing the old environment with a new one.
A monad can implement the Builder pattern but allows for much more general use.
Example (Python)
I think the most intuitive example for monads are relational operators from Python:
result = 0 <= x == y < 3
You see that it is a monad because it has to carry along some boolean state which is not known by individual relational operator calls.
If you think about how to implement it without short-circuiting behaviour on low level then you exactly will get a monad implementation:
# result = ret(0)
result = [0, True]   # [value, accumulated Boolean state]
# result = result.bind(lambda v: (x, v <= x))
result[1] = result[1] and result[0] <= x
result[0] = x
# result = result.bind(lambda v: (y, v == y))
result[1] = result[1] and result[0] == y
result[0] = y
# result = result.bind(lambda v: (3, v < 3))
result[1] = result[1] and result[0] < 3
result[0] = 3
result = result[1]   # not an explicit part of a monad
A real monad would compute every argument at most once.
Now think away the "result" variable and you get this chain:
ret(0).bind(lambda v: (x, v <= x)).bind(lambda v: (y, v == y)).bind(lambda v: (3, v < 3))
Monads in typical usage are the functional equivalent of procedural programming's exception handling mechanisms.
In modern procedural languages, you put an exception handler around a sequence of statements, any of which may throw an exception. If any of the statements throws an exception, normal execution of the sequence of statements halts and transfers to an exception handler.
Functional programming languages, however, philosophically avoid exception-handling features because of their "goto"-like nature. The functional programming perspective is that functions should not have "side effects" like exceptions that disrupt program flow.
In practice, side effects cannot be ruled out, primarily because of I/O. Monads in functional programming are used to handle this by taking a set of chained function calls (any of which might produce an unexpected result) and turning any unexpected result into encapsulated data that can still flow safely through the remaining function calls.
The flow of control is preserved but the unexpected event is safely encapsulated and handled.
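As a minimal Haskell sketch (hypothetical names), using Either to turn a failure into data that short-circuits the rest of the chain:
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv a b = Right (a `div` b)

calc :: Int -> Either String Int
calc n = do
  a <- safeDiv 100 n   -- a Left here skips the remaining calls
  b <- safeDiv 500 a
  return (a + b)

-- calc 5 == Right 45; calc 0 == Left "division by zero"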
In OO terms, a monad is a fluent container.
The minimum requirement is a definition of class Something<A> that supports a constructor Something(A a) and at least one method Something<B> flatMap(Function<A, Something<B>>).
Arguably, it also counts if your monad class has any method with a signature like Something<B> work() that preserves the class's rules -- in effect, a flatMap specialized at compile time.
Why is a monad useful? Because it is a container that allows chain-able operations that preserve semantics. For example, Optional<?> preserves the semantics of isPresent for Optional<String>, Optional<Integer>, Optional<MyClass>, etc.
As a rough example,
Something<Integer> i = new Something<>("a")
    .flatMap(doOneThing)
    .flatMap(doAnother)
    .flatMap(toInt);
Note we start with a string and end with an integer. Pretty cool.
In OO, it might take a little hand-waving, but any method on Something that returns another subclass of Something meets the criterion of a container function that returns a container of the original type.
That's how you preserve semantics -- i.e. the container's meaning and operations don't change, they just wrap and enhance the object inside the container.
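For comparison, the same fluent chain sketched in Haskell, where flatMap corresponds to (>>=); doOneThing, doAnother and toInt are hypothetical stand-ins:
example :: Maybe Int
example = Just "a" >>= doOneThing >>= doAnother >>= toInt
  where
    doOneThing s = Just (s ++ "!")
    doAnother s  = Just (s ++ "?")
    toInt s      = Just (length s)

-- example == Just 3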
I am sharing my understanding of monads, which may not be theoretically perfect. Monads are about context propagation: you define some context for some data (or data types), and then specify how that context is carried with the data throughout its processing pipeline. Defining context propagation is mostly about defining how to merge multiple contexts (of the same type). Using monads also means ensuring these contexts are not accidentally stripped from the data. On the other hand, context-less data can be brought into a new or existing context. This simple concept can then be used to ensure compile-time correctness of a program.
A monad is an array of functions
(Psst: an array of functions is just a computation.)
Actually, instead of a true array (one function in each cell of the array), you have those functions chained by another function, >>=. The >>= adapts the result from function i to feed function i+1, and can perform calculations between them,
or even decide not to call function i+1 at all.
The types used here are "types with context". This is, a value with a "tag".
The functions being chained must take a "naked value" and return a tagged result.
One of the duties of >>= is to extract a naked value out of its context.
There is also the function "return", that takes a naked value and puts it with a tag.
An example with Maybe. Let's use it to store a simple integer on which to perform calculations.
-- a * b
multiply :: Int -> Int -> Maybe Int
multiply a b = return (a*b)
-- divideBy 5 100 = 100 / 5
divideBy :: Int -> Int -> Maybe Int
divideBy 0 _ = Nothing -- dividing by 0 gives NOTHING
divideBy denom num = return (quot num denom) -- quotient of num / denom
-- tagged value
val1 = Just 160
-- array of functions fed with val1
array1 = val1 >>= divideBy 2 >>= multiply 3 >>= divideBy 4 >>= multiply 3
-- array of functions created with do notation
-- equal to array1, but taking the fed value as a parameter
array2 :: Int -> Maybe Int
array2 n = do
  v <- divideBy 2 n
  v <- multiply 3 v
  v <- divideBy 4 v
  v <- multiply 3 v
  return v
-- array of functions,
-- the first >>= performs 160 / 0, returning Nothing
-- the second >>= has to perform Nothing >>= multiply 3 ....
-- and simply returns Nothing without calling multiply 3 ....
array3 = val1 >>= divideBy 0 >>= multiply 3 >>= divideBy 4 >>= multiply 3
main = do
  print array1
  print (array2 160)
  print array3
Just to show that monads are arrays of functions with helper operations, consider
the equivalent to the above example, just using a real array of functions
type MyMonad = [Int -> Maybe Int] -- my monad as a real array of functions
myArray1 = [divideBy 2, multiply 3, divideBy 4, multiply 3]
-- function for the machinery of executing each function i with the result provided by function i-1
runMyMonad :: Maybe Int -> MyMonad -> Maybe Int
runMyMonad val [] = val
runMyMonad Nothing _ = Nothing
runMyMonad (Just val) (f:fs) = runMyMonad (f val) fs
And it would be used like this:
print (runMyMonad (Just 160) myArray1)
If you've ever used PowerShell, the patterns Eric described should sound familiar. PowerShell cmdlets are monads; functional composition is represented by a pipeline.
Jeffrey Snover's interview with Erik Meijer goes into more detail.
From a practical point of view (summarizing what has been said in many previous answers and related articles), it seems to me that one of the fundamental "purposes" (or usefulness) of the monad is to leverage the dependencies implicit in recursive function invocations, a.k.a. function composition (when f1 calls f2, which calls f3, f3 needs to be evaluated before f2, and f2 before f1), to represent sequential composition in a natural way. This is especially relevant under a lazy evaluation model, where evaluation order does not follow the presentation of the code and there is otherwise nothing corresponding to a plain statement sequence such as "f3(); f2(); f1();" in C. The trick is most obvious when f3, f2 and f1 actually return nothing: chaining them as f1(f2(f3())) is then artificial, purely intended to create the sequence.
This is especially relevant when side-effects are involved, i.e. when some state is altered (if f1, f2, f3 had no side-effects, it wouldn't matter in what order they're evaluated; which is a great property of pure functional languages, to be able to parallelize those computations for example). The more pure functions, the better.
I think from that narrow point of view, monads could be seen as syntactic sugar for languages that favor lazy evaluation (that evaluate things only when absolutely necessary, following an order that does not rely on the presentation of the code), and that have no other means of representing sequential composition. The net result is that sections of code that are "impure" (i.e. that do have side-effects) can be presented naturally, in an imperative manner, yet are cleanly separated from pure functions (with no side-effects), which can be evaluated lazily.
This is only one aspect though, as warned here.
Monads are abstractions used to sequence dependent functions that are effectful. Effectful here means they return a type of the form F[A], for example Option[A], where Option is the F, called the type constructor. Let's see this in two simple steps.
Function composition is transitive, so to go from A to C I can compose A => B and B => C:
A => C = A => B andThen B => C
However, if a function returns an effect type like Option[A], i.e. it is A => F[B], the composition doesn't work: the next function needs a plain B, but we have an F[B].
We need a special operator, "bind" that knows how to fuse these functions that return F[A].
A => F[C] = A => F[B] bind B => F[C]
The "bind" function is defined for the specific F.
There is also "return", of type A => F[A] for any A, defined for that specific F also. To be a Monad, F must have these two functions defined for it.
Thus we can construct an effectful function A => F[B] from any pure function A => B,
A => F[B] = A => B andThen return
but a given F can also define its own opaque "built-in" special functions of such types that a user can't define themselves (in a pure language), like
"random" (Range => Random[Int])
"print" (String => IO[ () ])
"try ... catch", etc.
The simplest explanation I can think of is that monads are a way of composing functions with embellished results (a.k.a. Kleisli composition). An "embellished" function has the signature a -> (b, smth), where a and b are types (think Int, Bool) that might be different from each other, but not necessarily -- and smth is the "context" or the "embellishment".
This type of function can also be written a -> m b, where m is equivalent to the "embellishment" smth. So these are functions that return values in context (think of functions that log their actions, where smth is the logging message, or functions that perform input/output and whose results depend on the result of the IO action).
A monad is an interface ("typeclass") that requires the implementer to specify how to compose such functions. The implementer needs to define a composition function (a -> m b) -> (b -> m c) -> (a -> m c) for any type m that wants to implement the interface (this is the Kleisli composition).
So, say we have a tuple type (Int, String) representing the results of computations on Ints that also log their actions, with (_, String) being the "embellishment" -- the log of the action -- and two functions increment :: Int -> (Int, String) and twoTimes :: Int -> (Int, String). We want to obtain a function incrementThenDouble :: Int -> (Int, String) which is the composition of the two functions and also takes the logs into account.
In the given example, applying the composition to the integer value 2, incrementThenDouble 2 (which equals twoTimes (increment 2)), returns (6, " Adding 1. Doubling 3."), with intermediate results increment 2 equal to (3, " Adding 1.") and twoTimes 3 equal to (6, " Doubling 3.").
From this Kleisli composition function one can derive the usual monadic functions.
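A minimal sketch of the example above, with a hand-rolled Kleisli composition for the (_, String) embellishment:
composeLog :: (a -> (b, String)) -> (b -> (c, String)) -> a -> (c, String)
composeLog f g x =
  let (y, log1) = f x
      (z, log2) = g y
  in (z, log1 ++ log2)   -- pass the value on, merge the logs

increment :: Int -> (Int, String)
increment n = (n + 1, " Adding 1.")

twoTimes :: Int -> (Int, String)
twoTimes n = (n * 2, " Doubling " ++ show n ++ ".")

incrementThenDouble :: Int -> (Int, String)
incrementThenDouble = composeLog increment twoTimes

-- incrementThenDouble 2 == (6, " Adding 1. Doubling 3.")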
See my answer to "What is a monad?"
It begins with a motivating example, works through the example, derives an example of a monad, and formally defines "monad".
It assumes no knowledge of functional programming and it uses pseudocode with function(argument) := expression syntax with the simplest possible expressions.
This C++ program is an implementation of the pseudocode monad. (For reference: M is the type constructor, feed is the "bind" operation, and wrap is the "return" operation.)
#include <iostream>
#include <string>

template <class A> class M
{
public:
    A val;
    std::string messages;
};

template <class A, class B>
M<B> feed(M<B> (*f)(A), M<A> x)
{
    M<B> m = f(x.val);
    m.messages = x.messages + m.messages;
    return m;
}

template <class A>
M<A> wrap(A x)
{
    M<A> m;
    m.val = x;
    m.messages = "";
    return m;
}

class T {};
class U {};
class V {};

M<U> g(V x)
{
    M<U> m;
    m.messages = "called g.\n";
    return m;
}

M<T> f(U x)
{
    M<T> m;
    m.messages = "called f.\n";
    return m;
}

int main()
{
    V x;
    M<T> m = feed(f, feed(g, wrap(x)));
    std::cout << m.messages;
}
optional/maybe is the most fundamental monadic type
Monads are about function composition. If you have functions f: optional<A> -> optional<B>, g: optional<B> -> optional<C>, and h: optional<C> -> optional<D>, then you could compose them:
optional<A> opt;
h(g(f(opt)));
The benefit of monadic types is that you can instead compose f: A -> optional<B>, g: B -> optional<C>, and h: C -> optional<D>. They can do this because the monadic interface provides the bind operator:
auto optional<A>::bind(A->optional<B>)->optional<B>
and the composition could be written
optional<A> opt;
opt.bind(f)
   .bind(g)
   .bind(h);
The benefit of monads is that we no longer have to handle the logic of if(!opt) return nullopt; in each of f,g,h because this logic is moved into the bind operator.
ranges/lists/iterables are the second most fundamental monad type.
The monadic feature of ranges is that we can transform and then flatten. Starting with a sentence encoded as a range of integers [36, 98],
we can transform it to [['m','a','c','h','i','n','e',' '], ['l','e','a','r','n','i','n','g','.']]
and then flatten it to ['m','a','c','h','i','n','e',' ','l','e','a','r','n','i','n','g','.']
Instead of writing this code
vector<string> lookup_table;

auto stringify(vector<unsigned> rng) -> vector<char>
{
    vector<char> result;
    for (unsigned key : rng)
        for (char ch : lookup_table[key])
            result.push_back(ch);
    result.push_back(' ');
    result.push_back('.');
    return result;
}
we could write this:
auto f(unsigned key) -> vector<char>
{
    vector<char> result;
    for (char ch : lookup_table[key])
        result.push_back(ch);
    return result;
}

auto stringify(vector<unsigned> rng) -> vector<char>
{
    return rng.bind(f);
}
The monad pushes the for loop for(unsigned key : rng) up into the bind function, allowing for code that is easier to reason about, theoretically. Pythagorean triples can be generated in range-v3 with nested binds (rather than chained binds as we saw with optional)
auto triples =
    for_each(ints(1), [](int z) {
        return for_each(ints(1, z), [=](int x) {
            return for_each(ints(x, z), [=](int y) {
                return yield_if(x*x + y*y == z*z, std::make_tuple(x, y, z));
            });
        });
    });
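For comparison, a sketch of the same nested-bind structure in Haskell's list monad, where do notation hides the nesting:
triples :: [(Int, Int, Int)]
triples = do
  z <- [1 ..]
  x <- [1 .. z]
  y <- [x .. z]
  if x*x + y*y == z*z then return (x, y, z) else []

-- take 3 triples == [(3,4,5),(6,8,10),(5,12,13)]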