What is Predicate Dispatch - oop

I have seen much talk about predicate dispatch in Clojure lately and wonder if there is something to this thing. In other words, what is predicate dispatch and how does it differ from generic functions, OOP polymorphism, and patterns?
Thank you

Predicate dispatch subsumes generic functions, OOP polymorphism, pattern matching, and more. A good overview is Predicate dispatching: A unified theory of dispatch by Michael Ernst, Craig Kaplan, and Craig Chambers. From its abstract:
Predicate dispatching generalizes previous method dispatch mechanisms by permitting arbitrary predicates to control method applicability and by using logical implication between predicates as the overriding relationship. The method selected to handle a message send can depend not just on the classes of the arguments, as in ordinary object-oriented dispatch, but also on the classes of subcomponents, on an argument's state, and on relationships between objects.

Edited: Clojure multimethods are not predicate dispatch.
In traditional object-oriented programming, polymorphism means that you can have multiple implementations of a method, and the exact implementation that gets called is determined by the type of the object on which you called the method. This is type dispatch.
Clojure multimethods extend this so that an arbitrary function can decide which implementation gets called. In the Clojure form (defmulti name f), the function f is the dispatch function.
The dispatch function could be class, in which case you're back to type dispatch. But that function could be anything else: computing a dispatch value, looking up stuff in a database, even calling out to a web service.
True predicate dispatch would potentially allow each method implementation to specify one or more dispatch functions (predicates) to decide when that method applies. This is more general than multimethods but more complicated to implement. Clojure does not support it.
Generic function is a term from other Lisps. Common Lisp, for example, provides generic functions which can dispatch on type plus a restricted set of other functions.

Predicate dispatch is a way of providing different responses to a function call, based on the number, "shape" and values of the arguments to the function. Clojure functions already dispatch to different bodies of code, depending on the number of arguments passed to the function:
(defn my-func
  ([a] (* a a))
  ([a b] (* a b)))
Clojure multimethods add to this the ability to dispatch to different methods—perhaps defined in different namespaces—based on the return value of a dispatch function that examines the arguments (which can include their number, class, and value) and identifies which method to call. As noted in the footnotes to Stuart Sierra's answer, the creator of the multimethod gets to define the dispatch function, and it can't ordinarily be modified. Also, the programmer has to hand-design an ultra-complex dispatch function for a function that executes one thing for an integer of value 0, and another for a positive integer; or one thing for a list of one or more items, and another for an empty list.
Predicate dispatch would (perhaps) provide a syntax that generated this complex dispatch function itself. For example, a factorial function could be defined this way
(defmatch fact [0] 1)
(defmatch fact [n] (* n (fact (dec n))))
The former code responds to a call to
(fact 0)
the latter code to a call with a single argument of any other value. This would (behind the scenes) define a multimethod with a dispatch function that distinguishes the zero from other values.
But later I could specify that I want a factorial for a map (perhaps) by coding
(defmatch fact [x {}] (fact (:value x)))
and the code could (in theory) intercept calls passing a map to fact, delegating other calls to the original dispatch function...all behind the scenes.

To contrast predicate dispatch with multimethods, it's a bit like if you defined a multimethod without specifying a dispatch fn:
(defmulti my-method)
and, when you want to extend it, you don't specify a dispatch value (since there's no dispatch fn to produce it) but a predicate:
(defmethod my-method (fn [a b] (and (vector? a) (vector? b)))
[a b]
(do something))
Simple and powerful.
The problem is that predicates may overlap, plus you don't want to check all possible predicates at each call. That's why implementations restrict the expressiveness of predicates (to something similar to pattern cases) so as to be able to be smart about them (detect ambiguities, create a fast decision tree, etc.).

Related

Why are class slots specified with keywords but accessed with symbols?

I have recently encountered a confusing dichotomy regarding structures in Lisp.
When creating a structure with (defstruct), we specify the slots by keyword (:slotname). But when accessing it, we use local symbols ('slotname).
Why? This makes no sense to me.
Also, doesn't this pollute the keyword package every time you declare a structure?
If I try to access the slots by keyword, I get confusing errors like:
When attempting to read the slot's value (slot-value), the slot :BALANCE is
missing from the object #S(ACCOUNT :BALANCE 1000 :CUSTOMER-NAME "John Doe").
I don't understand this message. It seems to be telling me that something right under my nose doesn't exist.
I have tried declaring the structure using local symbols; and also with uninterned symbols (#:balance) and these don't work.
DEFSTRUCT is designed in the language standard in this way:
slot-names are not exposed
there is no specified way to get a list of slot-names of a structure class
there is no specified way to access a slot via a slot-name
thus at runtime there might be no slot-names
access to slots is optimized with accessor functions: static structure layout, inlined accessor functions, ...
Also explicitly:
slot-names are not allowed to be duplicate under string=. Thus slots foo::a and bar::a in the same structure class are not allowed
the effects of redefining a structure is undefined
The goal of structures is to provide fast record-like objects without costly features like redefinition, multiple inheritance, etc.
Thus using SLOT-VALUE to access structure slots is an extension of implementations, not a part of the defined language. SLOT-VALUE was introduced when CLOS was added to Common Lisp. Several implementations provide a way to access a structure slot via SLOT-VALUE. This then also requires that the implementation has kept track of slot names of that structure.
SLOT-VALUE is simply a newer API function, coming from CLOS for CLOS. Structures are an older feature, which was defined already in the first version of Common Lisp defined by the book CLtL1.
You used make-instance to create a class instance, but then you are showing a struct; I am confused.
Structs automatically build their accessor functions. You create one with make-account. Then you'd use account-balance instead of slot-value.
I don't know what the expected behavior of using make-instance with a struct is. While it seemed to work on my SBCL, you are not using structs correctly.
(defstruct account
(balance))
(make-account :balance 100)
#S(ACCOUNT :BALANCE 100)
(account-balance *)
100
With classes, you are free to name your accessor functions as you want.
;;(pseudocode)
(defclass bank-account ()
  ((balance :initform nil      ;; otherwise it's unbound
            :initarg :balance  ;; to use with make-instance :balance
            :accessor balance  ;; or account-balance, as you wish.
            )))
(make-instance 'bank-account :balance 200)
#<BANK-ACCOUNT {1009302A33}>
(balance *)
200
https://lispcookbook.github.io/cl-cookbook/data-structures.html#structures
http://www.lispworks.com/documentation/HyperSpec/Body/m_defstr.htm
the slot :BALANCE is missing from the object #S(ACCOUNT :BALANCE 1000 :CUSTOMER-NAME "John Doe").
The slot name is actually balance; the printed representation just shows the keyword initargs that defstruct generates. With the class object, the error message might be less confusing:
When attempting to read the slot's value (slot-value), the slot :BALANCE is missing from the object #<BANK-ACCOUNT {1009302A33}>.
First of all, see Rainer's excellent answer on structures. In summary:
Objects defined with defstruct have named accessor functions, not named slots. Further the field names of these objects which are mentioned in the defstruct form must be distinct as strings, and so keywords are completely appropriate for use in constructor functions. Any use of slot-value on such objects is implementation-dependent, and indeed whether or not named slots exist at all is entirely implementation-dependent.
You generally want keyword arguments for the constructors for the reasons you want keyword arguments elsewhere: you don't want to have to painfully provide 49 optional arguments so you can specify the 50th. So it's reasonable that defstruct does that by default. But you can completely override this if you want to, using a BOA constructor, which defstruct allows. You can even have no constructor at all! As an example, here is a rather perverse structure constructor: it does use keyword arguments, but not the ones you might naively expect.
(defstruct (foo
             (:constructor
               make-foo (&key ((:y x) 1) ((:x y) 2))))
  y
  x)
So the real question revolves around classes defined with defclass, which usually do have named slots and where slot-value does work.
So in this case there are really two parts to the answer.
Firstly, as before, keyword arguments are really useful for constructors because no-one wants to have to remember 932 optional argument defaults. But defclass provides complete control over the mapping between keyword arguments and the slots they initialise, or whether they initialise slots at all or instead are passed to some initialize-instance method. You can just do anything you want here.
Secondly, you really want slot names for objects of classes defined with defclass to be symbols which live in packages. You definitely do not want this to happen:
(in-package "MY-PACKAGE")
(use-package "SOMEONE-ELSES-PACKAGE")
(defclass my-class (someone-elses-class)
((internal-implementation-slot ...)))
only to discover that you have just modified the definition of the someone-elses-package::internal-implementation-slot slot in someone-elses-class. That would be bad. So slot names are symbols which live in packages and the normal namespace control around packages works for them too: my-package::internal-implementation-slot and someone-elses-package::internal-implementation-slot are not (usually) the same thing.
Additionally, the whole keyword-symbol-argument / non-keyword-symbol-variable thing is, well, let's just say well-established:
(defun foo (&key (x 1))
... x ...)
Finally note, of course, that keyword arguments don't actually have to be keywords: it's generally convenient that they are because you need quotes otherwise, but:
(defclass silly ()
  ((foo :initarg foo
        :accessor silly-foo)
   (bar :initarg bar
        :accessor silly-bar)))
And now
> (silly-foo (make-instance 'silly 'bar 3 'foo 9))
9

How can I attach a type tag to a closure in Scheme?

How can I attach an arbitrary tag to a closure in Scheme?
Here are a couple things I'd like to use this for:
(1) To mark closures that provide an interface to produce a string for what they represent, like what #kud0h asked for here. A general ->string procedure could include code something like this:
(display (if (stringable? x)
(x 'string)
x)
str-port)
(2) More generally, to determine if a closure is an "object" that obeys the rules of a general object interface, or maybe to tell the class of an object (something like what #KPatnode was asking about here).
I can't query a procedure to see if it supports a certain interface by calling it, because if it doesn't support a known interface, calling the procedure will produce unpredictable results, most likely a run-time error.
Chez Scheme has putprop and getprop procedures that allow you to add keys and values to symbols. However, closures can be anonymous, or bound to different symbols, so I'd prefer to attach a calling-convention tag to the closure itself, not a symbol that it's bound to.
The only idea I have right now is to maintain a global hash table of all "stringable" or "object" closures in the system. That seems a little clunky. Is there a simpler, more elegant, or more efficient way?
Racket has applicable structures: you can give a structure type an apply hook to be called if an instance is used as a function.
If you want a more portable solution, you can use a hash table to associate your data with certain procedures. Unless your Scheme provides weak hashtables, though, keep in mind that the hashtable will prevent the procedures from being garbage-collected.
I think you might, instead of tagging procedures per se, want to look at Racket's object system, which has a concept of interfaces. It sounds quite similar to what you're after.
You could go extreme and redefine lambda syntax. Something like this (but untested by me):
(define *properties* '()) ;; example only

(define-syntax lambda
  (let-syntax ((sys-lambda
                (syntax-rules ()
                  ((_ args body ...)
                   (lambda args body ...)))))
    (syntax-rules ()
      ((_ args body ...)
       (let ((func (sys-lambda args body ...)))
         (set! *properties*
               (cons (cons func '(NO-PROPERTIES))
                     *properties*))
         func)))))

Is currying the same as overloading?

Is currying for functional programming the same as overloading for OO programming? If not, why? (with examples if possible)
Thanks
Currying is not specific to functional programming, and overloading is not specific to object-oriented programming.
"Currying" is the use of functions to which you can pass fewer arguments than required to obtain a function of the remaining arguments. i.e. if we have a function plus which takes two integer arguments and returns their sum, then we can pass the single argument 1 to plus and the result is a function for adding 1 to things.
In Haskellish syntax (with function application by adjacency):
plusOne = plusCurried 1
three = plusOne 2
four = plusCurried 2 2
five = plusUncurried 2 3
In vaguely Cish syntax (with function application by parentheses):
plusOne = plusCurried(1)
three = plusOne(2)
four = plusCurried(2)(2)
five = plusUncurried(2, 3)
You can see in both of these examples that plusCurried is invoked on only 1 argument, and the result is something that can be bound to a variable and then invoked on another argument.

The reason that you're thinking of currying as a functional-programming concept is that it sees the most use in functional languages whose syntax has application by adjacency, because in that syntax currying becomes very natural. The applications of plusCurried and plusUncurried to define four and five in the Haskellish syntax merge to become completely indistinguishable, so you can just have all functions be fully curried always (i.e. have every function be a function of exactly one argument, only some of them will return other functions that can then be applied to more arguments). Whereas in the Cish syntax with application by parenthesised argument lists, the definitions of four and five look completely different, so you need to distinguish between plusCurried and plusUncurried.

Also, the imperative languages that led to today's object-oriented languages never had the ability to bind functions to variables or pass them to other functions (this is known as having first-class functions), and without that facility there's nothing you can actually do with a curried function other than invoke it on all arguments, and so no point in having them. Some of today's OO languages still don't have first-class functions, or only gained them recently.
The term currying also refers to the process of turning a function of multiple arguments into one that takes a single argument and returns another function (which takes a single argument, and may return another function which ...), and "uncurrying" can refer to the process of doing the reverse conversion.
Overloading is an entirely unrelated concept. Overloading a name means giving multiple definitions with different characteristics (argument types, number of arguments, return type, etc), and have the compiler resolve which definition is meant by a given appearance of the name by the context in which it appears.
A fairly obvious example of this is that we could define plus to add integers, but also use the same name plus for adding floating point numbers, and we could potentially use it for concatenating strings, arrays, lists, etc, or to add vectors or matrices. All of these have very different implementations that have nothing to do with each other as far as the language implementation is concerned, but we just happened to give them the same name. The compiler is then responsible for figuring out that plus stringA stringB should call the string plus (and return a string), while plus intX intY should call the integer plus (and return an integer).
Again, there is no inherent reason why this concept is an "OO concept" rather than a functional programming concept. It simply happened that it fit quite naturally in statically typed object-oriented languages that were developed; if you're already resolving which method to call by the object that the method is invoked on, then it's a small stretch to allow more general overloading. Completely ad-hoc overloading (where you do nothing more than define the same name multiple times and trust the compiler to figure it out) doesn't fit as nicely in languages with first-class functions, because when you pass the overloaded name as a function itself you don't have the calling context to help you figure out which definition is intended (and programmers may get confused if what they really wanted was to pass all the overloaded definitions). Haskell developed type classes as a more principled way of using overloading; these effectively do allow you to pass all the overloaded definitions at once, and also allow the type system to express types a bit like "any type for which the functions f and g are defined".
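As a rough illustration of that last point, here is what principled overloading via a type class might look like in Haskell (the class Plus is invented for this sketch; Haskell's real Num class plays this role for addition):

class Plus a where
  plus :: a -> a -> a

instance Plus Int where
  plus = (+)        -- the integer implementation

instance Plus Double where
  plus = (+)        -- the floating-point implementation, resolved separately

-- The compiler picks the definition from the types in context:
-- plus (1 :: Int) 2          ==> 3
-- plus (1.5 :: Double) 2.25  ==> 3.75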
In summary:
currying and overloading are completely unrelated
currying is about applying functions to fewer arguments than they require in order to get a function of the remaining arguments
overloading is about providing multiple definitions for the same name and having the compiler select which definition is used each time the name is used
neither currying nor overloading are specific to either functional programming or object-oriented programming; they each simply happen to be more widespread in historical languages of one kind or another because of the way the languages developed, causing them to be more useful or more obvious in one kind of language
No, they are entirely unrelated and dissimilar.
Overloading is a technique for allowing the same code to be used at different types -- often known in functional programming as polymorphism (of various forms).
A polymorphic function:
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
Here, map is a function that operates on any list. It is polymorphic -- it works just as well with a list of Int as a list of trees of hashtables. It also is higher-order, in that it is a function that takes a function as an argument.
Currying is the transformation of a function that takes a structure of n arguments, into a chain of functions each taking one argument.
In curried languages, you can apply any function to some of its arguments, yielding a function that takes the rest of the arguments. The partially-applied function is a closure.
And you can transform a curried function into an uncurried one (and vice-versa) by applying the transformation invented by Curry and Schonfinkel.
curry :: ((a, b) -> c) -> a -> b -> c
-- curry converts an uncurried function to a curried function.
uncurry :: (a -> b -> c) -> (a, b) -> c
-- uncurry converts a curried function to a function on pairs.
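For concreteness, here is a minimal sketch of how those two conversions can be defined (equivalent to the Prelude definitions):

curry :: ((a, b) -> c) -> (a -> b -> c)
curry f x y = f (x, y)

uncurry :: (a -> b -> c) -> ((a, b) -> c)
uncurry f (x, y) = f x y

-- uncurry (+) (2, 3)  ==> 5
-- curry fst 1 2       ==> 1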
Overloading is having multiple functions with the same name, having different parameters.
Currying is where you take a function of multiple parameters and fix some of them, getting back a function of the remaining parameters, so you may end up with just one free variable, for example.
So, if you have a graphing function in 3 dimensions, you may have:
justgraphit(double[] x, double[] y, double[] z), and you want to graph it.
By currying you could have:
var fx = justgraphit(xlist), where you have now fixed the x data, so fx is a function of the remaining two variables.
Then, later on, the user picks another axis (date) and you set the y, so now you have:
var fy = fx(ylist)
Then, later you graph the information by just looping over some data and the only variability is the z parameter.
This makes complicated functions simpler as you don't have to keep passing what is largely set variables, so the readability increases.
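A rough Haskell rendering of the same idea (justgraphit and its argument types are invented for this sketch); because Haskell functions are curried by default, fixing the x data is just applying the function to its first argument:

-- hypothetical three-axis graphing function, curried by default
justgraphit :: [Double] -> [Double] -> [Double] -> String
justgraphit xs ys zs = "graph of " ++ show (length xs, length ys, length zs)

main :: IO ()
main = do
  let fx = justgraphit [1, 2, 3]   -- x data fixed; fx still needs y and z
      fy = fx [4, 5, 6]            -- y data fixed; fy still needs z
  putStrLn (fy [7, 8, 9])          -- finally supply z and use the result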

Difference between State, ST, IORef, and MVar

I am working through Write Yourself a Scheme in 48 Hours (I'm up to about 85hrs) and I've gotten to the part about Adding Variables and Assignments. There is a big conceptual jump in this chapter, and I wish it had been done in two steps with a good refactoring in between rather than jumping straight to the final solution. Anyway…
I've gotten lost with a number of different classes that seem to serve the same purpose: State, ST, IORef, and MVar. The first three are mentioned in the text, while the last seems to be the favored answer to a lot of StackOverflow questions about the first three. They all seem to carry a state between consecutive invocations.
What are each of these and how do they differ from one another?
In particular these sentences don't make sense:
Instead, we use a feature called state threads, letting Haskell manage the aggregate state for us. This lets us treat mutable variables as we would in any other programming language, using functions to get or set variables.
and
The IORef module lets you use stateful variables within the IO monad.
All this makes the line type ENV = IORef [(String, IORef LispVal)] confusing - why the second IORef? What will break if I write type ENV = State [(String, LispVal)] instead?
The State Monad : a model of mutable state
The State monad is a purely functional environment for programs with state, with a simple API:
get
put
Documentation in the mtl package.
The State monad is commonly used when needing state in a single thread of control. It doesn't actually use mutable state in its implementation. Instead, the program is parameterized by the state value (i.e. the state is an additional parameter to all computations). The state only appears to be mutated in a single thread (and cannot be shared between threads).
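A minimal sketch of get and put in action (using Control.Monad.State from mtl); the "mutable" counter below is really just a value threaded through the computation:

import Control.Monad.State

-- increment a counter and report its old value
tick :: State Int Int
tick = do
  n <- get        -- read the current state
  put (n + 1)     -- "mutate" it by returning a new state
  return n

-- runState threads the state for us; no real mutation happens:
-- runState (tick >> tick >> tick) 0  ==> (2, 3)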
The ST monad and STRefs
The ST monad is the restricted cousin of the IO monad.
It allows arbitrary mutable state, implemented as actual mutable memory on the machine. The API is made safe in side-effect-free programs, as the rank-2 type parameter prevents values that depend on mutable state from escaping local scope.
It thus allows for controlled mutability in otherwise pure programs.
Commonly used for mutable arrays and other data structures that are mutated, then frozen. It is also very efficient, since the mutable state is "hardware accelerated".
Primary API:
Control.Monad.ST
runST -- start a new memory-effect computation.
And STRefs: pointers to (local) mutable cells.
ST-based arrays (such as vector) are also common.
Think of it as the less dangerous sibling of the IO monad. Or IO, where you can only read and write to memory.
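For example, a minimal sketch: a sum computed with genuine in-place mutation of an STRef, yet exposed as a pure function because runST keeps the effects from escaping.

import Control.Monad.ST
import Data.STRef

-- Pure from the outside, mutable on the inside.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0                       -- allocate a local mutable cell
  mapM_ (\x -> modifySTRef ref (+ x)) xs  -- mutate it in place
  readSTRef ref                           -- read the final value out

-- sumST [1..10] ==> 55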
IORef : STRefs in IO
These are STRefs (see above) in the IO monad. They don't have the same safety guarantees as STRefs about locality.
MVars : IORefs with locks
Like STRefs or IORefs, but with a lock attached, for safe concurrent access from multiple threads. Plain IORefs are only safe in a multi-threaded setting when used with atomicModifyIORef (a compare-and-swap atomic operation). MVars are a more general mechanism for safely sharing mutable state.
Generally, in Haskell, use MVars or TVars (STM-based mutable cells), over STRef or IORef.
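A small sketch contrasting the two (an IORef updated from one thread, and an MVar that would also be safe if several threads shared it):

import Data.IORef
import Control.Concurrent.MVar

main :: IO ()
main = do
  -- IORef: fine from one thread; use atomicModifyIORef' if threads share it
  ref <- newIORef (0 :: Int)
  modifyIORef ref (+ 1)
  readIORef ref >>= print          -- 1

  -- MVar: take/put give exclusive access, so concurrent updates are safe
  mv <- newMVar (0 :: Int)
  modifyMVar_ mv (\n -> return (n + 1))
  readMVar mv >>= print            -- 1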
Ok, I'll start with IORef. IORef provides a value which is mutable in the IO monad. It's just a reference to some data, and like any reference, there are functions which allow you to change the data it refers to. In Haskell, all of those functions operate in IO. You can think of it like a database, file, or other external data store - you can get and set the data in it, but doing so requires going through IO. The reason IO is necessary at all is because Haskell is pure; the compiler needs a way to know which data the reference points to at any given time (read sigfpe's "You could have invented monads" blogpost).
MVars are basically the same thing as an IORef, except for two very important differences. MVar is a concurrency primitive, so it's designed for access from multiple threads. The second difference is that an MVar is a box which can be full or empty. So where an IORef Int always has an Int (or is bottom), an MVar Int may have an Int or it may be empty. If a thread tries to read a value from an empty MVar, it will block until the MVar gets filled (by another thread). Basically an MVar a is equivalent to an IORef (Maybe a) with extra semantics that are useful for concurrency.
State is a monad which provides mutable state, not necessarily with IO. In fact, it's particularly useful for pure computations. If you have an algorithm that uses state but not IO, a State monad is often an elegant solution.
There is also a monad transformer version of State, StateT. This frequently gets used to hold program configuration data, or "game-world-state" types of state in applications.
ST is something slightly different. The main data structure in ST is the STRef, which is like an IORef but with a different monad. The ST monad uses type system trickery (the "state threads" the docs mention) to ensure that mutable data can't escape the monad; that is, when you run an ST computation you get a pure result. The reason ST is interesting is that it's a primitive monad like IO, allowing computations to perform low-level manipulations on bytearrays and pointers. This means that ST can provide a pure interface while using low-level operations on mutable data, meaning it's very fast. From the perspective of the program, it's as if the ST computation runs in a separate thread with thread-local storage.
Others have done the core things, but to answer the direct question:
All this makes the line type ENV = IORef [(String, IORef LispVal)] confusing. Why the second IORef? What will break if I do type ENV = State [(String, LispVal)] instead?
Lisp is a functional language with mutable state and lexical scope. Imagine you've closed over a mutable variable. Now you've got a reference to this variable hanging around inside some other function -- say (in haskell-style pseudocode) (printIt, setIt) = let x = 5 in (\ () -> print x, \y -> set x y). You now have two functions -- one prints x, and one sets its value. When you evaluate printIt, you want to lookup the name of x in the initial environment in which printIt was defined, but you want to lookup the value that name is bound to in the environment in which printIt is called (after setIt may have been called any number of times).
There are ways besides the two IORefs to do this, but you certainly need more than the latter type you've proposed, which doesn't allow you to alter the values that names are bound to in a lexically-scoped fashion. Google the "funargs problem" for a whole lot of interesting prehistory.
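For instance, here is a rough Haskell sketch of that pseudocode using an IORef for x (illustrative only): printIt looks the value up when it is called, so it sees whatever setIt stored last. In the interpreter's ENV, the outer IORef plays roughly the same role for the whole binding list (so define can add entries), while the inner IORef per binding is what lets set! behave like setIt here.

import Data.IORef

main :: IO ()
main = do
  x <- newIORef (5 :: Int)                -- the shared, mutable "variable"
  let printIt = readIORef x >>= print     -- reads through the reference at call time
      setIt y = writeIORef x y            -- mutates the shared cell
  printIt        -- prints 5
  setIt 42
  printIt        -- prints 42, because printIt closed over the reference, not the value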

Monad in plain English? (For the OOP programmer with no FP background)

In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
What problem does it solve and what are the most common places it's used?
Update
To clarify the kind of understanding I was looking for, let’s say you were converting an FP application that had monads into an OOP application. What would you do to port the responsibilities of the monads to the OOP app?
UPDATE: This question was the subject of an immensely long blog series, which you can read at Monads — thanks for the great question!
In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
A monad is an "amplifier" of types that obeys certain rules and which has certain operations provided.
First, what is an "amplifier of types"? By that I mean some system which lets you take a type and turn it into a more special type. For example, in C# consider Nullable<T>. This is an amplifier of types. It lets you take a type, say int, and add a new capability to that type, namely, that now it can be null when it couldn't before.
As a second example, consider IEnumerable<T>. It is an amplifier of types. It lets you take a type, say, string, and add a new capability to that type, namely, that you can now make a sequence of strings out of any number of single strings.
What are the "certain rules"? Briefly, that there is a sensible way for functions on the underlying type to work on the amplified type such that they follow the normal rules of functional composition. For example, if you have a function on integers, say
int M(int x) { return x + N(x * 2); }
then the corresponding function on Nullable<int> can make all the operators and calls in there work together "in the same way" that they did before.
(That is incredibly vague and imprecise; you asked for an explanation that didn't assume anything about knowledge of functional composition.)
What are the "operations"?
There is a "unit" operation (confusingly sometimes called the "return" operation) that takes a value from a plain type and creates the equivalent monadic value. This, in essence, provides a way to take a value of an unamplified type and turn it into a value of the amplified type. It could be implemented as a constructor in an OO language.
There is a "bind" operation that takes a monadic value and a function that can transform the value, and returns a new monadic value. Bind is the key operation that defines the semantics of the monad. It lets us transform operations on the unamplified type into operations on the amplified type, that obeys the rules of functional composition mentioned before.
There is often a way to get the unamplified type back out of the amplified type. Strictly speaking this operation is not required to have a monad. (Though it is necessary if you want to have a comonad. We won't consider those further in this article.)
Again, take Nullable<T> as an example. You can turn an int into a Nullable<int> with the constructor. The C# compiler takes care of most nullable "lifting" for you, but if it didn't, the lifting transformation is straightforward: an operation, say,
int M(int x) { whatever }
is transformed into
Nullable<int> M(Nullable<int> x)
{
if (x == null)
return null;
else
return new Nullable<int>(whatever);
}
And turning a Nullable<int> back into an int is done with the Value property.
It's the function transformation that is the key bit. Notice how the actual semantics of the nullable operation — that an operation on a null propagates the null — is captured in the transformation. We can generalize this.
Suppose you have a function from int to int, like our original M. You can easily make that into a function that takes an int and returns a Nullable<int> because you can just run the result through the nullable constructor. Now suppose you have this higher-order method:
static Nullable<T> Bind<T>(Nullable<T> amplified, Func<T, Nullable<T>> func)
{
if (amplified == null)
return null;
else
return func(amplified.Value);
}
See what you can do with that? Any method that takes an int and returns an int, or takes an int and returns a Nullable<int> can now have the nullable semantics applied to it.
Furthermore: suppose you have two methods
Nullable<int> X(int q) { ... }
Nullable<int> Y(int r) { ... }
and you want to compose them:
Nullable<int> Z(int s) { return X(Y(s)); }
That is, Z is the composition of X and Y. But you cannot do that because X takes an int, and Y returns a Nullable<int>. But since you have the "bind" operation, you can make this work:
Nullable<int> Z(int s) { return Bind(Y(s), X); }
The bind operation on a monad is what makes composition of functions on amplified types work. The "rules" I handwaved about above are that the monad preserves the rules of normal function composition; that composing with identity functions results in the original function, that composition is associative, and so on.
In C#, "Bind" is called "SelectMany". Take a look at how it works on the sequence monad. We need to have two things: turn a value into a sequence and bind operations on sequences. As a bonus, we also have "turn a sequence back into a value". Those operations are:
static IEnumerable<T> MakeSequence<T>(T item)
{
yield return item;
}
// Extract a value
static T First<T>(IEnumerable<T> sequence)
{
// let's just take the first one
foreach(T item in sequence) return item;
throw new Exception("No first item");
}
// "Bind" is called "SelectMany"
static IEnumerable<T> SelectMany<T>(IEnumerable<T> seq, Func<T, IEnumerable<T>> func)
{
foreach(T item in seq)
foreach(T result in func(item))
yield return result;
}
The nullable monad rule was "to combine two functions that produce nullables together, check to see if the inner one results in null; if it does, produce null, if it does not, then call the outer one with the result". That's the desired semantics of nullable.
The sequence monad rule is "to combine two functions that produce sequences together, apply the outer function to every element produced by the inner function, and then concatenate all the resulting sequences together". The fundamental semantics of the monads are captured in the Bind/SelectMany methods; this is the method that tells you what the monad really means.
We can do even better. Suppose you have a sequences of ints, and a method that takes ints and results in sequences of strings. We could generalize the binding operation to allow composition of functions that take and return different amplified types, so long as the inputs of one match the outputs of the other:
static IEnumerable<U> SelectMany<T,U>(IEnumerable<T> seq, Func<T, IEnumerable<U>> func)
{
foreach(T item in seq)
foreach(U result in func(item))
yield return result;
}
So now we can say "amplify this bunch of individual integers into a sequence of integers. Transform this particular integer into a bunch of strings, amplified to a sequence of strings. Now put both operations together: amplify this bunch of integers into the concatenation of all the sequences of strings." Monads allow you to compose your amplifications.
What problem does it solve and what are the most common places it's used?
That's rather like asking "what problems does the singleton pattern solve?", but I'll give it a shot.
Monads are typically used to solve problems like:
I need to make new capabilities for this type and still combine old functions on this type to use the new capabilities.
I need to capture a bunch of operations on types and represent those operations as composable objects, building up larger and larger compositions until I have just the right series of operations represented, and then I need to start getting results out of the thing
I need to represent side-effecting operations cleanly in a language that hates side effects
C# uses monads in its design. As already mentioned, the nullable pattern is highly akin to the "maybe monad". LINQ is entirely built out of monads; the SelectMany method is what does the semantic work of composition of operations. (Erik Meijer is fond of pointing out that every LINQ function could actually be implemented by SelectMany; everything else is just a convenience.)
To clarify the kind of understanding I was looking for, let's say you were converting an FP application that had monads into an OOP application. What would you do to port the responsibilities of the monads into the OOP app?
Most OOP languages do not have a rich enough type system to represent the monad pattern itself directly; you need a type system that supports types that are higher types than generic types. So I wouldn't try to do that. Rather, I would implement generic types that represent each monad, and implement methods that represent the three operations you need: turning a value into an amplified value, (maybe) turning an amplified value into a value, and transforming a function on unamplified values into a function on amplified values.
A good place to start is how we implemented LINQ in C#. Study the SelectMany method; it is the key to understanding how the sequence monad works in C#. It is a very simple method, but very powerful!
Suggested, further reading:
For a more in-depth and theoretically sound explanation of monads in C#, I highly recommend my (Eric Lippert's) colleague Wes Dyer's article on the subject. This article is what explained monads to me when they finally "clicked" for me.
The Marvels of Monads
A good illustration of why you might want a monad around (uses Haskell in its examples).
You Could Have Invented Monads! (And Maybe You Already Have.) by Dan Piponi
Sort of, "translation" of the previous article to JavaScript.
Translation from Haskell to JavaScript of selected portions of the best introduction to monads I’ve ever read by James Coglan
Why do we need monads?
We want to program only using functions ("functional programming", FP, after all).
Then, we have a first big problem. This is a program:
f(x) = 2 * x
g(x,y) = x / y
How can we say what is to be executed first? How can we form an ordered sequence of functions (i.e. a program) using no more than functions?
Solution: compose functions. If you want first g and then f, just write f(g(x,y)). OK, but ...
More problems: some functions might fail (e.g. g(2,0), dividing by 0). We have no "exceptions" in FP. How do we solve it?
Solution: Let's allow functions to return two kind of things: instead of having g : Real,Real -> Real (function from two reals into a real), let's allow g : Real,Real -> Real | Nothing (function from two reals into (real or nothing)).
But functions should (to be simpler) return only one thing.
Solution: let's create a new type of data to be returned, a "boxing type" that encloses maybe a real or be simply nothing. Hence, we can have g : Real,Real -> Maybe Real. OK, but ...
What happens now to f(g(x,y))? f is not ready to consume a Maybe Real. And, we don't want to change every function we could connect with g to consume a Maybe Real.
Solution: let's have a special function to "connect"/"compose"/"link" functions. That way, we can, behind the scenes, adapt the output of one function to feed the following one.
In our case: g >>= f (connect/compose g to f). We want >>= to get g's output, inspect it and, in case it is Nothing just don't call f and return Nothing; or on the contrary, extract the boxed Real and feed f with it. (This algorithm is just the implementation of >>= for the Maybe type).
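A hedged Haskell sketch of that connect step (the (>>=) shown is the standard one for Maybe; f never fails, so it is lifted with Just . f):

f :: Double -> Double
f x = 2 * x

g :: Double -> Double -> Maybe Double   -- division, which can fail
g _ 0 = Nothing
g x y = Just (x / y)

-- (>>=) inspects g's output: Nothing stops the chain, Just y feeds f
-- g 6 3 >>= (Just . f)   ==> Just 4.0
-- g 6 0 >>= (Just . f)   ==> Nothing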
Many other problems arise which can be solved using this same pattern:
1. Use a "box" to codify/store different meanings/values, and have functions like g that return those "boxed values".
2. Have composers/linkers g >>= f to help connect g's output to f's input, so we don't have to change f at all.
Remarkable problems that can be solved using this technique are:
having a global state that every function in the sequence of functions ("the program") can share: solution StateMonad.
We don't like "impure functions": functions that yield different output for same input. Therefore, let's mark those functions, making them to return a tagged/boxed value: IO monad.
Total happiness !!!!
I would say the closest OO analogy to monads is the "command pattern".
In the command pattern you wrap an ordinary statement or expression in a command object. The command object exposes an execute method which executes the wrapped statement. So statements are turned into first-class objects which can be passed around and executed at will. Commands can be composed, so you can create a program-object by chaining and nesting command-objects.
The commands are executed by a separate object, the invoker. The benefit of using the command pattern (rather than just execute a series of ordinary statements) is that different invokers can apply different logic to how the commands should be executed.
The command pattern could be used to add (or remove) language features that are not supported by the host language. For example, in a hypothetical OO language without exceptions, you could add exception semantics by exposing "try" and "throw" methods to the commands. When a command calls throw, the invoker backtracks through the list (or tree) of commands until the last "try" call. Conversely, you could remove exception semantics from a language (if you believe exceptions are bad) by catching all exceptions thrown by each individual command, and turning them into error codes which are then passed to the next command.
Even more fancy execution semantics like transactions, non-deterministic execution or continuations can be implemented like this in a language which doesn't support it natively. It is a pretty powerful pattern if you think about it.
Now in reality the command pattern is not used as a general language feature like this. The overhead of turning each statement into a separate class would lead to an unbearable amount of boilerplate code. But in principle it can be used to solve the same problems as monads are used to solve in FP.
In terms that an OOP programmer would understand (without any functional programming background), what is a monad?
What problem does it solve and what are the most common places it's used?
In terms of OO programming, a monad is an interface (or more likely a mixin), parameterized by a type, with two methods, return and bind, that describe:
How to inject a value to get a monadic value of that injected value type;
How to use a function that makes a monadic value from a non-monadic one, on a monadic value.
The problem it solves is the same type of problem you'd expect from any interface, namely,
"I have a bunch of different classes that do different things, but seem to do those different things in a way that has an underlying similarity. How can I describe that similarity between them, even if the classes themselves aren't really subtypes of anything closer than 'the Object' class itself?"
More specifically, the Monad "interface" is similar to IEnumerator or IIterator in that it takes a type that itself takes a type. The main "point" of Monad though is being able to connect operations based on the interior type, even to the point of having a new "internal type", while keeping - or even enhancing - the information structure of the main class.
There is a recent presentation, "Monadologie -- professional help on type anxiety" by Christopher League (July 12th, 2010), which is quite interesting on the topics of continuations and monads.
The video going with this (slideshare) presentation is available on vimeo.
The monad part starts around 37 minutes into this one-hour video, with slide 42 of the 58-slide presentation.
It is presented as "the leading design pattern for functional programming", but the language used in the examples is Scala, which is both OOP and functional.
You can read more on Monad in Scala in the blog post "Monads - Another way to abstract computations in Scala", from Debasish Ghosh (March 27, 2008).
A type constructor M is a monad if it supports these operations:
// the "return" function
def unit[A] (x: A): M[A]

// called "bind" in Haskell
def flatMap[A,B] (m: M[A]) (f: A => M[B]): M[B]

// The other two can be written in terms of the first two:
def map[A,B] (m: M[A]) (f: A => B): M[B] =
  flatMap(m){ x => unit(f(x)) }

def andThen[A,B] (ma: M[A]) (mb: M[B]): M[B] =
  flatMap(ma){ x => mb }
So for instance (in Scala):
Option is a monad
def unit[A] (x: A): Option[A] = Some(x)

def flatMap[A,B](m: Option[A])(f: A => Option[B]): Option[B] =
  m match {
    case None => None
    case Some(x) => f(x)
  }
List is a monad
def unit[A] (x: A): List[A] = List(x)

def flatMap[A,B](m: List[A])(f: A => List[B]): List[B] =
  m match {
    case Nil => Nil
    case x::xs => f(x) ::: flatMap(xs)(f)
  }
Monads are a big deal in Scala because of the convenient syntax built to take advantage of monadic structures:
for comprehension in Scala:
for {
i <- 1 to 4
j <- 1 to i
k <- 1 to j
} yield i*j*k
is translated by the compiler to:
(1 to 4).flatMap { i =>
(1 to i).flatMap { j =>
(1 to j).map { k =>
i*j*k }}}
The key abstraction is the flatMap, which binds the computation through chaining.
Each invocation of flatMap returns the same data structure type (but of different value), that serves as the input to the next command in chain.
In the above snippet, flatMap takes as input a closure (SomeType) => List[AnotherType] and returns a List[AnotherType]. The important point to note is that all flatMaps take the same closure type as input and return the same type as output.
This is what "binds" the computation thread - every item of the sequence in the for-comprehension has to honor this same type constraint.
If you take two operations (that may fail) and pass the result to the third, like:
lookupVenue: String => Option[Venue]
getLoggedInUser: SessionID => Option[User]
reserveTable: (Venue, User) => Option[ConfNo]
but without taking advantage of Monad, you get convoluted OOP-code like:
val user = getLoggedInUser(session)
val confirm =
  if (!user.isDefined) None
  else lookupVenue(name) match {
    case None => None
    case Some(venue) =>
      val confno = reserveTable(venue, user.get)
      if (confno.isDefined)
        mailTo(confno.get, user.get)
      confno
  }
whereas with Monad, you can work with the actual types (Venue, User) like all the operations work, and keep the Option verification stuff hidden, all because of the flatmaps of the for syntax:
val confirm = for {
venue <- lookupVenue(name)
user <- getLoggedInUser(session)
confno <- reserveTable(venue, user)
} yield {
mailTo(confno, user)
confno
}
The yield part will only be executed if all three functions return Some[_]; if any of them returns None, that None is returned as confirm directly.
So:
Monads allow ordered computation within Functional Programming, which lets us model the sequencing of actions in a nice structured form, somewhat like a DSL.
And the greatest power comes with the ability to compose monads that serve different purposes, into extensible abstractions within an application.
This sequencing and threading of actions by a monad is done by the language compiler that does the transformation through the magic of closures.
By the way, Monad is not only model of computation used in FP:
Category theory proposes many models of computation. Among them
the Arrow model of computations
the Monad model of computations
the Applicative model of computations
Out of respect for fast readers, I start with a precise definition first, continue with a quicker, more "plain English" explanation, and then move to examples.
Here is a definition that is both concise and precise, slightly reworded:
A monad (in computer science) is formally a map that:
sends every type X of some given programming language to a new type T(X) (called the "type of T-computations with values in X");
equipped with a rule for composing two functions of the form
f:X->T(Y) and g:Y->T(Z) to a function g∘f:X->T(Z);
in a way that is associative in the evident sense and unital with respect to a given unit function called pure_X:X->T(X), to be thought of as taking a value to the pure computation that simply returns that value.
So in simple words, a monad is a rule to pass from any type X to another type T(X), and a rule to pass from two functions f:X->T(Y) and g:Y->T(Z) (that you would like to compose but can't) to a new function h:X->T(Z). Which, however, is not the composition in strict mathematical sense. We are basically "bending" function's composition or re-defining how functions are composed.
Plus, we require the monad's rule of composing to satisfy the "obvious" mathematical axioms:
Associativity: Composing f with g and then with h (from outside) should be the same as composing g with h and then with f (from inside).
Unital property: Composing f with the identity function on either side should yield f.
Again, in simple words, we can't just go crazy re-defining our function composition as we like:
We first need the associativity to be able to compose several functions in a row, e.g. f(g(h(k(x)))), and not worry about specifying the order in which pairs of functions are composed. As the monad rule only prescribes how to compose a pair of functions, without that axiom we would need to know which pair is composed first, and so on. (Note that this is different from the commutativity property, which would require that f composed with g be the same as g composed with f; that is not required here.)
And second, we need the unital property, which is simply to say that identities compose trivially the way we expect them. So we can safely refactor functions whenever those identities can be extracted.
So again in brief: A monad is the rule of type extension and composing functions satisfying the two axioms -- associativity and unital property.
In practical terms, you want the monad to be implemented for you by the language, compiler or framework that would take care of composing functions for you. So you can focus on writing your function's logic rather than worrying how their execution is implemented.
That is essentially it, in a nutshell.
Being professional mathematician, I prefer to avoid calling h the "composition" of f and g. Because mathematically, it isn't. Calling it the "composition" incorrectly presumes that h is the true mathematical composition, which it isn't. It is not even uniquely determined by f and g. Instead, it is the result of our monad's new "rule of composing" the functions. Which can be totally different from the actual mathematical composition even if the latter exists!
To make it less dry, let me try to illustrate it by example
that I am annotating with small sections, so you can skip right to the point.
Exception throwing as Monad examples
Suppose we want to compose two functions:
f: x -> 1 / x
g: y -> 2 * y
But f(0) is not defined, so an exception e is thrown. Then how can you define the compositional value g(f(0))? Throw an exception again, of course! Maybe the same e. Maybe a new updated exception e1.
What precisely happens here? First, we need new exception value(s) (different or same). You can call them nothing or null or whatever, but the essence remains the same -- they should be new values, e.g. not a number in our example here. I prefer not to call them null to avoid confusion with how null can be implemented in any specific language. Equally, I prefer to avoid nothing because it is often associated with null, which, in principle, is what null should do; however, that principle often gets bent for whatever practical reasons.
What is exception exactly?
This is a trivial matter for any experienced programmer, but I'd like to drop a few words just to extinguish any worm of confusion:
An exception is an object encapsulating information about how the invalid result of execution occurred.
This can range from throwing away any details and returning a single global value (like NaN or null) to generating a long log of what exactly happened, sending it to a database and replicating it all over the distributed data storage layer ;)
The important difference between these two extreme examples of exception is that in the first case there are no side-effects. In the second there are. Which brings us to the (thousand-dollar) question:
Are exceptions allowed in pure functions?
Shorter answer: Yes, but only when they don't lead to side-effects.
Longer answer. To be pure, your function's output must be uniquely determined by its input. So we amend our function f by sending 0 to the new abstract value e that we call exception. We make sure that value e contains no outside information that is not uniquely determined by our input, which is x. So here is an example of exception without side-effect:
e = {
type: error,
message: 'I got error trying to divide 1 by 0'
}
And here is one with side-effect:
e = {
type: error,
message: 'Our committee to decide what is 1/0 is currently away'
}
Actually, it only has side-effects if that message can possibly change in the future. But if it is guaranteed to never change, that value becomes uniquely predictable, and so there is no side-effect.
To make it even sillier: a function that always returns 42 is clearly pure. But if someone crazy decides to make 42 a variable whose value might change, the very same function stops being pure under the new conditions.
Note that I am using the object literal notation for simplicity, to demonstrate the essence. Unfortunately, things are messed up in languages like JavaScript, where error is not a type that behaves the way we want here with respect to function composition, whereas actual types like null or NaN do not behave this way either but rather go through some artificial and not always intuitive type conversions.
Type extension
As we want to vary the message inside our exception, we are really declaring a new type E for the whole exception object, and then letting f return either a number or a value of E.
That is what the maybe number does, apart from its confusing name: it is either of type number or of the new exception type E, so it is really the union number | E of number and E. In particular, it depends on how we want to construct E, which is neither suggested nor reflected in the name maybe number.
What is functional composition?
It is the mathematical operation taking functions
f: X -> Y and g: Y -> Z and constructing
their composition as function h: X -> Z satisfying h(x) = g(f(x)).
The problem with this definition occurs when the result f(x) is not allowed as argument of g.
In mathematics, those functions cannot be composed without extra work.
The strictly mathematical solution for our above example of f and g is to remove 0 from the domain of definition of f. With that new domain of definition (a new, more restrictive type for x), f becomes composable with g.
However, it is not very practical in programming to restrict the domain of definition of f like that. Instead, exceptions can be used.
Or as another approach, artificial values are created like NaN, undefined, null, Infinity etc. So you evaluate 1/0 to Infinity and 1/-0 to -Infinity. And then force the new value back into your expression instead of throwing exception. Leading to results you may or may not find predictable:
1/0 // => Infinity
parseInt(Infinity) // => NaN
NaN < 0 // => false
false + 1 // => 1
And we are back to regular numbers ready to move on ;)
JavaScript allows us to keep executing numerical expressions at any cost without throwing errors, as in the above example. That means it also allows us to compose functions. Which is exactly what a monad is about - it is a rule to compose functions satisfying the axioms as defined at the beginning of this answer.
But is the rule of composing function, arising from JavaScript's implementation for dealing with numerical errors, a monad?
To answer this question, all you need is to check the axioms (left as exercise as not part of the question here;).
Can throwing an exception be used to construct a monad?
Indeed. A more useful monad would instead be the rule prescribing that if f throws an exception for some x, so does its composition with any g. Plus, make the exception E globally unique, with only one possible value ever (a terminal object in category theory). Now the two axioms are instantly checkable and we get a very useful monad. And the result is what is well known as the maybe monad.
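A small Haskell sketch of exactly that rule for the Maybe type (unit and compose are names chosen for this illustration; compose is what Control.Monad calls (>=>)):

-- unit: take a value to the pure computation that simply returns it
unit :: a -> Maybe a
unit = Just

-- the monad's rule for composing f : X -> T(Y) with g : Y -> T(Z)
compose :: (a -> Maybe b) -> (b -> Maybe c) -> (a -> Maybe c)
compose f g x = case f x of
  Nothing -> Nothing   -- f "threw" the unique exception: so does the composition
  Just y  -> g y       -- otherwise continue with g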
A monad is a data type that encapsulates a value, and to which, essentially, two operations can be applied:
return x creates a value of the monad type that encapsulates x
m >>= f (read it as "the bind operator") applies the function f to the value in the monad m
That's what a monad is. There are a few more technicalities, but basically those two operations define a monad. The real question is, "What a monad does?", and that depends on the monad — lists are monads, Maybes are monads, IO operations are monads. All that it means when we say those things are monads is that they have the monad interface of return and >>=.
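Written out in Haskell, that interface is just the following (simplified; in modern GHC the class additionally has Applicative as a superclass):

class Monad m where
  return :: a -> m a                   -- wrap a plain value
  (>>=)  :: m a -> (a -> m b) -> m b   -- feed the wrapped value to the next step

-- Lists, Maybe, and IO are all instances of this one interface.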
From wikipedia:
In functional programming, a monad is a kind of abstract data type used to represent computations (instead of data in the domain model). Monads allow the programmer to chain actions together to build a pipeline, in which each action is decorated with additional processing rules provided by the monad. Programs written in functional style can make use of monads to structure procedures that include sequenced operations, or to define arbitrary control flows (like handling concurrency, continuations, or exceptions).

Formally, a monad is constructed by defining two operations (bind and return) and a type constructor M that must fulfill several properties to allow the correct composition of monadic functions (i.e. functions that use values from the monad as their arguments). The return operation takes a value from a plain type and puts it into a monadic container of type M. The bind operation performs the reverse process, extracting the original value from the container and passing it to the associated next function in the pipeline.

A programmer will compose monadic functions to define a data-processing pipeline. The monad acts as a framework, as it's a reusable behavior that decides the order in which the specific monadic functions in the pipeline are called, and manages all the undercover work required by the computation. The bind and return operators interleaved in the pipeline will be executed after each monadic function returns control, and will take care of the particular aspects handled by the monad.
I believe it explains it very well.
I'll try to make the shortest definition I can manage using OOP terms:
A generic class CMonadic<T> is a monad if it defines at least the following methods:
class CMonadic<T> {
    static CMonadic<T> create(T t);                         // a.k.a. "return" in Haskell
    public CMonadic<U> flatMap<U>(Func<T, CMonadic<U>> f);  // a.k.a. "bind" in Haskell
}
and if the following laws apply for all types T and their possible values t
left identity:
CMonadic<T>.create(t).flatMap(f) == f(t)
right identity
instance.flatMap(CMonadic<T>.create) == instance
associativity:
instance.flatMap(f).flatMap(g) == instance.flatMap(t => f(t).flatMap(g))
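For readers who prefer to see the laws in code, they can be sanity-checked directly against Haskell's built-in Maybe monad (f and g below are arbitrary example functions, invented for the check):

f :: Int -> Maybe Int
f x = Just (x + 1)

g :: Int -> Maybe Int
g x = if x > 0 then Just (x * 2) else Nothing

lawsHold :: Bool
lawsHold = and
  [ (return 3 >>= f) == f 3                                    -- left identity
  , (Just 3 >>= return) == Just 3                              -- right identity
  , ((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g))   -- associativity
  ]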
Examples:
A List monad may have:
List<int>.create(1) --> [1]
And flatMap on the list [1,2,3] could work like so:
intList.flatMap(x => List<int>.makeFromTwoItems(x, x*10)) --> [1,10,2,20,3,30]
Iterables and Observables can also be made monadic, as well as Promises and Tasks.
Commentary:
Monads are not that complicated. The flatMap function is a lot like the more commonly encountered map. It receives a function argument (also known as a delegate), which it may call (immediately or later, zero or more times) with a value coming from the generic class. It expects that passed function to also wrap its return value in the same kind of generic class. To help with that, it provides create, a constructor that can create an instance of that generic class from a value. The result of flatMap is also a generic class of the same type, often packing the values that were contained in the return results of one or more applications of the passed function to the previously contained values. This allows you to chain flatMap as much as you want:
intList.flatMap(x => List<int>.makeFromTwoItems(x, x*10))
.flatMap(x => x % 3 == 0
? List<string>.create("x = " + x.toString())
: List<string>.empty())
It just so happens that this kind of generic class is useful as a base model for a huge number of things. This (together with the category theory jargonisms) is the reason why Monads seem so hard to understand or explain. They're a very abstract thing and only become obviously useful once they're specialized.
For example, you can model exceptions using monadic containers. Each container will either contain the result of the operation or the error that has occurred. The next function (delegate) in the chain of flatMap callbacks will only be called if the previous one packed a value in the container. Otherwise, if an error was packed, the error will continue to propagate through the chained containers until a container is found that has an error-handler function attached via a method called .orElse() (such a method would be an allowed extension).
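A rough Haskell analogue of that error-carrying container is Either, where the bind operator only calls the next function on a successful value and lets an error propagate untouched (orElse below is an invented helper, not a standard function):

parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] | n >= 0 -> Right n
  _                  -> Left ("bad age: " ++ s)

checkAdult :: Int -> Either String Int
checkAdult n = if n >= 18 then Right n else Left "too young"

-- an "orElse"-style handler attached at the end of the chain
orElse :: Either e a -> (e -> a) -> a
orElse (Right a) _ = a
orElse (Left e)  h = h e

-- (parseAge "30" >>= checkAdult) `orElse` const 0  == 30
-- (parseAge "x"  >>= checkAdult) `orElse` const 0  == 0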
Notes: Functional languages allow you to write functions that can operate on any kind of monadic generic class. For this to work, one would have to write a generic interface for monads. I don't know if it's possible to write such an interface in C#, but as far as I know it isn't:
interface IMonad<T> {
    static IMonad<T> create(T t);                       // not allowed
    public IMonad<U> flatMap<U>(Func<T, IMonad<U>> f);  // not specific enough,
    // because the function must return the same kind of monad, not just any monad
}
Whether a monad has a "natural" interpretation in OO depends on the monad. In a language like Java, you can translate the maybe monad to the language of checking for null pointers, so that computations that fail (i.e., produce Nothing in Haskell) emit null pointers as results. You can translate the state monad into the language generated by creating a mutable variable and methods to change its state.
A monad is a monoid in the category of endofunctors.
The information that sentence puts together is very deep. And you work in a monad with any imperative language. A monad is a "sequenced" domain specific language. It satisfies certain interesting properties, which taken together make a monad a mathematical model of "imperative programming". Haskell makes it easy to define small (or large) imperative languages, which can be combined in a variety of ways.
As an OO programmer, you use your language's class hierarchy to organize the kinds of functions or procedures that can be called in a context, what you call an object. A monad is also an abstraction on this idea, insofar as different monads can be combined in arbitrary ways, effectively "importing" all of the sub-monad's methods into the scope.
Architecturally, one then uses type signatures to explicitly express which contexts may be used for computing a value.
One can use monad transformers for this purpose, and there is a high quality collection of all of the "standard" monads:
Lists (non-deterministic computations, by treating a list as a domain)
Maybe (computations that can fail, but for which reporting is unimportant)
Error (computations that can fail and require exception handling)
Reader (computations that can be represented by compositions of plain Haskell functions)
Writer (computations with sequential "rendering"/"logging" (to strings, html, etc.))
Cont (continuations)
IO (computations that depend on the underlying computer system)
State (computations whose context contains a modifiable value)
with corresponding monad transformers and type classes. Type classes allow a complementary approach to combining monads by unifying their interfaces, so that concrete monads can implement a standard interface for the monad "kind". For example, the module Control.Monad.State contains a class MonadState s m, and (State s) is an instance of the form
instance MonadState s (State s) where
    put = ...
    get = ...
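As a small usage sketch (my own, not taken from the module), the MonadState interface lets any step in a pipeline read and replace the threaded value:

import Control.Monad.State

-- increment a counter held in the state and return its old value
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

-- runState tick 5 == (5, 6)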
The long story is that a monad is a functor which attaches "context" to a value, which has a way to inject a value into the monad, and which has a way to evaluate values with respect to the context attached to it, at least in a restricted way.
So:
return :: a -> m a
is a function which injects a value of type a into a monad "action" of type m a.
(>>=) :: m a -> (a -> m b) -> m b
is a function which takes a monad action, evaluates its result, and applies a function to the result. The neat thing about (>>=) is that the result is in the same monad. In other words, in m >>= f, (>>=) pulls the result out of m, and binds it to f, so that the result is in the monad. (Alternatively, we can say that (>>=) pulls f into m and applies it to the result.) As a consequence, if we have f :: a -> m b, and g :: b -> m c, we can "sequence" actions:
m >>= f >>= g
Or, using "do notation"
do x <- m
   y <- f x
   g y
The type for (>>) might be illuminating. It is
(>>) :: m a -> m b -> m b
It corresponds to the (;) operator in procedural languages like C. It allows do notation like:
m = do x <- someQuery
       someAction x
       theNextAction
       andSoOn
In mathematical and philosophical logic, we have frames and models, which are "naturally" modelled with monadism. An interpretation is a function which looks into the model's domain and computes the truth value (or generalizations) of a proposition (or formula, under generalizations). In a modal logic for necessity, we might say that a proposition is necessary if it is true in "every possible world" -- if it is true with respect to every admissible domain. This means that a model in a language for a proposition can be reified as a model whose domain consists of a collection of distinct models (one corresponding to each possible world). Every monad has a method named "join" which flattens layers, which implies that every monad action whose result is a monad action can be embedded in the monad.
join :: m (m a) -> m a
More importantly, it means that the monad is closed under the "layer stacking" operation. This is how monad transformers work: they combine monads by providing "join-like" methods for types like
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
so that we can transform an action in (MaybeT m) into an action in m, effectively collapsing layers. In this case, runMaybeT :: MaybeT m a -> m (Maybe a) is our join-like method. (MaybeT m) is a monad, and MaybeT :: m (Maybe a) -> MaybeT m a is effectively a constructor for a new type of monad action in m.
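As an illustrative sketch of using that transformer (readNonEmpty is an invented name):

import Control.Monad.Trans.Maybe (MaybeT(..))
import Control.Monad.Trans.Class (lift)

-- read a line in IO, failing (Nothing) if it is empty
readNonEmpty :: MaybeT IO String
readNonEmpty = do
  s <- lift getLine
  if null s then MaybeT (return Nothing) else return s

-- runMaybeT readNonEmpty :: IO (Maybe String)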
A free monad for a functor is the monad generated by stacking f, with the implication that every sequence of constructors for f is an element of the free monad (or, more exactly, something with the same shape as the tree of sequences of constructors for f). Free monads are a useful technique for constructing flexible monads with a minimal amount of boiler-plate. In a Haskell program, I might use free monads to define simple monads for "high level system programming" to help maintain type safety (I'm just using types and their declarations. Implementations are straight-forward with the use of combinators):
data RandomF r a = GetRandom (r -> a) deriving Functor
type Random r a = Free (RandomF r) a
type RandomT m a = Random (m a) (m a) -- model randomness in a monad by computing random monad elements.
getRandom :: Random r r
runRandomIO :: Random r a -> IO a     -- use some kind of IO-based backend to run
runRandomIO' :: Random r a -> IO a    -- use some other kind of IO-based backend
runRandomList :: Random r a -> [a]    -- some kind of list-based backend (for pseudo-randoms)
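Since the declarations above are deliberately left unimplemented, here is one way they might be filled in with the free package's Control.Monad.Free; the list-based backend below takes an explicit pool of values, which is an assumption on my part (the original leaves the backend unspecified):

import Control.Monad.Free (Free(..), liftF)

data RandomF r a = GetRandom (r -> a)

instance Functor (RandomF r) where
  fmap f (GetRandom k) = GetRandom (f . k)

type Random r a = Free (RandomF r) a

getRandom :: Random r r
getRandom = liftF (GetRandom id)

-- list-based backend: explore every value in the supplied pool
runRandomList :: [r] -> Random r a -> [a]
runRandomList _    (Pure a)             = [a]
runRandomList pool (Free (GetRandom k)) = concatMap (runRandomList pool . k) pool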
Monadism is the underlying architecture for what you might call the "interpreter" or "command" pattern, abstracted to its clearest form, since every monadic computation must be "run", at least trivially. (The runtime system runs the IO monad for us, and is the entry point to any Haskell program. IO "drives" the rest of the computations, by running IO actions in order).
The type for join is also where we get the statement that a monad is a monoid in the category of endofunctors. Join is typically more important for theoretical purposes, by virtue of its type. But understanding the type means understanding monads. Join, and the join-like types of monad transformers, are effectively compositions of endofunctors, in the sense of function composition. To put it in a Haskell-like pseudo-language,
Foo :: m (m a) <-> (m . m) a
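Two concrete joins make the point quickly:

import Control.Monad (join)

listJoin :: [Int]
listJoin = join [[1, 2], [3]]        -- [1,2,3]

maybeJoin :: Maybe Int
maybeJoin = join (Just (Just 5))     -- Just 5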
Quick explanation:
Monads (in functional programming) are functions with context-dependent behaviour.
The context is passed as argument, being returned from a previous call of that monad. It makes it look like the same argument produces a different return value on subsequent calls.
Equivalent:
Monads are functions whose actual arguments depend on past calls of a call chain.
Typical example: Stateful functions.
FAQ
Wait, what do you mean with "behaviour"?
Behaviour means the return value and side effects that you get for specific inputs.
But what's so special about them?
In terms of procedural semantics: nothing. But they are modelled solely using pure functions, because pure functional programming languages like Haskell only use pure functions, which are not stateful by themselves.
But then, where comes the state from?
The statefulness comes from the sequential nature of function-call execution. It allows nested functions to drag certain arguments along through multiple function calls. This simulates state. The monad is just a software pattern to hide these additional arguments behind the return values of shiny functions, often called return and bind.
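A hand-rolled sketch of that dragging-around, with invented names: a counter threaded explicitly through every call, which is exactly the plumbing a State monad would hide behind return and bind:

-- append the current counter to a name and hand back the bumped counter
label :: Int -> String -> (String, Int)
label counter name = (name ++ show counter, counter + 1)

demo :: ([String], Int)
demo =
  let (a, s1) = label 0  "x"
      (b, s2) = label s1 "y"
      (c, s3) = label s2 "z"
  in ([a, b, c], s3)
-- demo == (["x0","y1","z2"], 3)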
Why is input/output a monad in Haskell?
Because displayed text is state in your operating system. If you write the same text multiple times, the state of the operating system will not be the same after each call; instead, your output device will show the text output that many times. For proper interaction with the OS, Haskell needs to model the OS state for itself, as a monad.
Technically, you don't need the monad definition. Purely functional languages can use the idea of "uniqueness types" for the same purpose.
Do monads exist in non-functional languages?
Yes, basically an interpreter is a complex monad, interpreting each instruction and mapping it to a new state in the OS.
Long explanation:
A monad (in functional programming) is a pure-functional software pattern. A monad is an automatically maintained environment (an object) in which a chain of pure function calls can be executed. The function results modify or interact with that environment.
In other words, a monad is a "function-repeater" or "function-chainer" which chains and evaluates argument values within an automatically maintained environment. Often the chained argument values are "update functions", but they could actually be any objects (with methods, or container elements which make up a container). The monad is the "glue code" executed before and after each evaluated argument. This glue-code function, "bind", is supposed to integrate each argument's environment output into the original environment.
Thus, the monad concatenates the results of all arguments in a way that is implementation-specific to a particular monad. Whether or how control and data flows between the arguments is also implementation-specific.
This intertwined execution makes it possible to model complete imperative control flow (as in a GOTO program) or parallel execution with only pure functions, as well as side effects, temporary state, and exception handling between the function calls, even though the applied functions know nothing about the external environment.
EDIT: Note that monads can evaluate the function chain in any kind of control-flow graph, even in a non-deterministic, NFA-like manner, because the remaining chain is evaluated lazily and can be evaluated multiple times at each point of the chain, which allows for backtracking in the chain.
The reason to use the monad concept is that the pure-functional paradigm needs a tool to simulate, in a pure way, behaviour that is typically modelled impurely, not because monads do something special.
Monads for OOP people
In OOP a monad is a typical object with
a constructor often called return that turns a value into an initial instance of the environment
a chainable argument-application method, often called bind, which updates the object's state with the environment returned by a function passed as argument.
Some people also mention a third function, join, which is part of bind. Because the "argument functions" are evaluated within the environment, their result is nested in the environment itself. join is the last step: it "un-nests" the result (flattens the environment) to replace the environment with a new one.
A monad can implement the Builder pattern but allows for much more general use.
Example (Python)
I think the most intuitive example for monads are relational operators from Python:
result = 0 <= x == y < 3
You see that it is a monad because it has to carry along some boolean state which is not known by individual relational operator calls.
If you think about how to implement this at a low level, without short-circuiting behaviour, then you will get exactly a monad implementation:
# result = ret(0)
result = [0, True]
# result = result.bind(lambda v: (x, v <= x))
result[1] = result[1] and result[0] <= x
result[0] = x
# result = result.bind(lambda v: (y, v == y))
result[1] = result[1] and result[0] == y
result[0] = y
# result = result.bind(lambda v: (3, v < 3))
result[1] = result[1] and result[0] < 3
result[0] = 3
result = result[1]  # not an explicit part of the monad
A real monad would compute every argument at most once.
Now think away the "result" variable and you get this chain:
ret(0).bind(lambda v: (x, v <= x)).bind(lambda v: (y, v == y)).bind(lambda v: (3, v < 3))
Monads in typical usage are the functional equivalent of procedural programming's exception handling mechanisms.
In modern procedural languages, you put an exception handler around a sequence of statements, any of which may throw an exception. If any of the statements throws an exception, normal execution of the sequence of statements halts and transfers to an exception handler.
Functional programming languages, however, philosophically avoid exception handling features due to the "goto" like nature of them. The functional programming perspective is that functions should not have "side-effects" like exceptions that disrupt program flow.
In reality, side-effects cannot be ruled out in the real world due primarily to I/O. Monads in functional programming are used to handle this by taking a set of chained function calls (any of which might produce an unexpected result) and turning any unexpected result into encapsulated data that can still flow safely through the remaining function calls.
The flow of control is preserved but the unexpected event is safely encapsulated and handled.
In OO terms, a monad is a fluent container.
The minimum requirement is a definition of a class Something<A> that supports a constructor Something(A a) and at least one method <B> Something<B> flatMap(Function<A, Something<B>> f)
Arguably, it also counts if your monad class has any method with a signature like Something<B> work() that preserves the class's rules; in effect, the compiler bakes in flatMap at compile time.
Why is a monad useful? Because it is a container that allows chain-able operations that preserve semantics. For example, Optional<?> preserves the semantics of isPresent for Optional<String>, Optional<Integer>, Optional<MyClass>, etc.
As a rough example,
Something<Integer> i = new Something("a")
.flatMap(doOneThing)
.flatMap(doAnother)
.flatMap(toInt)
Note we start with a string and end with an integer. Pretty cool.
In OO, it might take a little hand-waving, but any method on Something that returns another subclass of Something meets the criterion of a container function that returns a container of the original type.
That's how you preserve semantics -- i.e. the container's meaning and operations don't change, they just wrap and enhance the object inside the container.
I am sharing my understanding of monads, which may not be theoretically perfect. Monads are about context propagation: you define some context for some data (or data type(s)), and then define how that context will be carried with the data throughout its processing pipeline. Defining context propagation is mostly about defining how to merge multiple contexts (of the same type). Using monads also means ensuring these contexts are not accidentally stripped off from the data. On the other hand, other context-less data can be brought into a new or existing context. This simple concept can then be used to ensure compile-time correctness of a program.
A monad is an array of functions
(Pst: an array of functions is just a computation).
Actually, instead of a true array (one function in each array cell) you have those functions chained by another function, >>=. The >>= operator allows you to adapt the result of function i to feed function i+1, to perform calculations between them, or even not to call function i+1 at all.
The types used here are "types with context". This is, a value with a "tag".
The functions being chained must take a "naked value" and return a tagged result.
One of the duties of >>= is to extract a naked value out of its context.
There is also the function "return", which takes a naked value and wraps it with a tag.
An example with Maybe. Let's use it to store a simple integer on which make calculations.
-- a * b
multiply :: Int -> Int -> Maybe Int
multiply a b = return (a*b)
-- divideBy 5 100 = 100 / 5
divideBy :: Int -> Int -> Maybe Int
divideBy 0 _ = Nothing -- dividing by 0 gives NOTHING
divideBy denom num = return (quot num denom) -- quotient of num / denom
-- tagged value
val1 = Just 160
-- array of functions fed with val1
array1 = val1 >>= divideBy 2 >>= multiply 3 >>= divideBy 4 >>= multiply 3
-- array of functions created with do notation
-- equivalent to array1, but the value to feed is passed as an argument
array2 :: Int -> Maybe Int
array2 n = do
    v <- divideBy 2 n
    v <- multiply 3 v
    v <- divideBy 4 v
    v <- multiply 3 v
    return v
-- array of functions,
-- the first >>= performs 160 / 0, returning Nothing
-- the second >>= has to perform Nothing >>= multiply 3 ....
-- and simply returns Nothing without calling multiply 3 ....
array3 = val1 >>= divideBy 0 >>= multiply 3 >>= divideBy 4 >>= multiply 3
main = do
    print array1
    print (array2 160)
    print array3
Just to show that monads are arrays of functions with helper operations, consider the equivalent of the above example, just using a real array of functions:
type MyMonad = [Int -> Maybe Int] -- my monad as a real array of functions
myArray1 = [divideBy 2, multiply 3, divideBy 4, multiply 3]
-- function for the machinery of executing each function i with the result provided by function i-1
runMyMonad :: Maybe Int -> MyMonad -> Maybe Int
runMyMonad val [] = val
runMyMonad Nothing _ = Nothing
runMyMonad (Just val) (f:fs) = runMyMonad (f val) fs
And it would be used like this:
print (runMyMonad (Just 160) myArray1)
If you've ever used Powershell, the patterns Eric described should sound familiar. Powershell cmdlets are monads; functional composition is represented by a pipeline.
Jeffrey Snover's interview with Erik Meijer goes into more detail.
From a practical point of view (summarizing what has been said in many previous answers and related articles), it seems to me that one of the fundamental "purposes" (or uses) of the monad is to leverage the dependencies implicit in recursive method invocations, a.k.a. function composition (i.e. when f1 calls f2 calls f3, f3 needs to be evaluated before f2, and f2 before f1), in order to represent sequential composition in a natural way, especially under a lazy evaluation model. That is, sequential composition as a plain sequence, e.g. "f3(); f2(); f1();" in C; the trick is especially obvious if you think of a case where f3, f2 and f1 actually return nothing (their chaining as f1(f2(f3())) is artificial, purely intended to create a sequence).
This is especially relevant when side-effects are involved, i.e. when some state is altered (if f1, f2, f3 had no side-effects, it wouldn't matter in what order they're evaluated; which is a great property of pure functional languages, to be able to parallelize those computations for example). The more pure functions, the better.
I think from that narrow point of view, monads could be seen as syntactic sugar for languages that favor lazy evaluation (that evaluate things only when absolutely necessary, following an order that does not rely on the presentation of the code), and that have no other means of representing sequential composition. The net result is that sections of code that are "impure" (i.e. that do have side-effects) can be presented naturally, in an imperative manner, yet are cleanly separated from pure functions (with no side-effects), which can be evaluated lazily.
This is only one aspect though, as warned here.
A simple explanation of monads with a Marvel case study is here.
Monads are abstractions used to sequence dependent functions that are effectful. Effectful here means they return a type of the form F[A], for example Option[A], where Option is F, the type constructor. Let's see this in two simple steps.
Function composition is transitive, so to go from A to C I can compose A => B and B => C.
A => C = A => B andThen B => C
However, if a function returns an effect type like Option[A], i.e. A => F[B], the composition doesn't work: to get to B we need an A => B, but we have an A => F[B].
We need a special operator, "bind" that knows how to fuse these functions that return F[A].
A => F[C] = A => F[B] bind B => F[C]
The "bind" function is defined for the specific F.
There is also "return", of type A => F[A] for any A, defined for that specific F also. To be a Monad, F must have these two functions defined for it.
Thus we can construct an effectful function A => F[B] from any pure function A => B,
A => F[B] = A => B andThen return
but a given F can also define its own opaque "built-in" special functions of such types that a user can't define themselves (in a pure language), like
"random" (Range => Random[Int])
"print" (String => IO[ () ])
"try ... catch", etc.
The simplest explanation I can think of is that monads are a way of composing functions with embellished results (a.k.a. Kleisli composition). An "embellished" function has the signature a -> (b, smth) where a and b are types (think Int, Bool) that might be different from each other, but not necessarily, and smth is the "context" or the "embellishment".
This type of function can also be written a -> m b, where m stands for the "embellishment" smth. So these are functions that return values in a context (think of functions that log their actions, where smth is the log message, or functions that perform input/output and whose results depend on the result of the IO action).
A monad is an interface ("typeclass") that makes the implementer tell it how to compose such functions. The implementer needs to define a composition function (a -> m b) -> (b -> m c) -> (a -> m c) for any type m that wants to implement the interface (this is the Kleisli composition).
So, say we have a tuple type (Int, String) representing results of computations on Ints that also log their actions, with (_, String) being the "embellishment" - the log of the action - and two functions increment :: Int -> (Int, String) and twoTimes :: Int -> (Int, String). We want to obtain a function incrementThenDouble :: Int -> (Int, String) which is the composition of the two functions and also takes the logs into account.
In the given example, applying the composition to the integer value 2, incrementThenDouble 2 (which is equal to twoTimes (increment 2)), would return (6, " Adding 1. Doubling 3."), with the intermediate results increment 2 equal to (3, " Adding 1.") and twoTimes 3 equal to (6, " Doubling 3.").
From this Kleisli composition function one can derive the usual monadic functions.
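For concreteness, here is a hand-rolled sketch of that Kleisli composition for the (Int, String) example; the function names match the text, while composeLog and the exact log strings are my own assumptions:

increment :: Int -> (Int, String)
increment n = (n + 1, " Adding 1.")

twoTimes :: Int -> (Int, String)
twoTimes n = (n * 2, " Doubling " ++ show n ++ ".")

-- Kleisli composition for the (_, String) embellishment: run both steps, join the logs
composeLog :: (a -> (b, String)) -> (b -> (c, String)) -> (a -> (c, String))
composeLog f g = \a -> let (b, s1) = f a
                           (c, s2) = g b
                       in (c, s1 ++ s2)

incrementThenDouble :: Int -> (Int, String)
incrementThenDouble = composeLog increment twoTimes
-- incrementThenDouble 2 == (6, " Adding 1. Doubling 3.")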
See my answer to "What is a monad?"
It begins with a motivating example, works through the example, derives an example of a monad, and formally defines "monad".
It assumes no knowledge of functional programming and it uses pseudocode with function(argument) := expression syntax with the simplest possible expressions.
This C++ program is an implementation of the pseudocode monad. (For reference: M is the type constructor, feed is the "bind" operation, and wrap is the "return" operation.)
#include <iostream>
#include <string>
template <class A> class M
{
public:
    A val;
    std::string messages;
};

template <class A, class B>
M<B> feed(M<B> (*f)(A), M<A> x)
{
    M<B> m = f(x.val);
    m.messages = x.messages + m.messages;
    return m;
}

template <class A>
M<A> wrap(A x)
{
    M<A> m;
    m.val = x;
    m.messages = "";
    return m;
}

class T {};
class U {};
class V {};

M<U> g(V x)
{
    M<U> m;
    m.messages = "called g.\n";
    return m;
}

M<T> f(U x)
{
    M<T> m;
    m.messages = "called f.\n";
    return m;
}

int main()
{
    V x;
    M<T> m = feed(f, feed(g, wrap(x)));
    std::cout << m.messages;
}
optional/maybe is the most fundamental monadic type
Monads are about function composition. If you have functions f: optional<A> -> optional<B>, g: optional<B> -> optional<C>, and h: optional<C> -> optional<D>, then you can compose them:
optional<A> opt;
h(g(f(opt)));
The benefit of monadic types is that you can instead compose f: A -> optional<B>, g: B -> optional<C>, and h: C -> optional<D>. They can do this because the monadic interface provides the bind operator
auto optional<A>::bind(A->optional<B>)->optional<B>
and the composition could be written
optional<A> opt;
opt.bind(f)
   .bind(g)
   .bind(h);
The benefit of monads is that we no longer have to handle the logic of if(!opt) return nullopt; in each of f,g,h because this logic is moved into the bind operator.
ranges/lists/iterables are the second most fundamental monad type.
The monadic feature of ranges is that we can transform and then flatten. E.g. starting with a sentence encoded as a range of integers [36, 98],
we can transform it to [['m','a','c','h','i','n','e',' '], ['l','e','a','r','n','i','n','g', '.']]
and then flatten it to ['m','a','c','h','i','n','e', ' ', 'l','e','a','r','n','i','n','g','.']
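(As an aside, that transform-then-flatten step is exactly the list monad's >>= in Haskell; a tiny sketch, where table is an invented stand-in for the lookup table:)

-- invented two-entry table standing in for the answer's lookup_table
table :: Int -> String
table 36 = "machine "
table _  = "learning."

decode :: [Int] -> String
decode keys = keys >>= table   -- transform each key, then flatten
-- decode [36, 98] == "machine learning."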
Instead of writing this code
vector<string> lookup_table;

auto stringify(vector<unsigned> rng) -> vector<char>
{
    vector<char> result;
    for(unsigned key : rng)
    {
        for(char ch : lookup_table[key])
            result.push_back(ch);
        result.push_back(' ');
    }
    result.push_back('.');
    return result;
}
we could write this
auto f(unsigned key) -> vector<char>
{
    vector<char> result;
    for(char ch : lookup_table[key])
        result.push_back(ch);
    return result;
}
auto stringify(vector<unsigned> rng) -> vector<char>
{
return rng.bind(f);
}
The monad pushes the for loop for(unsigned key : rng) up into the bind function, allowing for code that is easier to reason about, theoretically. Pythagorean triples can be generated in range-v3 with nested binds (rather than chained binds as we saw with optional)
auto triples =
for_each(ints(1), [](int z) {
return for_each(ints(1, z), [=](int x) {
return for_each(ints(x, z), [=](int y) {
return yield_if(x*x + y*y == z*z, std::make_tuple(x, y, z));
});
});
});