What is a monad?

Posted 2018-12-31 20:01

Question:

Having briefly looked at Haskell recently, what would be a brief, succinct, practical explanation as to what a monad essentially is?

I have found most explanations I've come across to be fairly inaccessible and lacking in practical detail.

Answer 1:

First: The term monad is a bit vacuous if you are not a mathematician. An alternative term is computation builder which is a bit more descriptive of what they are actually useful for.

You ask for practical examples:

Example 1: List comprehension:

[x*2 | x<-[1..10], odd x]

This expression returns the doubles of all odd numbers in the range from 1 to 10. Very useful!

It turns out this is really just syntactic sugar for some operations within the List monad. The same list comprehension can be written as:

do
   x <- [1..10]
   guard (odd x)
   return (x * 2)

Or even:

[1..10] >>= (\x -> guard (odd x) >> return (x*2))

Example 2: Input/Output:

do
   putStrLn \"What is your name?\"
   name <- getLine
   putStrLn (\"Welcome, \" ++ name ++ \"!\")

Both examples use monads, AKA computation builders. The common theme is that the monad chains operations in some specific, useful way. In the list comprehension, the operations are chained such that if an operation returns a list, then the following operations are performed on every item in the list. The IO monad on the other hand performs the operations sequentially, but passes a "hidden variable" along, which represents "the state of the world", which allows us to write I/O code in a pure functional manner.

It turns out the pattern of chaining operations is quite useful and is used for lots of different things in Haskell.

Another example is exceptions: Using the Error monad, operations are chained such that they are performed sequentially, except if an error is thrown, in which case the rest of the chain is abandoned.
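
For instance, here is a minimal sketch of that behaviour using Either (the usual modern stand-in for the Error monad); the names safeDiv and example are just for illustration:

safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

example :: Either String Int
example = do
    a <- safeDiv 10 2   -- Right 5
    b <- safeDiv a 0    -- Left "division by zero"; the rest of the chain is skipped
    return (a + b)      -- never reached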

Both the list-comprehension syntax and the do-notation are syntactic sugar for chaining operations using the >>= operator. A monad is basically just a type that supports the >>= operator.

Example 3: A parser

This is a very simple parser which parses either a quoted string or a number:

parseExpr = parseString <|> parseNumber

parseString = do
        char '"'
        x <- many (noneOf "\"")
        char '"'
        return (StringValue x)

parseNumber = do
    num <- many1 digit
    return (NumberValue (read num))

The operations char, digit, etc. are pretty simple. They either match or don't match. The magic is the monad which manages the control flow: The operations are performed sequentially until a match fails, in which case the monad backtracks to the latest <|> and tries the next option. Again, a way of chaining operations with some additional, useful semantics.

Example 4: Asynchronous programming

The above examples are in Haskell, but it turns out F# also supports monads. This example is stolen from Don Syme:

let AsyncHttp(url:string) =
    async {  let req = WebRequest.Create(url)
             let! rsp = req.GetResponseAsync()
             use stream = rsp.GetResponseStream()
             use reader = new System.IO.StreamReader(stream)
             return reader.ReadToEnd() }

This method fetches a web page. The punch line is the use of GetResponseAsync - it actually waits for the response on a separate thread, while the main thread returns from the function. The last three lines are executed on the spawned thread when the response has been received.

In most other languages you would have to explicitly create a separate function for the lines that handle the response. The async monad is able to "split" the block on its own and postpone the execution of the latter half. (The async {} syntax indicates that the control flow in the block is defined by the async monad.)

How they work

So how can a monad do all these fancy control-flow things? What actually happens in a do-block (or a computation expression as they are called in F#), is that every operation (basically every line) is wrapped in a separate anonymous function. These functions are then combined using the bind operator (spelled >>= in Haskell). Since the bind operation combines functions, it can execute them as it sees fit: sequentially, multiple times, in reverse, discard some, execute some on a separate thread when it feels like it and so on.

As an example, this is the expanded version of the IO-code from example 2:

putStrLn \"What is your name?\"
>>= (\\_ -> getLine)
>>= (\\name -> putStrLn (\"Welcome, \" ++ name ++ \"!\"))

This is uglier, but it's also more obvious what is actually going on. The >>= operator is the magic ingredient: It takes a value (on the left side) and combines it with a function (on the right side), to produce a new value. This new value is then taken by the next >>= operator and again combined with a function to produce a new value. >>= can be viewed as a mini-evaluator.

Note that >>= is overloaded for different types, so every monad has its own implementation of >>=. (All the operations in the chain have to be in the same monad, though, otherwise the >>= operator won't work.)

The simplest possible implementation of >>= just takes the value on the left, applies the function on the right to it, and returns the result, but as said before, what makes the whole pattern useful is when there is something extra going on in the monad's implementation of >>=.
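
As a sketch, that "do nothing extra" implementation is essentially the Identity monad, re-defined here for illustration (modern GHC also requires Functor and Applicative instances):

newtype Identity a = Identity a

instance Functor Identity where
    fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
    pure = Identity
    Identity f <*> Identity x = Identity (f x)

instance Monad Identity where
    Identity x >>= f = f x   -- just apply the function, nothing extra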

There is some additional cleverness in how the values are passed from one operation to the next, but this requires a deeper explanation of the Haskell type system.

Summing up

In Haskell-terms a monad is a parameterized type which is an instance of the Monad type class, which defines >>= along with a few other operators. In layman's terms, a monad is just a type for which the >>= operation is defined.

In itself >>= is just a cumbersome way of chaining functions, but with the presence of the do-notation which hides the "plumbing", the monadic operations turn out to be a very nice and useful abstraction, useful in many places in the language, and useful for creating your own mini-languages within the language.

Why are monads hard?

For many Haskell-learners, monads are an obstacle they hit like a brick wall. It's not that monads themselves are complex, but that the implementation relies on many other advanced Haskell features like parameterized types, type classes, and so on. The problem is that Haskell I/O is based on monads, and I/O is probably one of the first things you want to understand when learning a new language - after all, it's not much fun to create programs which don't produce any output. I have no immediate solution for this chicken-and-egg problem, except treating I/O like "magic happens here" until you have enough experience with other parts of the language. Sorry.

Excellent blog on monads: http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html



Answer 2:

Explaining \"what is a monad\" is a bit like saying \"what is a number?\" We use numbers all the time. But imagine you met someone who didn\'t know anything about numbers. How the heck would you explain what numbers are? And how would you even begin to describe why that might be useful?

What is a monad? The short answer: It\'s a specific way of chaining operations together.

In essence, you\'re writing execution steps and linking them together with the \"bind function\". (In Haskell, it\'s named >>=.) You can write the calls to the bind operator yourself, or you can use syntax sugar which makes the compiler insert those function calls for you. But either way, each step is separated by a call to this bind function.

So the bind function is like a semicolon; it separates the steps in a process. The bind function\'s job is to take the output from the previous step, and feed it into the next step.

That doesn\'t sound too hard, right? But there is more than one kind of monad. Why? How?

Well, the bind function can just take the result from one step, and feed it to the next step. But if that\'s \"all\" the monad does... that actually isn\'t very useful. And that\'s important to understand: Every useful monad does something else in addition to just being a monad. Every useful monad has a \"special power\", which makes it unique.

(A monad that does nothing special is called the \"identity monad\". Rather like the identity function, this sounds like an utterly pointless thing, yet turns out not to be... But that\'s another story™.)

Basically, each monad has its own implementation of the bind function. And you can write a bind function such that it does hoopy things between execution steps. For example:

  • If each step returns a success/failure indicator, you can have bind execute the next step only if the previous one succeeded. In this way, a failing step aborts the whole sequence "automatically", without any conditional testing from you. (The Failure Monad; see the sketch after this list.)

  • Extending this idea, you can implement "exceptions". (The Error Monad or Exception Monad.) Because you're defining them yourself rather than it being a language feature, you can define how they work. (E.g., maybe you want to ignore the first two exceptions and only abort when a third exception is thrown.)

  • You can make each step return multiple results, and have the bind function loop over them, feeding each one into the next step for you. In this way, you don't have to keep writing loops all over the place when dealing with multiple results. The bind function "automatically" does all that for you. (The List Monad.)

  • As well as passing a "result" from one step to another, you can have the bind function pass extra data around as well. This data now doesn't show up in your source code, but you can still access it from anywhere, without having to manually pass it to every function. (The Reader Monad.)

  • You can make it so that the "extra data" can be replaced. This allows you to simulate destructive updates, without actually doing destructive updates. (The State Monad and its cousin the Writer Monad.)

  • Because you're only simulating destructive updates, you can trivially do things that would be impossible with real destructive updates. For example, you can undo the last update, or revert to an older version.

  • You can make a monad where calculations can be paused, so you can pause your program, go in and tinker with internal state data, and then resume it.

  • You can implement "continuations" as a monad. This allows you to break people's minds!

All of this and more is possible with monads. Of course, all of this is also perfectly possible without monads too. It's just drastically easier using monads.
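
As a taste, here is a minimal Haskell sketch of the bind from the first bullet (it is essentially Maybe's >>=, under an illustrative name):

bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing   -- a failed step aborts the whole chain
bindMaybe (Just x) f = f x       -- a successful step feeds its result onward

-- bindMaybe (Just 3) (\x -> Just (x + 1))   evaluates to Just 4
-- bindMaybe Nothing  (\x -> Just (x + 1))   evaluates to Nothing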



Answer 3:

Actually, contrary to common understanding of Monads, they have nothing to do with state. Monads are simply a way of wrapping things and providing methods to do operations on the wrapped stuff without unwrapping it.

For example, you can create a type to wrap another one, in Haskell:

data Wrapped a = Wrap a

To wrap stuff we define

return :: a -> Wrapped a
return x = Wrap x

To perform operations without unwrapping, say you have a function f :: a -> b, then you can do this to lift that function to act on wrapped values:

fmap :: (a -> b) -> (Wrapped a -> Wrapped b)
fmap f (Wrap x) = Wrap (f x)

That's about all there is to understand. However, it turns out that there is a more general function to do this lifting, which is bind:

bind :: (a -> Wrapped b) -> (Wrapped a -> Wrapped b)
bind f (Wrap x) = f x

bind can do a bit more than fmap, but not vice versa. Actually, fmap can be defined in terms of bind and return alone. So, when defining a monad, you give its type (here it was Wrapped a) and then say how its return and bind operations work.
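
In the shape Haskell actually expects, a sketch of those instances for Wrapped could look like this (Functor and Applicative are prerequisites of Monad in modern GHC):

instance Functor Wrapped where
    fmap f (Wrap x) = Wrap (f x)

instance Applicative Wrapped where
    pure = Wrap
    Wrap f <*> Wrap x = Wrap (f x)

instance Monad Wrapped where
    Wrap x >>= f = f x   -- unwrap, apply, and let f produce the new wrapper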

The cool thing is that this turns out to be such a general pattern that it pops up all over the place; encapsulating state in a pure way is only one example.

For a good article on how monads can be used to introduce functional dependencies and thus control order of evaluation, like it is used in Haskell's IO monad, check out IO Inside.

As for understanding monads, don't worry too much about it. Read whatever you find interesting about them and don't worry if you don't understand right away. Then just diving into a language like Haskell is the way to go. Monads are one of those things where understanding trickles into your brain through practice; one day you suddenly realize you understand them.



Answer 4:

But, You could have invented Monads!

sigfpe says:

But all of these introduce monads as something esoteric in need of explanation. But what I want to argue is that they aren't esoteric at all. In fact, faced with various problems in functional programming you would have been led, inexorably, to certain solutions, all of which are examples of monads. In fact, I hope to get you to invent them now if you haven't already. It's then a small step to notice that all of these solutions are in fact the same solution in disguise. And after reading this, you might be in a better position to understand other documents on monads because you'll recognise everything you see as something you've already invented.

Many of the problems that monads try to solve are related to the issue of side effects. So we'll start with them. (Note that monads let you do more than handle side-effects, in particular many types of container object can be viewed as monads. Some of the introductions to monads find it hard to reconcile these two different uses of monads and concentrate on just one or the other.)

In an imperative programming language such as C++, functions behave nothing like the functions of mathematics. For example, suppose we have a C++ function that takes a single floating point argument and returns a floating point result. Superficially it might seem a little like a mathematical function mapping reals to reals, but a C++ function can do more than just return a number that depends on its arguments. It can read and write the values of global variables as well as writing output to the screen and receiving input from the user. In a pure functional language, however, a function can only read what is supplied to it in its arguments and the only way it can have an effect on the world is through the values it returns.



Answer 5:

A monad is a datatype that has two operations: >>= (aka bind) and return (aka unit). return takes an arbitrary value and creates an instance of the monad with it. >>= takes an instance of the monad and maps a function over it. (You can see already that a monad is a strange kind of datatype, since in most programming languages you couldn't write a function that takes an arbitrary value and creates a type from it. Monads use a kind of parametric polymorphism.)

In Haskell notation, the monad interface is written

class Monad m where
  return :: a -> m a
  (>>=) :: forall a b . m a -> (a -> m b) -> m b

These operations are supposed to obey certain "laws", but that's not terrifically important: the "laws" just codify the way sensible implementations of the operations ought to behave (basically, that >>= and return ought to agree about how values get transformed into monad instances and that >>= is associative).
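
Spelled out in the same notation, the usual statement of those laws is:

return a >>= k     =  k a                      -- left identity
m >>= return       =  m                        -- right identity
(m >>= k) >>= h    =  m >>= (\x -> k x >>= h)  -- associativity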

Monads are not just about state and I/O: they abstract a common pattern of computation that includes working with state, I/O, exceptions, and non-determinism. Probably the simplest monads to understand are lists and option types:

instance Monad [ ] where
    []     >>= k = []
    (x:xs) >>= k = k x ++ (xs >>= k)
    return x     = [x]

instance Monad Maybe where
    Just x  >>= k = k x
    Nothing >>= k = Nothing
    return x      = Just x

where [] and : are the list constructors, ++ is the concatenation operator, and Just and Nothing are the Maybe constructors. Both of these monads encapsulate common and useful patterns of computation on their respective data types (note that neither has anything to do with side effects or I/O).
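
For a quick feel of what those two instances do in practice:

[1,2,3] >>= \x -> [x, x*10]     -- [1,10,2,20,3,30]
Just 3  >>= \x -> Just (x + 1)  -- Just 4
Nothing >>= \x -> Just (x + 1)  -- Nothing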

You really have to play around writing some non-trivial Haskell code to appreciate what monads are about and why they are useful.



Answer 6:

You should first understand what a functor is. Before that, understand higher-order functions.

A higher-order function is simply a function that takes a function as an argument.

A functor is any type construction T for which there exists a higher-order function, call it map, that transforms a function of type a -> b (given any two types a and b) into a function T a -> T b. This map function must also obey the laws of identity and composition such that the following expressions return true for all p and q (Haskell notation):

map id = id
map (p . q) = map p . map q

For example, a type constructor called List is a functor if it comes equipped with a function of type (a -> b) -> List a -> List b which obeys the laws above. The only practical implementation is obvious. The resulting List a -> List b function iterates over the given list, calling the (a -> b) function for each element, and returns the list of the results.
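
A sketch of that implementation for a hand-rolled List type (Haskell spells the functor's map operation fmap):

data List a = Nil | Cons a (List a)

instance Functor List where
    fmap _ Nil         = Nil
    fmap f (Cons x xs) = Cons (f x) (fmap f xs)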

A monad is essentially just a functor T with two extra methods, join, of type T (T a) -> T a, and unit (sometimes called return, fork, or pure) of type a -> T a. For lists in Haskell:

join :: [[a]] -> [a]
pure :: a -> [a]

Why is that useful? Because you could, for example, map over a list with a function that returns a list. Join takes the resulting list of lists and concatenates them. List is a monad because this is possible.
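
Concretely, for Haskell's built-in lists (join lives in Control.Monad):

join [[1,2],[3],[]]               -- [1,2,3]
pure 5 :: [Int]                   -- [5]
join (map (\x -> [x, x]) [1,2])   -- [1,1,2,2]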

You can write a function that does map, then join. This function is called bind, or flatMap, or (>>=), or (=<<). This is normally how a monad instance is given in Haskell.

A monad has to satisfy certain laws, namely that join must be associative. This means that if you have a value x of type [[[a]]] then join (join x) should equal join (map join x). And pure must be an identity for join such that join (pure x) == x.
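
As a sketch, that "map, then join" definition looks like this (bind here is an illustrative name for what Haskell calls >>=):

import Control.Monad (join)

bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (fmap f m)   -- map the monadic function, then flatten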



Answer 7:

[Disclaimer: I am still trying to fully grok monads. The following is just what I have understood so far. If it’s wrong, hopefully someone knowledgeable will call me on the carpet.]

Arnar wrote:

Monads are simply a way of wrapping things and providing methods to do operations on the wrapped stuff without unwrapping it.

That’s precisely it. The idea goes like this:

  1. You take some kind of value and wrap it with some additional information. Just like the value is of a certain kind (e.g. an integer or a string), so the additional information is of a certain kind.

    E.g., that extra information might be a Maybe or an IO.

  2. Then you have some operators that allow you to operate on the wrapped data while carrying along that additional information. These operators use the additional information to decide how to change the behaviour of the operation on the wrapped value.

    E.g., a Maybe Int can be a Just Int or Nothing. Now, if you add a Maybe Int to a Maybe Int, the operator will check to see if they are both Just Ints inside, and if so, will unwrap the Ints, pass them to the addition operator, re-wrap the resulting Int into a new Just Int (which is a valid Maybe Int), and thus return a Maybe Int. But if one of them was a Nothing inside, this operator will just immediately return Nothing, which again is a valid Maybe Int. That way, you can pretend that your Maybe Ints are just normal numbers and perform regular math on them. If you were to get a Nothing, your equations will still produce the right result – without you having to litter checks for Nothing everywhere.

But the example is just what happens for Maybe. If the extra information was an IO, then that special operator defined for IOs would be called instead, and it could do something totally different before performing the addition. (OK, adding two IO Ints together is probably nonsensical – I’m not sure yet.) (Also, if you paid attention to the Maybe example, you have noticed that “wrapping a value with extra stuff” is not always correct. But it’s hard to be exact, correct and precise without being inscrutable.)
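
A minimal Haskell sketch of the Maybe Int addition just described (addMaybe is an illustrative name; liftA2 (+) from Control.Applicative does the same job):

addMaybe :: Maybe Int -> Maybe Int -> Maybe Int
addMaybe mx my = do
    x <- mx         -- stops with Nothing if mx is Nothing
    y <- my         -- likewise for my
    return (x + y)  -- both were Just, so re-wrap the sum

-- addMaybe (Just 2) (Just 3)  evaluates to Just 5
-- addMaybe (Just 2) Nothing   evaluates to Nothing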

Basically, “monad” roughly means “pattern”. But instead of a book full of informally explained and specifically named Patterns, you now have a language construct – syntax and all – that allows you to declare new patterns as things in your program. (The imprecision here is all the patterns have to follow a particular form, so a monad is not quite as generic as a pattern. But I think that’s the closest term that most people know and understand.)

And that is why people find monads so confusing: because they are such a generic concept. To ask what makes something a monad is similarly vague as to ask what makes something a pattern.

But think of the implications of having syntactic support in the language for the idea of a pattern: instead of having to read the Gang of Four book and memorise the construction of a particular pattern, you just write code that implements this pattern in an agnostic, generic way once and then you are done! You can then reuse this pattern, like Visitor or Strategy or Façade or whatever, just by decorating the operations in your code with it, without having to re-implement it over and over!

So that is why people who understand monads find them so useful: it’s not some ivory tower concept that intellectual snobs pride themselves on understanding (OK, that too of course, teehee), but actually makes code simpler.



Answer 8:

After much striving, I think I finally understand the monad. After rereading my own lengthy critique of the overwhelmingly top voted answer, I will offer this explanation.

There are three questions that need to be answered to understand monads:

  1. Why do you need a monad?
  2. What is a monad?
  3. How is a monad implemented?

As I noted in my original comments, too many monad explanations get caught up in question number 3 without first adequately covering question 2 or question 1.

Why do you need a monad?

Pure functional languages like Haskell are different from imperative languages like C or Java in that a pure functional program is not necessarily executed in a specific order, one step at a time. A Haskell program is more akin to a mathematical function, in which you may solve the "equation" in any number of potential orders. This confers a number of benefits, among which is that it eliminates the possibility of certain kinds of bugs, particularly those relating to things like "state".

However, there are certain problems that are not so straightforward to solve with this style of programming. Some things, like console programming and file I/O, need things to happen in a particular order, or need to maintain state. One way to deal with this problem is to create a kind of object that represents the state of a computation, and a series of functions that take a state object as input, and return a new modified state object.

So let's create a hypothetical "state" value that represents the state of a console screen. Exactly how this value is constructed is not important, but let's say it's an array of byte-length ASCII characters that represents what is currently visible on the screen, and an array that represents the last line of input entered by the user, in pseudocode. We've defined some functions that take a console state, modify it, and return a new console state.

consolestate MyConsole = new consolestate;

So to do console programming, but in a pure functional manner, you would need to nest a lot of function calls inside each other.

consolestate FinalConsole = print(input(print(myconsole, "Hello, what's your name?")), "hello, %inputbuffer%!");

Programming in this way keeps the "pure" functional style, while forcing changes to the console to happen in a particular order. But we'll probably want to do more than just a few operations at a time like in the above example. Nesting functions in that way will start to become ungainly. What we want is code that does essentially the same thing as above, but is written a bit more like this:

consolestate FinalConsole = myconsole:
                            print("Hello, what's your name?"):
                            input():
                            print("hello, %inputbuffer%!");

This would indeed be a more convenient way to write it. How do we do that though?

What is a monad?

Once you have a type (such as consolestate) that you define along with a bunch of functions designed specifically to operate on that type, you can turn the whole package of these things into a "monad" by defining an operator like : (bind) that automatically feeds return values on its left into function parameters on its right, and a lift operator that turns normal functions into functions that work with that specific kind of bind operator.
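
A minimal Haskell sketch of such a bind for state-passing steps, where a step is a function from an old state to a (result, new state) pair; this is essentially the standard State monad, with Step, bindStep, and liftStep as illustrative names:

newtype Step s a = Step (s -> (a, s))

bindStep :: Step s a -> (a -> Step s b) -> Step s b
bindStep (Step run) f = Step (\s0 ->
    let (a, s1)   = run s0   -- run the first step on the incoming state
        Step run' = f a      -- build the next step from its result
    in  run' s1)             -- run it on the updated state

liftStep :: (s -> s) -> Step s ()
liftStep g = Step (\s -> ((), g s))   -- turn a plain state update into a step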

How is a monad implemented?

See the other answers, which seem quite happy to jump into the details of that.



Answer 9:

(See also the answers at What is a monad?)

A good motivation for monads is sigfpe (Dan Piponi)'s You Could Have Invented Monads! (And Maybe You Already Have). There are a LOT of other monad tutorials, many of which misguidedly try to explain monads in "simple terms" using various analogies: this is the monad tutorial fallacy; avoid them.

As DR MacIver says in Tell us why your language sucks:

So, things I hate about Haskell:

Let’s start with the obvious. Monad tutorials. No, not monads. Specifically the tutorials. They’re endless, overblown and dear god are they tedious. Further, I’ve never seen any convincing evidence that they actually help. Read the class definition, write some code, get over the scary name.

You say you understand the Maybe monad? Good, you're on your way. Just start using other monads and sooner or later you'll understand what monads are in general.

[If you are mathematically oriented, you might want to ignore the dozens of tutorials and learn the definition, or follow lectures in category theory :) The main part of the definition is that a Monad M involves a "type constructor" that defines for each existing type "T" a new type "M T", and some ways for going back and forth between "regular" types and "M" types.]

Also, surprisingly enough, one of the best introductions to monads is actually one of the early academic papers introducing monads, Philip Wadler's Monads for functional programming. It actually has practical, non-trivial motivating examples, unlike many of the artificial tutorials out there.



Answer 10:

I wrote this mostly for me but I hope others find it useful :)

I believe this explanation is more correct. However, I think this treatment is still valuable and will contemplate incorporating it at a later time. Suffice it to say, where conventional function composition deals with functions on plain values, Monads are about composing functions that operate on function values (higher-order functions). When you are dealing with higher-order functions (functions that accept or return functions), the composition must be customized or tailor-made so as to evaluate the operands when the composition is evaluated. This evaluation process can be exotic, such as collecting the results of asynchronous processes. Nonetheless, this tailoring can be made to follow a pattern. A version of that pattern is called the Monad and follows algebraic addition very closely. In particular, with respect to the following content, such higher-order functions would be regarded as the mathematical operators in the expression, accepting as operands other partially applied operators, and so the functions 1+, 2*, 3/, and 7+ in 1+ ( 2* ( 3/ ( 7+ (..) ) ) )...

Monads address a problem which also shows up in arithmetic as division by zero, DivByZero. Specifically, calculations involving division must detect or allow for a DivByZero exception. This requirement makes coding such expressions in the general case messy.

The Monadic solution is to embrace DivByZero by doing the following

  1. Expand the Number type to include DivByZero as a specific value that is not a regular number: NaN, Infinity, or Null. Let's call this new number type Nullable<Number>.
  2. Provide a function for "lifting" or wrapping an existing Number into a Nullable<Number> type (the idea of "wrapping" is that the contained Number or value can be "unwrapped" without information loss)
  3. Provide a function for "lifting" or wrapping existing operators on Number into versions that operate on Nullable<Number>. Such a resultant "lifted" operator might merely do the following:
    1. unwrap the provided Nullable<Number> operands, apply the contained Number operator to them, then "lift" the resulting Number into a Nullable<Number>
    2. detect a DivByZero operand or exception during evaluation and bypass further evaluation, producing a DivByZero value as the result to assert that (1 + Null = Null). However, what actions to take depends on the programmer. In general, these wrapper functions are where a lot of the functionality of Monads is written. The monadic state information is maintained within the wrapper type itself, from where the wrapped functions inspect it and, per the functional programming immutability approach, construct a new monadic value. In the case of Nullable<Number>, such monadic state information would describe whether DivByZero or an actual Number exists.

So, a Monad is an expanded type together with a function that "wraps" the original type into this expanded version and another function that wraps the original operator(s) so they can handle this new expanded type. (Monads may have been a motivation for generics or type-parameters.)
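
In Haskell terms, a sketch of this Nullable<Number> treatment might use Maybe as the expanded type, with Nothing standing in for DivByZero (all names here are illustrative):

safeDivide :: Double -> Double -> Maybe Double
safeDivide _ 0 = Nothing            -- DivByZero becomes Nothing
safeDivide x y = Just (x / y)

liftedPlus :: Maybe Double -> Maybe Double -> Maybe Double
liftedPlus mx my = do
    x <- mx                         -- unwrap, or stop on Nothing
    y <- my
    return (x + y)                  -- re-wrap the ordinary sum

-- liftedPlus (Just 1) (safeDivide 1 0)   evaluates to Nothing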

It turns out that instead of merely smoothing out the handling of DivByZero (or Infinity if you will), the Monad treatment is broadly applicable to situations that can benefit from type expansion to simplify their coding. In fact, this applicability seems to be wide.

For example, the IO Monad is a type that represents the universe, literally. The intent is to recognize that the value returned by the prototypical HelloWorld program is not completely described by the result type of string and its value "Hello World!". In fact, such a result also includes modifications to hardware and memory states of devices such as the console. For instance, after execution the console is now displaying additional text, the cursor is on a new line, and so forth. The IO Monad is merely an explicit recognition of such external effects or side effects, if you will.

Why bother?

Monads allow strictly stateless algorithms to be devised, and stateful ones to be documented. Stateful machines are complex. For example, a machine with only 10 bits may be in 2^10 possible states. Eliminating superfluous complexity is the ideal of functional languages.

Variables hold state. Eliminating "variables" should simplify stuff. Purely functional programs don't handle variables, only values (despite usage of the term 'variable' in the Haskell documentation) and instead use labels or symbols or names for such values, as needed. Consequently, the closest thing to a variable in a purely functional language is the parameters received by a function, as they accept new values on each invocation. (A label refers to a value whereas a variable refers to the place where a value is held. Consequently, you can modify the content of a variable, but a label is the content itself. Ultimately it is better to be given an apple than a bag with an apple possibly in it.)

The absence of variables is why purely functional languages use recursion instead of loops to iterate. The act of incrementing a counter involves the use of a variable that becomes incremented and all the uncertainty with how it gets updated, when it gets tested, what value it should be and when, and then the complexity when you have multiple threads potentially accessing that same variable.

Nevertheless, so what?

Without the presence of state, a function must become a declaration or a definition of its results, as opposed to a manipulation of some underlying state towards a result. Essentially, the functional expression incFun(x) = x + 1 is simpler than the imperative expression incImp(x) = x.add(1); return x; Here, incFun does not modify x but creates a new value. incFun may even be replaced by its definition within expressions, as in 1 + incFun(x) becoming 1 + (x + 1). On the other hand, incImp modifies the state of x. Whatever such modification means for x can be unclear, and ultimately can be impossible to determine without executing the program, in addition to any concurrency issues.

Such complexity gets cognitively expensive over time (2^N). In contrast, the operator, +, cannot modify x but must instead construct a new value whose result is limited to and fully determined by the values x and 1 and the definition of +. In particular, the 2^N complexity explosion is avoided. Additionally, to emphasize concurrency, incImp, unlike incFun, cannot be invoked concurrently without precautions around the sharing of the parameter since it becomes modified by each invocation.

Why call it a Monad?

A monad is characterized by a mathematical structure called a Monoid from Algebraic group theory. With that said, all it means is that a Monoid has the following three properties:

  1. has a binary operator, *, such that x * y = z for x, y, and z belonging to some type S. For example 1 ÷ 2 = 0.5 where 1, 2, and 0.5 are all of type Number. Closed
  2. has an identity element, i, associated with the binary operator that does nothing such that (i * x) = (x * i) = x. For example the numeric operator, +, and the number, 0, in 4 + 0 = 0 + 4 = 4. Identity
  3. the order of evaluation of \"segments\" is irrelevant: (x * y) * z = x * (y * z). For example the numeric operator, +, in (3 + 4) + 12 = 3 + (4 + 12) = 19. Note, however, that the sequencing of terms must not change. Associativity
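
A quick sketch of those three properties, checked with Haskell lists (where the binary operator is ++ and the identity element is the empty list; closure is what the type of (++) :: [a] -> [a] -> [a] already guarantees):

monoidLawsForLists :: Bool
monoidLawsForLists =
       ([1] ++ [2]) ++ [3] == [1] ++ ([2] ++ [3])   -- associativity
    && ([] ++ [1,2,3]) == [1,2,3]                   -- left identity
    && ([1,2,3] ++ []) == [1,2,3]                   -- right identity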

Property three (Associativity) allows expressions of arbitrary lengths to be evaluated by delineating them into segments and evaluating each segment independently such as in parallel. For example, x1*x2*...*xN may be segmented into (x1..xJ) * (xJ+1...xM) * (xM+1...xN). The separate result, x0J * xJM * xMN, may then be collected and further evaluated similarly. Support for segmentation like this is a key technique ensuring correct concurrency and distributed evaluation as used by Google's distributed search algorithms (a la map/reduce).

Property two (Identity) allows for greater ease in constructing expressions in various ways, though it may not be entirely obvious; however, in the same way that zero was not obviously necessary to early counting systems, it is useful as a concept of "empty", as in wrapping an empty value. Note that in the type Nullable<Number>, Null is not an empty value but rather DivByZero. Specifically, nn + DivByZero = DivByZero whereas nn + 0 = 0 + nn = nn, hence 0 remains the identity under +, where nn is any Nullable<Number>.

Finally, there is a reason we don't use Roman numerals anymore... no expanded accommodation for zero or fractions, irrational numbers, negative numbers, imaginary numbers... yeah, it seems our number system can be considered a monad.



Answer 11:

A monad is, effectively, a form of "type operator". It will do three things. First it will "wrap" (or otherwise convert) a value of one type into another type (typically called a "monadic type"). Secondly it will make all the operations (or functions) available on the underlying type available on the monadic type. Finally it will provide support for combining itself with another monad to produce a composite monad.

The "maybe monad" is essentially the equivalent of "nullable types" in Visual Basic / C#. It takes a non-nullable type "T" and converts it into a "Nullable<T>", and then defines what all the binary operators mean on a Nullable<T>.

Side effects are represented similarly. A structure is created that holds descriptions of side effects alongside a function's return value. The "lifted" operations then copy around side effects as values are passed between functions.

They are called "monads" rather than the easier-to-grasp name of "type operators" for several reasons:

  1. Monads have restrictions on what they can do (see the definition for details).
  2. Those restrictions, along with the fact that there are three operations involved, conform to the structure of something called a monad in Category Theory, which is an obscure branch of mathematics.
  3. They were designed by proponents of "pure" functional languages
  4. Proponents of pure functional languages like obscure branches of mathematics
  5. Because the math is obscure, and monads are associated with particular styles of programming, people tend to use the word monad as a sort of secret handshake. Because of this no one has bothered to invest in a better name.


Answer 12:

Monads are to control flow what abstract data types are to data.

In other words, many developers are comfortable with the idea of Sets, Lists, Dictionaries (or Hashes, or Maps), and Trees. Within those data types there are many special cases (for instance InsertionOrderPreservingIdentityHashMap).

However, when confronted with program "flow" many developers haven't been exposed to many more constructs than if, switch/case, do, while, goto (grr), and (maybe) closures.

So, a monad is simply a control flow construct. A better phrase to replace monad would be 'control type'.

As such, a monad has slots for control logic, or statements, or functions - the equivalent in data structures would be to say that some data structures allow you to add data, and remove it.

For example, the "if" monad:

if( clause ) then block

at its simplest has two slots - a clause, and a block. The if monad is usually built to evaluate the result of the clause, and if not false, evaluate the block. Many developers are not introduced to monads when they learn 'if', and it just isn't necessary to understand monads to write effective logic.

Monads can become more complicated, in the same way that data structures can become more complicated, but there are many broad categories of monad that may have similar semantics, but differing implementations and syntax.

Of course, in the same way that data structures may be iterated over, or traversed, monads may be evaluated.

Compilers may or may not have support for user-defined monads. Haskell certainly does. Ioke has some similar capabilities, although the term monad is not used in the language.



Answer 13:

My favorite Monad tutorial:

http://www.haskell.org/haskellwiki/All_About_Monads

(out of 170,000 hits on a Google search for "monad tutorial"!)

@Stu: The point of monads is to allow you to add (usually) sequential semantics to otherwise pure code; you can even compose monads (using Monad Transformers) and get more interesting and complicated combined semantics, like parsing with error handling, shared state, and logging, for example. All of this is possible in pure code, monads just allow you to abstract it away and reuse it in modular libraries (always good in programming), as well as providing convenient syntax to make it look imperative.

Haskell already has operator overloading[1]: it uses type classes much the way one might use interfaces in Java or C# but Haskell just happens to also allow non-alphanumeric tokens like + && and > as infix identifiers. It's only operator overloading in your way of looking at it if you mean "overloading the semicolon" [2]. It sounds like black magic and asking for trouble to "overload the semicolon" (picture enterprising Perl hackers getting wind of this idea) but the point is that without monads there is no semicolon, since purely functional code does not require or allow explicit sequencing.

This all sounds much more complicated than it needs to be. sigfpe's article is pretty cool but uses Haskell to explain it, which sort of fails to break the chicken-and-egg problem of understanding Haskell to grok Monads and understanding Monads to grok Haskell.

[1] This is a separate issue from monads but monads use Haskell's operator overloading feature.

[2] This is also an oversimplification since the operator for chaining monadic actions is >>= (pronounced "bind") but there is syntactic sugar ("do") that lets you use braces and semicolons and/or indentation and newlines.



Answer 14:

I've been thinking of Monads in a different way, lately. I've been thinking of them as abstracting out execution order in a mathematical way, which makes new kinds of polymorphism possible.

If you're using an imperative language, and you write some expressions in order, the code ALWAYS runs exactly in that order.

And in the simple case, when you use a monad, it feels the same -- you define a list of expressions that happen in order. Except that, depending on which monad you use, your code might run in order (like in IO monad), in parallel over several items at once (like in the List monad), it might halt partway through (like in the Maybe monad), it might pause partway through to be resumed later (like in a Resumption monad), it might rewind and start from the beginning (like in a Transaction monad), or it might rewind partway to try other options (like in a Logic monad).

And because monads are polymorphic, it's possible to run the same code in different monads, depending on your needs.
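
As a small sketch of that polymorphism, the same monadic code can be run in Maybe or in the list monad (pairUp is an illustrative name):

pairUp :: Monad m => m Int -> m Int -> m (Int, Int)
pairUp ma mb = do
    a <- ma
    b <- mb
    return (a, b)

-- pairUp (Just 1) Nothing    evaluates to Nothing (halts partway)
-- pairUp [1,2] [10,20]       evaluates to [(1,10),(1,20),(2,10),(2,20)]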

Plus, in some cases, it's possible to combine monads together (with monad transformers) to get multiple features at the same time.



Answer 15:

I am still new to monads, but I thought I would share a link I found that felt really good to read (WITH PICTURES!!): http://www.matusiak.eu/numerodix/blog/2012/3/11/monads-for-the-layman/ (no affiliation)

Basically, the warm and fuzzy concept that I got from the article was the concept that monads are basically adapters that allow disparate functions to work in a composable fashion, i.e. be able to string up multiple functions and mix and match them without worrying about inconsistent return types and such. So the BIND function is in charge of keeping apples with apples and oranges with oranges when we're trying to make these adapters. And the LIFT function is in charge of taking "lower level" functions and "upgrading" them to work with BIND functions and be composable as well.

I hope I got it right, and more importantly, hope that the article has a valid view on monads. If nothing else, this article helped whet my appetite for learning more about monads.



Answer 16:

In addition to the excellent answers above, let me offer you a link to the following article (by Patrick Thomson) which explains monads by relating the concept to the JavaScript library jQuery (and its way of using \"method chaining\" to manipulate the DOM): jQuery is a Monad

The jQuery documentation itself doesn't refer to the term "monad" but talks about the "builder pattern" which is probably more familiar. This doesn't change the fact that you have a proper monad there maybe without even realizing it.



Answer 17:

Monads Are Not Metaphors, but a practically useful abstraction emerging from a common pattern, as Daniel Spiewak explains.



Answer 18:

A monad is a way of combining computations together that share a common context. It is like building a network of pipes. When constructing the network, there is no data flowing through it. But when I have finished piecing all the bits together with 'bind' and 'return' then I invoke something like runMyMonad monad data and the data flows through the pipes.



Answer 19:

The two things that helped me best when learning about them were:

Chapter 8, \"Functional Parsers,\" from Graham Hutton\'s book Programming in Haskell. This doesn\'t mention monads at all, actually, but if you can work through chapter and really understand everything in it, particularly how a sequence of bind operations is evaluated, you\'ll understand the internals of monads. Expect this to take several tries.

The tutorial All About Monads. This gives several good examples of their use, and I have to say that the analogy in Appendix I worked for me.



Answer 20:

Monoid appears to be something that ensures that all operations defined on a Monoid and a supported type will always return a supported type inside the Monoid. E.g., any number + any number = a number, no errors.

Whereas division accepts two fractionals and returns a fractional, which defines division by zero as Infinity in Haskell for some reason (which happens to be a fractional, for some reason)...

In any case, it appears Monads are just a way to ensure that your chain of operations behaves in a predictable way, and a function that claims to be Num -> Num, composed with another function of Num -> Num called with x, does not, say, fire the missiles.

On the other hand, if we have a function which does fire the missiles, we can compose it with other functions which also fire the missiles, because our intent is clear -- we want to fire the missiles -- but it won't try printing "Hello World" for some odd reason.

In Haskell, main is of type IO (), or IO [()], the distinction is strange and I will not discuss it, but here's what I think happens:

If I have main, I want it to do a chain of actions; the reason I run the program is to produce an effect -- usually through IO. Thus I can chain IO operations together in main in order to -- do IO, nothing else.

If I try to do something which does not "return IO", the program will complain that the chain does not flow, or basically "How does this relate to what we are trying to do -- an IO action", it appears to force the programmer to keep their train of thought, without straying off and thinking about firing the missiles, while creating algorithms for sorting -- which does not flow.

Basically, Monads appear to be a tip to the compiler that "hey, you know this function that returns a number here, it doesn't actually always work, it can sometimes produce a Number, and sometimes Nothing at all, just keep this in mind". Knowing this, if you try to assert a monadic action, the monadic action may act as a compile time exception saying "hey, this isn't actually a number, this CAN be a number, but you can't assume this, do something to ensure that the flow is acceptable." which prevents unpredictable program behavior -- to a fair extent.

It appears monads are not about purity, nor control, but about maintaining an identity of a category on which all behavior is predictable and defined, or does not compile. You cannot do nothing when you are expected to do something, and you cannot do something if you are expected to do nothing (visible).

The biggest reason I could think of for Monads is -- go look at Procedural/OOP code, and you will notice that you do not know where the program starts, nor ends; all you see is a lot of jumping and a lot of math, magic, and missiles. You will not be able to maintain it, and if you can, you will spend quite a lot of time wrapping your mind around the whole program before you can understand any part of it, because modularity in this context is based on interdependent "sections" of code, where code is optimized to be as related as possible for promise of efficiency/inter-relation. Monads are very concrete, and well defined by definition, and ensure that the flow of the program is possible to analyze, and isolate the parts which are hard to analyze -- as they themselves are monads. A monad appears to be a "comprehensible unit which is predictable upon its full understanding" -- if you understand the "Maybe" monad, there's no possible way it will do anything except be "Maybe", which appears trivial, but in most non-monadic code, a simple function "helloworld" can fire the missiles, do nothing, or destroy the universe or even distort time -- we have no idea nor have any guarantees that IT IS WHAT IT IS. A monad GUARANTEES that IT IS WHAT IT IS, which is very powerful.

All things in \"real world\" appear to be monads, in the sense that it is bound by definite observable laws preventing confusion. This does not mean we have to mimic all the operations of this object to create classes, instead we can simply say \"a square is a square\", nothing but a square, not even a rectangle nor a circle, and \"a square has area of the length of one of it\'s existing dimensions multiplied by itself. No matter what square you have, if it\'s a square in 2D space, it\'s area absolutely cannot be anything but its length squared, it\'s almost trivial to prove. This is very powerful because we do not need to make assertions to make sure that our world is the way it is, we just use implications of reality to prevent our programs from falling off track.

I'm pretty much guaranteed to be wrong, but I think this could help somebody out there, so hopefully it helps somebody.



Answer 21:

In the context of Scala you will find the following to be the simplest definition. Basically flatMap (or bind) is 'associative' and there exists an identity.

trait M[+A] {
  def flatMap[B](f: A => M[B]): M[B] // AKA bind

  // Pseudo Meta Code
  def isValidMonad: Boolean = {
    // for every parameter the following holds
    def isAssociativeOn[X, Y, Z](x: M[X], f: X => M[Y], g: Y => M[Z]): Boolean =
      x.flatMap(f).flatMap(g) == x.flatMap(f(_).flatMap(g))

    // for every parameter X and x, there exists an id
    // such that the following holds
    def isAnIdentity[X](x: M[X], id: X => M[X]): Boolean =
      x.flatMap(id) == x
  }
}

E.g.

// These could be any functions
val f: Int => Option[String] = number => if (number == 7) Some("hello") else None
val g: String => Option[Double] = string => Some(3.14)

// Observe these are identical. Since Option is a Monad 
// they will always be identical no matter what the functions are
scala> Some(7).flatMap(f).flatMap(g)
res211: Option[Double] = Some(3.14)

scala> Some(7).flatMap(f(_).flatMap(g))
res212: Option[Double] = Some(3.14)


// As Option is a Monad, there exists an identity:
val id: Int => Option[Int] = x => Some(x)

// Observe these are identical
scala> Some(7).flatMap(id)
res213: Option[Int] = Some(7)

scala> Some(7)
res214: Some[Int] = Some(7)

NOTE Strictly speaking the definition of a Monad in functional programming is not the same as the definition of a Monad in Category Theory, which is defined in terms of map and flatten. Though they are kind of equivalent under certain mappings. This presentation is very good: http://www.slideshare.net/samthemonad/monad-presentation-scala-as-a-category



Answer 22:

In practice, a monad is a custom implementation of a function composition operator that takes care of side effects and of incompatible input and return values (for chaining).
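
A small Haskell sketch of that idea, using Kleisli composition (>=>) from Control.Monad to compose two Maybe-returning functions (the function names are illustrative):

import Control.Monad ((>=>))
import Text.Read (readMaybe)

parseAge :: String -> Maybe Int
parseAge = readMaybe                      -- Nothing if the string is not a number

checkAdult :: Int -> Maybe Int
checkAdult n = if n >= 18 then Just n else Nothing

parseAdultAge :: String -> Maybe Int
parseAdultAge = parseAge >=> checkAdult   -- like (.), but for a -> m b functions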



Answer 23:

This answer begins with a motivating example, works through the example, derives an example of a monad, and formally defines "monad".

Consider these three functions in pseudocode:

f(<x, messages>) := <x, messages "called f. ">
g(<x, messages>) := <x, messages "called g. ">
wrap(x)          := <x, "">

f takes an ordered pair of the form <x, messages> and returns an ordered pair. It leaves the first item untouched and appends "called f. " to the second item. Same with g.

You can compose these functions and get your original value, along with a string that shows which order the functions were called in:

  f(g(wrap(x)))
= f(g(<x, "">))
= f(<x, "called g. ">)
= <x, "called g. called f. ">

You dislike the fact that f and g are responsible for appending their own log messages to the previous logging information. (Just imagine for the sake of argument that instead of appending strings, f and g must perform complicated logic on the second item of the pair. It would be a pain to repeat that complicated logic in two -- or more -- different functions.)

You prefer to write simpler functions:

f(x)    := <x, "called f. ">
g(x)    := <x, "called g. ">
wrap(x) := <x, "">

But look at what happens when you compose them:

  f(g(wrap(x)))
= f(g(<x, "">))
= f(<<x, "">, "called g. ">)
= <<<x, "">, "called g. ">, "called f. ">

The problem is that passing a pair into a function does not give you what you want. But what if you could feed a pair into a function:

  feed(f, feed(g, wrap(x)))
= feed(f, feed(g, <x, "">))
= feed(f, <x, "called g. ">)
= <x, "called g. called f. ">

Read feed(f, m) as "feed m into f". To feed a pair <x, messages> into a function f is to pass x into f, get <y, message> out of f, and return <y, messages message>.

feed(f, <x, messages>) := let <y, message> = f(x)
                          in  <y, messages message>

Notice what happens when you do three things with your functions:

First: if you wrap a value and then feed the resulting pair into a function:

  feed(f, wrap(x))
= feed(f, <x, "">)
= let <y, message> = f(x)
  in  <y, "" message>
= let <y, message> = <x, "called f. ">
  in  <y, "" message>
= <x, "" "called f. ">
= <x, "called f. ">
= f(x)

That is the same as passing the value into the function.

Second: if you feed a pair into wrap:

  feed(wrap, <x, messages>)
= let <y, message> = wrap(x)
  in  <y, messages message>
= let <y, message> = <x, "">
  in  <y, messages message>
= <x, messages "">
= <x, messages>

That does not change the pair.

Third: if you define a function that takes x and feeds g(x) into f:

h(x) := feed(f, g(x))

and feed a pair into it:

  feed(h, <x, messages>)
= let <y, message> = h(x)
  in  <y, messages message>
= let <y, message> = feed(f, g(x))
  in  <y, messages message>
= let <y, message> = feed(f, <x, "called g. ">)
  in  <y, messages message>
= let <y, message> = let <z, msg> = f(x)
                     in  <z, "called g. " msg>
  in <y, messages message>
= let <y, message> = let <z, msg> = <x, "called f. ">
                     in  <z, "called g. " msg>
  in <y, messages message>
= let <y, message> = <x, "called g. " "called f. ">
  in <y, messages message>
= <x, messages "called g. " "called f. ">
= feed(f, <x, messages "called g. ">)
= feed(f, feed(g, <x, messages>))

That is the same as feeding the pair into g and feeding the resulting pair into f.

You have most of a monad. Now you just need to know about the data types in your program.

What type of value is <x, "called f. ">? Well, that depends on what type of value x is. If x is of type t, then your pair is a value of type "pair of t and string". Call that type M t.

M is a type constructor: M alone does not refer to a type, but M _ refers to a type once you fill in the blank with a type. An M int is a pair of an int and a string. An M string is a pair of a string and a string. Etc.

Congratulations, you have created a monad!

Formally, your monad is the tuple <M, feed, wrap>.

A monad is a tuple <M, feed, wrap> where:

  • M is a type constructor.
  • feed takes a (function that takes a t and returns an M u) and an M t and returns an M u.
  • wrap takes a v and returns an M v.

t, u, and v are any three types that may or may not be the same. A monad satisfies the three properties you proved for your specific monad:

  • Feeding a wrapped t into a function is the same as passing the unwrapped t into the function.

    Formally: feed(f, wrap(x)) = f(x)

  • Feeding an M t into wrap does nothing to the M t.

    Formally: feed(wrap, m) = m

  • Feeding an M t (call it m) into a function that

    • passes the t into g
    • gets an M u (call it n) from g
    • feeds n into f

    is the same as

    • feeding m into g
    • getting n from g
    • feeding