What other ways can state be handled in a pure functional language?

Published 2019-01-30 04:31

Question:

So I started to wrap my head around monads (as used in Haskell). I'm curious what other ways IO or state can be handled in a pure functional language (both in theory and in practice). For example, there is a logic language called Mercury that uses effect typing. In a language such as Haskell, how would effect typing work? How do other systems work?

Answer 1:

There are several different questions involved here.

First, IO and State are very different things. State is easy to do yourself: Just pass an extra argument to every function, and return an extra result, and you have a "stateful function"; for example, turn a -> b into a -> s -> (b,s).

There's no magic involved here: Control.Monad.State provides a wrapper that makes working with "state actions" of the form s -> (a,s) convenient, as well as a bunch of helper functions, but that's it.
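
As a rough sketch of the idea (the real Control.Monad.State is built on the StateT transformer, but the shape is the same; andThen and tick are illustrative names, not library functions):

-- A minimal sketch of a wrapper around state actions of the form s -> (a, s).
newtype State s a = State { runState :: s -> (a, s) }

-- Sequence two state actions, threading the state from one into the next.
andThen :: State s a -> (a -> State s b) -> State s b
andThen (State f) g = State $ \s ->
    let (a, s') = f s
    in runState (g a) s'

-- A tiny example: return the current counter and increment it.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

-- runState (tick `andThen` \_ -> tick) 0  ==  (1, 2)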

I/O, by its nature, has to have some magic in its implementation. But there are a lot of ways of expressing I/O in Haskell that don't involve the word "monad". If we took an IO-free subset of Haskell as-is and wanted to invent IO from scratch, without knowing anything about monads, there are many things we might do.

For example, if all we want to do is print to stdout, we might say:

type PrintOnlyIO = String

main :: PrintOnlyIO
main = "Hello world!"

And then have an RTS (runtime system) which evaluates the string and prints it. This lets us write any Haskell program whose I/O consists entirely of printing to stdout.

This isn't very useful, however, because we want interactivity! So let's invent a new type of IO which allows for it. The simplest thing that comes to mind is

type InteractIO = String -> String

main :: InteractIO
main = map toUpper

This approach to IO lets us write any code which reads from stdin and writes to stdout (the Prelude comes with a function interact :: InteractIO -> IO () which does this, by the way).
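
For instance, a complete program in the real IO type that corresponds to the snippet above might look like this (a minimal sketch):

import Data.Char (toUpper)

-- Echo stdin to stdout, upper-cased; interact plugs a String -> String
-- function into the runtime's real I/O machinery.
main :: IO ()
main = interact (map toUpper)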

This is much better, since it lets us write interactive programs. But it's still very limited compared to all the IO we want to do, and also quite error-prone (if we accidentally try to read too far into stdin, the program will just block until the user types more in).

We want to be able to do more than read stdin and write stdout. Here's how early versions of Haskell did I/O, approximately:

data Request = PutStrLn String | GetLine | Exit | ...
data Response = Success | Str String | ...
type DialogueIO = [Response] -> [Request]

main :: DialogueIO
main resps1 =
    PutStrLn "what's your name?"
  : GetLine
  : case resps1 of
        Success : Str name : resps2 ->
            PutStrLn ("hi " ++ name ++ "!")
          : Exit
          : []

When we write main, we get a lazy list argument and return a lazy list as a result. The lazy list we return has values like PutStrLn s and GetLine; after we yield a (request) value, we can examine the next element of the (response) list, and the RTS will arrange for it to be the response to our request.
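
To see how the runtime side of this might work, here's a purely functional sketch that runs a dialogue program against canned input lines (runDialogue is a hypothetical helper, and like the example above it assumes the program only inspects a response after issuing the corresponding request):

-- Feed a dialogue program a list of stdin lines and collect what it prints.
runDialogue :: DialogueIO -> [String] -> [String]
runDialogue prog stdinLines = outputs
  where
    reqs             = prog resps          -- requests depend lazily on responses
    (resps, outputs) = go reqs stdinLines  -- responses depend lazily on requests
    go (PutStrLn s : rest) ins =
        let (rs, outs) = go rest ins in (Success : rs, s : outs)
    go (GetLine : rest) (i : ins) =
        let (rs, outs) = go rest ins in (Str i : rs, outs)
    go _ _ = ([], [])   -- Exit (or running out of requests/input) ends the run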

There are ways to make working with this mechanism nicer, but as you can imagine, the approach gets pretty awkward pretty quickly. Also, it's error-prone in the same way as the previous one.

Here's another approach which is much less error-prone, and conceptually very close to how Haskell IO actually behaves:

data ContIO = Exit | PutStrLn String ContIO | GetLine (String -> ContIO) | ...

main :: ContIO
main =
    PutStrLn "what's your name?" $
    GetLine $ \name ->
    PutStrLn ("hi " ++ name ++ "!") $
    Exit

The key is that instead of taking a "lazy list" of responses as one big argument at the beginning of main, we make individual requests that accept one argument at a time.

Our program is now just a regular data type -- a lot like a linked list, except you can't just traverse it normally: When the RTS interprets main, sometimes it encounters a value like GetLine which holds a function; then it has to get a string from stdin using RTS magic, and pass that string to the function, before it can continue. Exercise: Write interpret :: ContIO -> IO ().
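
One way that interpreter might look, as a sketch against the constructors above (GHC's real runtime does something much lower-level, of course):

interpret :: ContIO -> IO ()
interpret Exit            = return ()
interpret (PutStrLn s k)  = putStrLn s >> interpret k
interpret (GetLine f)     = getLine >>= \s -> interpret (f s)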

Note that none of these implementations involve "world-passing". "world-passing" isn't really how I/O works in Haskell. The actual implementation of the IO type in GHC involves an internal type called RealWorld, but that's only an implementation detail.

Actual Haskell IO adds a type parameter so we can write actions that "produce" arbitrary values -- so it looks more like data IO a = Done a | PutStr String (IO a) | GetLine (String -> IO a) | .... That gives us more flexibility, because we can create "IO actions" that produce arbitrary values.

(As Russell O'Connor points out, this type is just a free monad. We can write a Monad instance for it easily.)
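
A sketch of what that looks like (the type is renamed IO' here so it doesn't clash with the real IO, and the Functor/Applicative instances that modern GHC requires are included):

data IO' a = Done a
           | PutStr String (IO' a)
           | GetLine (String -> IO' a)

instance Functor IO' where
    fmap f (Done a)     = Done (f a)
    fmap f (PutStr s k) = PutStr s (fmap f k)
    fmap f (GetLine g)  = GetLine (fmap f . g)

instance Applicative IO' where
    pure = Done
    mf <*> mx = mf >>= \f -> fmap f mx

instance Monad IO' where
    Done a     >>= f = f a
    PutStr s k >>= f = PutStr s (k >>= f)
    GetLine g  >>= f = GetLine (\s -> g s >>= f)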


Where do monads come into it, then? It turns out that we don't need Monad for I/O, and we don't need Monad for state, so why do we need it at all? The answer is that we don't. There's nothing magical about the type class Monad.

However, when we work with IO and State (and lists and functions and Maybe and parsers and continuation-passing style and ...) for long enough, we eventually figure out that they behave pretty similarly in some ways. We might write a function that prints every string in a list, and a function that runs every stateful computation in a list and threads the state through, and they'll look very similar to each other.

Since we don't like writing a lot of similar-looking code, we want a way to abstract it; Monad turns out to be a great abstraction, because it lets us abstract many types that seem very different, but still provide a lot of useful functionality (including everything in Control.Monad).

Given bindIO :: IO a -> (a -> IO b) -> IO b and returnIO :: a -> IO a, we could write any IO program in Haskell without ever thinking about monads. But we'd probably end up replicating a lot of the functions in Control.Monad, like mapM and forever and when and (>=>).

By implementing the common Monad API, we get to use the exact same code for working with IO actions as we do with parsers and lists. That's really the only reason we have the Monad class -- to capture the similarities between different types.
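
As a small illustration of that payoff, the very same mapM_ works unchanged whether the underlying monad is IO, Maybe, or lists (illustrative definitions):

-- Perform an IO action for each element.
demoIO :: IO ()
demoIO = mapM_ putStrLn ["one", "two", "three"]

-- Fails (Nothing) as soon as one element is rejected; otherwise Just ().
demoMaybe :: Maybe ()
demoMaybe = mapM_ (\x -> if x > 0 then Just () else Nothing) [1, 2, 3 :: Int]

-- Explores every combination of choices; here, two choices per element.
demoList :: [()]
demoList = mapM_ (\_ -> [(), ()]) [1, 2 :: Int]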



Answer 2:

Another major approach is uniqueness typing, as in Clean. The short story is that handles to state (including the real world) can only be used once, and functions that access mutable state return a new handle. This means that the output of the first call is an input of the second, forcing sequential evaluation.

Effect typing is used in Disciple (the Haskell-like language compiled by DDC, the Disciplined Disciple Compiler), but to the best of my knowledge it would take considerable compiler work to enable it in, say, GHC. I shall leave discussion of the details to those better informed than myself.



Answer 3:

Well, first what is state? It can manifest as a mutable variable, which you don't have in Haskell. You only have memory references (IORef, MVar, Ptr, etc.) and IO/ST actions to act on them.

However, state itself can be pure as well. To see that, consider the Stream type:

data Stream a = Stream a (Stream a)

This is a stream of values. However, an alternative way to interpret this type is as a value that changes over time:

stepStream :: Stream a -> (a, Stream a)
stepStream (Stream x xs) = (x, xs)
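
For instance, a counter expressed as a Stream (an illustrative value, not a library one):

-- The natural numbers, viewed as a value that changes at every step.
nats :: Stream Integer
nats = go 0
  where go n = Stream n (go (n + 1))

-- stepStream nats == (0, <the stream starting from 1>)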

This gets interesting when you allow two streams to communicate. You then get the automaton category Auto:

newtype Auto a b = Auto (a -> (b, Auto a b))

This is really like Stream, except that now at every instant the stream gets some input value of type a. This forms a category, so one instant of a stream can get its value from the same instant of another stream.
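
A sketch of that Category instance (composition feeds the output of one automaton into the other at each instant and pairs up their next states):

import Control.Category
import Prelude hiding ((.), id)

newtype Auto a b = Auto (a -> (b, Auto a b))  -- as above

instance Category Auto where
    id = Auto (\x -> (x, id))
    Auto g . Auto f = Auto $ \x ->
        let (y, f') = f x
            (z, g') = g y
        in (z, g' . f')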

Again a different interpretation of this: You have two computations that change over time and you allow them to communicate. So every computation has local state. Here is a type that is isomorphic to Auto:

data LS a b =
    forall s.
    LS s ((a, s) -> (b, s))
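
To make the isomorphism concrete, here is a sketch of both directions (the data declaration above needs the ExistentialQuantification extension; lsToAuto and autoToLS are illustrative names):

-- Run the local-state machine one step at a time, hiding the state inside the Auto.
lsToAuto :: LS a b -> Auto a b
lsToAuto (LS s f) = Auto $ \a ->
    let (b, s') = f (a, s)
    in (b, lsToAuto (LS s' f))

-- In the other direction, the automaton itself serves as the hidden local state.
autoToLS :: Auto a b -> LS a b
autoToLS auto = LS auto (\(a, Auto f) -> f a)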


Answer 4:

Take a look at A History of Haskell: Being Lazy With Class. It describes two different approaches to doing I/O in Haskell, before monads were invented: continuations and streams.



Answer 5:

There is an approach called Functional Reactive Programming (FRP) that represents time-varying values and/or event streams as a first-class abstraction. A recent example that comes to mind is Elm (its compiler is written in Haskell, and its syntax is similar to Haskell's).



Answer 6:

It can't be (not if by "state" you mean "I/O or mutable variable behavior like in a procedural language"). In the first place, you have to understand where the use of monads for mutable variables or I/O comes from. Despite popular belief, monadic I/O doesn't come from languages like Haskell, but from languages like ML. Eugenio Moggi developed the original monads while studying the use of category theory for the denotational semantics of impure functional languages like ML. To see why, consider that a monad (in Haskell) can be characterized by three properties:

  • There is a distinction between values (in Haskell, of type a) and expressions (in Haskell, of type IO a).
  • Any value can be turned into an expression (in Haskell, by converting x to return x).
  • Any function over values (returning an expression) can be applied to an expression (in Haskell, by computing f =<< a).

These properties are obviously true of (at least) the denotational semantics of any impure functional language:

  • An expression, like print "Hello, world!\n", can have side-effects, but its value, such as (), cannot. So we need to make a distinction between the two cases in the denotational semantics.
  • A value, such as 3, can be used anywhere an expression is required. So our denotational semantics needs a function to turn a value into an expression.
  • A function takes values as arguments (the formal parameters to a function in a strict language don't have side-effects), but can be applied to an expression. So we need a way to apply an (expression-returning) function of values to an expression.

So any denotational semantics for an impure functional (or procedural) language is going to have the structure of a monad under the hood, even if that structure isn't explicitly used in describing how I/O works in the language.

What about purely functional languages?

There are four major ways of doing I/O in purely functional languages that I know of in practice (again, restricting ourselves to procedural-style I/O; FRP is genuinely a different paradigm):

  • Monadic I/O
  • Continuations
  • Uniqueness / linear types
  • Dialogs

Monadic I/O is obvious. Continuation-based I/O looks like this (here print and getLine are CPS-style primitives, not the Prelude functions of the same name):

main k = print "What is your name? " $
    getLine $ \ myName ->
    print ("Hello, " ++ myName ++ "\n") $
    k ()

Each I/O action takes a 'continuation', performs its action, and then tail calls (under the hood) the continuation. So in the above program:

  • print "What is your name? " runs, then
  • getLine runs, then
  • print ("Hello, " ++ myName ++ "\n") runs, then
  • k runs (which returns control to the OS).

The continuation monad is an obvious syntactic improvement to the above. More significantly, semantically, I can only see two ways to make the I/O actually work in the above:

  • Make the I/O actions (and continuations) return an "I/O type" describing the I/O you want to perform. Now you have an I/O monad (continuation monad-based) without the newtype wrapper.
  • Make the I/O actions (and continuations) return what is essentially () and do the I/O as a side-effect of calling the individual operations (e.g., print, getLine, etc.). But if evaluation of an expression in your language (which the right-hand side of the main definition above is) is side-effectful, I wouldn't consider that purely functional.

What about uniqueness/linear types? These use special 'token' values to represent the state of the world after each action, and enforce sequencing. The code looks like this:

main w0 = let
        w1 = print "What is your name? " w0
        (w2, myName) = getLine w1
        w3 = print ("Hello, " ++ myName ++ "!\n") w2
    in w3

The difference between linear types and uniqueness types is that in linear types, the result has to be w3 (it has to be of type World), whereas in uniqueness types, the result could be something like w3 `seq` () instead. w3 just has to be evaluated for the I/O to happen.

Again, the state monad is an obvious syntactic improvement to the above. More significantly, semantically, you again have two choices:

  • Make the I/O operations, such as print and getLine, strict in the World argument (so the previous operation runs first) and side-effectful (so the I/O happens as a side-effect of evaluating them). Again, if you have side-effects of evaluation, in my opinion that's not really purely functional.
  • Make the World type actually represent the I/O that needs to be performed. This has the same problem as GHC's IO implementation with tail-recursive programs. Suppose we change the result of main to main w3. Now main tail-calls itself. Any function that tail-calls itself, in a purely functional language, has no value (is just an infinite loop); this is a basic fact about how the denotational semantics of recursion works in a pure language. Again, I wouldn't consider any language that broke that rule (especially for a 'special' data type like World) to be purely functional.

So, really, uniqueness or linear types a) produce programs that are clearer / cleaner if you wrap them in a state monad and b) aren't actually a way to do I/O in a purely functional language after all.

What about dialogs? This is the only way to do I/O (or, technically, mutable variables, although that's much harder) that truly is both purely functional and independent of monads. That looks something like this:

main resps = [
    PrintReq "What is your name? ",
    GetLineReq,
    PrintReq $ "Hello, " ++ myName ++ "!\n"
  ] where
    GetLineResp myName = resps !! 1
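
For concreteness, the Request and Response types assumed by this snippet (and by the wrapper functions below) might look like this; the names are illustrative:

data Request  = PrintReq String | GetLineReq
data Response = PrintResp ()    | GetLineResp String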

However, you'll notice a few disadvantages of this approach:

  • It's not clear how to incorporate I/O-performing procedures into this approach.
  • You have to use numeric or positional indexing to find the response corresponding to a given request, which is quite fragile.
  • There's no obvious way to scope a response just over the actions after it's received; if this program somehow used myName before issuing the corresponding getLine request, the compiler would accept your program but it would deadlock at runtime.

An easy way to solve all of these problems is to wrap dialogs in continuations, like this:

type Cont = [Response] -> [Request]

print :: String -> Cont -> Cont
print msg k resps = PrintReq msg : case resps of
    PrintResp () : resps1 -> k resps1

getLine :: (String -> Cont) -> Cont
getLine k resps = GetLineReq : case resps of
    GetLineResp msg : resps1 -> k msg resps1

The code now looks identical to the code for the continuation-passing approach to I/O given earlier. In fact, dialogs are an excellent result type for your continuations in a continuation-based I/O system, or even in a continuation monad-based monadic I/O system. However, by converting back to continuations, the same argument applies, so we see that, even if the run-time system uses dialogs internally, programs should still be written to do I/O in a monadic style.