The Idea
Hello! I'm trying to implement an image processing library in Haskell based on the dataflow paradigm. I've run into a problem related to how I want to handle control flow.
The main idea is to introduce a time. The time is a Float which can be accessed anywhere in the code (you can think of it roughly like a State monad, but a little funnier). The funny thing about it is that we can apply a timeShift operation to results, affecting the time the corresponding operations see.
An example explains this best. Let's use the following dataflow diagram:
--                  timeShift(*2) ----
--                /                    \
-- readImage ----                        addImages -> out
--                \                    /
--                  blur --------------
and its pseudocode (which does not typecheck - it's not important whether we use do or let notation here; the idea should be clear):
test = do
    f     <- frame
    a     <- readImage $ "test" ++ show f ++ ".jpg"
    aBlur <- blur a
    a'    <- a.timeShift(*2)
    out   <- addImage aBlur a'

main = print =<< runStateT test 5
The 5 is the time we want to run the test function with. The timeShift function affects all the operations to the left of it (in the dataflow diagram) - in this case the function readImage would be run twice, once for each branch - the lower one would use frame 5 and the upper one frame 5*2 = 10.
The problem
I'm providing a very simple implementation below that works great, but has a caveat I want to solve: I want to keep the order of all IO operations. Look at the example at the bottom, which clarifies what I mean.
Sample implementation
Below is a sample implementation of the algorithm and code that constructs the following dataflow graph:
-- A --- blur --- timeShift(*2) --
--                                 \
--                                  addImages -> out
--                                 /
-- B --- blur --------------------
the code:
import Control.Monad.State

-- for simplicity, lets assume an Image is just a String
type Image = String

imagesStr = ["a0","b1","c2","d3","e4","f5","g6","h7","i8","j9","k10","l11","m12","n13","o14","p15","q16","r17","s18","t19","u20","v21","w22","x23","y24","z25"]
images    = "abcdefghjiklmnoprstuwxyz"

--------------------------------
-- Ordinary Image processing functions

blurImg' :: Image -> Image
blurImg' img = "(blur " ++ img ++ ")"

addImage' :: Image -> Image -> Image
addImage' img1 img2 = "(add " ++ img1 ++ " " ++ img2 ++ ")"

--------------------------------
-- Functions processing Images in States

readImage1 :: StateT Int IO Image
readImage1 = do
    t <- get
    liftIO . putStrLn $ "[1] reading image with time: " ++ show t
    return $ imagesStr !! t

readImage2 :: StateT Int IO Image
readImage2 = do
    t <- get
    liftIO . putStrLn $ "[2] reading image with time: " ++ show t
    return $ imagesStr !! t

blurImg :: StateT Int IO Image -> StateT Int IO Image
blurImg img = do
    i <- img
    liftIO $ putStrLn "blurring"
    return $ blurImg' i

addImage :: StateT Int IO Image -> StateT Int IO Image -> StateT Int IO Image
addImage img1 img2 = do
    i1 <- img1
    i2 <- img2
    liftIO $ putStrLn "adding images"
    return $ addImage' i1 i2

timeShift :: StateT Int IO Image -> (Int -> Int) -> StateT Int IO Image
timeShift img f = do
    t <- get
    put (f t)
    i <- img
    put t
    return i

test = out where
    i1  = readImage1
    j1  = readImage2
    i2  = blurImg i1
    j2  = blurImg j1
    i3  = timeShift i2 (*2)
    out = addImage i3 j2

main = do
    print =<< runStateT test 5
    print "end"
The output is:
[1] reading image with time: 10
blurring
[2] reading image with time: 5
blurring
adding images
("(add (blur k10) (blur f5))",5)
"end"
and should be:
[1] reading image with time: 10
[2] reading image with time: 5
blurring
blurring
adding images
("(add (blur k10) (blur f5))",5)
"end"
Please note that the correct output is ("(add (blur k10) (blur f5))",5) - which means that we added image k10 to f5 - from the 10th and 5th frames, respectively.
Further requirements
I'm looking for a solution which would allow users to write simple code (like in the test function - it could of course be in a Monad), but I do not want them to handle the time-shifting logic by hand.
Conclusions
The only difference is the order in which the IO actions execute. I would love to preserve the order of the IO actions just as they are written in the test function. I was trying to implement the idea using Continuations, Arrows and some funny states, but without success.
Dataflow and functional reactive programming libraries in Haskell are usually written in terms of Applicative or Arrow. These are abstractions for computations that are less general than Monads - the Applicative and Arrow typeclasses do not expose a way for the structure of computations to depend on the results of other computations. As a result, libraries exposing only these typeclasses can reason about the structure of computations in the library independently of performing those computations. We will solve your problem in terms of the Applicative typeclass:
class Functor f => Applicative f where
    -- | Lift a value.
    pure :: a -> f a

    -- | Sequential application.
    (<*>) :: f (a -> b) -> f a -> f b
Applicative allows a library user to make new computations with pure, operate on existing computations with fmap (from Functor), and compose computations together with <*>, using the result of one computation as an input for another. It does not allow a library user to make a computation that makes another computation and then use the result of that computation directly; there's no way a user can write join :: f (f a) -> f a. This restriction will keep our library from running into the problem I described in my other answer.
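As a quick illustration (a sketch, not part of the library we build below; combineImages is a hypothetical name), a user can combine two computations with fmap and <*>, but never inspect one result to decide what to run next:

-- Combine two computations applicatively; the structure is fixed up front.
combineImages :: Applicative f => f String -> f String -> f String
combineImages imgA imgB = (\a b -> "(add " ++ a ++ " " ++ b ++ ")") <$> imgA <*> imgB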
Transformers, free, and the ApT transformer
Your example problem is quite involved, so we are going to pull out a bunch of high level Haskell tricks, and make a few new ones of our own. The first two tricks we are going to pull out are transformers and free data types. Transformers are types that take types with a kind like that of Functors, Applicatives or Monads and produce new types with the same kind.
Transformers typically look like the following Double example. Double can take any Functor or Applicative or Monad and make a version of it that always holds two values instead of one:
newtype Double f a = Double {runDouble :: f (a, a)}
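For instance, a Functor instance for this transformer (a sketch, not part of the original code) just applies the function to both values:

-- Map over both halves of the pair held by the underlying functor.
instance Functor f => Functor (Double f) where
    fmap g (Double faa) = Double (fmap (\(x, y) -> (g x, g y)) faa)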
Free data types are transformers that do two things. First, given some simpler property of the underlying type, they gain new exciting properties for the transformed type. The Free Monad provides a Monad given any Functor, and the free Applicative, Ap, makes an Applicative out of any Functor. The other thing "free" types do is they "free" the implementation of the interpreter as much as possible. Here are the types for the free Applicative, Ap, the free Monad, Free, and the free monad transformer, FreeT. The free monad transformer provides a monad transformer for "free" given a Functor:
-- Free Applicative
data Ap f a where
    Pure :: a -> Ap f a
    Ap   :: f a -> Ap f (a -> b) -> Ap f b

-- Base functor of the free monad transformer
data FreeF f a b
    = Pure a
    | Free (f b)

-- Free monad transformer
newtype FreeT f m a = FreeT {runFreeT :: m (FreeF f a (FreeT f m a))}

-- The free monad is the free monad transformer applied to the Identity monad
type Free f = FreeT f Identity
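For intuition (a small sketch using liftAp from Control.Applicative.Free; exampleAp is a hypothetical name, not part of the code below), a value of the free Applicative merely records the effects and how their results are combined, leaving an interpreter free to decide how to run them:

-- Two IO effects recorded applicatively; no ordering is forced yet.
exampleAp :: Ap IO (String, String)
exampleAp = (,) <$> liftAp getLine <*> liftAp getLine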
Here's a sketch of our goal - we want to provide an Applicative interface for combining computations, which, at the bottom, allows Monadic computations. We want to "free" the interpreter as much as possible so that it can hopefully reorder computations. To do this, we will be combining both the free Applicative and the free monad transformer.
We want an Applicative interface, and the easiest one to make is the one we can get for "free", which aligns nicely with our goal of "freeing the interpreter" as much as possible. This suggests our type is going to look like
Ap f a
for some Functor f and any a. We'd like the underlying computation to be over some Monad, and Monads are functors, but we'd like to "free" the interpreter as much as possible. We'll grab the free monad transformer as the underlying functor for Ap, giving us
Ap (FreeT f m) a
for some Functor f, some Monad m, and any a. We know the Monad m is probably going to be IO, but we'll leave our code as generic as possible. We just need to provide the Functor for FreeT. All Applicatives are Functors, so Ap itself could be used for f; we'd write something like
type ApT m a = Ap (FreeT (ApT m) m) a
This gives the compiler fits, so instead we'll move the Ap inside and define
newtype ApT m a = ApT {unApT :: FreeT (Ap (ApT m)) m a}
We'll derive some instances for this and discuss its real motivation after an interlude.
Interlude
To run all of this code, you'll need the following. The Map and Control.Concurrent are only needed for sharing computations; more on that much later.
{-# LANGUAGE GADTs #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module Main where
import Control.Monad.Trans.Class
import Control.Monad.IO.Class
import Control.Monad.Trans.Reader
import Control.Applicative
import Control.Applicative.Free hiding (Pure)
import qualified Control.Applicative.Free as Ap (Ap(Pure))
import Control.Monad.Trans.Free
import qualified Data.Map as Map
import Control.Concurrent
Stuffing it
I misled you in the previous section, and pretended to discover ApT by reasoning about the problem. I actually discovered ApT by trying anything and everything to stuff Monadic computations into an Applicative and be able to control their order when it came out. For a long time, I was trying to work out how to implement mapApM (below) in order to write flipImage (my replacement for your blur). Here's the ApT Monad transformer in all its glory. It's intended to be used as the Functor for an Ap, and, by using Ap as its own Functor for FreeT, can magically stuff values into an Applicative that shouldn't seem possible.
newtype ApT m a = ApT {unApT :: FreeT (Ap (ApT m)) m a}
    deriving (Functor, Applicative, Monad, MonadIO)
It could derive even more instances from FreeT; these are just the ones we need. It can't derive MonadTrans, but we can do that ourselves:
instance MonadTrans ApT where
    lift = ApT . lift

runApT :: ApT m a -> m (FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a))
runApT = runFreeT . unApT
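With that instance, any action in the underlying monad can be lifted into ApT (a trivial sketch; sayHello is a hypothetical name):

-- Lift a plain IO action into the ApT transformer.
sayHello :: ApT IO ()
sayHello = lift (putStrLn "hello")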
The real beauty of ApT is that we can write some seemingly impossible code like
stuffM :: (Functor m, Monad m) => m (ApT m a) -> ApT m a
stuffMAp :: (Functor m, Monad m) => m (ApT m a) -> Ap (ApT m) a
The m on the outside disappears, even into Ap, which is merely Applicative.
This works because of the following cycle of functions, each of which can stuff the output from the function above it into the input of the function below it. The first function starts with an ApT m a, and the last one ends with one. (These definitions aren't part of the program.)
liftAp' :: ApT m a ->
           Ap (ApT m) a
liftAp' = liftAp

fmapReturn :: (Monad m) =>
              Ap (ApT m) a ->
              Ap (ApT m) (FreeT (Ap (ApT m)) m a)
fmapReturn = fmap return

free' :: Ap (ApT m) (FreeT (Ap (ApT m)) m a) ->
         FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a)
free' = Free

pure' :: a ->
         FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a)
pure' = Pure

return' :: (Monad m) =>
           FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a) ->
           m (FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a))
return' = return

freeT :: m (FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a)) ->
         FreeT (Ap (ApT m)) m a
freeT = FreeT

apT :: FreeT (Ap (ApT m)) m a ->
       ApT m a
apT = ApT
This lets us write
-- Get rid of an Ap by stuffing it into an ApT.
stuffAp :: (Monad m) => Ap (ApT m) a -> ApT m a
stuffAp = ApT . FreeT . return . Free . fmap return
-- Stuff ApT into Free
stuffApTFree :: (Monad m) => ApT m a -> FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a)
stuffApTFree = Free . fmap return . liftAp
-- Get rid of an m by stuffing it into an ApT
stuffM :: (Functor m, Monad m) => m (ApT m a) -> ApT m a
stuffM = ApT . FreeT . fmap stuffApTFree
-- Get rid of an m by stuffing it into an Ap
stuffMAp :: (Functor m, Monad m) => m (ApT m a) -> Ap (ApT m) a
stuffMAp = liftAp . stuffM
And some utility functions for working on a transformer stack
mapFreeT :: (Functor f, Functor m, Monad m) => (m a -> m b) -> FreeT f m a -> FreeT f m b
mapFreeT f fa = do
    a <- fa
    FreeT . fmap Pure . f . return $ a

mapApT :: (Functor m, Monad m) => (m a -> m b) -> ApT m a -> ApT m b
mapApT f = ApT . mapFreeT f . unApT

mapApM :: (Functor m, Monad m) => (m a -> m b) -> Ap (ApT m) a -> Ap (ApT m) b
mapApM f = liftAp . mapApT f . stuffAp
We'd like to start writing our example image processors, but first we need to take another diversion to address a hard requirement.
A hard requirement - input sharing
Your first example shows
--                  timeShift(*2) ----
--                /                    \
-- readImage ----                        addImages -> out
--                \                    /
--                  blur --------------
implying that the result of readImage should be shared between blur and timeShift(*2). I take this to mean that the results of readImage should only be computed once for each time.
Applicative isn't powerful enough to capture this. We'll make a new typeclass to represent computations whose output can be divided into multiple identical streams.
-- The class of things where input can be shared and divided among multiple parts
class Applicative f => Divisible f where
    (<\>) :: (f a -> f b) -> f a -> f b
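Intuitively, (<\>) behaves like a let binding: the function on the left receives the shared computation once, so an interpreter can evaluate that computation a single time no matter how often its result is used. A hypothetical usage sketch (useTwice is not part of the answer's code):

-- The argument img is bound once and its result is used twice.
useTwice :: Divisible f => f String -> f String
useTwice img = (\i -> (++) <$> i <*> i) <\> img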
We'll make a transformer that adds this capability to existing Applicatives:
-- A transformer that adds input sharing
data LetT f a where
    NoLet :: f a -> LetT f a
    Let   :: LetT f b -> (LetT f b -> LetT f a) -> LetT f a
And provide some utility functions and instances for it
liftLetT :: f a -> LetT f a
liftLetT = NoLet

mapLetT :: (f a -> f b) -> LetT f a -> LetT f b
mapLetT f = go
  where
    go (NoLet a) = NoLet (f a)
    go (Let b g) = Let b (go . g)

instance (Applicative f) => Functor (LetT f) where
    fmap f = mapLetT (fmap f)

-- I haven't checked that these obey the Applicative laws.
instance (Applicative f) => Applicative (LetT f) where
    pure = NoLet . pure
    NoLet f <*> a = mapLetT (f <*>) a
    Let c h <*> a = Let c ((<*> a) . h)

instance (Applicative f) => Divisible (LetT f) where
    (<\>) = flip Let
Image processors
With all of our transformers in place, we can start writing our image processors. At the bottom of our stack we have the ApT from an earlier section
Ap (ApT IO)
The computations need to be able to read the time from the environment, so we'll add a ReaderT for that
ReaderT Int (Ap (ApT IO))
Finally, we'd like to be able to share computations, so we'll add our LetT transformer on top, giving the entire type IP for our image processors
type Image = String
type IP = LetT (ReaderT Int (Ap (ApT IO)))
We'll read images from IO. getLine makes fun interactive examples.
readImage :: Int -> IP Image
readImage n = liftLetT $ ReaderT (\t -> liftAp . liftIO $ do
    putStrLn $ "[" ++ show n ++ "] reading image for time: " ++ show t
    --getLine
    return $ "|image [" ++ show n ++ "] for time: " ++ show t ++ "|")
We can shift the time of inputs
timeShift :: (Int -> Int) -> IP a -> IP a
timeShift f = mapLetT shift
  where
    shift (ReaderT g) = ReaderT (g . f)
Add multiple images together
addImages :: Applicative f => [f Image] -> f Image
addImages = foldl (liftA2 (++)) (pure [])
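Since Image is just a String here, addImages is plain concatenation of the results in whatever Applicative we choose; for instance (a hypothetical check, not part of the answer):

-- With the Maybe Applicative, adding images concatenates the strings.
checkAddImages :: Bool
checkAddImages = addImages [Just "a", Just "b"] == Just "ab"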
And flip images, pretending to use some library that's stuck in IO. I couldn't figure out how to blur a string...
inIO :: (IO a -> IO b) -> IP a -> IP b
inIO = mapLetT . mapReaderT . mapApM

flipImage :: IP [a] -> IP [a]
flipImage = inIO flip'
  where
    flip' ma = do
        a <- ma
        putStrLn "flipping"
        return . reverse $ a
Interpreting LetT
Our LetT for sharing results is at the top of our transformer stack. We'll need to interpret it to get at the computations underneath it. To interpret LetT we will need a way to share results in IO, which memoize provides, and an interpreter that removes the LetT transformer from the top of the stack.
To share computations we need to store them somewhere; this memoizes an IO computation in IO, making sure it happens only once even across multiple threads.
memoize :: (Ord k) => (k -> IO a) -> IO (k -> IO a)
memoize definition = do
    cache <- newMVar Map.empty
    let populateCache k map = do
            case Map.lookup k map of
                Just a  -> return (map, a)
                Nothing -> do
                    a <- definition k
                    return (Map.insert k a map, a)
    let fromCache k = do
            map <- readMVar cache
            case Map.lookup k map of
                Just a  -> return a
                Nothing -> modifyMVar cache (populateCache k)
    return fromCache
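For example, a small usage sketch (demoMemo is a hypothetical name, not part of the answer): repeated calls with the same key run the underlying IO action only once.

demoMemo :: IO ()
demoMemo = do
    cached <- memoize (\k -> do
        putStrLn ("computing " ++ show (k :: Int))
        return (k * 2))
    _ <- cached 5
    r <- cached 5  -- "computing 5" is printed only the first time
    print r        -- 10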
In order to interpret a Let, we need an evaluator for the underlying ApT IO to incorporate into the definitions for the Let bindings. Since the result of computations depends on the environment read from the ReaderT, we will incorporate dealing with the ReaderT into this step. A more sophisticated approach would use transformer classes, but transformer classes for Applicative are a topic for a different question.
compileIP :: (forall x. ApT IO x -> IO x) -> IP a -> IO (Int -> ApT IO a)
compileIP eval (NoLet (ReaderT f)) = return (stuffAp . f)
compileIP eval (Let b lf) = do
    cb <- compileIP eval b
    mb <- memoize (eval . cb)
    compileIP eval . lf . NoLet $ ReaderT (liftAp . lift . mb)
Interpreting ApT
Our interpreter uses the following State to avoid needing to peek inside ApT, FreeT, and FreeF all the time.
data State m a where
    InPure :: a -> State m a
    InAp   :: State m b -> State m (b -> State m a) -> State m a
    InM    :: m a -> State m a

instance Functor m => Functor (State m) where
    fmap f (InPure a)  = InPure (f a)
    fmap f (InAp b sa) = InAp b (fmap (fmap (fmap f)) sa)
    fmap f (InM m)     = InM (fmap f m)
Interpreting Ap is harder than it looks. The goal is to take data that's in Ap.Pure and put it in InPure, and data that's in Ap and put it in InAp. interpretAp actually needs to call itself with a larger type each time it goes into a deeper Ap; the function keeps picking up another argument. The first argument t provides a way to simplify these otherwise exploding types.
interpretAp :: (Functor m) => (a -> State m b) -> Ap m a -> State m b
interpretAp t (Ap.Pure a) = t a
interpretAp t (Ap mb ap)  = InAp sb sf
  where
    sb = InM mb
    sf = interpretAp (InPure . (t .)) ap
interpretApT gets data out of ApT, FreeT, and FreeF and into State m.
interpretApT :: (Functor m, Monad m) => ApT m a -> m (State (ApT m) a)
interpretApT = fmap inAp . runApT
  where
    inAp (Pure a)  = InPure a
    inAp (Free ap) = interpretAp (InM . ApT) ap
With these simple interpreting pieces we can make strategies for interpreting results. Each strategy is a function from the interpreter's State to a new State, with possible side effects happening on the way. The order in which a strategy chooses to execute side effects determines the order of the side effects. We'll make two example strategies.
The first strategy performs only one step on everything that's ready to be computed, and combines results when they are ready. This is probably the strategy that you want.
stepFB :: (Functor m, Monad m) => State (ApT m) a -> m (State (ApT m) a)
stepFB (InM ma)   = interpretApT ma
stepFB (InPure a) = return (InPure a)
stepFB (InAp b f) = do
    sf <- stepFB f
    sb <- stepFB b
    case (sf, sb) of
        (InPure f, InPure b) -> return (f b)
        otherwise            -> return (InAp sb sf)
This other strategy performs all the calculations as soon as it knows about them. It performs them all in a single pass.
allFB :: (Functor m, Monad m) => State (ApT m) a -> m (State (ApT m) a)
allFB (InM ma)   = interpretApT ma
allFB (InPure a) = return (InPure a)
allFB (InAp b f) = do
    sf <- allFB f
    sb <- allFB b
    case (sf, sb) of
        (InPure f, InPure b) -> return (f b)
        otherwise            -> allFB (InAp sb sf)
Many, many other strategies are possible.
We can evaluate a strategy by running it until it produces a single result.
untilPure :: (Monad m) => ((State f a) -> m (State f a)) -> State f a -> m a
untilPure s = go
  where
    go state =
        case state of
            (InPure a) -> return a
            otherwise  -> s state >>= go
Executing the interpreter
To execute the interpreter, we need some example data. Here are a few interesting examples.
example1 = (\i -> addImages [timeShift (*2) i, flipImage i]) <\> readImage 1
example1' = (\i -> addImages [timeShift (*2) i, flipImage i, flipImage . timeShift (*2) $ i]) <\> readImage 1
example1'' = (\i -> readImage 2) <\> readImage 1
example2 = addImages [timeShift (*2) . flipImage $ readImage 1, flipImage $ readImage 2]
The LetT interpreter needs to know what evaluator to use for bound values, so we'll define our evaluator only once. A single interpretApT kicks off the evaluation by finding the initial State of the interpreter.
evaluator :: ApT IO x -> IO x
evaluator = (>>= untilPure stepFB) . interpretApT
We'll compile example2, which is essentially your example, and run it for time 5.
main = do
    f <- compileIP evaluator example2
    a <- evaluator . f $ 5
    print a
Which produces almost the desired result, with all reads happening before any flips.
[2] reading image for time: 5
[1] reading image for time: 10
flipping
flipping
"|01 :emit rof ]1[ egami||5 :emit rof ]2[ egami|"
A Monad can not reorder the component steps that make up img1 and img2 in
addImage :: (Monad m) => m [i] -> m [i] -> m [i]
addImage img1 img2 = do
    i1 <- img1
    i2 <- img2
    return $ i1 ++ i2
if there exists any m [i] whose result depends on a side effect. Any MonadIO m has an m [i] whose result depends on a side effect, therefore you cannot reorder the component steps of img1 and img2.
The above desugars to
addImage :: (Monad m) => m [i] -> m [i] -> m [i]
addImage img1 img2 =
    img1 >>=
        (\i1 ->
            img2 >>=
                (\i2 ->
                    return (i1 ++ i2)
                )
        )
Let's focus on the first >>= (remembering that (>>=) :: forall a b. m a -> (a -> m b) -> m b). Specialized for our type, this is (>>=) :: m [i] -> ([i] -> m [i]) -> m [i]. If we are going to implement it, we'd have to write something like
(img1 :: m [i]) >>= (f :: [i] -> m [i]) = ...
In order to do anything with f, we need to pass it an [i]. The only correct [i] we have is stuck inside img1 :: m [i]. We need the result of img1 to do anything with f. There are now two possibilities: we either can or can not determine the result of img1 without executing its side effects. We will examine both cases, starting with when we can not.
can not
When we can not determine the result of img1 without executing its side effects, we have only one choice - we must execute img1 and all of its side effects. We now have an [i], but all of img1's side effects have already been executed. There's no way we can execute any of the side effects from img2 before some of the side effects of img1, because the side effects of img1 have already happened.
can
If we can determine the result of img1 without executing its side effects, we're in luck. We find the result of img1 and pass that to f, getting a new m [i] holding the result we want. We can now examine the side effects of both img1 and the new m [i] and reorder them (although there's a huge caveat here about the associative law for >>=).
the problem at hand
As this applies to our case, for any MonadIO there exists the following, whose result can not be determined without executing its side effects, placing us firmly in the can not case where we can not reorder side effects.
counterExample :: (MonadIO m) => m String
counterExample = liftIO getLine
There are also many other counterexamples, such as anything like readImage1 or readImage2 that must actually read the image from IO.