I'm implementing a combinatorial optimization algorithm in Haskell:
Given an initial candidate solution, repeat until stopping criteria are met:
1. Determine possible moves
2. Evaluate possible moves
3. Choose a move
4. Make move, record new candidate solution, update search state
I could write functions for steps 1-4 and chain them together inside a recursive function to handle looping and passing state from one iteration to the next, but I have a vague idea that monads apply.
What's the best way to express this kind of procedure in Haskell?
The best way to express this sort of iterative procedure in Haskell is as an infinite list of each successive result. Piecing together your four steps yields a notion of a function from a solution to a different (better) solution; all you need to do is apply this infinitely many times. The user of your function can then use any list function to get the answer: solve s0 !! numIterations, or find stoppingCondition $ solve s0, or whatever you want.
In order to get here, let's write out the types for each of these functions.
moves :: Solution -> [Move]
Given a possible solution, figure out the possible changes you can make.
value :: Solution -> Move -> Double
Given a solution and a move, evaluate that move in the context of the solution, yielding its value as a real number.
choose :: Solution -> [Move] -> Move
Given a solution and a list of moves, pick the best one.
apply :: Solution -> Move -> Solution
Given a move, apply it to an existing solution to get a new one.
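For concreteness, here's a minimal toy instantiation of these signatures; the Solution and Move types and the "walk towards 42" objective are purely illustrative inventions of mine, not part of your problem:
-- A toy instantiation, purely for illustration: hill-climb an Int towards 42.
type Solution = Int
type Move     = Int
moves :: Solution -> [Move]
moves _ = [-1, 1]                 -- we can always step down or up by one
value :: Solution -> Move -> Double
value s m = negate . abs $ fromIntegral (s + m) - 42   -- closer to 42 is better
apply :: Solution -> Move -> Solution
apply s m = s + m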
You want to write a function with a type something like solve :: Solution -> (Solution -> Bool) -> Solution, which takes an initial solution and a stopping condition and runs your algorithm. Instead, let's make this an infinite list; this means that you'll just remove the predicate and have Solution -> [Solution].
import Data.Ord
import Data.List
-- moves, value, and apply are domain-specific
choose :: Solution -> [Move] -> Move
choose s ms = maximumBy (comparing $ value s) ms
solve :: Solution -> [Solution]
solve = iterate $ \s -> apply s . choose s $ moves s
Here, the key is iterate :: (a -> a) -> a -> [a], which repeatedly applies a function to a value and gives you all the successive results: exactly the description of your algorithm.
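As a quick sanity check against that hypothetical toy instantiation (starting from 0, which is just my choice of s0):
import Data.List (find)
-- With the toy Solution = Int hill-climber sketched earlier:
candidates :: [Solution]
candidates = take 5 (solve 0)       -- [0, 1, 2, 3, 4]
firstHit :: Maybe Solution
firstHit = find (== 42) (solve 0)   -- Just 42, reached after 42 steps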
However, the way I'd really write this would be the following:
import Data.Ord
import Data.List
solve :: Ord o => (s -> [m]) -> (s -> m -> o) -> (s -> m -> s) -> s -> [s]
solve moves value apply = iterate step
  where step s   = apply s . choose s $ moves s
        choose s = maximumBy (comparing $ value s)
The advantage of this is that you can reuse this same generic structure for any problem domain. All you need to do is provide the moves, value, and apply functions! And depending on my mood, I might rewrite that as this:
import Control.Applicative
import Data.Ord
import Data.List
solve :: Ord o => (s -> [m]) -> (s -> m -> o) -> (s -> m -> s) -> s -> [s]
solve moves value apply = iterate step
  where step   = (.) <$> apply <*> choose <*> moves
        choose = maximumBy . comparing . value
Here, we use applicative notation to say that we're effectively just doing (.) apply choose moves (which is just apply . choose $ moves) in a context where each of those functions is implicitly passed a parameter s (the reader applicative). If we really wanted to tersify things, we could write
import Control.Applicative
import Data.Ord
import Data.List
solve :: Ord o => (s -> [m]) -> (s -> m -> o) -> (s -> m -> s) -> s -> [s]
solve moves value apply =
  iterate $ (.) <$> apply <*> maximumBy . comparing . value <*> moves
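For what it's worth, all of the generic versions are driven the same way; reusing the hypothetical toy functions from before, a call might look like:
-- Usage sketch, assuming the toy moves, value, and apply from the earlier sketch:
demo :: [Solution]
demo = take 5 $ solve moves value apply 0   -- [0, 1, 2, 3, 4]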
Any of these snippets will do exactly what you need. (Proviso: there are no effects/monads in any of your functions, so randomness is out. You can make this monadic easily enough, though.)
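For instance, here's a minimal sketch of my own of what the driver could look like once the step itself is monadic, so that randomness or other effects can live in m:
-- A hedged sketch of a monadic driver: stepM may use effects (randomness,
-- logging, ...); we run it a fixed number of times and collect every
-- intermediate candidate, since a lazy infinite list no longer fits as naturally.
solveM :: Monad m => (s -> m s) -> Int -> s -> m [s]
solveM stepM n s0
  | n <= 0    = return [s0]
  | otherwise = do s1 <- stepM s0
                   (s0 :) <$> solveM stepM (n - 1) s1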
Just for kicks, though, let's think about the State monad. This represents a computation with some sort of environment, so that State s a is isomorphic to s -> (a,s): something which can see the state and potentially update it. Here, all the leading Solution -> arguments in your function signatures would disappear, as would the trailing -> Solution results. That would leave you with
moves :: State Solution [Move]
value :: Move -> State Solution Double
choose :: [Move] -> State Solution Move
apply :: Move -> State Solution ()
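For concreteness again, here's how the hypothetical toy hill-climber from earlier might look recast in this style (get reads the current solution, modify rewrites it):
import Control.Monad.State
-- The toy Solution = Int, Move = Int instantiation, in State form.
moves :: State Solution [Move]
moves = return [-1, 1]
value :: Move -> State Solution Double
value m = do
  s <- get
  return . negate . abs $ fromIntegral (s + m) - 42
apply :: Move -> State Solution ()
apply m = modify (+ m)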
This means that you would have some monadic action step:
import Control.Applicative
import Control.Monad.State
import Data.Ord
import Data.List
choose :: [Move] -> State Solution Move
choose ms = let val m = do v <- value m
                           return (m,v)
            in fst . maximumBy (comparing snd) <$> mapM val ms
step :: State Solution ()
step = apply =<< choose =<< moves
You could make this more point-free, or make it polymorphic just as above, but I won't do that here. The point is that once you have step, you can generate answers with execState (replicateM_ numIterations step) s0, or, given a whileM function (sketched below), with execState (whileM (stoppingCondition :: State Solution Bool) step) s0. Again, the user can decide how to stop it. Your moves and value functions would probably query the state with get :: State s s; apply would probably use modify :: (s -> s) -> State s () to tweak the state without needing to pull it back out. You can see the similarity with the structure from above in these types, and in fact you can see that structure in the definition of step as well. Each one says "string together apply, choose/value, and moves", which is the definition of your algorithm.
The take-home message from both of these is that you want to avoid explicit loops/recursion, as you so rightly realized. If you think about this algorithm imperatively, then the State monad seems like a natural structure, as it hides exactly those imperative features you were thinking of. However, it has downsides: for instance, everything has become monadic, and, worst of all, functions other than apply are able to change the saved solution. If you instead imagine this algorithm as producing a new result each time, you get the notion of step :: Solution -> Solution, and from there you can use iterate to get a well-behaved infinite list.
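(As a side note of my own, there's a small bridge between the two views: a pure step can always be run inside State when you want it there.)
import Control.Monad.State
-- step here is the pure Solution -> Solution from the iterate-based version.
stepS :: State Solution ()
stepS = modify step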
Here's a pseudocodey sketch of how you might use the State monad to thread the search state through the computation:
import Control.Monad.State
type SearchState = ...
type Move = ...
type Fitness = ...
determineMoves :: State SearchState [Move]
determineMoves = do
  -- since determineMoves is in the State monad, we can grab the state here
  st <- get
  ...
evaluateMoves :: [Move] -> [(Move, Fitness)]
evaluateMoves = ...
chooseMove :: [(Move, Fitness)] -> Move
chooseMove = ...
-- makeMove is not itself monadic, but operates on the SearchState
-- type we're threading through with the State monad
makeMove :: Move -> SearchState -> SearchState
makeMove m st = ...
loop :: State SearchState ()
loop = do
  moves <- determineMoves
  let candidates = evaluateMoves moves
      move       = chooseMove candidates
  -- we pass a function (SearchState -> SearchState) to modify in
  -- order to update the threaded SearchState
  modify (makeMove move)
  loop
Notice that even though your main computation is in the state monad, not every component has to be in the monad. Here, evaluateMoves and chooseMove are non-monadic, and I've used let to show you how to explicitly integrate them into a do block. Once you get comfortable with this style, though, you'll probably want to start using <$> (aka fmap) and function composition to make things more succinct:
loop :: State SearchState ()
loop = do
  move <- (chooseMove . evaluateMoves) <$> determineMoves
  modify (makeMove move)
  loop
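As written, loop recurses forever; in practice you'd weave a stopping test into it. Here's a minimal sketch of one way to do that (the done predicate and the initialState name are hypothetical):
-- A hedged sketch of a terminating variant: 'done' is some predicate on the
-- search state, and we stop recursing once it holds.
loopUntilDone :: (SearchState -> Bool) -> State SearchState ()
loopUntilDone done = do
  st <- get
  if done st
    then return ()
    else do
      move <- (chooseMove . evaluateMoves) <$> determineMoves
      modify (makeMove move)
      loopUntilDone done
-- run it with, for example: execState (loopUntilDone done) initialState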