I am thinking about exploiting parallelism for a problem I am trying to solve. The problem is roughly this: given an input (a sequence of points), find the best output (the biggest triangle composed of these points, the longest line, etc.). There are 3 different 'shapes' to be found in the sequence of points, but I am interested only in the one with the best score (usually some form of 'length' times a coefficient). Let's call the shapes S1, S2 and S3.
I have 2 different algorithms for solving S1: 'S1a' is in O(n^2); 'S1b' mostly behaves better, but its worst case is about O(n^4).
First question: is there a simple way to run S1a and S1b in parallel, use whichever finishes first, and stop the other? As far as I can tell from the documentation, this could be programmed with forkIO, killing the threads once a result is obtained - I am just asking whether there is something simpler.
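Roughly, I imagine something like the following sketch (raceTwo is just a name I made up; s1a and s1b stand for my two algorithms):

    import Control.Concurrent (forkIO, killThread)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Exception (evaluate)

    -- Race two computations of the same result: whichever finishes first
    -- wins, and both threads are killed afterwards (killing an already
    -- finished thread is a no-op).
    raceTwo :: IO a -> IO a -> IO a
    raceTwo ioA ioB = do
      done <- newEmptyMVar
      tA <- forkIO (ioA >>= putMVar done)
      tB <- forkIO (ioB >>= putMVar done)
      r  <- takeMVar done
      mapM_ killThread [tA, tB]
      return r

    -- usage: best <- raceTwo (evaluate (s1a input)) (evaluate (s1b input))
    -- (evaluate only forces to WHNF; a full evaluation would need
    -- something like Control.DeepSeq.force)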
Second question - much tougher: I am calling the optimization function this way:
    optimize valueOfSx input
valueOfSx is specific to each shape and returns a 'score' (or a guess of the score) for a possible solution. optimize calls this function to find the best solution. What I am interested in is:
    s1 = optimize valueOfS1 input
    s2 = optimize valueOfS2 input
    s3 = optimize valueOfS3 input
    best = maximum [s1, s2, s3]
However, if I know the result of S1, I can discard all solutions that are smaller, thus making s2 and s3 converge faster if no better solution exists (or at least throw away the worst solutions and thus be more space efficient). What I am doing now is:
    zeroOn threshold f = decide . f
      where decide x = if x < threshold then 0 else x
    s1 = optimize valueOfS1 input
    s2 = optimize (zeroOn s1 valueOfS2) input
    s3 = optimize (zeroOn (max s1 s2) valueOfS3) input
The question is: can I run e.g. S2 and S3 in parallel in such a way that whichever finishes first updates the 'threshold' parameter of the score function running in the other thread? Something along the lines of:
    threshold = 0
    firstSolution = firstOf (optimize (zeroOn threshold valueOfS2), optimize (zeroOn threshold valueOfS3))
    update threshold from firstSolution
    wait for second solution
For the first question, check out Conal Elliott's unamb: http://hackage.haskell.org/package/unamb
Ultimately, any solution will wind up using forkIO under the hood, because you want multiple computations running in parallel; even Conal's unamb works this way.
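For example (a sketch only; raceAlgorithms is just an illustrative name, and s1a / s1b stand for your two S1 algorithms):

    import Data.Unamb (unamb)

    -- unamb requires that its two arguments agree whenever both terminate,
    -- which holds here because both algorithms compute the same answer;
    -- it returns whichever one evaluates first.
    raceAlgorithms :: (input -> score) -> (input -> score) -> input -> score
    raceAlgorithms f g x = f x `unamb` g x

    -- e.g.  bestS1 = raceAlgorithms s1a s1b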
For the second question, you probably want a more complicated monad that runs in batches for a while, occasionally checking an MVar for a monotonically posted improving value; but the simplest way to interleave the two computations (within one thread) is to just write a Partiality monad.
With an appropriate MonadFix instance to deal with recursion or liberally sprinkled 'yield' calls, you can perform both of your operations in the Partial monad and race them to obtain a deterministic result.
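A minimal sketch of what such a Partiality monad might look like (Partial, yield and race are illustrative names, not an existing library):

    -- A computation either finishes with a value or exposes one more
    -- step of pending work.
    data Partial a = Done a | Later (Partial a)

    instance Functor Partial where
      fmap f (Done a)  = Done (f a)
      fmap f (Later p) = Later (fmap f p)

    instance Applicative Partial where
      pure = Done
      Done f  <*> p = fmap f p
      Later f <*> p = Later (f <*> p)

    instance Monad Partial where
      Done a  >>= k = k a
      Later p >>= k = Later (p >>= k)

    -- Mark a point where control may pass to the other computation.
    yield :: Partial ()
    yield = Later (Done ())

    -- Deterministically interleave two computations, returning whichever
    -- reaches Done in fewer steps (ties go to the left argument).
    race :: Partial a -> Partial a -> Partial a
    race (Done a)  _         = Done a
    race _         (Done b)  = Done b
    race (Later p) (Later q) = Later (race p q)

With yield sprinkled at each candidate solution, race returns the same result no matter how the work is divided, which is what makes it deterministic.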
But in practice, if you want to get the full benefit of parallelism you'll want to update and check some kind of 'improving' MVar periodically.
Something like (off the cuff, sorry, no compiler handy!):
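A minimal sketch of the idea (Improving, improve, currentBest, runShape and type Score = Double are illustrative assumptions, not an existing API):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar

    type Score = Double

    -- A shared, monotonically improving value: updates can only raise it.
    type Improving a = MVar a

    newImproving :: a -> IO (Improving a)
    newImproving = newMVar

    -- Post a candidate; the stored value only ever goes up.
    improve :: Ord a => Improving a -> a -> IO ()
    improve v x = modifyMVar_ v (\old -> return (max old x))

    currentBest :: Improving a -> IO a
    currentBest = readMVar

    -- Each optimizer periodically consults the shared threshold to prune
    -- and posts its own result when done; it is handed an action that
    -- reads the current threshold.
    runShape :: Improving Score -> (IO Score -> IO Score) -> IO Score
    runShape best optimizeIO = do
      s <- optimizeIO (currentBest best)
      improve best s
      return s

    -- usage sketch, with hypothetical optimizeS2 / optimizeS3:
    --   best <- newImproving 0
    --   done <- newEmptyMVar
    --   _ <- forkIO (runShape best optimizeS2 >>= putMVar done)
    --   _ <- forkIO (runShape best optimizeS3 >>= putMVar done)
    --   _ <- takeMVar done
    --   _ <- takeMVar done
    --   answer <- currentBest best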
Of course, that could be rewritten to support any idempotent commutative monoid, not just max.
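For instance, parameterizing the update by the combining operation (improveWith is again just an illustrative name):

    import Control.Concurrent.MVar (MVar, modifyMVar_)

    -- Works for any idempotent, commutative operation, e.g. max, min,
    -- set union, ...; the MVar holds the best value posted so far.
    improveWith :: (a -> a -> a) -> MVar a -> a -> IO ()
    improveWith op v x = modifyMVar_ v (\old -> return (old `op` x))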