If I use Haskell as a library called from my C program, what is the performance impact of making calls into it? For instance, if I have a problem data set of, say, 20 kB, and I want to run something like:
// Go through my 1000 actors and have them make a decision based on
// HaskellCode() function, which is compiled Haskell I'm accessing through
// the FFI. As an argument, send in the SAME 20kB of data to EACH of these
// function calls, and some actor specific data
// The 20kB constant data defines the environment and the actor specific
// data could be their personality or state
for(i = 0; i < 1000; i++)
actor[i].decision = HaskellCode(20kB of data here, actor[i].personality);
What's going to happen here - is it going to be possible for me to keep that 20kB of data as a global immutable reference somewhere that is accessed by the Haskell code, or must I create a copy of that data each time through?
The concern is that this data could be larger, much larger - I also hope to write algorithms that act on much larger sets of data, using the same pattern of immutable data being used by several calls of the Haskell code.
Also, I'd like to parallelize this, like GCD's dispatch_apply() or C#'s Parallel.ForEach(..). My rationale for parallelizing outside of Haskell is that I know I will always be operating on many separate function calls, i.e. 1000 actors, so using fine-grained parallelization inside the Haskell function is no better than managing it at the C level. Is running FFI Haskell instances 'thread safe', and how do I achieve this -- do I need to initialize a Haskell instance every time I kick off a parallel run? (Seems slow if I must...) How do I achieve this with good performance?
Assuming you start the Haskell runtime up only once (like this), on my machine, making a function call from C into Haskell, passing an Int back and forth across the boundary, takes about 80,000 cycles (31,000 ns on my Core 2) -- determined experimentally via the rdtsc instruction.
Yes, that is certainly possible. If the data really is immutable, then you get the same result whether you:

- thread the data in and out across the language boundary on each call, or
- cache it on the Haskell side, e.g. in an IORef.

Which strategy is best? It depends on the data type. The most idiomatic way would be to pass a reference to the C data back and forth, treating it as a ByteString or Vector on the Haskell side.

As for the parallelization, I'd strongly recommend inverting the control and doing the parallelization from the Haskell runtime -- it'll be much more robust, as that path has been heavily tested.
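The reference-passing option can be sketched roughly like this -- a minimal example, assuming the C side hands Haskell a pointer and a length (here a borrowed buffer stands in for the C-owned one), using unsafePackCStringLen from Data.ByteString.Unsafe to wrap it without copying:

```haskell
import qualified Data.ByteString as B
import qualified Data.ByteString.Unsafe as BU
import Foreign.C.Types (CChar)
import Foreign.Ptr (Ptr)

-- Wrap a (pointer, length) pair coming from C as a ByteString
-- WITHOUT copying the 20 kB. The C side must keep the buffer alive
-- (and unmodified) for as long as the ByteString is in use.
wrapEnv :: Ptr CChar -> Int -> IO B.ByteString
wrapEnv p len = BU.unsafePackCStringLen (p, len)

-- A pure "decision" over the environment, purely for illustration:
-- here it just sums the bytes.
scoreEnv :: B.ByteString -> Int
scoreEnv = B.foldl' (\acc w -> acc + fromIntegral w) 0

main :: IO ()
main = do
  -- Simulate a C-owned buffer by borrowing the storage of an
  -- existing ByteString, then wrap it with no copy.
  let original = B.pack [1 .. 10]
  s <- BU.unsafeUseAsCStringLen original $ \(p, len) ->
         scoreEnv <$> wrapEnv p len
  print s
```

Note the lifetime caveat: because nothing is copied, the C program owns the memory, and freeing or mutating it while Haskell still holds the ByteString is undefined behaviour.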
Regarding thread safety, it is apparently safe to make parallel calls to foreign exported functions running in the same runtime -- though I'm fairly sure no one has tried this in order to gain parallelism. Calls into the runtime acquire a 'capability', which is essentially a lock, so multiple simultaneous calls may block, reducing your chances for parallelism. In the multicore case (e.g. -N4 or so) your results may differ (multiple capabilities are available); however, this is almost certainly a bad way to improve performance.

Again, making many parallel function calls from Haskell via forkIO is a better-documented, better-tested path, with less overhead than doing the work on the C side, and probably less code in the end. Just make a single call into your Haskell function, which in turn will do the parallelism via many Haskell threads. Easy!
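That "one call in, forkIO inside" shape might look something like this -- a sketch with a made-up pure decide function (the names here are assumptions, not the asker's actual code); compile with -threaded and run with +RTS -N to actually use multiple cores:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM)

-- Hypothetical pure per-actor decision: combines the shared
-- environment with one actor's personality.
decide :: Int -> Int -> Int
decide env personality = env * 2 + personality

-- One entry point handles all the actors; each decision is computed
-- on its own (cheap, green) Haskell thread, and results are
-- collected in order via MVars.
decideAll :: Int -> [Int] -> IO [Int]
decideAll env personalities = do
  vars <- forM personalities $ \p -> do
    v <- newEmptyMVar
    _ <- forkIO (putMVar v $! decide env p)
    return v
  mapM takeMVar vars

main :: IO ()
main = do
  results <- decideAll 10 [1 .. 5]
  print results
```

With this inversion, the C program makes exactly one FFI call per batch, and the capability/lock contention described above never enters the picture.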
Haskell can peek into that 20k blob if you pass the pointer.
Disclaimer: I have no experience with the FFI.
But it seems to me that if you want to reuse the 20 kB of data, so that you're not passing it on every call, then you could simply have a function that takes a list of "personalities" and returns a list of "decisions".
So if you have a function that takes the environment and a single personality and returns a decision, why not make a helper function that takes the environment once, plus the whole list of personalities, and returns the list of decisions -- and invoke that? Done this way, though, if you wanted to parallelize, you would need to do it Haskell-side, with parallel lists and a parallel map.
I defer to the experts to explain if/how C arrays can be marshaled into Haskell lists (or similar structure) easily.
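For illustration, here is one way the batching and the C-array marshaling can look -- decide and decideMany are hypothetical names, and peekArray/withArray come from Foreign.Marshal.Array (peekArray reads n elements starting at a Ptr into a Haskell list):

```haskell
import Foreign.C.Types (CInt)
import Foreign.Marshal.Array (peekArray, withArray)

-- Hypothetical single-actor function: environment + personality
-- -> decision.
decide :: Int -> Int -> Int
decide env personality = env + personality

-- Helper that takes the environment ONCE and a whole list of
-- personalities, so the 20 kB crosses the FFI a single time per
-- batch rather than once per actor.
decideMany :: Int -> [Int] -> [Int]
decideMany env = map (decide env)

main :: IO ()
main = do
  -- Marshal a C array into a Haskell list: here withArray stands
  -- in for a pointer handed over from the C side.
  ps <- withArray ([1, 2, 3] :: [CInt]) (peekArray 3)
  print (decideMany 100 (map fromIntegral ps))
```

For large inputs you would likely skip the list and keep the data in a Vector or ByteString as suggested above, but peekArray is the simplest answer to the "C array into a Haskell list" question.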
I use a mix of C and Haskell threads for one of my applications and haven't noticed much of a performance hit switching between the two. So I crafted a simple benchmark... which came out quite a bit faster/cheaper than Don's figure. It measures 10 million iterations on a 2.66 GHz i7:
Compiled with GHC 7.0.3/x86_64 and gcc-4.2.1 on OSX 10.6
Haskell:
And an OSX C++ app to drive it, should be simple to adjust to Windows or Linux:
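Purely as a hypothetical stand-in (bump is an assumed name, not the answerer's code), the Haskell side of such a call-overhead benchmark can be as small as this, with the foreign export line shown in a comment:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CInt)

-- A trivial function to call across the boundary; since the
-- benchmark measures call overhead, the body hardly matters.
bump :: CInt -> CInt
bump n = n + 1

-- In the real (compiled) module you would export it to C with:
--   foreign export ccall bump :: CInt -> CInt
-- The C/C++ driver then calls hs_init(), invokes bump() in a loop
-- wrapped in its timer (e.g. mach_absolute_time on OS X), and
-- finishes with hs_exit().

main :: IO ()
main = print (bump 41)
```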