Does anyone know of a way to get the Parallel.ForEach loop to use chunk partitioning versus what I believe is range partitioning by default? It seems simple when working with arrays, because you can just create a partitioner and set load balancing to true.
Since the number of elements in an IEnumerable isn't known until runtime, I can't seem to figure out a good way to get chunk partitioning to work.
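To illustrate, this is the array case I'm referring to (just a sketch using System.Collections.Concurrent and System.Threading.Tasks; sourceArray is a placeholder for my real data):

var loadBalancingPartitioner = Partitioner.Create(sourceArray, loadBalance: true);
Parallel.ForEach(loadBalancingPartitioner, item =>
{
    // Do work on item
});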
Any help would be appreciated.
Thanks!
The tasks I'm trying to perform on each object take significantly different amounts of time to complete. At the end I'm usually waiting hours for the last thread to finish its work. What I'm trying to achieve is to have the parallel loop request chunks along the way instead of pre-allocating items to each thread.
If your IEnumerable was really something that had an indexer (i.e. you could do obj[1] to get an item out), you could do the following:
var rangePartitioner = Partitioner.Create(0, source.Length);
Parallel.ForEach(rangePartitioner, (range, loopState) =>
{
    // Loop over each range element without a delegate invocation.
    for (int i = range.Item1; i < range.Item2; i++)
    {
        var item = source[i];
        // Do work on item
    }
});
However, if it can't do that, you must write a custom partitioner by creating a new class derived from System.Collections.Concurrent.Partitioner<TSource>. That subject is too broad to cover fully in a SO answer, but you can take a look at this guide on MSDN to get you started; a rough sketch follows below.
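To give a rough idea only (this is not the MSDN sample, just a sketch; the ChunkPartitioner name, the chunkSize parameter, and the lock-based handoff are my own placeholders), a partitioner that hands out fixed-size chunks on demand could look something like this:

using System.Collections;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public class ChunkPartitioner<T> : Partitioner<T>
{
    private readonly IEnumerable<T> _source;
    private readonly int _chunkSize;

    public ChunkPartitioner(IEnumerable<T> source, int chunkSize)
    {
        _source = source;
        _chunkSize = chunkSize;
    }

    // Parallel.ForEach needs dynamic partitions so threads can request more work as they finish.
    public override bool SupportsDynamicPartitions
    {
        get { return true; }
    }

    public override IList<IEnumerator<T>> GetPartitions(int partitionCount)
    {
        var dynamicPartitions = GetDynamicPartitions();
        return Enumerable.Range(0, partitionCount)
                         .Select(_ => dynamicPartitions.GetEnumerator())
                         .ToList();
    }

    public override IEnumerable<T> GetDynamicPartitions()
    {
        return new ChunkEnumerable(_source.GetEnumerator(), _chunkSize);
    }

    private class ChunkEnumerable : IEnumerable<T>
    {
        private readonly IEnumerator<T> _sharedEnumerator;
        private readonly int _chunkSize;

        public ChunkEnumerable(IEnumerator<T> sharedEnumerator, int chunkSize)
        {
            _sharedEnumerator = sharedEnumerator;
            _chunkSize = chunkSize;
        }

        public IEnumerator<T> GetEnumerator()
        {
            while (true)
            {
                // Grab the next chunk from the shared source under a lock...
                var chunk = new List<T>(_chunkSize);
                lock (_sharedEnumerator)
                {
                    while (chunk.Count < _chunkSize && _sharedEnumerator.MoveNext())
                    {
                        chunk.Add(_sharedEnumerator.Current);
                    }
                }

                if (chunk.Count == 0)
                {
                    yield break; // Source is exhausted.
                }

                // ...then yield the chunk's items without holding the lock.
                foreach (var item in chunk)
                {
                    yield return item;
                }
            }
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }
}

You would then use it like Parallel.ForEach(new ChunkPartitioner<MyItem>(source, 16), item => { /* do work */ });, where MyItem and the chunk size of 16 are just examples.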
UPDATE: As of .NET 4.5 they added a Partitioner.Create
overload that does not buffer data; it has the same effect as writing a custom partitioner with a maximum chunk size of 1. With this you won't get a single thread stuck with a bunch of queued-up work because it got unlucky and received several slow items in a row.
var partitioner = Partitioner.Create(source, EnumerablePartitionerOptions.NoBuffering);
Parallel.ForEach(partitioner, item =>
{
    // Do work on item
});
The MSDN Samples for Parallel Programming with the .NET Framework contain an implementation of a ChunkPartitioner. It's contained in the ParallelExtensionsExtras project.
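If you go that route, I believe the sample exposes static ChunkPartitioner.Create(...) factory methods (check the project for the exact overloads; the chunk size of 8 below is arbitrary), so usage would be roughly:

var chunkPartitioner = ChunkPartitioner.Create(source, 8); // assuming a (source, chunkSize) overload exists
Parallel.ForEach(chunkPartitioner, item =>
{
    // Do work on item
});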