OpenMP/__gnu_parallel for an unordered_map

Published 2019-02-27 10:42

Question:

At some point in my code I have to perform operations on all elements of an unordered_map. To accelerate this I want to use OpenMP, but the naive approach does not work:

std::unordered_map<size_t, double> hashTable;

#pragma omp for
for(auto it = hashTable.begin();
    it != hashTable.end();
    it++){
  //do something
}

The reason for this is that the iterator of an unordered_map is not a random access iterator. As an alternative I have tried the __gnu_parallel version of for_each. But the following code

#include <parallel/algorithm>
#include <omp.h>

__gnu_parallel::for_each(hashTable.begin(), hashTable.end(),
                         [](std::pair<const size_t, double> & item)
                         {
                           //do something with item.second
                         });

compiled with (gcc 4.8.2)

 g++ -fopenmp -march=native -std=c++11

does not run in parallel. Replacing the unordered_map with a vector and using the same __gnu_parallel call does run in parallel.

Why does it not run in parallel for the unordered_map? Are there workarounds?

Below is some simple code that reproduces my problem.

#include <unordered_map>
#include <vector>
#include <parallel/algorithm>
#include <omp.h>

int main(){

  //unordered_map
  std::unordered_map<size_t, double> hashTable;
  double val = 1.;
  for(size_t i = 0; i < 100000000; i++){
    hashTable.emplace(i, val);
    val += 1.;
  }
  __gnu_parallel::for_each(hashTable.begin(), hashTable.end(),
                           [](std::pair<const size_t, double> & item)
                           {
                             item.second *= 2.;
                           });

  //vector
  std::vector<double> simpleVector;
  val = 1.;
  for(size_t i = 0; i < 100000000; i++){
    simpleVector.push_back(val);
    val += 1.;
  }
  __gnu_parallel::for_each(simpleVector.begin(), simpleVector.end(),
                           [](double & item)
                           {
                             item *= 2.;
                           });

}

I am looking forward to your answers.

Answer 1:

You could split the loop over ranges of bucket indices, then use the bucket-local iterators to handle the elements within each bucket. unordered_map provides .bucket_count() and the bucket-specific iterator-yielding begin(bucket_number), end(bucket_number) that allow this. Assuming you haven't changed max_load_factor() from its default of 1.0 and have a reasonable hash function, you'll average one element per bucket and shouldn't waste too much time on empty buckets.
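
A minimal sketch of that bucket-based idea, assuming the goal is the doubling operation from the question (the table size here is purely illustrative):

#include <unordered_map>
#include <omp.h>

int main(){
  std::unordered_map<size_t, double> hashTable;
  double val = 1.;
  for(size_t i = 0; i < 1000000; i++){
    hashTable.emplace(i, val);
    val += 1.;
  }

  // Bucket indices are plain integers, so a regular OpenMP worksharing
  // loop over them works even though the map's iterators are not
  // random access.
  const size_t nBuckets = hashTable.bucket_count();
  #pragma omp parallel for
  for(size_t b = 0; b < nBuckets; b++){
    // Local iterators visit only the elements hashed into bucket b.
    for(auto it = hashTable.begin(b); it != hashTable.end(b); ++it){
      it->second *= 2.;
    }
  }
}

No two threads touch the same bucket, so modifying the mapped values is race-free; just don't insert or erase inside the loop, since that could trigger a rehash and invalidate the iterators.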



Answer 2:

The canonical approach for containers that do not support random access iterators is to use explicit OpenMP tasks:

std::unordered_map<size_t, double> hashTable;

#pragma omp parallel
{
   #pragma omp single
   {
      for(auto it = hashTable.begin(); it != hashTable.end(); it++) {
         // 'it' is firstprivate in the task by default, so each task
         // works on its own copy of the iterator
         #pragma omp task
         {
            //do something
         }
      }
   }
}

This creates a separate task for each iteration, which brings some overhead and is therefore only worthwhile when //do something actually means //do quite a bit of work.
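
For completeness, a self-contained version of this task pattern applied to the doubling operation from the question (sizes are illustrative, and doubling a double is of course far too cheap to amortize the per-task overhead, so treat it as a structural sketch only):

#include <unordered_map>
#include <omp.h>

int main(){
  std::unordered_map<size_t, double> hashTable;
  double val = 1.;
  for(size_t i = 0; i < 1000000; i++){
    hashTable.emplace(i, val);
    val += 1.;
  }

  #pragma omp parallel
  {
    #pragma omp single
    {
      for(auto it = hashTable.begin(); it != hashTable.end(); it++){
        // explicit firstprivate for clarity; it is also the default here
        #pragma omp task firstprivate(it)
        {
          it->second *= 2.;  // stands in for "do quite a bit of work"
        }
      }
    }
  }
}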