Is there a preexisting library that will let me create array-like objects which have the following properties:
- Run-time size specification (chosen at instantiation, not grown or shrunk afterwards)
- Operators overloaded to perform element-wise operations (i.e. c = a + b will result in a vector c with c[i] = a[i] + b[i] for all i, and similarly for *, -, /, etc.)
- A good set of functions which act element-wise, for example x = sqrt(vec) will have elements x[i] = sqrt(vec[i])
- Provide "summarising" functions such as sum(vec), mean(vec), etc.
- (Optional) Operations can be sent to a GPU for processing.
Basically something like the way arrays work in Fortran, with all of the implementation hidden. Currently I am using std::vector from the STL and manually overloading the operators, but I feel like this is probably a solved problem.
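For reference, what I currently write by hand looks roughly like this (a sketch for operator+ only, with double as a stand-in element type):
#include <vector>
#include <cassert>
#include <cstddef>

// Free element-wise operator+ on std::vector -- the kind of thing I keep writing manually
std::vector<double> operator+(const std::vector<double>& a,
                              const std::vector<double>& b)
{
    assert(a.size() == b.size());
    std::vector<double> c(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        c[i] = a[i] + b[i];
    return c;
}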
In the dusty corners of the standard library, long forgotten by everyone, sits a class called valarray. Look it up and see if it suits your needs.
From the manual page at cppreference.com:
std::valarray
is the class for representing and manipulating arrays of values. It supports element-wise mathematical operations and various forms of generalized subscript operators, slicing and indirect access.
A code snippet for illustration:
#include <valarray>
#include <algorithm>
#include <iterator>
#include <iostream>

int main()
{
    std::valarray<int> a{1, 2, 3, 4, 5};
    std::valarray<int> b = a;
    std::valarray<int> c = a + b;   // element-wise addition

    std::copy(begin(c), end(c),
              std::ostream_iterator<int>(std::cout, " "));
}
Output: 2 4 6 8 10
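valarray also covers the element-wise math functions and "summarising" operations from the question; a short sketch (there is no built-in mean, so it is computed as sum() divided by size()):
#include <valarray>
#include <iostream>

int main()
{
    std::valarray<double> v = {1.0, 4.0, 9.0, 16.0};

    std::valarray<double> r = std::sqrt(v);   // element-wise sqrt
    double total = v.sum();                   // summarising reduction
    double mean  = total / v.size();          // no built-in mean, so compute it

    std::cout << "sum = " << total << ", mean = " << mean << "\n";
    for (double x : r)
        std::cout << x << " ";                // prints 1 2 3 4
    std::cout << "\n";
}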
You can use the Cilk Plus extensions (https://www.cilkplus.org/), which provide an array notation for applying element-wise operations to arrays of the same shape in C/C++. They exploit the vector parallelism of your processor as well as of co-processors.
Example:
Standard C code:
for (i = 0; i < MAX; i++)
    c[i] = a[i] + b[i];
Cilk Plus array notation (a section is written as array[start:length]):
c[0:MAX] = a[0:MAX] + b[0:MAX];
Strided sections (array[start:length:stride]) are also possible:
float d[10] = {0,1,2,3,4,5,6,7,8,9};
float x[3];
x[:] = d[0:3:2]; // x contains the values 0, 2, 4
You can use reductions on array sections:
__sec_reduce_add(a[0:n]);
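Putting the pieces together, a minimal complete example might look like this (a sketch; it assumes a compiler with Cilk Plus support, e.g. the Intel C++ compiler or GCC built with -fcilkplus):
#include <iostream>

int main()
{
    const int N = 5;
    float a[N] = {1, 2, 3, 4, 5};
    float b[N] = {10, 20, 30, 40, 50};
    float c[N];

    c[0:N] = a[0:N] + b[0:N];               // element-wise addition over whole sections
    float total = __sec_reduce_add(c[0:N]); // reduction over an array section

    std::cout << "total = " << total << std::endl;
}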
Further reading:
http://software.intel.com/en-us/articles/getting-started-with-intel-cilk-plus-array-notations
The Thrust library, which is part of the CUDA toolkit, provides an STL-like interface for vector operations on GPUs. It also has an OpenMP back end; however, the GPU support uses CUDA, so you are limited to NVIDIA GPUs. You will have to do your own wrapping (say, with expression templates) if you want expressions like c = a + b to work for vectors.
https://code.google.com/p/thrust/
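For instance, an element-wise c = a + b with Thrust looks roughly like this (a sketch; since there is no overloaded operator+ on device vectors, you spell it out with transform):
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/functional.h>
#include <thrust/reduce.h>
#include <iostream>

int main()
{
    thrust::device_vector<float> a(5, 1.0f);
    thrust::device_vector<float> b(5, 2.0f);
    thrust::device_vector<float> c(5);

    // c[i] = a[i] + b[i], computed on the device
    thrust::transform(a.begin(), a.end(), b.begin(), c.begin(),
                      thrust::plus<float>());

    // summarising reduction
    float total = thrust::reduce(c.begin(), c.end(), 0.0f, thrust::plus<float>());
    std::cout << "sum = " << total << std::endl;
}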
The ViennaCL library takes a more high-level approach, providing vector and matrix operations like the ones you want. It has both CUDA and OpenCL back ends, so you can use GPUs (and multi-core CPUs) from different vendors.
http://viennacl.sourceforge.net/
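A rough sketch of the ViennaCL style (assuming its viennacl::vector type and viennacl::copy for host/device transfer; see the library docs for the exact setup):
#include <viennacl/vector.hpp>
#include <vector>
#include <iostream>

int main()
{
    std::vector<float> host_a(5, 1.0f), host_b(5, 2.0f), host_c(5);

    viennacl::vector<float> a(5), b(5), c(5);
    viennacl::copy(host_a, a);   // host -> device
    viennacl::copy(host_b, b);

    c = a + b;                   // element-wise, via expression templates

    viennacl::copy(c, host_c);   // device -> host
    for (std::size_t i = 0; i < host_c.size(); ++i)
        std::cout << host_c[i] << " ";
    std::cout << std::endl;
}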
The VexCL library also looks very promising (again with support for both OpenCL and CUDA).
https://github.com/ddemidov/vexcl
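A sketch of what VexCL code looks like, as far as I remember its interface (vex::Filter::Env and the Reductor usage here are taken from its examples, so double-check against the current README):
#include <vexcl/vexcl.hpp>
#include <iostream>

int main()
{
    vex::Context ctx(vex::Filter::Env);       // pick compute device(s) from the environment
    vex::vector<double> a(ctx, 1024), b(ctx, 1024), c(ctx, 1024);

    a = 1.0;                                  // broadcast assignment
    b = 2.0;
    c = sqrt(a + b);                          // element-wise expression

    vex::Reductor<double, vex::SUM> sum(ctx); // summarising reduction
    std::cout << sum(c) << std::endl;
}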