Apparently, after profiling my (scientific computation) C++ code, 25% (!) of the time is spent in calls to `vector::operator[]`. True, my code spends all of its time reading and writing `vector<float>`s (and a few `vector<int>`s too), but still, I'd like to know: is there supposed to be some significant overhead of `operator[]` compared to C-style arrays?
(I've seen another related question on SO, regarding `[]` vs `at()` -- but apparently even `[]` is too slow for me?!)
Thanks, Antony
(edit: just for info: using g++ 4.5.2 with -O3 on Ubuntu)
In general, there should be no significant difference. Differences can occur in practice, however, for various reasons, depending on how the compiler optimizes a specific bit of code. One significant possible difference: you're profiling, which means that you're executing instrumented code. I don't know what profiler you're using, but it's frequent for the compiler to turn off inlining for various reasons when instrumenting for profiling. Are you sure that this isn't the case here, and that the instrumentation isn't artificially causing the indexing to appear to take a greater percentage of the time than it would if inlined?
A pure array access is an (almost) direct memory read, whereas `operator[]` is a member function of `vector<>`. If properly inlined, it should be the same; if not, the overhead is very significant for computation-intensive work.
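To make that concrete, here is a minimal sketch (purely illustrative -- `toy_vector` is not any real standard-library implementation) of what an unchecked `operator[]` boils down to once the member call is inlined:

```cpp
#include <cstddef>

// Illustrative only: real std::vector implementations are more involved,
// but the hot path of operator[] is just this.
template <typename T>
struct toy_vector {
    T*          data;  // pointer to the heap-allocated buffer
    std::size_t size;

    // No bounds check; once inlined, this is exactly a pointer indexing.
    T& operator[](std::size_t i) { return data[i]; }
};
```

Inlined, `v[i]` and `v.data[i]` compile to the same load; the member call itself costs nothing.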
Yes, there will be some overhead, as typically a `vector` will contain a pointer to a dynamically allocated array, whereas an array is just "there". This means there will typically be an extra memory dereference in `vector::operator[]` over using `[]` on an array. (Note that if you have a pointer to an array, this is typically no better than a `vector`.) If you are performing multiple accesses through the same `vector` or pointer in the same section of code, without causing the vector to perhaps need reallocating, then the cost of this extra dereference may be shared over the multiple accesses and may well be negligible. E.g. a test along these lines (a reconstructed sketch -- the original snippet isn't preserved here; the `extern` declarations just keep the compiler from optimizing the accesses away)
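```cpp
#include <vector>

// Reconstructed sketch: function and variable names are illustrative.
extern std::vector<float> vf;   // vector
extern float              af[]; // array
extern float*             pf;   // pointer

float from_vector(long i)  { return vf[i]; }
float from_array(long i)   { return af[i]; }
float from_pointer(long i) { return pf[i]; }
```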
generates code for me on g++ (some guff trimmed) in which the pointer and vector versions are instruction-for-instruction identical, with only the array version "winning": the array's base address is a link-time constant, so no separate load of the data pointer is needed before the element load.
`std::vector::operator[]` should be rather efficient; however, the compiler must be paranoid, and for every call made to a function it must assume that the vector could have been moved somewhere else in memory. For example, in code along these lines (a reconstructed sketch; `foo` stands for any function whose body the compiler can't see)
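```cpp
#include <vector>

void foo(float x);  // defined elsewhere; the compiler can't see inside

void process(std::vector<float>& v)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        foo(v[i]);  // v's data pointer must be re-read on every iteration
}
```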
if the code of `foo` isn't known in advance, the compiler is forced to reload the address of the vector's start every time, because the vector could have been reallocated as a consequence of code inside `foo()`. If you know for sure that the vector is not going to be moved in memory or reallocated, then you can factor out this lookup operation with something like the following (same sketch, with the data pointer hoisted):
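```cpp
void process(std::vector<float>& v)
{
    float* vptr = &v[0];  // fetch the data pointer once, before the loop
    for (std::size_t i = 0, n = v.size(); i < n; ++i)
        foo(vptr[i]);     // plain pointer access; nothing to reload
}
```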
With this approach one memory lookup operation can be saved (`vptr` is likely to end up in a register for the whole loop).

Also, another reason for inefficiency may be cache thrashing. To see if this is the problem, an easy trick is to simply over-allocate your vectors by some uneven number of elements. The reason is that, because of how caching works, if you have many vectors with e.g. 4096 elements, all of them will happen to have the same low-order bits in their addresses, and you may end up losing a lot of speed because of cache line invalidations. For example, a loop of this shape on my PC (a reconstructed sketch; the vector and pass counts are illustrative, but the inner loop runs over exactly 1000 elements, as described below)
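```cpp
#include <vector>

// Reconstructed sketch: only the first 1000 elements are ever touched,
// so changing n changes nothing but where in memory the buffers land.
void bench(int n)
{
    std::vector<float> a(n), b(n), c(n);
    for (int pass = 0; pass < 200000; ++pass)
        for (int j = 0; j < 1000; ++j)
            a[j] = b[j] + c[j];
}
```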
executes in about 8.1 seconds if `n == 8191` and in 3.2 seconds if `n == 10000`. Note that the inner loop always runs from 0 to 999, independently of the value of `n`; what differs is only the memory addresses. Depending on the processor/architecture, I've observed even 10x slowdowns because of cache thrashing.
In a modern compiler, in release mode, with optimisations enabled, there is no overhead in using `operator[]` compared to raw pointers: the call is completely inlined and resolves to just a pointer access. I'm guessing that you are somehow copying the return value in the assignment and that this is causing the real 25% of time spent in the instruction. [Not relevant for `float` and `int`.] Or the rest of your code is simply blazingly fast.
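For what that accidental-copy guess would look like in practice (illustrative only, and, as noted above, not applicable to `float`/`int` elements):

```cpp
#include <string>
#include <vector>

void scan(const std::vector<std::string>& v)
{
    // Copies each element: operator[] is cheap, the copy is not.
    for (std::size_t i = 0; i < v.size(); ++i) {
        std::string s = v[i];  // deep copy on every iteration
        (void)s;               // stand-in for real work
    }

    // Binding a reference avoids the copy entirely.
    for (std::size_t i = 0; i < v.size(); ++i) {
        const std::string& s = v[i];  // no copy
        (void)s;                      // stand-in for real work
    }
}
```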