I am developing some engineering simulations. This involves implementing some long equations, such as this equation to calculate stress in a rubber-like material:
T = (
mu * (
pow(l1 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a
* (
pow(l1 * l2 * l3, -0.1e1 / 0.3e1)
- l1 * l2 * l3 * pow(l1 * l2 * l3, -0.4e1 / 0.3e1) / 0.3e1
) * pow(l1 * l2 * l3, 0.1e1 / 0.3e1) / l1
- pow(l2 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a / l1 / 0.3e1
- pow(l3 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a / l1 / 0.3e1
) / a
+ K * (l1 * l2 * l3 - 0.1e1) * l2 * l3
) * N1 / l2 / l3
+ (
mu * (
- pow(l1 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a / l2 / 0.3e1
+ pow(l2 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a
* (
pow(l1 * l2 * l3, -0.1e1 / 0.3e1)
- l1 * l2 * l3 * pow(l1 * l2 * l3, -0.4e1 / 0.3e1) / 0.3e1
) * pow(l1 * l2 * l3, 0.1e1 / 0.3e1) / l2
- pow(l3 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a / l2 / 0.3e1
) / a
+ K * (l1 * l2 * l3 - 0.1e1) * l1 * l3
) * N2 / l1 / l3
+ (
mu * (
- pow(l1 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a / l3 / 0.3e1
- pow(l2 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a / l3 / 0.3e1
+ pow(l3 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), a) * a
* (
pow(l1 * l2 * l3, -0.1e1 / 0.3e1)
- l1 * l2 * l3 * pow(l1 * l2 * l3, -0.4e1 / 0.3e1) / 0.3e1
) * pow(l1 * l2 * l3, 0.1e1 / 0.3e1) / l3
) / a
+ K * (l1 * l2 * l3 - 0.1e1) * l1 * l2
) * N3 / l1 / l2;
I use Maple to generate the C++ code to avoid mistakes (and save time with tedious algebra). As this code is executed thousands (if not millions) of times, the performance is a concern. Unfortunately the math only simplifies so far; the long equations are unavoidable.
What approach can I take to optimize this implementation? I'm looking for high-level strategies that I should be applying when implementing such equations, not necessarily specific optimizations for the example shown above.
I'm compiling using g++ with --enable-optimize=-O3.
Update:
I know there are a lot of repeated expressions. I am using the assumption that the compiler would handle these; my tests so far suggest it does.
l1, l2, l3, mu, a, and K are all positive real numbers (not zero).
I have replaced l1*l2*l3 with an equivalent variable, J. This did help improve performance.
Replacing pow(x, 0.1e1/0.3e1) with cbrt(x) was a good suggestion.
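For illustration, a minimal sketch of those two substitutions applied to one of the repeated factors in the expression above (the names J, cbrtJ, and deviatoric_factor are my own, not from the actual code):

#include <cmath>

// The repeated factor pow(l1*l2*l3, -1/3) - l1*l2*l3*pow(l1*l2*l3, -4/3)/3,
// rewritten with J in place of l1*l2*l3 and cbrt() in place of the fractional pow() calls.
double deviatoric_factor(double l1, double l2, double l3)
{
    const double J     = l1 * l2 * l3;   // the variable replacing l1*l2*l3
    const double cbrtJ = std::cbrt(J);   // replaces pow(J, 0.1e1/0.3e1)
    return 1.0 / cbrtJ - J / (cbrtJ * cbrtJ * cbrtJ * cbrtJ) / 3.0;
}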
This will be run on CPUs. In the near future this would likely run better on GPUs, but for now that option is not available.
It looks like you have a lot of repeated operations going on.
You could pre-calculate those so you are not repeatedly calling the pow function, which can be expensive. You could also pre-calculate the l1 * l2 * l3 product, as you use that term repeatedly.
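A rough sketch of that kind of pre-calculation, hoisting the repeated product and the repeated pow results into locals without changing the value (the function and variable names are illustrative, not from the answer):

#include <cmath>

double T_hoisted(double l1, double l2, double l3,
                 double mu, double a, double K,
                 double N1, double N2, double N3)
{
    const double J    = l1 * l2 * l3;               // used throughout
    const double Jm13 = std::pow(J, -1.0 / 3.0);    // (l1*l2*l3)^(-1/3), also repeated
    const double Jp13 = 1.0 / Jm13;                 // (l1*l2*l3)^(+1/3)
    const double Jm43 = Jm13 / J;                   // (l1*l2*l3)^(-4/3)
    const double p1   = std::pow(l1 * Jm13, a);     // each pow(li * (l1*l2*l3)^(-1/3), a)
    const double p2   = std::pow(l2 * Jm13, a);     // appears several times in the
    const double p3   = std::pow(l3 * Jm13, a);     // original expression
    const double dJ   = Jm13 - J * Jm43 / 3.0;      // common factor in each group
    // The original expression, rewritten term for term using the locals above:
    return (mu * (p1 * a * dJ * Jp13 / l1 - p2 * a / l1 / 3.0 - p3 * a / l1 / 3.0) / a
            + K * (J - 1.0) * l2 * l3) * N1 / l2 / l3
         + (mu * (-p1 * a / l2 / 3.0 + p2 * a * dJ * Jp13 / l2 - p3 * a / l2 / 3.0) / a
            + K * (J - 1.0) * l1 * l3) * N2 / l1 / l3
         + (mu * (-p1 * a / l3 / 3.0 - p2 * a / l3 / 3.0 + p3 * a * dJ * Jp13 / l3) / a
            + K * (J - 1.0) * l1 * l2) * N3 / l1 / l2;
}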
If you have an Nvidia CUDA graphics card, you could consider offloading the calculations to the graphics card, which is well suited to computationally heavy calculations.
https://developer.nvidia.com/how-to-cuda-c-cpp
If not, you may want to consider multiple threads for calculations.
I've attempted a manual simplification of that formula; I would like to know if it saves anything.
[ADDED] I've worked some more on the last three-line formula and got it down to this beauty:
Let me show my work, step by step:
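A rederived sketch of that work (my own algebra from the expression in the question, not necessarily the answer's exact form); each of the three groups collapses the same way, shown here for the N1 group:

#include <cmath>

// Key observation: the factor repeated in every group,
//     pow(J, -1.0/3.0) - J*pow(J, -4.0/3.0)/3.0   with  J = l1*l2*l3,
// is simply (2.0/3.0)*pow(J, -1.0/3.0). Multiplying it by pow(J, 1.0/3.0)/l1
// gives 2.0/(3.0*l1), so the first mu-bracket collapses to
// mu*(2*p1 - p2 - p3)/(3*l1), and the trailing *N1/(l2*l3) turns 1/l1 into 1/J.
double first_group(double l1, double l2, double l3,
                   double mu, double a, double K, double N1)
{
    const double J  = l1 * l2 * l3;
    const double cJ = std::cbrt(J);
    const double p1 = std::pow(l1 / cJ, a);
    const double p2 = std::pow(l2 / cJ, a);
    const double p3 = std::pow(l3 / cJ, a);
    // Collapsed N1 term of the question's expression:
    return mu * N1 * (2.0 * p1 - p2 - p3) / (3.0 * J) + K * (J - 1.0) * N1;
}
// Summing the three analogous groups gives
//   T = mu/(3*J) * ( N1*(2*p1 - p2 - p3) + N2*(2*p2 - p3 - p1) + N3*(2*p3 - p1 - p2) )
//     + K*(J - 1.0)*(N1 + N2 + N3);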
As you explicitly asked about high-level optimizations, it might be worth trying different C++ compilers. Nowadays, compilers are very complex optimization beasts and CPU vendors might implement very powerful and specific optimizations. But please note, some of them are not free (though there might be a free academic program).
I've seen code snippets differ in execution speed by a factor of 2, only by changing the compiler (with full optimizations, of course). But be sure to verify that the output is identical. Too aggressive optimization might lead to different output, which is something you definitely want to avoid.
Good Luck!
David Hammen's answer is good, but still far from optimal. Let's continue with his last expression (at the time of writing this), which can be optimised further. In particular, we can avoid the call to cbrt() and one of the calls to pow() by exploiting some mathematical identities. Let's do this again step by step. Note that I have also optimised 2.0*N1 to N1+N1 etc. Next, we can do with only two calls to pow(), as sketched below.
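A hedged reconstruction of that two-pow form (my rederivation; variable names such as athird are assumptions rather than the answer's exact code). It relies on pow(l1/cbrt(J), a) == pow(l1*l1*l1/J, a/3), which removes the cbrt() call, and on (l1/cbrt(J)) * (l2/cbrt(J)) * (l3/cbrt(J)) == 1, so the third power is the reciprocal of the product of the other two:

#include <cmath>

double T_two_pows(double l1, double l2, double l3,
                  double mu, double a, double K,
                  double N1, double N2, double N3)
{
    const double l123   = l1 * l2 * l3;
    const double athird = a / 3.0;
    const double pow1   = std::pow(l1 * l1 * l1 / l123, athird); // == pow(l1/cbrt(l123), a)
    const double pow2   = std::pow(l2 * l2 * l2 / l123, athird); // == pow(l2/cbrt(l123), a)
    const double pow3   = 1.0 / (pow1 * pow2);                   // since pow1*pow2*pow3 == 1
    return mu / (3.0 * l123) * (  N1 * (pow1 + pow1 - pow2 - pow3)  // sums instead of 2.0*
                                + N2 * (pow2 + pow2 - pow3 - pow1)
                                + N3 * (pow3 + pow3 - pow1 - pow2))
         + K * (l123 - 1.0) * (N1 + N2 + N3);
}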
Since the calls to pow() are by far the most costly operation here, it is worth reducing them as far as possible (the next most costly operation was the call to cbrt(), which we eliminated).
If by any chance a is an integer, the calls to pow could be optimized to calls to cbrt (plus integer powers), or if athird is a half-integer, we can use sqrt (plus integer powers). Furthermore, if by any chance l1==l2 or l1==l3 or l2==l3, one or both calls to pow can be eliminated. So, it's worth considering these as special cases if such chances realistically exist.
Edit summary
pow(x, 0.1e1/0.3e1) is the same as cbrt(x).
Some of my earlier edits were wrong. I have struck (as in strike) those edits and pushed them to the bottom of the current revision of this answer. However, I did not delete them. I'm human. It's easy for us to make a mistake.
The simplification is valid only if l1, l2, and l3 are positive real numbers and if a is a non-zero real number. (We have yet to hear from the OP regarding the specific nature of these coefficients. Given the nature of the problem, these are reasonable assumptions.)
First things first
Maple and Mathematica sometimes miss the obvious. Even more importantly, the users of Maple and Mathematica sometimes make mistakes. Substituting "oftentimes", or maybe even "almost always", in lieu of "sometimes" is probably closer to the mark.
You could have helped Maple simplify that expression by telling it about the parameters in question. In the example at hand, I suspect that l1, l2, and l3 are positive real numbers and that a is a non-zero real number. If that's the case, tell it that. Those symbolic math programs typically assume the quantities at hand are complex. Restricting the domain lets the program make assumptions that are not valid in the complex numbers.
How to simplify those big messes from symbolic math programs (this edit)
Symbolic math programs typically provide the ability to supply information about the various parameters. Use that ability, particularly if your problem involves division or exponentiation. In the example at hand, you could have helped Maple simplify that expression by telling it that l1, l2, and l3 are positive real numbers and that a is a non-zero real number. If that's the case, tell it that. Those symbolic math programs typically assume the quantities at hand are complex. Restricting the domain lets the program make assumptions such as a^x * b^x = (a*b)^x. This holds only if a and b are positive real numbers and if x is real; it is not valid in the complex numbers.
Ultimately, those symbolic math programs follow algorithms. Help them along. Try playing with expanding, collecting, and simplifying before you generate code. In this case, you could have collected those terms involving a factor of mu and those involving a factor of K. Reducing an expression to its "simplest form" remains a bit of an art.
When you get an ugly mess of generated code, don't accept it as a truth that you must not touch. Try to simplify it yourself. Look at what the symbolic math program had before it generated code. Look at how I reduced your expression to something much simpler and much faster, and how Walter's answer took mine several steps further. There is no magic recipe. If there were a magical recipe, Maple would have applied it and given the answer that Walter gave.
About the specific question
You are doing a lot of addition and subtraction in that calculation. You can get in deep trouble if you have terms that nearly cancel one another. You are wasting a lot of CPU if you have one term that dominates over the others.
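A tiny illustration of the absorption/cancellation issue (my example, not from the answer): when one term dominates, the smaller term's contribution can be lost entirely.

#include <cstdio>

int main()
{
    const double big   = 1.0e16;
    const double small = 1.0;
    const double diff  = (big + small) - big;  // mathematically equals 1.0
    std::printf("%g\n", diff);                 // prints 0: the 1.0 is absorbed by 1e16
    return 0;
}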
Next, you are wasting a lot of CPU by performing repeated calculations. Unless you have enabled -ffast-math, which lets the compiler break some of the rules of IEEE floating point, the compiler will not (in fact, must not) simplify that expression for you. It will instead do exactly what you told it to do. At a minimum, you should calculate l1 * l2 * l3 prior to computing that mess.
Finally, you are making a lot of calls to pow, which is extremely slow. Note that several of those calls are of the form (l1*l2*l3)^(1/3). Many of those calls to pow could be performed with a single call to std::cbrt:
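A sketch of that precomputation, reconstructed from the variable names used just below (place it wherever l1, l2, and l3 are in scope; std::cbrt needs <cmath>):

// Compute the shared fractional powers of l1*l2*l3 once.
const double l123         = l1 * l2 * l3;
const double l123_pow_1_3 = std::cbrt(l123);      // (l1*l2*l3)^(1/3)
const double l123_pow_4_3 = l123 * l123_pow_1_3;  // (l1*l2*l3)^(4/3)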
With this,
X * pow(l1 * l2 * l3, 0.1e1 / 0.3e1) becomes X * l123_pow_1_3.
X * pow(l1 * l2 * l3, -0.1e1 / 0.3e1) becomes X / l123_pow_1_3.
X * pow(l1 * l2 * l3, 0.4e1 / 0.3e1) becomes X * l123_pow_4_3.
X * pow(l1 * l2 * l3, -0.4e1 / 0.3e1) becomes X / l123_pow_4_3.
Maple did miss the obvious. For example, there's a much easier way to write the repeated subexpression pow(l1 * l2 * l3, -0.1e1 / 0.3e1) - l1 * l2 * l3 * pow(l1 * l2 * l3, -0.4e1 / 0.3e1) / 0.3e1.
Assuming that l1, l2, and l3 are real rather than complex numbers, and that the real cube root (rather than the principal complex root) is to be extracted, the above reduces to 0.2e1 / 0.3e1 * pow(l1 * l2 * l3, -0.1e1 / 0.3e1), or 2.0 / (3.0 * l123_pow_1_3).
Using cbrt_l123 instead of l123_pow_1_3, the nasty expression in the question reduces to the much shorter form sketched below. Always double check, but always simplify as well.
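A hedged rederivation of that reduced form (my reconstruction from the algebra above, using cbrt_l123 as just described; the other names are mine):

#include <cmath>

double T_reduced(double l1, double l2, double l3,
                 double mu, double a, double K,
                 double N1, double N2, double N3)
{
    const double l123      = l1 * l2 * l3;
    const double cbrt_l123 = std::cbrt(l123);
    const double p1 = std::pow(l1 / cbrt_l123, a);
    const double p2 = std::pow(l2 / cbrt_l123, a);
    const double p3 = std::pow(l3 / cbrt_l123, a);
    return mu / (3.0 * l123) * (  N1 * (2.0 * p1 - p2 - p3)
                                + N2 * (2.0 * p2 - p3 - p1)
                                + N3 * (2.0 * p3 - p1 - p2))
         + K * (l123 - 1.0) * (N1 + N2 + N3);
}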
Here are some of my steps in arriving at the above:
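A reconstruction of the main steps (my own algebra; a sketch rather than the answer's exact working), with J = l1*l2*l3 and p_i = (l_i/J^(1/3))^a:

\[
\begin{aligned}
&J^{-1/3} - \tfrac{1}{3}\,J\,J^{-4/3} = \tfrac{2}{3}\,J^{-1/3},\\
&\frac{\mu}{a}\left(\frac{2a}{3 l_1}\,p_1 - \frac{a}{3 l_1}\,p_2 - \frac{a}{3 l_1}\,p_3\right)
 = \frac{\mu}{3 l_1}\bigl(2 p_1 - p_2 - p_3\bigr),\\
&\left[\frac{\mu}{3 l_1}\bigl(2 p_1 - p_2 - p_3\bigr) + K(J-1)\,l_2 l_3\right]\frac{N_1}{l_2 l_3}
 = \frac{\mu N_1}{3 J}\bigl(2 p_1 - p_2 - p_3\bigr) + K(J-1)\,N_1,
\end{aligned}
\]

with the N_2 and N_3 groups following by symmetry; summing the three groups gives the reduced expression above.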
Wrong answer, intentionally kept for humility
Note that this is stricken. It's wrong.
Update
Maple did miss the obvious. For example, there's a much easier way to write that repeated subexpression.
Assuming that l1, l2, and l3 are real rather than complex numbers, and that the real cube root (rather than the principal complex root) is to be extracted, the above reduces to zero. This calculation of zero is repeated many times over.
Second update
If I've done the math right (there is no guarantee that I've done the math right), the nasty expression in the question reduces to
The above assumes that l1, l2, and l3 are positive real numbers.