I have Mathematica code in which I have to numerically evaluate thousands of integrals similar to this one:
NIntegrate[
(Pi*Cos[(Pi*(-2*x + y))/(1 + y)] + (1 + y)*(-Sin[(2*Pi*x)/(1 + y)] +
Sin[(Pi*(-2*x + y))/(1 + y)]))/(E^x*(1 + y)),
{x, 0, 100}, {y, 0, 100}
] // AbsoluteTiming
The integrand is a nice absolutely integrable function without singularities, which decays exponentially in one direction and as 1/y^3 in the other direction.
The NIntegrate command was working fine in Mathematica 7, but in the newest version, 8.0.4, it slows down by two orders of magnitude. I assume the new version tries to control the error better, but at the expense of this tremendous increase in time. Are there settings I could use so that the computation proceeds at the same speed as in Mathematica 7?
ruebenko's answer and the comments from user1091201 and Leonid together give the right answers.
The Edit 1 answer by ruebenko is the right first answer for general situations like this, that is, add the option Method -> {"SymbolicPreprocessing", "OscillatorySelection" -> False}:
expr = (Pi*
Cos[(Pi*(-2*x + y))/(1 + y)] + (1 + y)*(-Sin[(2*Pi*x)/(1 + y)] +
Sin[(Pi*(-2*x + y))/(1 + y)]))/(E^x*(1 + y));
NIntegrate[expr, {x, 0, 100}, {y, 0, 100},
Method -> {"SymbolicPreprocessing",
"OscillatorySelection" -> False}] // AbsoluteTiming
And user1091201's comment suggesting Method -> "GaussKronrodRule" is close to the fastest possible answer for this specific problem.
I'll describe what is happening in NIntegrate in this specific example and along the way give some tips on handling apparently similar situations in general.
Method Selection
In this example, NIntegrate examines expr, concludes that the multidimensional "LevinRule" is the best method for this integrand, and applies it. However, for this particular example, "LevinRule" is slower than "MultidimensionalRule" (though "LevinRule" obtains a more satisfactory error estimate). "LevinRule" is also slower than any of several Gauss-type one-dimensional rules iterated over the two dimensions, such as the "GaussKronrodRule" that user1091201 found.
NIntegrate makes its decision after performing some symbolic analysis of the integrand. Several types of symbolic preprocessing are applied; the setting Method -> {"SymbolicPreprocessing", "OscillatorySelection" -> False} disables one of them.
Essentially, with "OscillatorySelection" enabled, NIntegrate selects "LevinRule"; with "OscillatorySelection" disabled, NIntegrate selects "MultidimensionalRule", which is faster for this integral, although we may distrust the result based on the message NIntegrate::slwcon, which indicates that unusually slow convergence was observed.
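To see the trade-off directly, one can bypass method selection and force each rule by hand (a sketch, assuming expr as defined above; timings and messages are machine- and version-dependent):

```mathematica
(* Force the oscillatory method that V8's heuristics select *)
NIntegrate[expr, {x, 0, 100}, {y, 0, 100},
  Method -> "LevinRule"] // AbsoluteTiming

(* Force the rule that is selected when "OscillatorySelection" is off *)
NIntegrate[expr, {x, 0, 100}, {y, 0, 100},
  Method -> "MultidimensionalRule"] // AbsoluteTiming
```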
This is the part where Mathematica 8 differs from Mathematica 7: Mathematica 8 adds "LevinRule" and associated method selection heuristics into "OscillatorySelection".
Aside from causing NIntegrate to select a different method, disabling "OscillatorySelection" also saves the time spent doing the actual symbolic processing, which can be significant in some cases.
Setting Method -> "GaussKronrodRule" overrides and skips the symbolic processing associated with method selection, and instead uses the two-dimensional Cartesian product rule Method -> {"CartesianRule", Method -> {"GaussKronrodRule", "GaussKronrodRule"}}. This happens to be a very fast method for this integral.
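In other words, the short form and the explicit Cartesian product specification should behave the same way (a sketch, assuming expr as defined above):

```mathematica
(* Short form: a 1-D rule name is iterated over both dimensions *)
NIntegrate[expr, {x, 0, 100}, {y, 0, 100},
  Method -> "GaussKronrodRule"] // AbsoluteTiming

(* Explicit equivalent: a Cartesian product of two 1-D Gauss-Kronrod rules *)
NIntegrate[expr, {x, 0, 100}, {y, 0, 100},
  Method -> {"CartesianRule",
    Method -> {"GaussKronrodRule", "GaussKronrodRule"}}] // AbsoluteTiming
```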
Other Symbolic Processing
Neither ruebenko's Method -> {"SymbolicPreprocessing", "OscillatorySelection" -> False} nor user1091201's Method -> "GaussKronrodRule" disables the other forms of symbolic processing, and this is generally a good thing. See this part of the NIntegrate advanced documentation for a list of the types of symbolic preprocessing that may be applied. In particular, "SymbolicPiecewiseSubdivision" is very valuable for integrands that are non-analytic at several points due to the presence of piecewise functions.
To disable all symbolic processing and get only default methods with default method options, use Method -> {Automatic, "SymbolicProcessing" -> 0}. For one-dimensional integrals, this currently amounts to Method -> {"GlobalAdaptive", Method -> "GaussKronrodRule"} with certain default settings for all parameters of those methods (the number of interpolation points in the rule, the type of singularity handling for the global-adaptive strategy, etc.). For multidimensional integrals, it currently amounts to Method -> {"GlobalAdaptive", Method -> "MultidimensionalRule"}, again with certain default parameter values. For high-dimensional integrals, a Monte Carlo method will be used.
I don't recommend switching straight to Method -> {Automatic, "SymbolicProcessing" -> 0} as a first optimization step for NIntegrate, but it can be useful in some cases.
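For example (a sketch, again assuming expr as defined above):

```mathematica
(* Skip all symbolic preprocessing and use the built-in default methods *)
NIntegrate[expr, {x, 0, 100}, {y, 0, 100},
  Method -> {Automatic, "SymbolicProcessing" -> 0}] // AbsoluteTiming
```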
Fastest method
There is just about always some way to speed up a numerical integration at least a bit, and sometimes a lot, since so many of the parameters are chosen heuristically that you may benefit from tweaking some of them. (Look at the different options and parameters of the "LevinRule" method or the "GlobalAdaptive" strategy, including all their sub-methods, etc.)
That said, here is the fastest method I found for this particular integral:
NIntegrate[(Pi*
Cos[(Pi*(-2*x + y))/(1 + y)] + (1 + y)*(-Sin[(2*Pi*x)/(1 + y)] +
Sin[(Pi*(-2*x + y))/(1 + y)]))/(E^x*(1 + y)), {x, 0,
100}, {y, 0, 100},
Method -> {"GlobalAdaptive", Method -> "GaussKronrodRule",
"SingularityDepth" -> Infinity}] // AbsoluteTiming
(The setting "SingularityDepth" -> Infinity disables the singularity-handling transformations.)
Integration range
By the way, is your desired integration range really {x, 0, 100}, {y, 0, 100}, or is {x, 0, Infinity}, {y, 0, Infinity} the true range for your application?
If you really require {x, 0, Infinity}, {y, 0, Infinity}, I suggest using that instead. For infinite integration ranges, NIntegrate compactifies the integrand to a finite range and effectively samples it in a geometrically spaced way. This is usually much more efficient than the linearly (evenly) spaced samples used for finite integration ranges.
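So, if the infinite range is what you actually want, the natural formulation is simply (a sketch, assuming expr as defined above and that the integral converges over the infinite region, which the stated decay rates suggest):

```mathematica
NIntegrate[expr, {x, 0, Infinity}, {y, 0, Infinity}] // AbsoluteTiming
```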
Here is a workaround:
NIntegrate[(Pi*
Cos[(Pi*(-2*x + y))/(1 + y)] + (1 + y)*(-Sin[(2*Pi*x)/(1 + y)] +
Sin[(Pi*(-2*x + y))/(1 + y)]))/(E^x*(1 + y)), {x, 0,
100}, {y, 0, 100},
Method -> "AdaptiveMonteCarlo"] // AbsoluteTiming
You can also use ParallelTry to test various methods in parallel.
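A minimal sketch of that idea, racing a few Method settings and keeping whichever finishes first (the method list is illustrative; expr is assumed to hold the integrand, and you may need DistributeDefinitions[expr] so the subkernels can see it):

```mathematica
(* Evaluate the same integral with each method on parallel subkernels;
   ParallelTry returns the first result that completes *)
ParallelTry[
 Function[m, {m, NIntegrate[expr, {x, 0, 100}, {y, 0, 100}, Method -> m]}],
 {"GaussKronrodRule", "MultidimensionalRule", "AdaptiveMonteCarlo"}]
```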
Slowdowns for specific inputs can happen when new methods are implemented or heuristics are modified. Such changes may help to solve a new class of problems at the expense of making some others slower. One would have to investigate exactly what is going on here.
You might want to change the title of your question: it suggests that all integrals evaluate more slowly in V8, which is not true.
Edit 1:
I think it gets stuck in "LevinRule" (new in V8 for oscillatory integrands), so this should switch that off:
NIntegrate[(Pi*
Cos[(Pi*(-2*x + y))/(1 + y)] + (1 + y)*(-Sin[(2*Pi*x)/(1 + y)] +
Sin[(Pi*(-2*x + y))/(1 + y)]))/(E^x*(1 + y)), {x, 0,
100}, {y, 0, 100},
Method -> {"SymbolicPreprocessing",
"OscillatorySelection" -> False}] // AbsoluteTiming
For this particular integral, the main culprit seems to be the integration over x, probably due to the presence of both fast-decaying and highly oscillatory terms. Also, for this case, one can do the integration over x analytically:
In[92]:=
-Integrate[(Pi*Cos[(Pi*(-2*x+y))/(1+y)]+(1+y)*(-Sin[(2*Pi*x)/(1+y)]+Sin[(Pi*(-2*x+y))/(1+y)]))/
(E^x* (1+y)),x]/.x->0//FullSimplify
Out[92]= (-\[Pi] (1+y) (2+Cos[(\[Pi] y)/(1+y)])+(2 \[Pi]^2+(1+y)^2) Sin[(\[Pi] y)/(1+y)])/
(4 \[Pi]^2+(1+y)^2)
(I discarded the value at the upper limit, since it is uniformly very small in y.) One can then integrate over y numerically to get the right result:
In[94]:= NIntegrate[%,{y,0,100}]
Out[94]= 1.17525
A more general workaround is to split the x and y integrations like so:
ClearAll[f];
f[y_?NumericQ, xrange : {_?NumericQ, _?NumericQ}] :=
NIntegrate[(Pi*
Cos[(Pi*(-2*x + y))/(1 + y)] + (1 +
y)*(-Sin[(2*Pi*x)/(1 + y)] +
Sin[(Pi*(-2*x + y))/(1 + y)]))/(E^x*(1 + y)), {x, First@xrange, Last@xrange}];
and then we have:
In[105]:= NIntegrate[f[y,{0,100}],{y,0,100}]//Timing
Out[105]= {2.578,1.17525}
which is not blazing fast, but reasonably fast. This procedure won't always work so well (since the 2D integration grid it produces won't always be optimal), but it should work well enough when the integrand is such that the integrations over x and y are sufficiently "decoupled".