My optimization task deals with calculating the following integral and finding the best values of xl and xu:

K(xl, xu) = integral from xl to xu of x^2 * exp(x) / ((xu - xl) * (exp(x) - 1)^2) dx
Iterations take too long, so I decided to speed them up by calculating the integral for all possible values of xl and xu in advance and then interpolating the calculated values during the optimization. I wrote the following function:
import numpy as np
import mpmath
from scipy import integrate

def k_integrand(x, xl, xu):
    return ((x**2) * mpmath.exp(x)) / ((xu - xl) * (mpmath.exp(x) - 1)**2)

@np.vectorize
def K(xl, xu):
    y, err = integrate.quad(k_integrand, xl, xu, args=(xl, xu))
    return y
and two identical arrays grid_xl and grid_xu with a dynamic increment between values.
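The grids themselves look something like this (hypothetical values, shown only to illustrate the shape of the inputs):

# Hypothetical grids: two identical 1-D arrays with a non-constant step.
grid_xl = np.concatenate([np.arange(0.1, 1.0, 0.1),
                          np.arange(1.0, 10.0, 0.5)])
grid_xu = grid_xl.copy()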
When I run the code I get this:
K(grid_xl, grid_xu)
Traceback (most recent call last):
File "<ipython-input-2-5b9df02f12b7>", line 1, in <module>
K(grid_xl, grid_xu)
File "C:/Users/909404/OneDrive/Работа/ZnS-FeS/Теплоемкость/Python/CD357/4 - Optimization CD357 interpolation.py", line 75, in K
y, err = integrate.quad(k_integrand, xl, xu, args = (xl, xu))
File "C:\Users\909404\Anaconda3\lib\site-packages\scipy\integrate\quadpack.py", line 323, in quad
points)
File "C:\Users\909404\Anaconda3\lib\site-packages\scipy\integrate\quadpack.py", line 372, in _quad
if (b != Inf and a != -Inf):
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I guess it comes from the fact that xl should always be less than xu. Is there any way to compare the values of xl and xu and return NaN in case xl >= xu?
In the end I want to have something like this (see the linked image), and to have the ability to use interpolation over the resulting table, for example along the lines of the sketch below.
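A rough idea of the interpolation step I have in mind (an untested sketch; RegularGridInterpolator and the placeholder table are assumptions for illustration):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical grids; the real K_table would be K(grid_xl[:, None], grid_xu[None, :]),
# with NaN wherever xl >= xu. A placeholder array stands in for it here.
grid_xl = np.linspace(0.1, 10.0, 50)
grid_xu = np.linspace(0.1, 10.0, 50)
K_table = np.full((grid_xl.size, grid_xu.size), np.nan)

interp_K = RegularGridInterpolator((grid_xl, grid_xu), K_table,
                                   bounds_error=False, fill_value=np.nan)

# During optimization, evaluate the interpolant instead of re-integrating:
value = interp_K([[0.5, 3.0]])[0]   # K at xl=0.5, xu=3.0 (hypothetical point)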
Maybe I have chosen the wrong way? I'd appreciate any help.
Using a low-level callback function for integration
The following answer is meant as a comment to @Andras Deak's answer, but is too long for a comment.

The scipy integrate functions call the k_integrand_np function many times, which is quite slow. An alternative to a pure Python function is a low-level callback function. Such a function can be written directly in C, or in Python using a compiler like Numba. The following is a slightly modified version of this answer.
Example
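A sketch of such a callback with Numba (the function names and the xl >= xu guard are my additions; the pattern follows the quad callback signature double f(int n, double *xx)):

import numpy as np
from numba import cfunc, types, carray
from scipy import integrate, LowLevelCallable

# C signature expected by scipy.integrate.quad when extra args are passed:
# double f(int n, double *xx), where xx = [x, xl, xu]
c_sig = types.double(types.intc, types.CPointer(types.double))

@cfunc(c_sig)
def k_integrand_c(n, xx):
    vals = carray(xx, (n,))
    x, xl, xu = vals[0], vals[1], vals[2]
    return (x**2 * np.exp(x)) / ((xu - xl) * (np.exp(x) - 1)**2)

k_integrand_llc = LowLevelCallable(k_integrand_c.ctypes)

@np.vectorize
def K_llc(xl, xu):
    # skip integration for nonsensical limits, as in the other answer
    if xl >= xu:
        return np.nan
    y, err = integrate.quad(k_integrand_llc, xl, xu, args=(xl, xu))
    return y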
Performance
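Exact timings depend on the machine and the grid size, so instead of quoting numbers, this is how the comparison could be run in IPython (K_np is the np.exp-based version from @Andras Deak's answer below and K_llc is the callback version above; both names are assumptions):

%timeit K_np(grid_xl[:, None], grid_xu[None, :])
%timeit K_llc(grid_xl[:, None], grid_xu[None, :])

The speed-up comes from the integrand no longer being a Python-level call: quad evaluates it many times per grid point, and with the low-level callback each of those evaluations stays in compiled code.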
I can't reproduce your error unless I omit the np.vectorize decorator. Setting xl/xu values that coincide does give me a ZeroDivisionError, though.

Anyway, there's nothing stopping you from checking the values of xu vs xl in your higher-level function. That way you can skip integration entirely for nonsensical data points and return np.nan early:
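A minimal sketch of that check, keeping your mpmath integrand, could look like this:

import numpy as np
import mpmath
from scipy import integrate

def k_integrand(x, xl, xu):
    return (x**2 * mpmath.exp(x)) / ((xu - xl) * (mpmath.exp(x) - 1)**2)

@np.vectorize
def K(xl, xu):
    # skip the integration entirely for nonsensical limits
    if xl >= xu:
        return np.nan
    y, err = integrate.quad(k_integrand, xl, xu, args=(xl, xu))
    return y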
With these definitions (and np.set_printoptions(linewidth=200) for easier comparison of the printed array) I get values that perfectly agree with your linked image.
Now, I've got bad news and good news. The bad news is that while np.vectorize provides syntactic sugar around calling your scalar integration function with array inputs, it won't actually give you a speed-up compared to a native for loop. The good news is that you can replace the calls to mpmath.exp with calls to np.exp and end up with the same result much faster:
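A sketch of that numpy-based variant (same structure as above; the name K_np is an assumption, while k_integrand_np matches the name referenced in the other answer):

import numpy as np
from scipy import integrate

def k_integrand_np(x, xl, xu):
    return (x**2 * np.exp(x)) / ((xu - xl) * (np.exp(x) - 1)**2)

@np.vectorize
def K_np(xl, xu):
    # same early exit for nonsensical limits
    if xl >= xu:
        return np.nan
    y, err = integrate.quad(k_integrand_np, xl, xu, args=(xl, xu))
    return y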
With these definitions, the two methods give the same result (exactly!), but the numpy version is almost 15 times faster.
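The agreement can be verified directly, for example with a quick sanity check like this (grid broadcasting as above):

res_mpmath = K(grid_xl[:, None], grid_xu[None, :])
res_np = K_np(grid_xl[:, None], grid_xu[None, :])
print(np.allclose(res_mpmath, res_np, rtol=0, atol=0, equal_nan=True))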