I need to calculate the number of non-NaN elements in a numpy ndarray matrix. How would one efficiently do this in Python? Here is my simple code for achieving this:
import numpy as np
def numberOfNonNans(data):
    count = 0
    for i in data:
        if not np.isnan(i):
            count += 1
    return count
Is there a built-in function for this in numpy? Efficiency is important because I'm doing Big Data analysis.
Thanks for any help!
np.count_nonzero(~np.isnan(data))

~ inverts the boolean matrix returned from np.isnan. np.count_nonzero counts the values that are not 0/False. .sum should give the same result, but count_nonzero is arguably clearer.

Quick-to-write alternative

Even though it is not the fastest choice, if performance is not an issue you can use:

sum(~np.isnan(data))

An alternative, but a bit slower, is to do it via indexing:

np.isnan(data)[np.isnan(data) == False].size

The double use of np.isnan(data) and the == operator might be a bit of overkill, so I posted it only for completeness.
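A minimal sketch (with a made-up sample array) showing that the counting approaches discussed above agree with each other:

```python
import numpy as np

# Example data: 5 elements, 2 of them NaN, so 3 non-NaN values.
data = np.array([1.0, np.nan, 2.5, np.nan, 0.0])

a = np.count_nonzero(~np.isnan(data))             # invert the NaN mask, count True entries
b = int((~np.isnan(data)).sum())                  # summing the boolean mask gives the same count
c = data.size - np.count_nonzero(np.isnan(data))  # total size minus the NaN count

print(a, b, c)  # all three print 3
```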
Testing speed:

data.size - np.count_nonzero(np.isnan(data))

seems to barely be the fastest here; other data might give different relative speed results.

To determine whether the array is sparse, it may help to compute the proportion of NaN values.
If that proportion exceeds a threshold, then use a sparse array, e.g. https://sparse.pydata.org/en/latest/
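A hedged sketch of that idea: compute the NaN proportion and compare it against a threshold. The 0.9 cutoff and the helper name nan_proportion are illustrative choices, not part of the answer above.

```python
import numpy as np

def nan_proportion(data):
    # Fraction of entries that are NaN.
    return np.count_nonzero(np.isnan(data)) / data.size

# Example: an array where only the first 50 of 1000 entries hold real values.
data = np.full(1000, np.nan)
data[:50] = 1.0

if nan_proportion(data) > 0.9:  # arbitrary example threshold
    # Here one could switch to a sparse representation,
    # e.g. via the sparse package linked above.
    print("mostly NaN: consider a sparse representation")
```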