From a list of 2D coordinates and a third variable (velocity), I have created a 2D numpy array covering the whole sampled area. My intention is to create an image in which each pixel contains the mean velocity of the points lying within it, and then to filter that image with a Gaussian filter.
The problem is that the area is not uniformly sampled. Therefore I have several pixels without information (NaN) in the middle of the image. When I try to filter the array through a Gaussian filter, the NaNs propagate, ruining the whole image.
I need to filter this image while rejecting all pixels without information. In other words, if a pixel does not contain information, it should not be taken into account in the filtering.
Here is an example of my code for averaging:
    import numpy as np

    # Mean velocity per pixel; pixels without samples are left as NaN
    Mean_V = np.full([len(x_bins) - 1, len(y_bins) - 1], np.nan)
    for i in range(len(x_bins) - 1):
        bin_x = (x > x_bins[i]) & (x <= x_bins[i + 1])
        for j in range(len(y_bins) - 1):
            bin_xy = (y[bin_x] > y_bins[j]) & (y[bin_x] <= y_bins[j + 1])
            if np.any(bin_xy):
                Mean_V[i, j] = np.mean(V[bin_x][bin_xy])
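As an aside, the same averaging can be sketched without explicit loops via np.histogram2d, using the same x, y, V, x_bins and y_bins as above; note that histogram2d bins are left-closed, whereas the loop above uses right-closed bins:

    import numpy as np

    # per-pixel sums of V and per-pixel sample counts
    sums, _, _ = np.histogram2d(x, y, bins=[x_bins, y_bins], weights=V)
    counts, _, _ = np.histogram2d(x, y, bins=[x_bins, y_bins])

    with np.errstate(invalid='ignore'):
        Mean_V = sums / counts        # 0/0 yields NaN for empty pixels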
EDIT:
Surfing the web, I ended up back at this question, which I asked in 2013. The solution to this problem can be found in the astropy library:
http://docs.astropy.org/en/stable/convolution/
Astropy's convolution replaces the NaN pixels with a kernel-weighted interpolation from their neighbors.
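For illustration, a minimal sketch of the astropy route (the input array and the kernel width here are just placeholders):

    import numpy as np
    from astropy.convolution import Gaussian2DKernel, convolve

    img = np.array([[1.0, 2.0, np.nan],
                    [4.0, np.nan, 6.0],
                    [7.0, 8.0, 9.0]])

    kernel = Gaussian2DKernel(x_stddev=1)  # 2D Gaussian kernel, sigma = 1 pixel
    # NaN pixels are replaced by a kernel-weighted interpolation from their neighbours
    filtered = convolve(img, kernel, nan_treatment='interpolate')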
Thanks folks!!
How about replacing Z=VV/WW with Z=VV/(WW+epsilon), with epsilon=0.000001, to automatically handle pixels without any observations in the proposal below?
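In code, with hypothetical VV and WW values standing in for the filtered arrays defined in the answer below:

    import numpy as np

    epsilon = 0.000001
    VV = np.array([0.0, 1.5])    # hypothetical filtered values; 0 where no data at all
    WW = np.array([0.0, 0.75])   # hypothetical filtered weights; 0 where no data at all
    Z = VV / (WW + epsilon)      # no 0/0: pixels without any observations come out as 0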
The simplest thing would be to turn NaNs into zeros via np.nan_to_num. Whether this is meaningful or not is a separate question.
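For example:

    import numpy as np

    U = np.array([1.0, np.nan, 3.0])
    np.nan_to_num(U)             # -> array([1., 0., 3.])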
in words:
A Gaussian filter which ignores NaNs in a given array U can be easily obtained by applying a standard Gaussian filter to two auxiliary arrays V and W and by taking the ratio of the two to get the result Z.
Here, V is a copy of the original U with the NaNs replaced by zeros, and W is an array of ones with zeros indicating the positions of the NaNs in the original U.
The idea is that replacing the NaNs by zeros introduces an error in the filtered array which can, however, be compensated by applying the same Gaussian filter to another auxiliary array and combining the two.
in Python:
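A minimal sketch of this, assuming scipy.ndimage.gaussian_filter and placeholder test data:

    import numpy as np
    from scipy import ndimage

    sigma = 2.0                              # width of the Gaussian kernel

    # placeholder test data: a random field with some NaNs
    rng = np.random.default_rng(0)
    U = rng.standard_normal((10, 10))
    U[U > 1.5] = np.nan

    V = U.copy()                             # copy of U with NaNs replaced by zeros
    V[np.isnan(U)] = 0
    VV = ndimage.gaussian_filter(V, sigma=sigma)

    W = np.ones_like(U)                      # ones, with zeros at the NaN positions of U
    W[np.isnan(U)] = 0
    WW = ndimage.gaussian_filter(W, sigma=sigma)

    Z = VV / WW                              # rescales the coefficients to sum to one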
in numbers:
Here, the coefficients of the Gaussian filter are set to [0.25, 0.50, 0.25] for demonstration purposes; they sum up to one, 0.25 + 0.50 + 0.25 = 1, without loss of generality.
After replacing the NaNs by zeros and applying the Gaussian filter (cf. VV below) it is clear that the zeros introduce an error, i.e., due to the "missing" data the coefficients 0.25+0.50=0.75 do not sum up to one anymore and therefore underestimate the "true" value.
However, this can be compensated by using the second auxiliary array (cf. WW below) which, after filtering with the same Gaussian, just contains the sum of coefficients.
Therefore, dividing the two filtered auxiliary arrays rescales the coefficients such that they sum up to one while the NaN positions are ignored.
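Worked through on a small 1D example (the values below assume the [0.25, 0.50, 0.25] kernel with reflected boundaries, so the edge entries depend on the boundary handling):

    U  = [1,    2,    nan,  4,    5   ]   # original array with one missing value
    V  = [1,    2,    0,    4,    5   ]   # U with the NaN replaced by zero
    W  = [1,    1,    0,    1,    1   ]   # ones, with zero at the NaN position
    VV = [1.25, 1.25, 1.50, 3.25, 4.75]   # Gaussian filter applied to V
    WW = [1.00, 0.75, 0.50, 0.75, 1.00]   # Gaussian filter applied to W
    Z  = [1.25, 1.67, 3.00, 4.33, 4.75]   # VV/WW, e.g. Z[2] = (0.25*2 + 0.25*4)/0.50 = 3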