Improve min/max downsampling

Published 2020-06-17 05:45

Question:

I have some large arrays (~100 million points) that I need to plot interactively. I am currently using Matplotlib. Plotting the arrays as-is gets very slow and is wasteful since you can't visualize that many points anyway.

So I made a min/max decimation function that I tied to the axis' 'xlim_changed' callback. I went with a min/max approach because the data contains fast spikes that I do not want to miss by simply stepping through the data. There are more wrappers that crop to the x-limits and skip processing under certain conditions, but the relevant part is below:

import numpy as np

def min_max_downsample(x,y,num_bins):
    """ Break the data into num_bins and returns min/max for each bin"""
    pts_per_bin = x.size // num_bins    

    #Create temp to hold the reshaped & slightly cropped y
    y_temp = y[:num_bins*pts_per_bin].reshape((num_bins, pts_per_bin))
    y_out      = np.empty((num_bins,2))
    #Take the min/max by rows.
    y_out[:,0] = y_temp.max(axis=1)
    y_out[:,1] = y_temp.min(axis=1)
    y_out = y_out.ravel()

    #This duplicates the x-value for each min/max y-pair
    x_out = np.empty((num_bins,2))
    x_out[:] = x[:num_bins*pts_per_bin:pts_per_bin,np.newaxis]
    x_out = x_out.ravel()
    return x_out, y_out
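
For context, a minimal sketch of how such a downsampler might be wired to the 'xlim_changed' callback (this is my illustration, not the actual wrapper code; the helper name attach_downsampled_line and the cropping logic are assumptions):

import matplotlib.pyplot as plt

def attach_downsampled_line(ax, x, y, num_bins=2000):
    x0, y0 = min_max_downsample(x, y, num_bins)
    line, = ax.plot(x0, y0)

    def on_xlim_changed(axes):
        #Crop to the current x-limits, then downsample only the visible slice
        xmin, xmax = axes.get_xlim()
        i0, i1 = np.searchsorted(x, [xmin, xmax])
        i1 = min(i1 + 1, x.size)
        if i1 - i0 > 2*num_bins:
            line.set_data(*min_max_downsample(x[i0:i1], y[i0:i1], num_bins))
        else:
            #Few enough points to draw directly
            line.set_data(x[i0:i1], y[i0:i1])
        axes.figure.canvas.draw_idle()

    ax.callbacks.connect('xlim_changed', on_xlim_changed)
    return line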

This works pretty well and is sufficiently fast (~80ms on 1e8 points & 2k bins). There is very little lag as it periodically recalculates & updates the line's x & y-data.

However, my only complaint is with the x-data. This code duplicates the x-value of each bin's left edge and doesn't return the true x-location of the y min/max pairs. I typically set the number of bins to double the axis pixel width. So you can't really see the difference because the bins are so small... but I know it's there... and it bugs me.
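
For what it's worth, a quick way to pick that bin count (an illustrative sketch, not code from my plotting routine) is to read the axis width in pixels from matplotlib and double it:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
fig.canvas.draw()                                  #ensure a renderer exists so the extent is valid
axis_width_px = int(ax.get_window_extent().width)
num_bins = 2 * axis_width_px                       #two bins per horizontal pixel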

So, attempt number 2, which does return the actual x-values for every min/max pair. However, it is about 5x slower.

def min_max_downsample_v2(x,y,num_bins):
    pts_per_bin = x.size // num_bins
    #Create temp to hold the reshaped & slightly cropped y
    y_temp = y[:num_bins*pts_per_bin].reshape((num_bins, pts_per_bin))
    #use argmax/min to get column locations
    cc_max = y_temp.argmax(axis=1)
    cc_min = y_temp.argmin(axis=1)    
    rr = np.arange(0,num_bins)
    #compute the flat index to where these are
    flat_max = cc_max + rr*pts_per_bin
    flat_min = cc_min + rr*pts_per_bin
    #Create a boolean mask of these locations
    mm_mask  = np.full((x.size,), False)
    mm_mask[flat_max] = True
    mm_mask[flat_min] = True  
    x_out = x[mm_mask]    
    y_out = y[mm_mask]  
    return x_out, y_out

This takes roughly 400+ ms on my machine, which becomes pretty noticeable. So my question is basically: is there a way to go faster while providing the same results? The bottleneck is mostly in the numpy.argmin and numpy.argmax functions, which are a good bit slower than numpy.min and numpy.max.
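
To illustrate that claim, here is a small timing harness (my sketch; the array sizes match the example below, and actual numbers will vary by machine) comparing min/max against argmin/argmax on the same reshaped view:

import timeit
import numpy as np

y = np.random.randn(100_000_000)
y_view = y[:2000 * (y.size // 2000)].reshape(2000, -1)

t_minmax    = timeit.timeit(lambda: (y_view.min(axis=1), y_view.max(axis=1)), number=5) / 5
t_argminmax = timeit.timeit(lambda: (y_view.argmin(axis=1), y_view.argmax(axis=1)), number=5) / 5
print("min/max:       %.3f s per call" % t_minmax)
print("argmin/argmax: %.3f s per call" % t_argminmax)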

The answer might be to just live with version #1 since it visually doesn't really matter. Or maybe try to speed it up with something like Cython (which I have never used).

FYI, I'm using Python 3.6.4 on Windows ... example usage would be something like this:

x_big = np.linspace(0,10,100000000)
y_big = np.cos(x_big)
x_small, y_small = min_max_downsample(x_big, y_big, 2000)    #Fast but not exactly correct.
x_small, y_small = min_max_downsample_v2(x_big, y_big, 2000) #Correct but not exactly fast.

Answer 1:

I managed to get improved performance by using the output of arg(min|max) directly to index the data arrays. This comes at the cost of an extra call to np.sort, but the axis to be sorted has only two elements (the min/max indices) and the overall array is rather small (the number of bins):

def min_max_downsample_v3(x, y, num_bins):
    pts_per_bin = x.size // num_bins

    x_view = x[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    i_min = np.argmin(y_view, axis=1)
    i_max = np.argmax(y_view, axis=1)

    r_index = np.repeat(np.arange(num_bins), 2)
    c_index = np.sort(np.stack((i_min, i_max), axis=1)).ravel()

    return x_view[r_index, c_index], y_view[r_index, c_index]

I checked the timings for your example and I obtained:

  • min_max_downsample_v1: 110 ms ± 5 ms
  • min_max_downsample_v2: 240 ms ± 8.01 ms
  • min_max_downsample_v3: 164 ms ± 1.23 ms

I also checked returning directly after the calls to arg(min|max), and the result was likewise 164 ms, i.e. there's no real overhead after that point anymore.



Answer 2:

So this doesn't address speeding up the specific function in question, but it does show a few ways of plotting a line with a large number of points somewhat effectively. This assumes that the x points are ordered and uniformly (or close to uniformly) sampled.

Setup

from pylab import *

Here's a function I like that reduces the number of points by randomly choosing one in each interval. It isn't guaranteed to show every peak in the data, but it doesn't have as many problems as directly decimating the data, and is fast.

def calc_rand(y, factor):
    split = y[:len(y)//factor*factor].reshape(-1, factor)
    idx = randint(0, split.shape[-1], split.shape[0])
    return split[arange(split.shape[0]), idx]

And here's the min and max to see the signal envelope:

def calc_env(y, factor):
    """
    y : 1D signal
    factor : amount to reduce y by (actually returns twice this for min and max)
    Calculate envelope (interleaved min and max points) for y
    """
    split = y[:len(y)//factor*factor].reshape(-1, factor)
    upper = split.max(axis=-1)
    lower = split.min(axis=-1)
    return c_[upper, lower].flatten()

The following function can take either of these, and uses them to reduce the data being drawn. The number of points actually taken is 5000 by default, which should far exceed a monitor's resolution. Data is cached after it's reduced. Memory may be an issue, especially with large amounts of data, but it shouldn't exceed the amount required by the original signal.

def plot_bigly(x, y, *, ax=None, M=5000, red=calc_env, **kwargs):
    """
    x : the x data
    y : the y data
    ax : axis to plot on
    M : The maximum number of line points to display at any given time
    kwargs : passed to line
    """
    assert x.shape == y.shape, "x and y data must have same shape!"
    if ax is None:
        ax = gca()

    cached = {}

    # Set up the line to be drawn beforehand; note this doesn't use the axes' property
    #  cycle, so style needs to be passed in explicitly
    line = plt.Line2D([],[], **kwargs)
    def update(xmin, xmax):
        """
        Update line data

        precomputes and caches entire line at each level, so initial
        display may be slow but panning and zooming should speed up after that
        """
        # Find nearest power of two as a factor to downsample by
        imin = max(np.searchsorted(x, xmin)-1, 0)
        imax = min(np.searchsorted(x, xmax) + 1, y.shape[0])
        L = imax - imin + 1
        factor = max(2**int(round(np.log(L/M) / np.log(2))), 1)

        # only calculate reduction if it hasn't been cached, do reduction using nearest cached version if possible
        if factor not in cached:
            cached[factor] = red(y, factor=factor)

        ## Make sure lengths match correctly here, by ensuring at least
        #   "factor" points for each x point, then matching y length
        #  this assumes x has uniform sample spacing - but could be modified
        newx = x[imin:imin + ((imax-imin)//factor)* factor:factor]
        start = imin//factor
        newy = cached[factor][start:start + newx.shape[-1]]
        assert newx.shape == newy.shape, "decimation error {}/{}!".format(newx.shape, newy.shape)

        ## Update line data
        line.set_xdata(newx)
        line.set_ydata(newy)

    update(x[0], x[-1])
    ax.add_line(line)
    ## Manually update limits of axis, as adding line doesn't do this
    #   if drawing multiple lines this can quickly slow things down, and some
    #   sort of check should be included to prevent unnecessary changes in limits
    #   when a line is first drawn.
    ax.set_xlim(min(ax.get_xlim()[0], x[0]), max(ax.get_xlim()[1], x[-1]))
    ax.set_ylim(min(ax.get_ylim()[0], np.min(y)), max(ax.get_ylim()[1], np.max(y)))

    def callback(*ignore):
        lims = ax.get_xlim()
        update(*lims)

    ax.callbacks.connect('xlim_changed', callback)

    return [line]

Here's some test code:

L=int(100e6)
x=linspace(0,1,L)
y=0.1*randn(L)+sin(2*pi*18*x)
plot_bigly(x,y, red=calc_env)

On my machine this displays very quickly. Zooming has a bit of lag, especially when it's by a large amount. Panning has no issues. Using random selection instead of the min and max is quite a bit faster, and only has issues at very high zoom levels.



Answer 3:

EDIT: Added parallel=True to numba ... even faster

I ended up making a hybrid of a single-pass argmin+argmax routine and the improved indexing from @a_guest's answer, following this related simultaneous min/max question.

This version returns the correct x-values for each min/max y pair and, thanks to numba, is actually a little faster than the "fast but not quite correct" version.

import numpy as np
from numba import jit, prange

@jit(parallel=True)
def min_max_downsample_v4(x, y, num_bins):
    pts_per_bin = x.size // num_bins
    x_view = x[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)    
    i_min = np.zeros(num_bins,dtype='int64')
    i_max = np.zeros(num_bins,dtype='int64')

    for r in prange(num_bins):
        min_val = y_view[r,0]
        max_val = y_view[r,0]
        for c in range(pts_per_bin):
            if y_view[r,c] < min_val:
                min_val = y_view[r,c]
                i_min[r] = c
            elif y_view[r,c] > max_val:
                max_val = y_view[r,c]
                i_max[r] = c                
    r_index = np.repeat(np.arange(num_bins), 2)
    c_index = np.sort(np.stack((i_min, i_max), axis=1)).ravel()        
    return x_view[r_index, c_index], y_view[r_index, c_index]

Comparing the speeds using timeit shows the numba code is roughly 2.6x faster than v1 while providing better results. It is also a little over 10x faster than doing numpy's argmin & argmax in series.

%timeit min_max_downsample_v1(x_big ,y_big ,2000)
96 ms ± 2.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit min_max_downsample_v2(x_big ,y_big ,2000)
507 ms ± 4.75 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit min_max_downsample_v3(x_big ,y_big ,2000)
365 ms ± 1.27 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit min_max_downsample_v4(x_big ,y_big ,2000)
36.2 ms ± 487 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)


Answer 4:

Did you try pyqtgraph for interactive plotting? It's more responsive than matplotlib.
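
For example, a minimal pyqtgraph sketch (my illustration, not from the original answer; the data is arbitrary) using its built-in per-pixel min/max ('peak') downsampling:

import numpy as np
import pyqtgraph as pg

x = np.linspace(0, 10, 10_000_000)
y = np.cos(x)

plot = pg.plot(x, y)                          #opens a plot window with the curve
plot.setDownsampling(auto=True, mode='peak')  #keep min/max per screen pixel
plot.setClipToView(True)                      #only draw the visible x-range
pg.exec()                                     #start the Qt event loop (pyqtgraph >= 0.12)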

One trick I use for downsampling is to use array_split and compute min and max for the splits. The split is done according to the number of samples per pixel of the plot area.
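
A rough sketch of that idea (my interpretation; the helper name and the 800-pixel plot width are assumptions, not the answerer's code). Unlike reshape, np.array_split tolerates a length that doesn't divide evenly:

import numpy as np

def downsample_by_pixels(x, y, plot_width_px):
    #One chunk per horizontal pixel; chunk lengths may differ by one sample
    chunks_x = np.array_split(x, plot_width_px)
    chunks_y = np.array_split(y, plot_width_px)
    x_out = np.array([c[0] for c in chunks_x])
    y_min = np.array([c.min() for c in chunks_y])
    y_max = np.array([c.max() for c in chunks_y])
    return x_out, y_min, y_max

x_big = np.linspace(0, 10, 1_000_000)
y_big = np.cos(x_big)
xs, y_lo, y_hi = downsample_by_pixels(x_big, y_big, 800)   #e.g. an 800-pixel-wide plot area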