Pure NumPy 2D mean convolution derivative of input

Published 2019-01-29 07:21

Question:

I have b two-dimensional m x n greyscale images that I'm convolving with a p x q filter and then mean-pooling. Using pure NumPy, I'd like to compute the derivatives of the loss with respect to both the filter and the input image, but I'm having trouble with the derivative with respect to the input image.
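
To make the setup concrete, here is a minimal sketch of the forward pass these gradients correspond to (conv2d_mean_forward is only for illustration; each y[b, r, s] is the mean over the p x q 'valid' window of the elementwise product of x and f):

import numpy as np
from numpy.lib.stride_tricks import as_strided

def conv2d_mean_forward(x, f):
    # y[b, r, s] = mean of x[b, r:r+p, s:s+q] * f over the p x q window
    b, m, n = x.shape
    p, q = f.shape
    r, s = m - p + 1, n - q + 1
    wx = as_strided(x, (b, r, s, p, q),
                    np.array([m * n, n, 1, n, 1]) * x.itemsize)
    return np.einsum('brspq,pq->brs', wx, f) / (p * q)

My attempt at the gradients: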

import numpy as np
from numpy.lib.stride_tricks import as_strided

def conv2d_derivatives(x, f, dy):
    """
    dimensions:
        b = batch size
        m = input image height
        n = input image width
        p = filter height
        q = filter width
        r = output height
        s = output width

    input:
        x = input image                       (b x m x n)
        f = filter                            (p x q)
        dy = derivative of some loss w.r.t. y (b x r x s)

    output:
        df = derivative of loss w.r.t. f      (p x q)
        dx = derivative of loss w.r.t. x      (b x m x n)

    notes:
        wx = windowed view of x s.t. wx[b, r, s] is the p x q window of x used to compute y[b, r, s]
        vdx = the same windowed view of dx
    """
    b, m, n = x.shape
    p, q = f.shape
    r = m - p + 1
    s = n - q + 1
    # wx[b, r, s] is the window x[b, r:r+p, s:s+q]
    wx = as_strided(x, (b, r, s, p, q), np.array([m * n, n, 1, n, 1]) * x.itemsize)

    # This derivative is correct
    df = 1 / (p * q) * np.einsum('brspq,brs->pq', wx, dy)

    # Method 1: this derivative is incorrect
    dx = np.zeros_like(x)
    vdx = as_strided(dx, (b, r, s, p, q), np.array([m * n, n, 1, n, 1]) * dx.itemsize)
    # einsum assigns into out=, so overlapping windows overwrite one another instead of accumulating
    np.einsum('pq,brs->brspq', f, dy, out=vdx)
    dx /= (p * q)

    # Method 2: this derivative is correct, but it's slow and memory-intensive
    dx = np.zeros_like(x)
    vdx = as_strided(dx, (b, r, s, p, q), np.array([m * n, n, 1, n, 1]) * dx.itemsize)
    prod = f[None, None, None, :, :] * dy[:, :, :, None, None]  # materializes b x r x s x p x q
    # scalar += through the overlapping view accumulates correctly, one element at a time
    for index in np.ndindex(*vdx.shape):
        vdx[index] += prod[index]
    dx /= (p * q)

    return df, dx
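
For a quick shape sanity check, a driver along these lines (the sizes are arbitrary, chosen only for illustration):

x = np.random.randn(2, 6, 7)   # b=2, m=6, n=7
f = np.random.randn(3, 3)      # p=3, q=3
dy = np.random.randn(2, 4, 5)  # r=4, s=5
df, dx = conv2d_derivatives(x, f, dy)
print(df.shape, dx.shape)      # (3, 3) (2, 6, 7)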

I know that the derivative of the loss w.r.t. wx[b, r, s, p, q] is just 1/(p*q) * f[p, q] * dy[b, r, s]. However, I don't want to explicitly compute the derivatives w.r.t. wx and store them in memory, because that b x r x s x p x q array would be massive.
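
Written out, what I need is the accumulation that Method 2 performs one element at a time: dx[b, i, j] = 1/(p*q) * sum of f[i-r, j-s] * dy[b, r, s] over all (r, s) with 0 <= i-r < p and 0 <= j-s < q. Every window that covers pixel (i, j) contributes, which is why overlapping writes have to add rather than assign.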

I thought I could einsum into a windowed view of dx, vdx, built like wx, and hoped that einsum would increment vdx[b, r, s, p, q] += f[p, q] * dy[b, r, s], but it actually assigns vdx[b, r, s, p, q] = f[p, q] * dy[b, r, s]. If there were a way to specify something like out_add_to in einsum, my problem would be solved.
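
A toy repro of the overwriting behaviour, independent of the shapes above:

out = np.ones(3)
np.einsum('i->i', np.full(3, 5.0), out=out)
print(out)  # [5. 5. 5.] -- the previous contents of out are gone, not added to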

How do I compute dx in pure NumPy without materializing a b x r x s x p x q array? I can't use SciPy or any other dependency for this problem.
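
One direction that seems to avoid the large intermediate entirely (a sketch of my own, unverified: it loops over the p*q filter taps rather than the b*r*s*p*q products, so peak memory stays at the size of dx itself):

def conv2d_dx(x, f, dy):
    # accumulate dx[b, i + r', j + s'] += f[i, j] * dy[b, r', s'], one filter tap at a time
    b, m, n = x.shape
    p, q = f.shape
    r, s = m - p + 1, n - q + 1
    dx = np.zeros_like(x)
    for i in range(p):
        for j in range(q):
            dx[:, i:i + r, j:j + s] += f[i, j] * dy
    return dx / (p * q)

Each iteration adds one shifted, scaled copy of dy, so the overlapping-window accumulation happens across just p*q vectorized additions.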