I have a network (https://github.com/TheAbhiKumar/tensorflow-value-iteration-networks) that I am trying to implement in PyTorch (I'm very new to PyTorch, though not at all new to machine learning).
In short, I cannot seem to figure out how to implement "pure" convolution in pytorch. In tensorflow it could be accomplished like this:
def conv2d_flipkernel(x, k, name=None):
    return tf.nn.conv2d(x, flipkernel(k), name=name,
                        strides=(1, 1, 1, 1), padding='SAME')
With the flipkernel function being:
def flipkernel(kern):
    return kern[(slice(None, None, -1),) * 2 + (slice(None), slice(None))]
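For reference, that slicing trick simply reverses the first two (spatial) axes of a TF-style HWIO kernel, rotating it 180 degrees. A minimal NumPy illustration (the 3x3 kernel here is made up):

```python
import numpy as np

# TF conv2d kernels are laid out HWIO: (height, width, in_ch, out_ch).
kern = np.arange(9, dtype=np.float32).reshape(3, 3, 1, 1)

# Reversing the first two axes rotates the kernel 180 degrees spatially,
# equivalent to kern[::-1, ::-1, :, :].
flipped = kern[(slice(None, None, -1),) * 2 + (slice(None), slice(None))]
print(flipped[:, :, 0, 0])
```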
How could something similar be done in pytorch?
TLDR: Use the convolution from the functional toolbox, torch.nn.functional.conv2d, not torch.nn.Conv2d, and flip your filter along the vertical and horizontal axes.

torch.nn.Conv2d is a convolutional layer for a network. Because its weights are learned, it does not matter that it is implemented using cross-correlation: the network will simply learn a mirrored version of the kernel (thanks @etarion for this clarification).

torch.nn.functional.conv2d performs convolution with the inputs and weights provided as arguments, similar to the tensorflow function in your example. I wrote a simple test to determine whether, like the tensorflow function, it actually performs cross-correlation and it is necessary to flip the filter to get correct convolution results.
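A minimal check along those lines (the impulse input and asymmetric kernel below are made up for illustration): convolving a unit impulse reproduces the kernel unchanged under true convolution, but rotated 180 degrees under cross-correlation.

```python
import torch
import torch.nn.functional as F

# Unit impulse input, shape (batch, channels, H, W), and an asymmetric kernel.
inputs = torch.zeros(1, 1, 3, 3)
inputs[0, 0, 1, 1] = 1.0
filters = torch.arange(9, dtype=torch.float32).reshape(1, 1, 3, 3)

out = F.conv2d(inputs, filters, padding=1)
print(out[0, 0])
# The impulse comes back as the kernel rotated 180 degrees,
# which is what cross-correlation (not convolution) produces.
```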
The output is what cross-correlation produces, not convolution. Therefore, we need to flip the filter before passing it in.
With the flipped filter, the new output is the correct result for convolution.
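Continuing the same sketch, flipping the kernel along both spatial axes before calling F.conv2d yields true convolution, so the impulse now reproduces the kernel unchanged:

```python
import torch
import torch.nn.functional as F

inputs = torch.zeros(1, 1, 3, 3)
inputs[0, 0, 1, 1] = 1.0  # unit impulse at the center
filters = torch.arange(9, dtype=torch.float32).reshape(1, 1, 3, 3)

# Flip the kernel along height (dim 2) and width (dim 3) first.
flipped = torch.flip(filters, [2, 3])
out = F.conv2d(inputs, flipped, padding=1)
print(out[0, 0])
# With the pre-flipped kernel, the impulse reproduces the original
# kernel unchanged -- the correct result for true convolution.
```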
Nothing too different from the answer above, but torch can do flip(i) natively (and I guess you only wanted to flip(2) and flip(3)):
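A PyTorch version of the question's conv2d_flipkernel along those lines; the padding choice here, k.size(2) // 2, is my assumption to mimic TF's 'SAME' for odd kernel sizes:

```python
import torch
import torch.nn.functional as F

def conv2d_flipkernel(x, k):
    # k.flip(2).flip(3) reverses the height and width axes natively,
    # turning F.conv2d's cross-correlation into true convolution.
    # padding=k.size(2) // 2 mimics TF 'SAME' padding for odd kernels.
    return F.conv2d(x, k.flip(2).flip(3), padding=k.size(2) // 2)

x = torch.zeros(1, 1, 3, 3)
x[0, 0, 1, 1] = 1.0  # unit impulse
k = torch.arange(9, dtype=torch.float32).reshape(1, 1, 3, 3)
print(conv2d_flipkernel(x, k)[0, 0])  # reproduces k itself
```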