I am pretty new to artificial intelligence and neural networks. I have implemented a feed-forward neural network in PyTorch for classification on the MNIST dataset. Now I want to visualize the receptive fields of (a subset of) the hidden neurons, but I am having trouble understanding the concept of receptive fields, and when I google it all the results are about CNNs. Can anyone help me with how to do this in PyTorch and how to interpret the results?
I have previously described the concept of a receptive field for CNNs in this answer, just to give you some context that might be useful in my actual answer.
It seems that you are also struggling with the idea of receptive fields. Generally, you can best understand it by asking the question "which part of the (previous) layer's representation affects my current activation?"
In convolutional layers, each output activation is computed from only a small patch of the previous layer (or at least, only that subregion changes the outcome). This patch is precisely the receptive field.
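You can verify this empirically with autograd: backpropagating from a single output activation of a conv layer produces nonzero input gradients only inside the kernel-sized patch that activation "sees". This is a minimal sketch (the layer sizes and the chosen output position are arbitrary):

```python
import torch
import torch.nn as nn

# A single 3x3 convolution; each output pixel should only "see" a 3x3 input patch.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, bias=False)

x = torch.randn(1, 1, 8, 8, requires_grad=True)
y = conv(x)  # shape (1, 1, 6, 6)

# Backpropagate from one arbitrary output activation.
y[0, 0, 2, 2].backward()

# Nonzero input gradients mark the receptive field: a 3x3 patch.
receptive_field = (x.grad[0, 0] != 0)
print(receptive_field.sum().item())  # 9 input pixels
```

The same trick works for deeper stacks of conv layers, where the receptive field grows with depth.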
Now, a fully connected layer, as the name implies, has a connection from every previous hidden state to every new hidden state, see the image below:
In that case, the receptive field is simply "every previous state" (e.g., in the image, a cell in the first turquoise layer is affected by all yellow cells), which is not very helpful. The whole idea would be to have a smaller subset instead of all available states.
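The same autograd check makes this concrete for a fully connected layer: the gradient of any single hidden neuron with respect to the input is nonzero everywhere, i.e., its "receptive field" is the whole input. A minimal sketch on flattened 28x28 MNIST-sized inputs (sizes and the neuron index are arbitrary):

```python
import torch
import torch.nn as nn

# One fully connected hidden layer on flattened 28x28 inputs.
fc = nn.Linear(28 * 28, 32, bias=False)

x = torch.randn(1, 28 * 28, requires_grad=True)
h = fc(x)

# Backpropagate from a single hidden neuron (index 0, chosen arbitrarily).
h[0, 0].backward()

# Every input pixel has a nonzero gradient: the receptive field
# of a fully connected neuron is the entire input.
print((x.grad != 0).sum().item())  # 784, i.e., all 28*28 pixels
```

If you still want *some* visualization for an MLP, the closest common practice is to reshape a hidden neuron's incoming weight vector (`fc.weight[i].reshape(28, 28)`) and plot it as an image, which shows how strongly each pixel feeds that neuron rather than a spatially restricted receptive field.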
Therefore, I think your question about a PyTorch implementation does not really make much sense for fully connected layers, unfortunately, but I hope the answer still provided some clarity on the topic.
As a follow-up, I also encourage you to think about the implications of this "connectedness", especially when it comes to the number of tunable parameters.
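To make that follow-up concrete, here is a hedged comparison of parameter counts: a fully connected layer that maps a 28x28 image to a same-sized hidden layer versus a single 3x3 convolution that also preserves the spatial size (both layer choices are just illustrative):

```python
import torch.nn as nn

# Fully connected: every input pixel connects to every output unit.
fc = nn.Linear(28 * 28, 28 * 28)

# Convolutional: one shared 3x3 kernel, padding=1 preserves spatial size.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

n_fc = sum(p.numel() for p in fc.parameters())
n_conv = sum(p.numel() for p in conv.parameters())

print(n_fc)    # 784*784 + 784 = 615440 parameters
print(n_conv)  # 3*3 + 1 = 10 parameters
```

The weight sharing and local connectivity of the convolution are exactly what shrinks the receptive field, and the parameter count along with it.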