I made a convolutional neural network and I want it to take pictures as input and output pictures, but when I turn the pictures into tensors they have the wrong dimensions:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [20, 3, 5, 5], but got 3-dimensional input of size [900, 1440, 3] instead
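From the weight shape [20, 3, 5, 5] I guess conv2d wants input shaped [batch, channels, height, width], while my tensor is [height, width, channels]. This is my attempt to rearrange it by hand (just what I pieced together from the docs, no idea if it's the right approach):

import numpy as np
import torch
from PIL import ImageGrab

img = np.asarray(ImageGrab.grab().convert("RGB"), dtype="int32")
t = torch.from_numpy(img)
print(t.shape)    # torch.Size([900, 1440, 3]) on my screen: height, width, channels
t4 = t.permute(2, 0, 1).unsqueeze(0)   # move channels first, then add a batch dimension
print(t4.shape)   # torch.Size([1, 3, 900, 1440]): batch, channels, height, width
# (I think the conv will also want floats rather than int32, but the shape is the first problem)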
How do I change the dimensions of the pictures? Why do they need to be changed? And how do I make the output a picture? I tried to use
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
to normalize the image, but it didn't fix the dimensions.
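When I experiment with it, ToTensor does put the channels first, but the tensor is still only 3-dimensional until I add a batch dimension by hand (using the transform defined above; the convert("RGB") is just me being careful in case grab() returns RGBA):

from PIL import ImageGrab

img = ImageGrab.grab().convert("RGB")
t = transform(img)   # ToTensor scales to [0, 1] and reorders to channels-first
print(t.shape)       # torch.Size([3, 900, 1440]) on my screen
t = t.unsqueeze(0)   # add the batch dimension
print(t.shape)       # torch.Size([1, 3, 900, 1440]): now 4-dimensional

Here is my neural net: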
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        torch.nn.Module.dump_patches = True
        self.conv1 = nn.Conv2d(3, 20, 5)   # 3 input channels, 20 output channels, 5x5 kernel
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(20, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 16 * 5 * 5)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)   # flatten to match fc1's expected input of 16*5*5
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
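To check the shapes I feed it a dummy input; by my math the 16*5*5 in fc1 only works out if the input pictures are 32x32 (32 -> conv 5x5 -> 28 -> pool -> 14 -> conv 5x5 -> 10 -> pool -> 5), but correct me if I got that wrong:

net = Net()
dummy = torch.randn(1, 3, 32, 32)   # [batch, channels, height, width]
out = net(dummy)
print(out.shape)                    # torch.Size([1, 400]), i.e. 16*5*5 values per image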
Here is where I grab the images and put them into a list:
from PIL import ImageGrab
l = []
for i in range(4):
    l.append(ImageGrab.grab())   # grabs a screenshot as a PIL image
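My plan was to turn the whole list into one batch in one go, something like this (the name to_net and the Resize to 32x32 are just my guesses, based on the fc1 math above):

import torch
import torchvision.transforms as transforms

to_net = transforms.Compose(
    [transforms.Resize((32, 32)),
     transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

batch = torch.stack([to_net(im.convert("RGB")) for im in l])
print(batch.shape)   # torch.Size([4, 3, 32, 32])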
And here is the code that turns the image into a tensor:
import numpy as np
k = torch.from_numpy(np.asarray(l[1], dtype="int32"))
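And for the last part of my question (turning a tensor back into a picture), the best I've found is ToPILImage after undoing the normalization. But this only works on a [3, H, W] tensor (here I use one image from the batch above), and my net currently outputs 400 flat values, so I don't see how to get from the output back to an image:

import torchvision.transforms as transforms

# Normalize did x = (x - 0.5) / 0.5, so undo it with x * 0.5 + 0.5
img_tensor = batch[0] * 0.5 + 0.5
pil_img = transforms.ToPILImage()(img_tensor.clamp(0, 1))
pil_img.save("out.png")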