I wanted to convert an RGB image to grayscale manually in TensorFlow, without using built-in conversion functions. So I wrote the following:
import tensorflow as tf
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
# First, load the image again
filename = "MarshOrchid.jpg"
raw_image_data = mpimg.imread(filename)
image = tf.placeholder("float", [None, None, 3])
slice = tf.slice(image,[0,0,0],[-1,-1,1])
with tf.Session() as session:
    result = session.run(slice, feed_dict={image: raw_image_data})
plt.imshow(result)
plt.show()
I extracted the first channel of the image for the conversion, but this raises an error when plt.imshow is called:
TypeError: Invalid dimensions for image data
What should I do?
From the docs of plt.imshow(X): X must have shape (M, N) for a grayscale image, (M, N, 3) for RGB, or (M, N, 4) for RGBA. Here your sliced tensor has shape [height, width, 1], so you just need to remove the trailing dimension of size 1, e.g. with tf.squeeze inside the graph or by indexing result[:, :, 0] on the returned array.
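A minimal sketch of that fix, shown here with NumPy on the array returned by session.run (inside the graph, tf.squeeze(slice, axis=-1) does the same thing; the array below is a made-up stand-in for your image data):

```python
import numpy as np

# Stand-in for the array returned by session.run(slice): shape (H, W, 1)
result = np.random.rand(4, 5, 1).astype(np.float32)

# Drop the trailing channel dimension so imshow sees a 2-D (M, N) array
gray = np.squeeze(result, axis=-1)   # equivalently: result[:, :, 0]

print(gray.shape)  # (4, 5)
```

With a 2-D array, plt.imshow(gray, cmap='gray') renders the single channel as grayscale; without the cmap argument, matplotlib applies its default colormap instead.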