I would like to know whether converting an image to gray scale is a necessary step for all image pre-processing techniques. I am using a neural network for face recognition. Is it really necessary to convert images to gray scale, or can I also give color images as input to the neural network?
No, it is not required. It simplifies things, so it is common practice to do so, but in general you can work directly on the color image in any representation (RGB, CMYK, etc.) by simply using more dimensions (or a more complex similarity/distance measure/kernel).
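For example, with a convolutional network the only change needed to accept color input is the number of input channels in the first layer. Here is a minimal sketch, assuming PyTorch (the layer sizes and image dimensions are illustrative, not taken from the question):

```python
import torch
import torch.nn as nn

# Grayscale variant: the first layer expects 1 input channel.
gray_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

# Color variant: identical architecture, but 3 input channels for RGB.
color_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

gray_batch = torch.randn(8, 1, 64, 64)   # batch of 8 grayscale 64x64 images
color_batch = torch.randn(8, 3, 64, 64)  # the same batch as RGB

print(gray_net(gray_batch).shape)    # torch.Size([8, 16, 64, 64])
print(color_net(color_batch).shape)  # torch.Size([8, 16, 64, 64])
```

Everything downstream of the first layer can stay the same; the extra channels just mean the first layer has more weights to learn.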
Converting to gray scale is not necessary for image processing, but it is usually done for a few reasons: it reduces the quantity of data per pixel (one channel instead of three), which in turn tends to reduce processing time, and it simplifies algorithms that are defined over a single channel.
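To make the data reduction concrete, here is a minimal sketch using OpenCV (the file name "face.jpg" is a placeholder):

```python
import cv2

bgr = cv2.imread("face.jpg")                  # OpenCV loads color images as BGR
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # collapse 3 channels into 1

print(bgr.shape)   # e.g. (480, 640, 3) -- three values per pixel
print(gray.shape)  # e.g. (480, 640)    -- one value per pixel, a third of the data
```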
However, it's important to understand that while converting to gray scale has many advantages, it is not always desirable. When you convert to gray scale you not only reduce the quantity of image data, you also lose information (the color information). For many image processing applications color is very important, and converting to gray scale can worsen results.
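One way to see the information loss: a strongly colored pixel and a plain gray pixel can map to exactly the same gray value, at which point nothing downstream can tell them apart. A small sketch, assuming the standard ITU-R BT.601 luma weights (the same weights OpenCV uses for RGB-to-gray conversion):

```python
import numpy as np

def to_gray(rgb):
    # ITU-R BT.601 luma: Y = 0.299*R + 0.587*G + 0.114*B
    return float(rgb @ np.array([0.299, 0.587, 0.114]))

red_pixel = np.array([100.0, 0.0, 0.0])    # a dark red pixel
gray_pixel = np.array([29.9, 29.9, 29.9])  # a plain gray with the same luma

print(to_gray(red_pixel))   # ~29.9
print(to_gray(gray_pixel))  # ~29.9 -- indistinguishable after conversion
```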
To summarize: if converting to gray scale still yields reasonable results for whatever application you're working on, it is probably desirable, especially given the likely reduction in processing time. However, it comes at the cost of throwing away data (the color data) that may be very helpful, or even required, for many image processing applications.