I've heard that if you need to do color segmentation in your software (create a binary image from a color image by setting pixels to 1 if they meet certain threshold rules like R < 100, G > 100, 10 < B < 123), it is better to first convert your image to HSV. Is this really true? And why?
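For concreteness, this is roughly what I mean by RGB thresholding (a minimal sketch assuming NumPy; the exact thresholds are just placeholders):

```python
import numpy as np

def rgb_threshold(img):
    """Binary mask from per-channel RGB rules (placeholder thresholds)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (r < 100) & (g > 100) & (b > 10) & (b < 123)
    return mask.astype(np.uint8)  # 1 where all rules hold, 0 elsewhere
```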
The big reason is that HSV separates color information (chroma) from intensity or lighting (luma). Because value is split off into its own channel, you can build a histogram or thresholding rules using only hue and saturation. In theory this makes the segmentation invariant to lighting changes in the value channel; in practice it is a solid improvement rather than a guarantee. Even hue alone is a very meaningful representation of the base color and will often work much better than raw RGB thresholds. The end result is more robust color thresholding over fewer, simpler parameters.
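A minimal sketch of what this looks like with OpenCV; the hue/saturation/value bounds below are made-up placeholders for a greenish target, and note that OpenCV stores hue as 0-179 for 8-bit images:

```python
import cv2
import numpy as np

img_bgr = cv2.imread("input.png")                 # OpenCV loads images as BGR
hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)    # convert to HSV once, up front

# Threshold mostly on hue and saturation; keep the value (brightness) bounds
# wide so changes in lighting have less effect on the mask.
lower = np.array([35,  60,  30])   # H, S, V lower bounds (placeholder values)
upper = np.array([85, 255, 255])   # H, S, V upper bounds (placeholder values)
mask = cv2.inRange(hsv, lower, upper)  # 255 inside the range, 0 elsewhere
```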
Hue is a circular representation of color, so 0° and 360° are the same hue. This gives you more flexibility with the buckets you use in a histogram, but it also means a range near the boundary wraps around. Geometrically you can picture the HSV color space as a cone or cylinder, with hue being the angle, saturation the radius, and value the height. See the HSV Wikipedia page.
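Because of that wraparound, a hue range that straddles the boundary (e.g. reds) needs two buckets that are then combined. A sketch with OpenCV, where the specific bounds are again placeholders (OpenCV hue runs 0-179):

```python
import cv2
import numpy as np

def red_mask(hsv):
    """Mask for reddish hues that straddle the hue wraparound (placeholder bounds)."""
    low_reds  = cv2.inRange(hsv, np.array([0,   60, 30]), np.array([10,  255, 255]))
    high_reds = cv2.inRange(hsv, np.array([170, 60, 30]), np.array([179, 255, 255]))
    return cv2.bitwise_or(low_reds, high_reds)  # union of the two hue buckets
```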