I'm posting this thread because I'm having some difficulty dealing with pictures in Java. I would like to be able to convert a picture into a byte[] array, and then to be able to do the reverse operation, so I can change the RGB of each pixel and then make a new picture. I want to use this approach because setRGB() and getRGB() of BufferedImage may be too slow for huge pictures (correct me if I'm wrong).
I read some posts here about obtaining a byte[] array (such as here) in which each pixel is represented by 3 or 4 cells of the array containing the red, green and blue values (plus the alpha value when there are 4 cells), which is quite useful and easy for me to use. Here's the code I use to obtain this array (stored in a PixelArray class I've created):
// width, height, array and hasAlphaChannel are fields of the PixelArray class.
public PixelArray(BufferedImage image)
{
    width = image.getWidth();
    height = image.getHeight();
    // The DataBuffer exposes the raster's backing byte array directly; this cast only
    // works for byte-based image types such as TYPE_3BYTE_BGR or TYPE_4BYTE_ABGR.
    DataBuffer toArray = image.getRaster().getDataBuffer();
    array = ((DataBufferByte) toArray).getData();
    hasAlphaChannel = image.getAlphaRaster() != null;
}
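For example, with this layout the red value of a pixel can be read like this (a sketch assuming the usual TYPE_3BYTE_BGR / TYPE_4BYTE_ABGR byte order, i.e. B,G,R or A,B,G,R per pixel):

// Assumption: bytes are stored per pixel as B,G,R (no alpha) or A,B,G,R (with alpha).
int bytesPerPixel = hasAlphaChannel ? 4 : 3;
int index = (y * width + x) * bytesPerPixel;        // first byte of pixel (x, y)
int red = array[index + bytesPerPixel - 1] & 0xFF;  // red is the last byte of each pixel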
My big trouble is that I haven't found any efficient method to convert this byte[] array back into a new image when I want to transform the picture (for example, remove the blue/green values and keep only the red one). I tried these solutions:
1) Making a DataBuffer object, then a SampleModel, to finally create a WritableRaster and then a BufferedImage (with additional ColorModel and Hashtable objects). It didn't work because I apparently don't have all the information I need (I have no idea what the Hashtable in the BufferedImage() constructor is for).
2) Using a ByteArrayInputStream. This didn't work because the byte[] array expected by ByteArrayInputStream has nothing to do with mine: it represents each byte of the file, not each component of each pixel (with 3-4 bytes per pixel)...
Could someone help me?
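Regarding approach 1): the Hashtable argument of the BufferedImage constructor is just the image's properties table and can simply be null. Here is a minimal sketch of that construction, assuming the byte[] holds 3-byte BGR data without alpha (the method name and the BGR assumption are mine):

import java.awt.Point;
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.*;

// Assumption: 'array' holds width * height * 3 bytes in B, G, R order.
static BufferedImage fromBgrBytes(byte[] array, int width, int height) {
    DataBufferByte buffer = new DataBufferByte(array, array.length);
    // Band offsets {2, 1, 0}: red at byte 2, green at byte 1, blue at byte 0 of each pixel.
    WritableRaster raster = Raster.createInterleavedRaster(
            buffer, width, height, width * 3, 3, new int[] {2, 1, 0}, new Point());
    ColorModel cm = new ComponentColorModel(
            ColorSpace.getInstance(ColorSpace.CS_sRGB), false, false,
            Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
    // The Hashtable parameter is the image's properties map; null is fine here.
    return new BufferedImage(cm, raster, false, null);
}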
I have tried the approaches mentioned here but for some reason neither of them worked. Using ByteArrayInputStream and ImageIO.read(...) returns null, whereas

byte[] array = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();

returns a copy of the image data, not a direct reference to it (see also here).

However, the following worked for me. Let's suppose that the dimensions and the type of the image data are known. Let byte[] srcbuf also be the buffer of the data to be converted into a BufferedImage. Then:

Create a blank image, for example a blank BufferedImage of the known width, height and type.

Convert the data array to a Raster and use setData to fill the image, i.e. try something like the sketch below.
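A minimal sketch of these two steps, assuming the bytes in srcbuf are laid out as 3-byte BGR (the exact image type is an assumption and must match the layout of srcbuf):

import java.awt.Point;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.Raster;

// Assumption: srcbuf holds width * height * 3 bytes in B, G, R order (TYPE_3BYTE_BGR layout).
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);

// Wrap srcbuf in a Raster that matches the image's sample model and copy it into the image.
img.setData(Raster.createRaster(img.getSampleModel(),
        new DataBufferByte(srcbuf, srcbuf.length), new Point()));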
This method does tend to get out of sync when you try to use the Graphics object of the resulting image. If you need to draw on top of your image, construct a second image (which can be persistent, i.e. not constructed every time but re-used) and drawImage the first one onto it.

The approach of using ImageIO.read directly is not right in some cases. In my case, the raw byte[] doesn't contain any information about the width, height and format of the image. Using ImageIO.read alone, it is impossible for the program to construct a valid image.
It is necessary to pass the basic information of the image to the BufferedImage object:
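For instance (a sketch; the variable names and the TYPE_INT_RGB choice are mine, and the type has to fit your data):

// Assumption: width and height of the source data are known.
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);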
Then set the data for the BufferedImage object by using setRGB or setData. (When using setRGB, it seems we must convert the byte[] to an int[] first. As a result, it may cause performance issues if the source image data is big. setData may be a better idea for big byte[] source data.)
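Continuing the sketch above, the setRGB path needs the byte[] packed into an int[] first, which is the extra conversion mentioned; here srcbuf is assumed to hold 3 bytes per pixel in B, G, R order:

// Pack the 3-byte BGR pixels into TYPE_INT_RGB ints, then hand them to setRGB in one call.
int[] pixels = new int[width * height];
for (int i = 0, p = 0; p < pixels.length; p++) {
    int b = srcbuf[i++] & 0xFF;
    int g = srcbuf[i++] & 0xFF;
    int r = srcbuf[i++] & 0xFF;
    pixels[p] = (r << 16) | (g << 8) | b;
}
image.setRGB(0, 0, width, height, pixels, 0, width);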
Several people upvoted the comment that the accepted answer is wrong.
If the accepted answer isn't working for you, it may be because ImageIO doesn't have support for the type of image you're trying to read (e.g. TIFF).
Add this to your pom (aka put jai-imageio-core-1.3.1.jar in your classpath):
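Something like the following should go in the pom (to the best of my knowledge these are the Maven Central coordinates for the jai-imageio-core fork):

<dependency>
    <groupId>com.github.jai-imageio</groupId>
    <artifactId>jai-imageio-core</artifactId>
    <version>1.3.1</version>
</dependency>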
This adds support for additional formats such as TIFF.
You can check the list of supported formats with:
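For example (a quick sketch using the standard ImageIO API):

import java.util.Arrays;
import javax.imageio.ImageIO;

// Prints the format names ImageIO can currently read, e.g. [jpg, bmp, gif, png, ...].
System.out.println(Arrays.toString(ImageIO.getReaderFormatNames()));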
Note that you only have to drop, e.g., jai-imageio-core-1.3.1.jar into your classpath to make it work.
Other projects that add additional format support are available as well.