I'm trying to handle 16-bit-per-channel RGBA TIFF images in C, but I could not find much information about 16-bit images in the specification.
In the case of an 8-bit-per-channel RGBA image, I understand that a pixel is stored as a uint32 and can be de-interleaved by splitting the 32 bits into 4 groups (R, G, B, A) of 8 bits each.
To handle 8-bit-per-channel RGBA images, I'm doing the following (see also the enclosed source code here; a stripped-down sketch follows this list):
- I store the image data as a uint32 array (using TIFFReadRGBAImageOriented) that I call data_tiff
- I de-interleave the pixels using (uint8) TIFFGetR(*data_tiff), (uint8) TIFFGetG(*data_tiff), (uint8) TIFFGetB(*data_tiff) and (uint8) TIFFGetA(*data_tiff)
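A minimal sketch of that approach (error handling removed; I'm assuming libtiff 4.x, where the integer types are spelled uint32_t / uint8_t from <stdint.h>; the function name read_rgba8 is just for illustration):

#include <stdint.h>
#include <tiffio.h>

/* Sketch of the 8-bit RGBA path described above. */
void read_rgba8(const char *path)
{
    TIFF *tif = TIFFOpen(path, "r");
    uint32_t width, height;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &width);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);

    uint32_t *data_tiff =
        (uint32_t *) _TIFFmalloc((tmsize_t) width * height * sizeof (uint32_t));
    if (TIFFReadRGBAImageOriented(tif, width, height, data_tiff,
                                  ORIENTATION_TOPLEFT, 0)) {
        for (uint32_t i = 0; i < width * height; i++) {
            uint8_t r = (uint8_t) TIFFGetR(data_tiff[i]);
            uint8_t g = (uint8_t) TIFFGetG(data_tiff[i]);
            uint8_t b = (uint8_t) TIFFGetB(data_tiff[i]);
            uint8_t a = (uint8_t) TIFFGetA(data_tiff[i]);
            /* ... use r, g, b, a ... */
        }
    }
    _TIFFfree(data_tiff);
    TIFFClose(tif);
}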
In the case of a 16-bit-per-channel RGBA image, how can I de-interleave the pixels?
If I could retrieve the image data as a uint64 array, then I could do the following:
#define TIFF16GetR(abgr) ((abgr) & 0xffff)
#define TIFF16GetG(abgr) (((abgr) >> 16) & 0xffff)
#define TIFF16GetB(abgr) (((abgr) >> 32) & 0xffff)
#define TIFF16GetA(abgr) (((abgr) >> 48) & 0xffff)
- I read the image data as a uint64 array
- I de-interleave the pixels using (uint16) TIFF16GetR(*data_tiff), (uint16) TIFF16GetG(*data_tiff), (uint16) TIFF16GetB(*data_tiff) and (uint16) TIFF16GetA(*data_tiff)
But it seems that the data are not natively stored as a uint64 array, so I wonder how 16-bit-per-channel pixels are interleaved into the uint32 pixel array.
I'm also facing difficulties dealing with 16-bit grayscale images in the same way (using TIFFReadRGBAImageOriented to get the image data and trying to convert each pixel into a uint16).
More generally, do you have any documentation about 16-bit grayscale and color images?
Thank you,
Best Regards,
Rémy A.
The TIFFReadRGBAImage high-level interface will always read the image with a precision of 8 bits per sample.
In order to read a 16-bit-per-channel image without losing precision, you could use TIFFReadScanline directly and read the correct amount of data according to SamplesPerPixel and BitsPerSample. But this only works if the image is stored in strips (not tiles, which were introduced in TIFF 6.0), and there must be only one row in each compressed strip (if the image is compressed).
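The scanline path could look like this (a minimal sketch assuming libtiff 4.x and a strip-organized 16-bit image; for RGBA, SamplesPerPixel is 4, for grayscale it is 1; read16 is a made-up name):

#include <stdint.h>
#include <tiffio.h>

/* Sketch: read a strip-organized 16-bit image row by row at full precision. */
uint16_t *read16(TIFF *tif, uint32_t *w, uint32_t *h, uint16_t *spp)
{
    uint16_t bps = 0;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, w);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, h);
    TIFFGetFieldDefaulted(tif, TIFFTAG_SAMPLESPERPIXEL, spp);
    TIFFGetFieldDefaulted(tif, TIFFTAG_BITSPERSAMPLE, &bps);
    if (bps != 16 || TIFFIsTiled(tif))
        return NULL;                              /* not the case handled here */

    tmsize_t linebytes = TIFFScanlineSize(tif);   /* decoded bytes per row */
    uint16_t *data = (uint16_t *) _TIFFmalloc(linebytes * *h);
    if (!data)
        return NULL;
    for (uint32_t row = 0; row < *h; row++)
        TIFFReadScanline(tif, (uint8_t *) data + row * linebytes, row, 0);
    return data;   /* pixel i, sample s lives at data[i * *spp + s] */
}

Note that the decoded samples come out as native-endian uint16 values laid out one after another (R0 G0 B0 A0 R1 G1 ... for RGBA, one value per pixel for grayscale), libtiff having byte-swapped them while decoding, so there is no packed 64-bit pixel to unpack and no TIFF16GetR-style macros are needed.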
If you want to handle every kind of TIFF image without using TIFFReadRGBAImage, then you have to detect the image format and use the low-level interfaces such as TIFFReadEncodedStrip and TIFFReadEncodedTile.
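For the stripped case, the low-level loop looks roughly like this (again only a sketch; what the decoded bytes mean still depends on BitsPerSample, SamplesPerPixel, PlanarConfiguration and so on):

#include <tiffio.h>

/* Sketch: decode every strip of a stripped image at its native precision. */
int read_strips(TIFF *tif)
{
    void *buf = _TIFFmalloc(TIFFStripSize(tif));   /* decoded bytes per strip */
    if (!buf)
        return -1;
    for (tstrip_t s = 0; s < TIFFNumberOfStrips(tif); s++) {
        /* Passing (tmsize_t) -1 asks libtiff to decode the whole strip. */
        if (TIFFReadEncodedStrip(tif, s, buf, (tmsize_t) -1) < 0) {
            _TIFFfree(buf);
            return -1;
        }
        /* ... consume the decoded bytes ... */
    }
    _TIFFfree(buf);
    return 0;
}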
Note that the TIFF specification is very extensive and flexible, and using those low-level interfaces to handle every possible kind of image won't be an easy task, so you may be better off using a higher-level library than libtiff if you can.
EDIT
What you are referring to in the comment is the first part of the TIFF 6.0 specification, known as Baseline TIFF:
« When TIFF was introduced, its extensibility provoked compatibility problems. The flexibility in encoding gave rise to the joke that TIFF stands for Thousands of Incompatible File Formats.[9] To avoid these problems, every TIFF reader was required to read Baseline TIFF. The Baseline TIFF does not include layers, or compression with JPEG or LZW. The Baseline TIFF is formally known as TIFF 6.0, Part 1: Baseline TIFF » — from Wikipedia
A Baseline TIFF does not support bit depths higher than 8 bits, which is why, in the Baseline TIFF specification, the value of BitsPerSample for a grayscale image can only be 4 or 8, and for an RGB image it can only be 8 bits per channel. Higher bit depths are supported as an extension to the Baseline TIFF specification, and a TIFF reader is not required to support them.
Tiled images are also an extension to the Baseline specification, in which the StripOffsets, StripByteCounts, and RowsPerStrip fields are replaced by TileWidth, TileLength, TileOffsets and TileByteCounts, so you can distinguish a tiled image from a stripped image by checking which of those fields exist, using TIFFGetField().
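As a sketch of that check (TIFFIsTiled() is a libtiff convenience that amounts to the same field probe; read_any is a made-up name):

#include <stdint.h>
#include <tiffio.h>

/* Sketch: dispatch between the tile and strip code paths. */
void read_any(TIFF *tif)
{
    uint32_t tw = 0;
    /* TIFFGetField() returns non-zero only when the tag is present in the
       file, so probing TIFFTAG_TILEWIDTH identifies a tiled image. */
    if (TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tw)) {
        void *buf = _TIFFmalloc(TIFFTileSize(tif));
        for (ttile_t t = 0; t < TIFFNumberOfTiles(tif); t++)
            TIFFReadEncodedTile(tif, t, buf, (tmsize_t) -1);
        _TIFFfree(buf);
    } else {
        void *buf = _TIFFmalloc(TIFFStripSize(tif));
        for (tstrip_t s = 0; s < TIFFNumberOfStrips(tif); s++)
            TIFFReadEncodedStrip(tif, s, buf, (tmsize_t) -1);
        _TIFFfree(buf);
    }
}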