I want to be able to read and write images on an HDFS file system and take advantage of HDFS data locality.
I have a collection of images, where each image is composed of:
- a 2D array of uint16
- basic additional information stored as an XML file.
I want to create an archive on the HDFS file system and use Spark to analyze it. Right now I am struggling to find the best way to store the data on HDFS so that I can take full advantage of the Spark + HDFS structure.
From what I understand, the best way would be to create a SequenceFile wrapper. I have two questions:
- Is creating a SequenceFile wrapper the best way? (A sketch of that approach follows this list.)
- Does anybody have pointers to examples I could start from? I can't be the first one who needs to read something other than text files on HDFS through Spark!
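For context, here is a minimal sketch of what the SequenceFile route could look like in PySpark (`saveAsSequenceFile` and `sequenceFile` have been available since Spark 1.1). The file names and HDFS paths are hypothetical, and this assumes the images fit in driver memory while packing:

```python
from pyspark import SparkContext

sc = SparkContext(appName="pack-images")

# Hypothetical local image files to pack; each record is (name, raw bytes).
names = ["img_000.tif", "img_001.tif"]
pairs = [(n, bytearray(open(n, "rb").read())) for n in names]

# str keys are written as Text, bytearray values as BytesWritable.
sc.parallelize(pairs).saveAsSequenceFile("hdfs:///tmp/images.seq")

# Reading back yields (filename, bytearray) records.
images = sc.sequenceFile("hdfs:///tmp/images.seq")
```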
I have found a solution that works: using the pyspark 1.2.0 `binaryFiles` method does the job. It is flagged as experimental, but I was able to read TIFF images with it in combination with OpenCV.
Note the pyspark help for `binaryFiles`:
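```
binaryFiles(path, minPartitions=None)
    .. note:: Experimental

    Read a directory of binary files from HDFS, a local file system
    (available on all nodes), or any Hadoop-supported file system URI
    as a byte array. Each file is read as a single record and returned
    in a key-value pair, where the key is the path of each file, the
    value is the content of each file.

    Note: Small files are preferred, large file is also allowable, but
    may cause bad performance.
```

For completeness, a minimal sketch of the decoding step, assuming a hypothetical `hdfs:///data/images/` directory and that OpenCV's Python bindings (`cv2`) are installed on every worker:

```python
import numpy as np
import cv2  # OpenCV Python bindings; must be installed on every worker
from pyspark import SparkContext

sc = SparkContext(appName="tiff-reader")

def decode_tiff(kv):
    """Turn one (path, raw bytes) record into (path, numpy array)."""
    path, data = kv
    buf = np.frombuffer(bytes(data), dtype=np.uint8)
    # IMREAD_UNCHANGED keeps the 16-bit depth of the uint16 images.
    img = cv2.imdecode(buf, cv2.IMREAD_UNCHANGED)
    return path, img

# Each binaryFiles record is (file path, file contents as bytes).
images = sc.binaryFiles("hdfs:///data/images/*.tif").map(decode_tiff)
```

Since `cv2.imdecode` works on an in-memory buffer, no temporary local file is needed on the workers.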