I just want to confirm the following. Please verify whether this is correct: 1. As per my understanding, when we copy a file into HDFS, that is the point when the file (assuming its size > 64 MB = HDFS block size) is split into multiple chunks, and each chunk is stored on a different datanode.
2. File contents are already split into chunks when the file is copied into HDFS; the split does not happen at the time of running a map job. Map tasks are only scheduled so that each one works on a chunk of max. size 64 MB with data locality (i.e., the map task runs on the node that contains that data chunk).
3. File splitting also happens if the file is compressed (gzipped), but MR ensures that each such file is processed by just one mapper; i.e., MR will collect all the chunks of the gzip file lying on other datanodes and give them all to a single mapper.
4. The same thing as above will happen if we define isSplitable() to return false: all the chunks of the file will be processed by one mapper running on one machine. MR will read all the chunks of the file from the different datanodes and make them available to that single mapper.
Yes, file contents are split into chunks when the file is copied into HDFS. The block size is configurable; if it is, say, 128 MB, then the whole 128 MB is one block, not two separate 64 MB blocks. Also, it is not necessary that each chunk of a file is stored on a separate datanode: a datanode may hold more than one chunk of a particular file, and a particular chunk may be present on more than one datanode depending on the replication factor.
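The arithmetic behind this is simple enough to sketch. Below is an illustrative, standalone calculation of how many blocks a file occupies and how big the last block is (the method names `blockCount` and `lastBlockSize` are made up for this sketch; they are not part of any Hadoop API):

```java
public class BlockMath {
    // Number of HDFS blocks a file occupies for a given block size (ceiling division).
    static long blockCount(long fileSize, long blockSize) {
        return (fileSize + blockSize - 1) / blockSize;
    }

    // Size of the final block: HDFS does not pad it out to the full block size.
    static long lastBlockSize(long fileSize, long blockSize) {
        long rem = fileSize % blockSize;
        return rem == 0 ? blockSize : rem;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // A 200 MB file with a 128 MB block size occupies 2 blocks: 128 MB + 72 MB.
        System.out.println(blockCount(200 * mb, 128 * mb));        // 2
        System.out.println(lastBlockSize(200 * mb, 128 * mb) / mb); // 72
    }
}
```

Note that a 128 MB file with a 128 MB block size is exactly one block, which matches the point above.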
Your understanding is not quite right. I would point out that there are two, almost independent, processes: splitting files into HDFS blocks, and splitting files for processing by the different mappers.
HDFS splits files into blocks based on the configured block size.
Each input format has its own logic for how files are split into parts for independent processing by different mappers. The default logic of FileInputFormat is to split a file along its HDFS blocks, but you can implement any other logic.
Compression is usually a foe of splitting, so we employ a block-compression technique to enable splitting of compressed data. It means that each logical part of the file (block) is compressed independently.
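The key property of block compression can be demonstrated with plain `java.util.zip`: if each chunk gets its own compression stream, any chunk can be decompressed without reading the chunks before it. This is only an illustrative sketch of the idea; real splittable formats (e.g., SequenceFiles with block compression) additionally manage headers, sync markers, and codec metadata:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class BlockCompression {
    // Compress one logical block with its own, independent Deflater.
    static byte[] compressChunk(byte[] chunk) {
        Deflater d = new Deflater();
        d.setInput(chunk);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!d.finished()) out.write(buf, 0, d.deflate(buf));
        d.end();
        return out.toByteArray();
    }

    // Decompress one block in isolation -- no other block's bytes are needed.
    static byte[] decompressChunk(byte[] compressed) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inf.finished()) out.write(buf, 0, inf.inflate(buf));
        inf.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] block2 = "second logical block".getBytes("UTF-8");
        byte[] c2 = compressChunk(block2);
        // A mapper holding only this chunk can recover it without block 1.
        System.out.println(new String(decompressChunk(c2), "UTF-8"));
    }
}
```

Contrast this with whole-file gzip, where the decompressor must start from byte 0, which is exactly why a gzipped file cannot be split across mappers.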
David's answer pretty much hits the nail on the head; I am just elaborating on it here.
There are two distinct concepts at work here, and each is handled by a different entity in the Hadoop framework.
Firstly --
1) Dividing a file into blocks -- When a file is written into HDFS, HDFS divides the file into blocks and takes care of their replication. This is done once (mostly), and the blocks are then available to all MR jobs running on the cluster. The block size is a cluster-wide configuration.
Secondly --
2) Splitting a file into input splits -- When an input path is passed to an MR job, the job uses that path, along with the configured input format, to divide the files in the input path into splits; each split is processed by one map task. The calculation of input splits is done by the input format each time a job is executed.
Now that we have this under our belt, we can understand that the isSplitable() method falls under the second category.
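To make the second concept concrete, here is a simplified, standalone sketch of the default split calculation (the real FileInputFormat also consults min/max split-size settings and block locations; the 1.1 slop factor, which lets the last split be slightly larger than a block instead of producing a tiny trailing split, does come from Hadoop's implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitCalc {
    // Tolerate a last split up to 10% larger than a block rather than
    // emitting a tiny extra split.
    private static final double SPLIT_SLOP = 1.1;

    // Returns the byte lengths of the input splits for a single file.
    static List<Long> getSplits(long fileSize, long blockSize, boolean splitable) {
        List<Long> splits = new ArrayList<>();
        if (!splitable) {          // isSplitable() == false: the whole file is one split
            splits.add(fileSize);
            return splits;
        }
        long remaining = fileSize;
        while ((double) remaining / blockSize > SPLIT_SLOP) {
            splits.add(blockSize);
            remaining -= blockSize;
        }
        if (remaining > 0) splits.add(remaining);
        return splits;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // 260 MB file, 128 MB blocks: two splits (128 MB and 132 MB) --
        // the slop factor absorbs the 4 MB remainder into the last split.
        System.out.println(getSplits(260 * mb, 128 * mb, true).size());  // 2
        // With isSplitable() returning false, the whole file is one split.
        System.out.println(getSplits(260 * mb, 128 * mb, false).size()); // 1
    }
}
```

Note how the non-splitable case produces a single split covering the whole file, which is why one mapper ends up reading all of that file's blocks, wherever they live.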
To really nail this down, have a look at the HDFS write data flow (concept 1).
The second point in the diagram is probably where the split happens; note that this has nothing to do with the running of an MR job.
Now have a look at the execution steps of an MR job.
Here the first step is the calculation of the input splits via the input format configured for the job.
A lot of your confusion stems from the fact that you are conflating these two concepts; I hope this makes it a little clearer.