I want to copy text files from external sources to HDFS. Let's assume that I can combine and split the files based on their size; what should the size of the text files be for the best custom MapReduce job performance? Does size matter?
HDFS is designed to support very large files, not small files. Applications that are a good fit for HDFS are those that deal with large data sets: they write their data once, read it one or more times, and require those reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.

In the HDFS architecture there is a concept of blocks. A typical block size used by HDFS is 64 MB. When you place a large file into HDFS it is chopped up into 64 MB chunks (based on the default block configuration). Suppose you have a 1 GB file and you want to place it in HDFS: there will be 1 GB / 64 MB = 16 splits/blocks, and these blocks will be distributed across the DataNodes. The goal of splitting the file is parallel processing and failover of data. Each block/chunk will reside on a different DataNode, depending on your cluster configuration.
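If you want to see this chunking for yourself, a small sketch like the following lists each block of a file and the DataNodes holding replicas of it, using the standard FileSystem API (the path is just a placeholder; point it at any file already in HDFS):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockInfo {
    public static void main(String[] args) throws Exception {
        // Placeholder path -- replace with a file that exists in your HDFS.
        Path file = new Path("/data/input/large-file.txt");

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        FileStatus status = fs.getFileStatus(file);
        System.out.println("File size:  " + status.getLen() + " bytes");
        System.out.println("Block size: " + status.getBlockSize() + " bytes");

        // Each BlockLocation corresponds to one block and lists the
        // DataNodes that hold a replica of it.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                    + " length=" + block.getLength()
                    + " hosts=" + String.join(",", block.getHosts()));
        }
        fs.close();
    }
}
```

For a 1 GB file on 64 MB blocks you would see 16 entries, one per block.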
How mappers get assigned
The number of mappers is determined by the number of input splits of your data in the MapReduce job. With a typical InputFormat, it is directly proportional to the number of files and their sizes. Suppose your HDFS block size is configured to 64 MB (the default) and you have a file of 100 MB: it will occupy 2 blocks, so there will be 2 splits, and 2 mappers will be assigned based on those blocks. But if you have 2 files of 30 MB each, then each file will occupy one block of its own, and one mapper will be assigned per block accordingly.
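For FileInputFormat-based jobs, the split size is derived from the block size together with the configurable minimum and maximum split sizes; roughly, the computation looks like the sketch below (a simplification, not the exact Hadoop source). The defaults mean the split size equals the block size, which is why split count usually tracks block count:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitTuning {
    // Roughly how FileInputFormat picks a split size:
    // splitSize = max(minSplitSize, min(maxSplitSize, blockSize))
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) throws Exception {
        long blockSize = 64L * 1024 * 1024;

        // With the usual defaults (tiny min, huge max) the split size
        // equals the block size, so a 100 MB file yields 2 splits / 2 mappers.
        System.out.println(computeSplitSize(blockSize, 1L, Long.MAX_VALUE)); // 67108864

        // If you need larger splits (fewer mappers), you can raise the minimum
        // split size on the job, e.g. to ~128 MB:
        Job job = Job.getInstance();
        FileInputFormat.setMinInputSplitSize(job, 128L * 1024 * 1024);
    }
}
```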
So you don't need to split the large file, but if you are dealing with very small files it is worth combining them.
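One common way to combine small files at the InputFormat level (rather than physically merging them beforehand) is CombineTextInputFormat, which packs many small files into a few larger splits. A minimal map-only driver sketch, with hypothetical input/output paths, might look like this:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SmallFilesJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "combine-small-files");
        job.setJarByClass(SmallFilesJob.class);

        // Pack many small files into ~64 MB splits instead of one split per file.
        job.setInputFormatClass(CombineTextInputFormat.class);
        CombineTextInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);

        job.setMapperClass(Mapper.class);   // identity mapper, just for illustration
        job.setNumReduceTasks(0);           // map-only pass-through
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        // Hypothetical paths -- replace with your own.
        FileInputFormat.addInputPath(job, new Path("/data/small-files"));
        FileOutputFormat.setOutputPath(job, new Path("/data/combined-out"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With this, hundreds of small files no longer mean hundreds of mappers; the number of mappers is driven by the combined split size instead.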
This link will be helpful to understand the problem with small files.
Please refer to the link below for more detail about the HDFS design:
http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html