My Hadoop version is 1.0.2. Now I want at most 10 map tasks running at the same time. I have found 2 variables related to this question.
a) mapred.job.map.capacity
but in my Hadoop version, this parameter seems to have been abandoned.
b) mapred.jobtracker.taskScheduler.maxRunningTasksPerJob (http://grepcode.com/file/repo1.maven.org/maven2/com.ning/metrics.collector/1.0.2/mapred-default.xml)
I set this variable as follows:
Configuration conf = new Configuration();
conf.set("date", date);
conf.set("mapred.job.queue.name", "hadoop");
conf.set("mapred.jobtracker.taskScheduler.maxRunningTasksPerJob", "10");
DistributedCache.createSymlink(conf);
Job job = new Job(conf, "ConstructApkDownload_" + date);
...
The problem is that it doesn't work: there are still more than 50 map tasks running as the job starts.
After looking through the Hadoop documentation, I can't find another way to limit the number of concurrently running map tasks. I hope someone can help. Thanks.
=====================
I have found the answer to this question; I share it here for others who may be interested.
Use the Fair Scheduler, with the configuration parameter maxMaps, to set a pool's maximum number of concurrent map task slots in the allocation file (fair-scheduler.xml). Then, when you submit a job, just set the job's pool to the corresponding one, as sketched below.
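A minimal sketch of such an allocation file, assuming the Hadoop 1.x Fair Scheduler; the pool name limited-pool is illustrative, not from the original:

<?xml version="1.0"?>
<!-- fair-scheduler.xml: cap this pool at 10 concurrent map task slots -->
<allocations>
  <pool name="limited-pool">
    <maxMaps>10</maxMaps>
  </pool>
</allocations>

On the job side, the pool can then be selected explicitly with the mapred.fairscheduler.pool property (otherwise the scheduler derives the pool from whatever property mapred.fairscheduler.poolnameproperty names, user.name by default):

conf.set("mapred.fairscheduler.pool", "limited-pool");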
Read about scheduling jobs in Hadoop (for example, the Fair Scheduler). You can create a custom queue with whatever configuration you need and then assign your job to it. If you limit your custom queue's maximum map tasks to 10, then each job assigned to that queue will have at most 10 concurrent map tasks.
If you are using Hadoop 2.7 or newer, you can use
mapreduce.job.running.map.limit
and mapreduce.job.running.reduce.limit
to restrict map and reduce tasks at the level of each job. See the related JIRA ticket.
mapred.tasktracker.map.tasks.maximum is the property that restricts the number of map tasks that can run at a time on each TaskTracker node (it is a per-node limit, not a per-job one). Have it configured in your mapred-site.xml.
Refer to question 2.7 in http://wiki.apache.org/hadoop/FAQ
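For reference, a sketch of the corresponding mapred-site.xml entry; the value 10 mirrors the question's target:

<!-- mapred-site.xml on each TaskTracker node -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>10</value>
</property>

Note that this is a daemon-side setting, so it only takes effect after the TaskTrackers are restarted, and it limits slots per node rather than per job.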
You can set the value of
mapred.jobtracker.maxtasks.per.job
to something other than -1 (the default). This limits the number of map and reduce tasks a single job can employ. This variable is described as:
"The maximum number of tasks for a single job. A value of -1 indicates that there is no maximum."
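A sketch of the corresponding entry in the JobTracker's mapred-site.xml; the value 100 is illustrative:

<property>
  <name>mapred.jobtracker.maxtasks.per.job</name>
  <value>100</value>
</property>

Keep in mind that this caps a job's total task count rather than its concurrency; a job that needs more tasks than the limit will fail to initialize.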
I think there were plans to add
mapred.max.maps.per.node
and mapred.max.reduces.per.node
to job configs, but they never made it to a release.

The number of mappers fired is decided by the input split size: it determines the chunks into which the data is divided and handed to different mappers as it is read from HDFS. So in order to control the number of mappers, we have to control the split size.
It can be controlled by setting the parameters
mapred.min.split.size
and mapred.max.split.size
while configuring the job in MapReduce. The values are to be set in bytes. So if we have a 20 GB (20480 MB) file and we want to fire 40 mappers, then each split needs to be 20480 MB / 40 = 512 MB. The code for that would be along the following lines, where conf is an object of the org.apache.hadoop.conf.Configuration class:
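// 512 MB expressed in bytes: 512 * 1024 * 1024 = 536870912
conf.set("mapred.min.split.size", "536870912");
conf.set("mapred.max.split.size", "536870912");

FileInputFormat computes the split size as max(minSplitSize, min(maxSplitSize, blockSize)), so pinning both bounds to the same value forces 512 MB splits regardless of the HDFS block size.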