I want to share large in-memory static data (a RAM Lucene index) across my map tasks in Hadoop. Is there a way for several map/reduce tasks to share the same JVM?
Shameless plug
I go over using static objects with JVM reuse to accomplish what you describe here: http://chasebradford.wordpress.com/2011/02/05/distributed-cache-static-objects-and-fast-setup/
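To make the pattern concrete, here is a minimal sketch of what that static-object approach can look like, assuming the old `org.apache.hadoop.mapred` API; `IndexBackedMapper` and `InMemoryIndex` are illustrative names of mine, not something from the linked post:

```java
// Sketch of the static-object pattern (old mapred API assumed).
// With mapred.job.reuse.jvm.num.tasks > 1, successive map tasks of the
// same job run in the same JVM, so the static field is built only once.
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class IndexBackedMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  // Shared by every task that reuses this JVM.
  private static volatile InMemoryIndex index;

  @Override
  public void configure(JobConf job) {
    // Lazily build the expensive in-memory structure exactly once per JVM.
    if (index == null) {
      synchronized (IndexBackedMapper.class) {
        if (index == null) {
          index = InMemoryIndex.load(job);   // hypothetical loader for your RAM index
        }
      }
    }
  }

  @Override
  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> out, Reporter reporter)
      throws IOException {
    // Illustrative only: look something up in the shared structure.
    out.collect(new Text(index.lookup(value.toString())), value);
  }
}
```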
Another option, although more complicated, is to use distributed cache with a read-only memory mapped file. That way you can share the resource across the JVM processes as well.
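A rough sketch of how the memory-mapped variant could look, again with the old mapred API; the class name and the assumption that the resource is the first cache file are mine:

```java
// Sketch: map a DistributedCache file read-only so the OS page cache
// backs it once, even across separate task JVMs on the same node.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class MappedResourceMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  private MappedByteBuffer data;

  @Override
  public void configure(JobConf job) {
    try {
      // First file that was registered with DistributedCache.addCacheFile(...)
      // at job submission time; adjust the index for your own layout.
      Path local = DistributedCache.getLocalCacheFiles(job)[0];
      RandomAccessFile raf = new RandomAccessFile(local.toString(), "r");
      FileChannel ch = raf.getChannel();
      // READ_ONLY mapping: the pages are shared by every process on the node.
      data = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      raf.close();  // the mapping stays valid after the channel is closed
    } catch (IOException e) {
      throw new RuntimeException("Failed to map cached resource", e);
    }
  }

  @Override
  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> out, Reporter reporter)
      throws IOException {
    // Illustrative only: real code would search the mapped structure.
    out.collect(value, new Text("mapped bytes: " + data.capacity()));
  }
}
```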
In $HADOOP_HOME/conf/mapred-site.xml, add the property mapred.job.reuse.jvm.num.tasks. Its value can be set to a number specifying how many times a JVM is to be reused (the default is 1), or to -1 for no limit on the amount of reuse.

Jobs can enable task JVMs to be reused by specifying the job configuration mapred.job.reuse.jvm.num.tasks. If the value is 1 (the default), then JVMs are not reused (i.e. 1 task per JVM). If it is -1, there is no limit to the number of tasks a JVM can run (of the same job). One can also specify some value greater than 1 using the api.
To the best of my knowledge, there is no easy way for multiple map tasks (Hadoop) to share static data structures.
This is actually a known limitation of the current MapReduce model. The reason the current implementation doesn't share static data across map tasks is that Hadoop is designed to be highly reliable: if a task fails, it crashes only its own JVM and does not impact the execution of other JVMs.
I am currently working on a prototype that distributes the work of a single JVM across multiple cores (essentially you then need only one JVM to utilize all the cores). This way, you can avoid duplicating in-memory data structures without sacrificing CPU utilization. The next step for me is to develop a version of Hadoop that can run multiple map tasks within one JVM, which is exactly what you are asking for.
There is an interesting related JIRA issue here: https://issues.apache.org/jira/browse/MAPREDUCE-2123