I am fairly new to both Hadoop and Docker.
I have been working on extending the cloudera/quickstart Docker image's Dockerfile, and I wanted to mount a directory from the host and map it to an HDFS location, so that performance is increased and the data persists locally.
When I mount a volume anywhere else with -v /localdir:/someDir, everything works fine, but that's not my goal. When I do -v /localdir:/var/lib/hadoop-hdfs, both the datanode and the namenode fail to start and I get: "cd /var/lib/hadoop-hdfs: Permission denied". And when I do -v /localdir:/var/lib/hadoop-hdfs/cache, there is no permission error, but the datanode and the namenode (or one of them) still fail to start when the Docker image starts, and I can't find any useful information in the log files about the reason.
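For reference, this is roughly the kind of command I'm running (a sketch only; the option set mirrors the standard quickstart invocation, and /localdir is just a placeholder path):

    # mounting to an unrelated path works fine, but the HDFS data stays inside the container
    docker run --hostname=quickstart.cloudera --privileged=true -t -i \
        -v /localdir:/someDir cloudera/quickstart /usr/bin/docker-quickstart

    # fails with "cd /var/lib/hadoop-hdfs: Permission denied"
    docker run --hostname=quickstart.cloudera --privileged=true -t -i \
        -v /localdir:/var/lib/hadoop-hdfs cloudera/quickstart /usr/bin/docker-quickstart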
Maybe someone has come across this problem, or has some other solution for putting HDFS outside the Docker container?
I had the same problem, and I managed the situation by copying the entire /var/lib directory from the container to a local directory.

From one terminal, start the cloudera/quickstart container without starting all the Hadoop services; then, from another terminal, copy the container's /var/lib directory to the local directory (see the sketch below):
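A minimal sketch of those two steps, assuming the local target is /local_var_lib; the container name cdh is just an example, and overriding the command with bash is what keeps /usr/bin/docker-quickstart (and therefore the Hadoop services) from running:

    # Terminal 1: start the container with a plain shell, so no Hadoop services start
    docker run --name cdh --hostname=quickstart.cloudera --privileged=true -t -i \
        cloudera/quickstart /bin/bash

    # Terminal 2: copy the whole /var/lib tree out of the running container to the host
    docker cp cdh:/var/lib /local_var_lib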
After all the files are copied from the container to the local dir, stop the container and point /var/lib to the new target. Make sure the /local_var_lib directory contains the Hadoop directories (hbase, hadoop-hdfs, oozie, mysql, etc.).

Start the container:
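A sketch of the final run command, mounting the local copy over /var/lib (add port mappings and any other options you need):

    docker run --hostname=quickstart.cloudera --privileged=true -t -i \
        -v /local_var_lib:/var/lib \
        cloudera/quickstart /usr/bin/docker-quickstart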
You should run a