The root scratch dir: /tmp/hive on HDFS should be writable

Posted 2019-01-19 23:26

I have changed the permissions using the hdfs command, but it still shows the same error:

The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: -wx------
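
As a quick sanity check, you can see what HDFS actually reports for that directory (a minimal check; adjust the path if hive.exec.scratchdir points somewhere else):

hdfs dfs -ls -d /tmp/hive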

The Java program I am executing:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveCreateDb {
   // HiveServer2 JDBC driver; the class name must match the org.apache.hive.jdbc package
   private static String driverName = "org.apache.hive.jdbc.HiveDriver";

   public static void main(String[] args) throws Exception {
      // Register the driver (optional on JDBC 4+, but harmless)
      Class.forName(driverName);

      // Embedded-mode connection: no host/port after jdbc:hive2://
      Connection con = DriverManager.getConnection("jdbc:hive2://", "", "");

      Statement stmt = con.createStatement();

      // CREATE DATABASE is DDL, so use execute() rather than executeQuery()
      stmt.execute("CREATE DATABASE userdb");
      System.out.println("Database userdb created successfully.");

      stmt.close();
      con.close();
   }
}

It gives a runtime error when connecting to Hive:

Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx------

Tags: hadoop hive hdfs
4 Answers
一夜七次
Answer 2 · 2019-01-19 23:47

We were executing the Spark job in local mode. That means the /tmp/hive directory being checked is on the local (Linux) machine, not HDFS, and it did not have writable permissions there.

So run chmod -R 777 /tmp/hive on the local filesystem. That solved my issue.
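
Note that the local and HDFS variants of the command differ, and which one you need depends on where the scratch dir actually lives (both forms appear in this thread):

    # local filesystem (Spark/Hive running in local mode)
    chmod -R 777 /tmp/hive

    # HDFS (cluster mode)
    hadoop fs -chmod -R 777 /tmp/hive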

Referred from: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx------ (on Linux)

放我归山
Answer 3 · 2019-01-19 23:56

Try this:

hadoop fs -chmod -R 777 /tmp/hive

I had a similar issue while running a Hive query; adding the -R flag resolved it.

Fickle 薄情
Answer 4 · 2019-01-19 23:56

Don't chmod to 777. The correct permission is 733:

Hive 0.14.0 and later: HDFS root scratch directory for Hive jobs, which gets created with write-all (733) permission. For each connecting user, an HDFS scratch directory ${hive.exec.scratchdir}/<username> is created with ${hive.scratch.dir.permission}.

Try to do this as the hdfs user:

    hdfs dfs -mkdir /tmp/hive
    hdfs dfs -chown hive /tmp/hive
    hdfs dfs -chmod 733 /tmp/hive
    hdfs dfs -mkdir /tmp/hive/$HADOOP_USER_NAME
    hdfs dfs -chown $HADOOP_USER_NAME /tmp/hive/$HADOOP_USER_NAME
    hdfs dfs -chmod 700 /tmp/hive/$HADOOP_USER_NAME
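
After these steps, a listing should show the per-user directory owned by your user (a sketch of the expected layout; group and date will vary by cluster):

    hdfs dfs -ls /tmp/hive
    # expected, roughly: drwx------  - $HADOOP_USER_NAME <group>  0 <date> /tmp/hive/$HADOOP_USER_NAME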

This works; alternatively, you can change the scratch dir path from within Hive:

set hive.exec.scratchdir=/somedir_with_permission/subdir...
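
If you would rather not run set in every session, the same property can be passed when launching the CLI (a sketch using the placeholder path from above):

hive --hiveconf hive.exec.scratchdir=/somedir_with_permission/subdir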

more info: https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration

倾城 Initia
Answer 5 · 2019-01-20 00:00

Just to add to the previous answers: if your username is something like 'cloudera' (for example, you are using Cloudera Manager or the Cloudera QuickStart VM as your platform), you can do the following:

sudo -u hdfs hadoop fs -chmod -R 777 /tmp/hive

Remember that in Hadoop, 'hdfs' is the superuser, not 'root' or 'cloudera'.
