FAILED: Error in metadata: java.lang.RuntimeException

Published 2020-02-26 12:51

Question:

I shut down my HDFS client while the HDFS and Hive instances were running. Now when I log back into Hive, I can't execute any of my DDL tasks, e.g. "show tables" or "describe tablename". It gives me the error below:

ERROR exec.Task (SessionState.java:printError(401)) - FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

Can anybody suggest what I need to do to get my metastore_db instantiated without recreating the tables? Otherwise, I will have to duplicate the effort of creating the entire database/schema once again.

Answer 1:

I have resolved the problem. These are the steps I followed (a rough shell sketch of the lock-file handling appears after the note below):

  1. Go to $HIVE_HOME/bin/metastore_db
  2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
  3. Delete the lock entries from db.lck and dbex.lck
  4. Log out from the Hive shell as well as from all running HDFS instances
  5. Log back into HDFS and the Hive shell. If you run DDL commands, it may again give you the "Could not instantiate HiveMetaStoreClient" error
  6. Now copy db.lck1 back to db.lck and dbex.lck1 back to dbex.lck
  7. Log out from all Hive shell and HDFS instances
  8. Log back in and you should see your old tables

Note: Step 5 may seem a little weird, because even after deleting the lock entries it will still give the HiveMetaStoreClient error, but it worked for me.
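For reference, here is a minimal shell sketch of the lock-file handling in steps 2, 3 and 6, assuming the embedded Derby metastore lives in $HIVE_HOME/bin/metastore_db as above (your path may differ):

cd $HIVE_HOME/bin/metastore_db
cp db.lck db.lck1 && cp dbex.lck dbex.lck1    # step 2: back up the lock files
> db.lck && > dbex.lck                        # step 3: empty the lock entries
# ... log out and back in (steps 4-5), then restore the originals ...
cp db.lck1 db.lck && cp dbex.lck1 dbex.lck    # step 6: copy the backups back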

Advantage: You don't have to duplicate the effort of re-creating the entire database.

Hope this helps somebody facing the same error. Please vote if you find it useful. Thanks in advance.



Answer 2:

I was told that we generally get this exception if the Hive console was not terminated properly. The fix:

Run the jps command, look for the "RunJar" process, and kill it with kill -9, as in the example below.
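For example, assuming the stray process shows up under the name RunJar in the jps output (the PID below is purely hypothetical; use whatever jps prints on your machine):

jps | grep RunJar    # prints something like: 12345 RunJar
kill -9 12345        # replace 12345 with the PID from the previous line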



Answer 3:

See: getting error in hive

Have you copied the jar containing the JDBC driver for your metadata db into Hive's lib dir?

For instance, if you're using MySQL to hold your metadata db, you will need to copy mysql-connector-java-5.1.22-bin.jar into $HIVE_HOME/lib.
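As a command, that copy is roughly the following, assuming the driver jar is in your current directory (the connector version is just an example; use whichever JDBC driver matches your metastore database):

cp mysql-connector-java-5.1.22-bin.jar $HIVE_HOME/lib/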

This fixed that same error for me.



Answer 4:

I faced the same issue and resolved it by starting the metastore service. Sometimes the service gets stopped if your machine is rebooted or goes down. You can start the service by running the following command:

Log in as $HIVE_USER and run:

nohup hive --service metastore>$HIVE_LOG_DIR/hive.out 2>$HIVE_LOG_DIR/hive.log & 
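To confirm the metastore actually came up, a rough check (assuming it listens on the default Thrift port 9083) is:

jps                        # the metastore appears as a RunJar process
netstat -an | grep 9083    # the default metastore port should show up as LISTEN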


Answer 5:

I had a similar problem with the Hive server and followed the steps below:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Log back in from the Hive shell. It works.
Thanks



Answer 6:

For instance, I use MySQL to hold the metadata db. I copied mysql-connector-java-5.1.22-bin.jar into the $HIVE_HOME/lib folder and my error was resolved.



Answer 7:

I was also facing the same problem, and figured out that I had both hive-default.xml and hive-site.xml (the latter created manually by me).

I moved my hive-site.xml to hive-site.xml-template (as I did not need this file), then started Hive, and it worked fine.
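In command form, that move is roughly the following, assuming the config files live in $HIVE_HOME/conf (adjust the path for your install):

mv $HIVE_HOME/conf/hive-site.xml $HIVE_HOME/conf/hive-site.xml-template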

Cheers, Ajmal



Answer 8:

I faced this issue too; in my case it happened while running the hive command from the command line.

I resolved it by running the kinit command, as I was using a Kerberized Hive:

kinit -kt <your keytab file location> <kerberos principal>
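For example, with a purely hypothetical keytab path and principal (yours will differ), followed by klist to verify that a ticket was obtained:

kinit -kt /etc/security/keytabs/hive.service.keytab hive/myhost.example.com@EXAMPLE.COM
klist    # should list a valid ticket for the principal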