Facing Hive query error on SHOW DATABASES query - Unable to instantiate - hive

I initialized Hive and it worked; later I ran the SHOW DATABASES command, but I got the error below.
I am using MySQL for the metastore.
adminn@master:~$ hive
Hive Session ID = e9e9145a-0c38-4007-a9af-ded86a4226ea
Logging initialized using configuration in jar:file:/home/adminn/apache-hive-3.1.1-bin/lib/hive-common-3.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

I added the below property to the hive-site.xml file, and this resolved the issue.
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
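Note: for this driver class to actually load, the MySQL JDBC connector jar must also be on Hive's classpath. A minimal sketch, assuming the jar sits in the current directory and $HIVE_HOME points at the Hive install (the jar name/version is a placeholder):
cp mysql-connector-java-5.1.49.jar $HIVE_HOME/lib/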

Related

Error when trying to start HiveServer2: NullPointerException in ThriftBinaryCLIService

When I start hiveserver2 with the following command:
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console
I receive the following error before the program exits:
2022-09-12T14:46:53,713 ERROR [Thrift Server] transport.TServerSocket: Could not set socket timeout.
java.net.SocketException: Socket is closed
at java.net.ServerSocket.setSoTimeout(ServerSocket.java:666) ~[?:1.8.0_292]
at org.apache.thrift.transport.TServerSocket.listen(TServerSocket.java:117) ~[hive-exec-3.1.3.jar:3.1.3]
at org.apache.thrift.server.TThreadPoolServer.serve(TThreadPoolServer.java:146) ~[hive-exec-3.1.3.jar:3.1.3]
at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:169) ~[hive-service-3.1.3.jar:3.1.3]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
Hive Session ID = 56c28481-2b0c-4712-808d-ff7ccf31b543
Hive Session ID = 9771e219-095c-4524-b34a-b8e05c335fc0
2022-09-12T14:48:03,871 ERROR [Thrift Server] thrift.ThriftCLIService: Exception caught by ThriftBinaryCLIService. Exiting.
java.lang.NullPointerException: null
at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:169) ~[hive-service-3.1.3.jar:3.1.3]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
Here is a brief explanation of my setup:
I am using Vagrant and VirtualBox to create a "virtual" cluster.
This is very loosely based on this repository (it hasn't been updated in a while, so I have had to make many changes to get it to work): https://github.com/njvijay/vagrant-jilla-hadoop
I have created 5 nodes (1 name node and 4 data nodes). The namenode also runs YARN, Hive, Pig, Spark, MySQL, Python, etc.
I am using Ubuntu 14.04.6, Hadoop 2.10.1, Hive 3.1.3, Spark 3.3.0 and Pig 0.15.
It seems that there may be some compatibility issue between Hadoop 2 and Spark 3. I was able to resolve the error after updating Hadoop, Hive and Spark to the latest versions.

Hive - derby - java.lang.SecurityException: sealing violation: package org.apache.derby.impl.services.locks is sealed

After installing Hive and Derby, and before running hive, I wanted to create the metastore schema with:
schematool -initSchema -dbType derby
It was giving me the following error:
"Underlying cause: java.sql.SQLException : No suitable driver found
for jdbc:derby://home/hadoop/metastore_db;create=true"
I checked the CLASSPATH which is as follows:
.:/usr/lib/jvm/jre-1.8.0-openjdk/jre/lib:/usr/lib/jvm/jre-1.8.0-openjdk/lib:/usr/lib/jvm/jre-1.8.0-openjdk/lib/tools.jar:/usr/local/derby/db-derby-10.4.2.0-bin/lib/derbyclient.jar:/usr/local/derby/db-derby-10.4.2.0-bin/lib/derby.jar:/usr/local/derby/db-derby-10.4.2.0-bin/lib/derbytools.jar
The required jar files are there.
However, I copied derbytools.jar, derby.jar, and derbyclient.jar from /usr/local/derby/db-derby-10.4.2.0-bin/lib/ to /usr/local/hive/apache-hive-3.1.2-bin/lib/, and this solved the above error.
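For reference, a sketch of that copy as a single command, using the paths from above:
cp /usr/local/derby/db-derby-10.4.2.0-bin/lib/{derbytools,derby,derbyclient}.jar /usr/local/hive/apache-hive-3.1.2-bin/lib/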
But now I am getting the following error.
"java.lang.SecurityException: sealing violation: package
org.apache.derby.impl.services.locks is sealed."
Some on the mailing list suggested checking whether I am pointing at the Derby jar files twice in the classpath; apparently they are not repeated in the CLASSPATH.
Please let me know where I am going wrong.
Entries in the conf/hive-site.xml are as follows:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby://home/hadoop/metastore_db;create=true</value>
  <!--<value>jdbc:derby://localhost:1527/metastore_db;create=true</value>-->
  <description>JDBC connect string for a JDBC metastore</description>
</property>
Thanks,
Soumyadeep

"Configuring hive with derby"

hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby://localhost:1527/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.ClientDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
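Note that the org.apache.derby.jdbc.ClientDriver URL above assumes the Derby Network Server is already listening on port 1527. A minimal sketch of starting it, assuming $DERBY_HOME points at the Derby installation:
$DERBY_HOME/bin/startNetworkServer &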
[user1@slave3 ~]$ hive
which: no hbase in (/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/user1/hadoop-2.9.0/bin:/usr/local/jdk1.8.0_161/bin:/home/user1/hadoop-2.9.0/sbin:/home/user1/sqoop-1.4.7.bin__hadoop-2.6.0/bin:/home/user1/apache-hive-2.3.2-bin/bin:/usr/local/derby/bin:/home/user1/.local/bin:/home/user1/bin:usr/local/jdk1.8.0_161/bin:/home/user1/hadoop-2.9.0/bin:/usr/local/jdk1.8.0_161/bin:/home/user1/hadoop-2.9.0/sbin:/home/user1/sqoop-1.4.7.bin__hadoop-2.6.0/bin:/home/user1/apache-hive-2.3.2-bin/bin:/usr/local/derby/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/user1/apache-hive-2.3.2-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/user1/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/home/user1/apache-hive-2.3.2-bin/lib/hive-common-2.3.2.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
I'm trying to install Hive, but it gives a SemanticException. Can anyone help?
Have you started the metastore? Your metastore must be up and running before you run queries in Hive.
Run the command below to create the metastore db:
schematool -dbType derby -initSchema
then start the metastore
hive --service metastore &
I got the same error. It happens because two SLF4J bindings are on the classpath, but Java binds to only one of them, so you just need to delete one of these files.
I deleted this from my Hive installation and it just worked.
All you have to do is delete the file
log4j-slf4j-impl-2.6.2.jar
from
/home/user1/apache-hive-2.3.2-bin/lib/
and you are good to go.
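A sketch of that removal, using the path from the log above:
rm /home/user1/apache-hive-2.3.2-bin/lib/log4j-slf4j-impl-2.6.2.jar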

java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

I have Hadoop 2.7.1 and apache-hive-1.2.1 installed on Ubuntu 14.04.
Why is this error occurring?
Is a metastore installation required?
When we type the hive command in the terminal, how are the XMLs called internally; what is the flow of those XMLs?
Are any other configurations required?
When I run the hive command in the Ubuntu 14.04 terminal, it throws the exception below.
$ hive
Logging initialized using configuration in jar:file:/usr/local/hive/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 14 more
Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:624)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 19 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:286)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
... 48 more
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
... 66 more
Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
... 68 more
To avoid the above error I created hive-site.xml with:
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/home/local/hive-metastore-dir/warehouse</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hivedb?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>user</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>password</value>
  </property>
</configuration>
I also provided the environment variables in the ~/.bashrc file; still the error persists:
#HIVE home directory configuration
export HIVE_HOME=/usr/local/hive/apache-hive-1.2.1-bin
export PATH="$PATH:$HIVE_HOME/bin"
Starting the hive metastore service worked for me.
First, start the hive metastore service:
$ hive --service metastore
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/validate_installation.html
Second, run the following commands:
$ schematool -dbType mysql -initSchema
$ schematool -dbType mysql -info
https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool
I made the modifications below and am now able to start the Hive shell without any errors:
1. ~/.bashrc
Add the below environment variables at the end of the file (sudo gedit ~/.bashrc):
#Java Home directory configuration
export JAVA_HOME="/usr/lib/jvm/java-9-oracle"
export PATH="$PATH:$JAVA_HOME/bin"
# Hadoop home directory configuration
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HIVE_HOME=/usr/lib/hive
export PATH=$PATH:$HIVE_HOME/bin
2. hive-site.xml
You have to create this file (hive-site.xml) in the conf directory of Hive and add the details below:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.autoCreateTables</name>
    <value>true</value>
  </property>
</configuration>
3. You also need to put the jar file (mysql-connector-java-5.1.28.jar) into the lib directory of Hive.
4. The installations below are required on your Ubuntu machine to start the Hive shell:
MySQL
Hadoop
Hive
Java
5. Execution part:
Start all services of Hadoop: start-all.sh
Enter the jps command to check whether all Hadoop services are up and running: jps
Enter the hive command to enter the hive shell: hive
If you're just playing around in local mode, you can drop the metastore DB and recreate it:
rm -rf metastore_db/
$HIVE_HOME/bin/schematool -initSchema -dbType derby
In my case, when I tried
$ hive --service metastore
I got
MetaException(message:Version information not found in metastore. )
The tables required for the metastore are missing in MySQL. Manually create the tables and restart the hive metastore.
cd $HIVE_HOME/scripts/metastore/upgrade/mysql/
< Login into MySQL >
mysql> drop database IF EXISTS <metastore db name>;
mysql> create database <metastore db name>;
mysql> use <metastore db name>;
mysql> source hive-schema-2.x.x.mysql.sql;
The metastore db name should match the database name mentioned in the connection property tag of the hive-site.xml file.
Which hive-schema-2.x.x.mysql.sql file to use depends on the versions available in the current directory; try to use the latest, because the directory also holds many older schema files.
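A quick way to see which schema files are available, assuming $HIVE_HOME is set:
ls $HIVE_HOME/scripts/metastore/upgrade/mysql/hive-schema-*.mysql.sql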
Now try to execute hive --service metastore again.
If everything goes well, then simply start hive from the terminal:
> hive
I hope the above answer serves your need.
Run hive in debug mode:
hive -hiveconf hive.root.logger=DEBUG,console
and then execute
show tables
to find the actual problem.
You just need to initialize the schema, and you can do that with the commands below. I did it and am able to run hive queries without throwing
ERROR:Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
cd $HIVE_HOME
mv metastore_db metastore_db_bkup
schematool -initSchema -dbType derby
bin/hive
now run your query:
hive> show databases;
I have used a MySQL DB for the Hive metastore. Please follow the steps below:
In hive-site.xml the metastore connection URL should be correct:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
Go to MySQL: mysql -u hduser -p
then run drop database metastore;
then exit MySQL and execute schematool -initSchema -dbType mysql
Now the error will be gone.
In the middle of the stack trace, lost in the "reflection" junk, you can find the root cause:
The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
This is probably because it cannot connect to the Hive metastore. My Hive metastore is stored in MySQL, so I need to access MySQL, so I added a dependency to my build.sbt:
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.38"
and the problem is solved!
Maybe your hive metastore is inconsistent! I was in this situation.
First, I ran
$ schematool -dbType mysql -initSchema
Then I found this:
Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Then I ran
$ schematool -dbType mysql -info
and found this error:
Hive distribution version: 2.3.0
Metastore schema version: 1.2.0
org.apache.hadoop.hive.metastore.HiveMetaException: Metastore schema version is not compatible. Hive Version: 2.3.0, Database Schema Version: 1.2.0
So I reformatted my hive metastore, and then it was done!
Drop the MySQL database (mine was named hive_db).
Run schematool -dbType mysql -initSchema to reinitialize the metadata.
I have also faced this problem, but I restarted Hadoop and used the command hadoop dfsadmin -safemode leave.
Now start hive; I think it will work.
I solved this problem by removing --deploy-mode cluster from my spark-submit command.
By default, spark-submit uses client mode, which has the following advantages:
1. It opens up a Netty HTTP server and distributes all jars to the worker nodes.
2. The driver program runs on the master node, which means dedicated resources for the driver process.
In cluster mode, by contrast:
1. The driver runs on a worker node.
2. All the jars need to be placed in a common folder of the cluster, so that they are accessible to all the worker nodes, or in a folder on each worker node.
Here it's not able to access the hive metastore because the hive jar is not available to the nodes in the cluster.
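For illustration, a minimal client-mode submission (the application class and jar names are placeholders):
spark-submit --master yarn --deploy-mode client --class com.example.MyApp my-app.jar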
1- Append the following lines to the startup file ~/.bashrc
export HIVE_HOME=~/hive
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$HADOOP_HOME/lib/*
export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib/*:.
2- Edit the file $HIVE_HOME/conf/hive-env.sh
cd $HIVE_HOME/conf
cp hive-env.sh.template hive-env.sh
3- Edit the hive-env.sh file to append the following line
export HADOOP_HOME=$HADOOP_HOME
4- There are two different versions of the "guava" library used by Hadoop and Hive. The solution is to use the same version of "guava" in both Hadoop and Hive. Note: my system is using guava-27, which is why I share this example; it depends on the guava versions of your Hadoop and Hive, so you need to check them.
cp ~/hadoop/share/hadoop/hdfs/lib/guava-27.0-jre.jar ~/hive/lib/
rm ~/hive/lib/guava-19.0.jar
5- In the conf directory of hive (~/hive/conf), create the metastore:
schematool -initSchema -dbType derby
6- We also need to create the configuration file of Hive, in the conf directory:
cp hive-default.xml.template hive-site.xml
In the file hive-site.xml, at line 3215, there are some characters that should be deleted around column 96. The four characters to delete are: &#8;
7- Edit the file hive-site.xml as follows:
Replace the occurrences of ${system:java.io.tmpdir} with /tmp/hive_io
Replace ${system:user.name} with hadoop
Note: ${system:java.io.tmpdir} occurs 3-4 times in hive-site.xml. Make sure to change every occurrence; see the sed sketch below.
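A sketch of those replacements with sed, assuming GNU sed and hive-site.xml in the current directory:
sed -i -e 's|${system:java.io.tmpdir}|/tmp/hive_io|g' -e 's|${system:user.name}|hadoop|g' hive-site.xml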
8- Finally you can type hive. (Don't forget: you need to run Hadoop before hive.) Note: make sure to start hive from the hive path.
Just open the hive terminal from the hive folder, after editing the (bashrc) and (hive-site.xml) files.
Steps:
Open the hive folder where it is installed.
Now open a terminal from that folder.
In my case, I stopped my Docker hive container and ran it again, and finally it worked. I hope it will be useful for someone.
Note: this might be caused by an instance running in the background, so stopping the container will stop all background instances.
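A sketch of that, assuming the container is named hive (the name is a placeholder):
docker stop hive
docker start hive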
This is happening because you have NOT started the Hive metastore. The simplest way to do that is to use the default Derby database. You can follow this link: https://sparkbyexamples.com/apache-hive/hive-hiveexception-java-lang-runtimeexception-unable-to-instantiate-org-apache-hadoop-hive-ql-metadata-sessionhivemetastoreclient/
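A minimal sketch of that with the default embedded Derby, consistent with the other answers here (run from $HIVE_HOME):
schematool -initSchema -dbType derby
hive --service metastore &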
I fixed this problem by copying hive-default.xml.template to hive-site.xml. To do that you can use the commands below:
cd /usr/local/Cellar/hive/2.7.1/libexec/conf (please adjust the hive version)
cp hive-default.xml.template hive-site.xml
Then I changed the values of the properties below in hive-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>password</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/tmp/hive</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/tmp/hive</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/tmp/hive</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.druid.metadata.db.type</name>
    <value>mysql</value>
    <description>
      Expects one of the pattern in [mysql, postgresql, derby].
      Type of the metadata database.
    </description>
  </property>
</configuration>
After that, I created a database in MySQL named metastore, created a username and password, and granted permissions for it using the following queries.
$ mysql
mysql> CREATE DATABASE metastore;
mysql> USE metastore;
mysql> CREATE USER 'hiveuser'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,ALTER,CREATE ON metastore.* TO 'hiveuser'@'localhost';
and ran the schema script in MySQL with the following command:
mysql> source /usr/local/Cellar/hive/3.1.2_3/libexec/scripts/metastore/upgrade/mysql/hive-schema-3.1.0.mysql.sql
Also don't forget to copy the MySQL connector jar into the Hive package using the following commands.
Download the MySQL connector and extract it:
tar zxvf mysql-connector-java-5.1.35.tar.gz
sudo cp mysql-connector-java-5.1.35/mysql-connector-java-5.1.35-bin.jar /usr/local/Cellar/hive/2.7.1/libexec/lib/
That's it. Now I can successfully run show tables and other commands in Hive. :)

Use saiku with apache hive

Have you ever used Saiku to do data analysis on a big data platform (Hadoop)? My recent work requires integrating some legacy BI tools with Hadoop to support common OLAP queries on HDFS/HBase.
I found a solution implemented with Phoenix & HBase here, which bridges Saiku and HBase with the SQL dialect in Phoenix, and it worked. However, this method can only handle data within HBase through the HBase API; it cannot leverage any Map-Reduce-style jobs when building the data cube. I would prefer a more big-data-compatible alternative, such as going through Apache Hive.
Saiku is based on Mondrian. My version of Saiku uses Mondrian-4.0.0.0-SNAPSHOT.jar, which I found can already work well with Hive, and there are many Hive 0.13 jars in Saiku's lib directory. So I thought a simple hive2 datasource config would work. I started a hiveserver2 on the namenode of my HDFS cluster and added the following datasource to Saiku:
Name: hive2
Connection Type: Mondrian
URL: jdbc:hive2://localhost:10000/default
Schema: /datasources/movie.xml
Jdbc Driver: org.apache.hive.jdbc.HiveDriver
Username: ubuntu
Password: XXXX
Saiku indeed successfully connected to the hiveserver2 but failed to load the datasource. I found the following error in the Saiku log:
name:hive2
driver:mondrian.olap4j.MondrianOlap4jDriver
url:jdbc:mondrian:Jdbc=jdbc:hive2://localhost:10000/default;Catalog=mondrian:///datasources/movie.xml;JdbcDrivers=org.apache.hive.jdbc.HiveDriver
12:41:48,110 WARN [RolapSchema] Model is in legacy format
12:41:50,464 ERROR [SecurityAwareConnectionManager] Error connecting: hive2
mondrian.olap.MondrianException: Mondrian Error:Internal error: while quoting identifier
at mondrian.resource.MondrianResource$_Def0.ex(MondrianResource.java:992)
at mondrian.olap.Util.newInternal(Util.java:2543)
at mondrian.spi.impl.JdbcDialectImpl.deduceIdentifierQuoteString(JdbcDialectImpl.java:245)
at mondrian.spi.impl.JdbcDialectImpl.<init>(JdbcDialectImpl.java:146)
at mondrian.spi.DialectManager$DialectManagerImpl$1.createDialect(DialectManager.java:210)
...
Caused by: java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveDatabaseMetaData.getIdentifierQuoteString(HiveDatabaseMetaData.java:342)
at org.apache.commons.dbcp.DelegatingDatabaseMetaData.getIdentifierQuoteString(DelegatingDatabaseMetaData.java:306)
at mondrian.spi.impl.JdbcDialectImpl.deduceIdentifierQuoteString(JdbcDialectImpl.java:238)
... 99 more
I looked into the Hive 0.13 source and found that getIdentifierQuoteString isn't implemented yet; it simply throws an exception:
public String getIdentifierQuoteString() throws SQLException {
  throw new SQLException("Method not supported");
}
Up to now I'm puzzled. Is it practical to use Saiku with Hive? It has Hive 0.13 jars in its lib dir but cannot load a simple hive datasource? Should I simply modify the Hive source? I found that in the newly released Hive 1.0, this function is implemented by simply returning an empty string.
Does anyone have a good idea? Thanks!