Unable to start hive metastore - hive

I did all the configuration in Hive to use MySQL as the metastore, but
hive --service metastore does not start the metastore. The logs only show:
2023-01-16 23:35:00: Starting Hive Metastore Server
I put the metastore configuration in hive-site.xml.
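For reference, a minimal MySQL metastore section of hive-site.xml typically looks like the sketch below; the host, database name, and credentials are placeholders, not values from the question:

```xml
<!-- Sketch of MySQL metastore properties; host, database, and
     credentials are placeholders for your environment. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>
```

If startup hangs at "Starting Hive Metastore Server", it is also worth checking that the MySQL connector jar is on Hive's classpath (e.g. in $HIVE_HOME/lib) and that the metastore schema was initialized (schematool -dbType mysql -initSchema).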

Related

Not able to launch apache hive through cli

I have a kerberized Hortonworks Hadoop cluster running. Beeline works fine,
but when I launch hive it fails with the following error:
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
[root@hdpm1 ~]# su - hive
[hive@hdpm1 ~]$ hive
Before running hive you must first get a TGT using kinit.
Example for the hive user using the service keytab:
kinit -kt <path_to_keytab> <principal_name>
kinit -kt /etc/security/keytabs/hive.service.keytab hive/<host>
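The service principal inside the keytab usually includes the fully qualified hostname of the node. A small sketch that assembles the kinit command; the keytab path follows the HDP convention above, and the realm is a placeholder (verify the real principal with klist -kt <keytab>):

```shell
# Assemble the kinit command for the hive service principal.
# The realm below is a placeholder; check the actual principal with:
#   klist -kt /etc/security/keytabs/hive.service.keytab
KEYTAB=/etc/security/keytabs/hive.service.keytab
REALM=COMPANY.COM                       # placeholder realm
PRINCIPAL="hive/$(hostname -f)@$REALM"  # service principal is host-qualified
echo "kinit -kt $KEYTAB $PRINCIPAL"
```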

Setting up Spark Thrift Server on AWS EMR for making a JDBC/ODBC connection

How do I set up the Spark Thrift Server on EMR? I am trying to make a JDBC/ODBC connection to EMR using the Spark Thrift Server, e.g.
beeline> !connect jdbc:hive2://10.253.3.5:10015
We execute the following to restart HiveServer2:
sudo stop hive-server2
sudo stop hive-hcatalog-server
sudo start hive-hcatalog-server
sudo start hive-server2
I am not sure which services to restart for the Spark Thrift Server on AWS EMR, or how to set up the user id and password.
You need to start the Spark Thrift Server by executing the following on EMR:
sudo /usr/lib/spark/sbin/start-thriftserver.sh --master yarn-client
The default port is 10001.
Test the connection as below:
/usr/lib/spark/bin/beeline -u 'jdbc:hive2://x.x.x.x:10001/default' -e "show databases;"
The Spark JDBC driver can be used to connect to the Thrift Server from any application.
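On the user id / password part of the question: beeline's standard -n (user) and -p (password) flags pass credentials, and whether they are enforced depends on the server's hive.server2.authentication setting (with NONE, any value is accepted). A sketch that assembles such a command; the host is the placeholder from the answer above and the user name is an assumption:

```shell
# Assemble a beeline command with user/password flags.
# -n/-p are standard beeline options; enforcement depends on the
# Thrift Server's hive.server2.authentication setting (assumption: LDAP or NONE).
HOST=x.x.x.x        # placeholder host from the answer above
PORT=10001
USER_ID=hadoop      # placeholder user
CMD="/usr/lib/spark/bin/beeline -u jdbc:hive2://$HOST:$PORT/default -n $USER_ID -p secret -e 'show databases;'"
echo "$CMD"
```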

Cannot overwrite Hive Scratchdir using Beeline

When opening a HiveServer2 connection using beeline, setting the Hive scratchdir does not work. I am running HDP 2.4 with hive.server2.enable.doAs enabled. When I execute
beeline -u "jdbc:hive2://localhost:10000/;principal=hive/_HOST@COMPANY.COM" \
--hiveconf hive.exec.scratchdir=/tmp/user/somedir
I get a Ranger security permission error writing to /tmp/hive. The restricted properties do not contain hive.exec.scratchdir.
How can I configure/set/override this setting at runtime?

Where does "insert overwrite local directory" create file on local file system?

The INSERT OVERWRITE LOCAL DIRECTORY command in Hive creates a file on the local filesystem, but on which node's local filesystem will the file be created? Will it always be the namenode, or any node that happens to run the job?
which node's local filesystem will the file be created on?
The file will be created on the system where you execute the hive query.
Example: if you have two nodes, a namenode and a slave node, and you run the Hive query from the slave node, the file will be created on the slave node's local filesystem.
NOTE: If you want Hive installations on both the namenode and the slave node, use the hive.metastore.uris property to point to both locations.
The property should be like this:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://namenode-ip:9083,thrift://slave-ip:9083</value>
</property>
Just change namenode-ip and slave-ip to respective IP addresses.
The file will be created locally on the node from which you execute the job.

graceful_stop.sh not found in HDP2.1 Hbase

I was reading the Hortonworks documentation on removing a regionserver from a host in the cluster (http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_system-admin-guide/content/admin_decommission-slave-nodes-3.html).
It uses the graceful_stop.sh script. The same script is described in the Apache HBase book (https://hbase.apache.org/book/node.management.html).
I tried to find this script but was not able to locate it.
[hbase@node ~]$ ls /usr/lib/hbase/bin/
draining_servers.rb hbase.cmd hbase-daemon.sh region_status.rb test
get-active-master.rb hbase-common.sh hbase-jruby replication
hbase hbase-config.cmd hirb.rb start-hbase.cmd
hbase-cleanup.sh hbase-config.sh region_mover.rb stop-hbase.cmd
[hbase@node ~]$
Has this script been removed from HBase?
Is there any other way to stop a region server from another host in the cluster? For example, I want to stop region server 1; can I do this by logging into region server 2?
Yes, the script is removed from HBase if you use a package install, but you can still find it in the source files.
If you want to stop region server A from another host B, host B must have the privilege to access A, e.g. you have added host B's public key to authorized_keys on A. In a typical cluster, an RS cannot log in to other RSes directly, for security reasons.
For how to write graceful_stop.sh yourself, see: https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/fA3019_vpZY
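Since region_mover.rb is still present in the bin listing above, a manual graceful stop can be sketched roughly the way graceful_stop.sh does it internally: first unload regions from the target regionserver, then stop its daemon over ssh. The hostname below is a placeholder, and the sketch only assembles the commands rather than executing them:

```shell
# Sketch of a manual graceful stop using region_mover.rb (this is roughly
# what graceful_stop.sh does internally). RS_HOST is a placeholder.
RS_HOST=regionserver1.example.com
HBASE_BIN=/usr/lib/hbase/bin
UNLOAD_CMD="$HBASE_BIN/hbase org.jruby.Main $HBASE_BIN/region_mover.rb unload $RS_HOST"
STOP_CMD="ssh $RS_HOST $HBASE_BIN/hbase-daemon.sh stop regionserver"
echo "$UNLOAD_CMD"   # move regions off the target RS first
echo "$STOP_CMD"     # then stop the regionserver remotely
```

Note that the ssh step still requires the passwordless access described above; without it, run the stop command locally on the target host instead.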