Cannot overwrite Hive scratchdir using Beeline

When opening a HiveServer2 connection using beeline, setting the Hive scratchdir does not work. I am running HDP 2.4 with hive.server2.enable.doAs enabled. When I execute
beeline -u "jdbc:hive2://localhost:10000/;principal=hive/_HOST#COMPANY.COM" \
--hiveconf hive.exec.scratchdir=/tmp/user/somedir
I get a Ranger security permission error for writing to /tmp/hive. The list of restricted properties does not contain hive.exec.scratchdir.
How can I configure/set/overwrite this setting at runtime?

Related

How to execute sql script for PostgreSQL database hosted on AWS?

I am trying to run an SQL file against a database hosted on AWS RDS. The command I am using is the following:
psql -v user=myusername -v dbname=postgres -v passwd=mypassword -f ./explorerpg.sql
After running it I get the following result:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
What am I missing? For those curious I am trying to get Hyperledger explorer to display the database contents of an AWS blockchain. The sql script is from Hyperledger explorer.
Any suggestions are greatly appreciated!
You're missing the -h (or --host) option. The command you're running is trying to connect to a PostgreSQL server on your localhost. Additionally, you're trying too hard with the command line. You want something more like:
psql -U myusername -d postgres -h dbhostname.randomcharacters.us-west-2.rds.amazonaws.com -v passwd=mypassword -f ./explorerpg.sql
The password likely should not be passed this way; a file containing the password is more common. But the key is to find the hostname of your RDS server and specify it with the -h option.
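A minimal sketch of the password-file approach, reusing the placeholder hostname and credentials from above (substitute your real RDS endpoint and user):

```shell
# Write a ~/.pgpass entry in the format hostname:port:database:username:password
cat > "$HOME/.pgpass" <<'EOF'
dbhostname.randomcharacters.us-west-2.rds.amazonaws.com:5432:postgres:myusername:mypassword
EOF
# psql silently ignores the file unless it is readable only by you
chmod 600 "$HOME/.pgpass"
```

With that in place, psql picks the password up automatically and it can be dropped from the command line entirely.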

Not able to launch apache hive through cli

I have a kerberized Hortonworks Hadoop cluster running. Beeline works fine.
But when I launch hive it fails with the following error:
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
[root@hdpm1 ~]# su - hive
[hive@hdpm1 ~]$ hive
Before running hive (or beeline) you must obtain a TGT using kinit.
Example for the hive user using the service keytab:
kinit -kt <path_to_keytab> <principal_name>
kinit -kt /etc/security/keytabs/hive.service.keytab hive/<host>
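To verify that the ticket was actually obtained before retrying hive, klist shows the ticket cache. A sketch of the two steps together; the keytab path follows the Hortonworks default used above and may differ on your cluster:

```shell
# Obtain a TGT from the hive service keytab, then list the ticket cache;
# klist should show a krbtgt/<REALM> entry before hive will connect
kinit -kt /etc/security/keytabs/hive.service.keytab hive/$(hostname -f)
klist
```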

Setting up Spark Thrift Server on AWS EMR for making a JDBC/ODBC connection

How do I set up the Spark Thrift Server on EMR? I am trying to make a JDBC/ODBC connection to EMR using the Spark Thrift Server, for example:
beeline> !connect jdbc:hive2://10.253.3.5:10015
We execute the following to restart HiveServer2:
sudo stop hive-server2
sudo stop hive-hcatalog-server
sudo start hive-hcatalog-server
sudo start hive-server2
I am not sure which services to restart for the Spark Thrift Server on AWS EMR, or how to set up the user ID and password.
We need to start the Spark Thrift Server by executing the following on EMR:
sudo /usr/lib/spark/sbin/start-thriftserver.sh --master yarn-client
The default port is 10001.
Test the connection as below:
/usr/lib/spark/bin/beeline -u 'jdbc:hive2://x.x.x.x:10001/default' -e "show databases;"
The Spark JDBC driver can be used to connect to the Thrift Server from any application.
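If a user id and password are required on the connection, beeline accepts them with -n and -p. The listening port can also be changed when starting the Thrift Server via the hive.server2.thrift.port property; 10015 below is just the port from the question, not a requirement:

```shell
# Start the Thrift Server on a non-default port (default is 10001)
sudo /usr/lib/spark/sbin/start-thriftserver.sh --master yarn-client \
  --hiveconf hive.server2.thrift.port=10015

# Connect, supplying a user id (-n) and password (-p)
/usr/lib/spark/bin/beeline -u 'jdbc:hive2://x.x.x.x:10015/default' \
  -n myuser -p mypassword -e "show databases;"
```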

Impala Connection error

I am trying to run the below impala-shell command in my Cloudera cluster:
impala-shell -i connect 10.223.121.11:21000 -d prod_db -f /home/cloudera/views/a.hql
but I get the error:
Error, could not parse arguments "10.223.121.11:21000"
Could someone help me with this?
The -i flag should be given as -i hostname or --impalad=hostname (without connect).
The connect command is meant to be used inside the impala-shell (see "Connecting to impalad through impala-shell" in the documentation).
The default port of 21000 is assumed unless you provide another value.
So this should work:
impala-shell --impalad=10.223.121.11 -d prod_db -f /home/cloudera/views/a.hql
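If you do need to name a non-default port, -i also accepts a host:port value, so the host and port from the question can be kept together:

```shell
impala-shell -i 10.223.121.11:21000 -d prod_db -f /home/cloudera/views/a.hql
```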
In my own scenario, I was connected with the impala-shell but suddenly got [Not connected] >. Trying to reconnect failed, and I didn't want to restart my machine (which is another option).
Trying this:
[Not connected] > CONNECT myhostname
did not help either.
I realized that my IP had changed; switching my IP from dynamic to static fixed it.

How do you access an Amazon RDS instance from a chromebook?

I have accepted the "Chromebook challenge." So far, I have successfully ssh'ed into my new Google Compute Engine instance from ChromeOS's built-in ssh terminal. But now I am faced with the task of connecting to an Amazon RDS (Relational Database Service) instance that a consulting client has set up for me. I have found no tutorials on how to do this. I don't know if I should be ssh'ing into the RDS instance, or what.
Has anyone else done this successfully?
Aha, so there is no way of ssh-ing into an RDS instance directly (Chromebook or otherwise), as Fredrick mentioned.
That said, I have accomplished all I needed by ssh-ing from my Chromebook into my Google Compute Engine instance, and then hopping from there to my RDS instance, using the standard:
me@myserver$ mysql -h myrdsinstanceaddress -P 3306 -u root -p
So, the crux is that you have to ssh into some other server, and then work from there.
From the AWS documentation:
Type the following command at a command prompt to connect to a DB instance using the MySQL utility. For the -h parameter, substitute the DNS name for your DB instance. For the -P parameter, substitute the port for your DB instance. Enter the master user password when prompted.
PROMPT> mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u mymasteruser -p
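If the mysql command hangs or times out instead of prompting for a password, the cause is usually a security group or network issue rather than credentials. A quick reachability check with netcat (the hostname here is the placeholder from the documentation example) confirms the port is open before retrying:

```shell
# -z: scan without sending data, -v: report the result
nc -zv myinstance.123456789012.us-east-1.rds.amazonaws.com 3306
```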