Configuring Hive in local mode - Apache Hive

I am trying to set up hive-0.9.0 in a local mode configuration. In /conf, I have created hive-site.xml and specified the property for the warehouse folder.
But I don't think Hive is using my defined location, because it is not creating the 'warehouse' folder there.
Also, is it necessary to have a Hadoop cluster running for a local mode Hive configuration? Hive throws the error below when I issue any DDL command without starting the Hadoop cluster:
FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused)
The contents of hive-site.xml are as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/home/hadoopuser/hive/warehouse</value>
<description>
Local or HDFS directory where Hive keeps table contents.
</description>
</property>
<property>
<name>hive.metastore.local</name>
<value>true</value>
<description>
Use false if a production metastore server is used.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/home/hadoopuser/hive/metastore_db;create=true</value>
<description>
The JDBC connection URL.
</description>
</property>
</configuration>

Hive queries internally run MapReduce jobs, so Hadoop has to be up while you are querying Hive.
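Two quick checks can confirm this (a sketch; the paths and the NameNode port come from the question, and the commands assume the Hadoop and Hive binaries are on your PATH). Note also that a scheme-less hive.metastore.warehouse.dir is resolved against the default filesystem, so if fs.default.name points at HDFS the warehouse directory is created in HDFS rather than on the local disk:
# 1. Make sure HDFS is actually up -- the ConnectException points at the NameNode on localhost:54310.
start-dfs.sh                                   # or start-all.sh on older Hadoop releases
jps                                            # should list NameNode and DataNode among the running JVMs
# 2. Confirm Hive picked up your hive-site.xml, then look for the warehouse directory in both places.
hive -e 'SET hive.metastore.warehouse.dir;'    # prints the effective value Hive is using
hadoop fs -ls /home/hadoopuser/hive/warehouse  # on HDFS
ls -ld /home/hadoopuser/hive/warehouse         # on the local disk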

Related

When initializing Hive for the first time, getting an error pointing at hive-site.xml

I am unable to find the cause of the error below, even though it points to hive-site.xml; as far as I can tell, what I have configured so far is correct.
FYI, I am using Hadoop 3.1.1, Hive 3.1.1, and MySQL for the Hive metastore.
adminn@master:~$ schematool -initSchema -dbType mysql
Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3210,96,"file:/home/adminn/apache-hive-3.1.1-bin/conf/hive-site.xml"]
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3003)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2931)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2806)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1460)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4990)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5063)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5150)
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5098)
at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3210,96,"file:/home/adminn/apache-hive-3.1.1-bin/conf/hive-site.xml"]
at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:621)
at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:491)
at com.ctc.wstx.sr.StreamScanner.reportIllegalChar(StreamScanner.java:2456)
at com.ctc.wstx.sr.StreamScanner.validateChar(StreamScanner.java:2403)
at com.ctc.wstx.sr.StreamScanner.resolveCharEnt(StreamScanner.java:2369)
at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1515)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2828)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3257)
at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3063)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2986)
... 15 more
Given below are the important parts of the hive-site.xml file where I made the required changes:
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>Hivehadoop#123$</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.metastore.ds.connection.url.hook</name>
  <value/>
  <description>Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used</description>
</property>
<property>
  <name>javax.jdo.option.Multithreaded</name>
  <value>true</value>
  <description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>
  <description>
    JDBC connect string for a JDBC metastore.
    To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
    For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
  </description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/tmp/${user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/tmp/${user.name}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
This is solved: I removed the special character from the line number specified in the stack trace, and it is working fine now.
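For anyone hitting the same error, the offending byte can be located directly from the row and column reported in the stack trace (a sketch, assuming GNU sed and grep):
sed -n '3210p' /home/adminn/apache-hive-3.1.1-bin/conf/hive-site.xml | cat -A    # print the reported row with invisible characters made visible
grep -nP '[\x00-\x08\x0B\x0C\x0E-\x1F]' /home/adminn/apache-hive-3.1.1-bin/conf/hive-site.xml    # list every line containing a control character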

Connect timeout from Presto / Trino to Amazon S3

I currently have a Kubernetes setup outside of AWS in which a data lake residing in Amazon S3 is queried using Presto v348. Data is stored in Parquet format, and an additional component is a Hive metastore.
I encounter the following error and am at a loss as to how to troubleshoot the underlying issue:
io.prestosql.spi.PrestoException: Unable to execute HTTP request: Connect to s3-eu-central-1.amazonaws.com:80 [s3-eu-central-1.amazonaws.com] failed: connect timed out
This issue sometimes arises with bigger queries and, interestingly, brings the system into a state where all subsequent queries time out as well. In roughly 1 in 5 tries the query succeeds, and smaller queries generally work fine. Things recover after about 10-20 minutes, and restarting Presto does not shorten that window, so I suspect the problem lies elsewhere.
I am aware that I might be hitting a performance ceiling, but getting only timeouts instead of a clear error, with the whole system unusable for 10-20 minutes, is not acceptable.
I have already increased settings like hive.s3.max-connections in Presto and fs.s3a.connection.maximum in the metastore config, but that does not seem to solve the problem. Beyond these, I have found no suggestions on how to tweak the setup to prevent the error from happening.
Presto connector config:
connector.name=hive-hadoop2
hive.metastore.uri=thrift://hive-metastore:9083
hive.metastore.username=prestodb
hive.s3.aws-access-key="S3_ACCESS_KEY"
hive.s3.aws-secret-key="S3_SECRET_KEY"
hive.s3.endpoint=s3-eu-central-1.amazonaws.com
hive.s3.ssl.enabled=false
hive.s3.path-style-access=true
hive.parquet.use-column-names=true
hive.allow-drop-table=true
hive.s3-file-system-type=PRESTO
hive.s3.max-connections=50000
hive.s3select-pushdown.max-connections=50000
hive.s3.connect-timeout=60s
hive.allow-rename-column=true
Metastore config:
core-site.xml: |
  <configuration>
    <property>
      <name>fs.s3a.connection.ssl.enabled</name>
      <value>false</value>
    </property>
    <property>
      <name>fs.s3a.access.key</name>
      <value>xxx</value>
    </property>
    <property>
      <name>fs.s3a.secret.key</name>
      <value>xxx</value>
    </property>
    <property>
      <name>fs.s3a.fast.upload</name>
      <value>true</value>
    </property>
    <property>
      <name>fs.s3a.connection.maximum</name>
      <value>50000</value>
    </property>
    <property>
      <name>fs.s3a.connection.establish.timeout</name>
      <value>60000</value>
    </property>
    <property>
      <name>fs.s3a.threads.max</name>
      <value>64</value>
    </property>
    <property>
      <name>fs.s3a.max.total.tasks</name>
      <value>128</value>
    </property>
  </configuration>
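For reference, a raw connectivity check from the worker pods while the timeouts are happening can help distinguish a Presto-level problem from a network-level one (a sketch; the pod name is hypothetical and it assumes curl is available in the image):
kubectl exec -it presto-worker-0 -- curl -sv --max-time 10 http://s3-eu-central-1.amazonaws.com/    # should return an HTTP response within a few seconds; a hang here points at the network path rather than Presto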

HiveServer2 not listening on ports 10000 and 10001

When I run:
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console
It shows
Starting HiveServer2
and nothing listens on ports 10000 or 10001.
The HiveServer2 service does not output error information, which makes the problem hard to diagnose. You can try to start the metastore service provided by Hive, which listens on port 9083 and might give some information when your configuration is not set properly:
hive --service metastore   # do not detach from the terminal, so you can see the logs
In my case, this service could not be started, with the error message:
MetaException(message:Hive Schema version 3.1.0 does not match metastore's schema version 1.2.0 Metastore is not upgraded or corrupt)
One direct way to resolve this error is to ignore the version difference by setting the following in hive-site.xml, if there is only one Hive version on your machine (another solution is to upgrade the metastore_db schema version):
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
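The other route mentioned above, upgrading the metastore schema itself, would look roughly like this (a sketch; adjust -dbType and the starting version to match your metastore):
schematool -dbType mysql -upgradeSchemaFrom 1.2.0    # or let schematool detect the current version: schematool -dbType mysql -upgradeSchema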
After this problem is resolved, the HiveServer2 service can run and listen on port 10000:
hive --service hiveserver2 > /dev/null 2>&1 &
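Once it is up, you can confirm it is actually listening and reachable (a quick check; it assumes beeline is on your PATH):
netstat -tlnp | grep 10000    # or: ss -tlnp | grep 10000
beeline -u jdbc:hive2://localhost:10000 -e 'show databases;'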
If your HiveServer2 accesses the metastore directly via the Derby or MySQL JDBC driver, then the aforementioned metastore service is not needed. However, if HiveServer2 accesses the metastore via the Thrift protocol, as configured in conf/hive-site.xml like this:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop-master:9083</value>
  <description>
    Thrift URI for the remote metastore.
    Used by metastore client to connect to remote metastore.
  </description>
</property>
then the metastore service must be started first.
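In that case the startup order matters (a minimal sketch):
hive --service metastore &      # must be running before HiveServer2 when hive.metastore.uris is set
hive --service hiveserver2 &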
I had a hard time setting up hive-3.1.2; I am writing this in case it helps someone out. In order to diagnose the problem, first try to launch the metastore and HiveServer2 like this:
metastore:
hive --service metastore --hiveconf hive.root.logger=INFO,console
hiveserver2:
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console
Then carefully read the exceptions that were thrown.
My problem was "user hive is not allowed to perform this api call",
and to solve that I added the following property to hive-site.xml:
<property>
  <name>hive.metastore.event.db.notification.api.auth</name>
  <value>false</value>
  <description>
    Should metastore do authorization against database notification related APIs such as get_next_notification.
    If set to true, then only the superusers in proxy settings have the permission
  </description>
</property>
I am also adding my full hive-site.xml as a sample:
<configuration>
  <property>
    <name>datanucleus.schema.autoCreateTables</name>
    <value>true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://server-2:3306/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>mysql_username</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>mysql_password</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://server-2:9083</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>server-2</value>
  </property>
  <property>
    <name>hive.server2.transport.mode</name>
    <value>binary</value>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.metastore.event.db.notification.api.auth</name>
    <value>false</value>
  </property>
</configuration>
Thanks. Note there is a typo to watch out for: the property name must be hive.metastore.event.db.notification.api.auth (with the hive. prefix), not metastore.metastore.event.db.notification.api.auth, with the value false.

Hive script containing LOAD DATA INPATH not working in Oozie

My task is to create an Oozie workflow that loads data into Hive tables every hour.
I am using CDH 5.7 in VirtualBox.
When I run the Hive script, which contains LOAD DATA INPATH '/sqoop_import_increment' INTO TABLE customer;, it works perfectly and the data gets loaded into the Hive table.
But when I run the same script from the Oozie workflow, the job gets killed at 66% and the error message is Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [10001].
Note: a Hive script that only creates the table works perfectly with the Oozie workflow.
Please help.
hive script:
use test;
create external table if not exists customer(customer_id int,name string,address string)row format delimited fields terminated by ',';
load data inpath '/sqoop_import_increment' into table customer;
workflow.xml:
<workflow-app name="hive_script" xmlns="uri:oozie:workflow:0.5">
  <start to="hive-4327"/>
  <kill name="Kill">
    <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <action name="hive-4327" cred="hcat">
    <hive xmlns="uri:oozie:hive-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <job-xml>lib/hive-config.xml</job-xml>
      <script>lib/impala-script.hql</script>
    </hive>
    <ok to="End"/>
    <error to="Kill"/>
  </action>
  <end name="End"/>
</workflow-app>
job.properties:
oozie.use.system.libpath=True
security_enabled=False
dryrun=False
jobTracker=localhost:8032
nameNode=hdfs://quickstart.cloudera:8020
hive-config.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Hive Configuration can either be stored in this file or in the hadoop configuration files -->
<!-- that are implied by Hadoop setup variables. -->
<!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive -->
<!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
<!-- resource). -->
<!-- Hive Execution Parameters -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>cloudera</value>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/usr/lib/hive/lib/hive-hwi-0.8.1-cdh4.0.0.jar</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://127.0.0.1:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
</configuration>
The last time I ran into this problem, it turned out that the hive client was not installed on all data nodes.
When you run the Hive query manually, you presumably do it from a node that has the Hive client installed. But when Oozie is asked to run the query, it will do so from a random data node, so you will need to set up the Hive client on all data nodes.
This assumes that you are unable to get Oozie to run Hive queries in general (and don't have an issue specific to this particular command).
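To confirm where the action actually ran and what the launcher saw, the Oozie and YARN logs are the first place to look (a sketch; the job and application IDs below are hypothetical):
oozie job -info 0000001-200101000000000-oozie-oozi-W       # find the failed action and its external (YARN) application ID
yarn logs -applicationId application_1577836800000_0001    # read the Hive action's launcher logs for the underlying error
which hive && hive --version                                # run this on each data node to verify the Hive client is installed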

Accessing HDFS from Spark gives TokenCache error: Can't get Master Kerberos principal for use as renewer

I'm trying to run a test Spark script in order to connect Spark to Hadoop.
The script is the following:
from pyspark import SparkContext
sc = SparkContext("local", "Simple App")
file = sc.textFile("hdfs://hadoop_node.place:9000/errs.txt")
errors = file.filter(lambda line: "ERROR" in line)
errors.count()
When I run it with pyspark I get
py4j.protocol.Py4JJavaError: An error occurred while calling
o21.collect. : java.io.IOException: Can't get Master Kerberos
principal for use as renewer
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:187)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:251)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:140)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:898)
at org.apache.spark.rdd.RDD.collect(RDD.scala:608)
at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:243)
at org.apache.spark.api.java.JavaRDD.collect(JavaRDD.scala:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:744)
This happens despite the fact that:
- I've done a kinit, and klist shows I have the correct tickets
- when I issue ./bin/hadoop fs -ls hdfs://hadoop_node.place:9000/errs.txt it shows the file
- both the local Hadoop client and Spark have the same configuration files
The core-site.xml in the spark/conf and hadoop/conf folders is the following (I got it from one of the Hadoop nodes):
<configuration>
  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>
      RULE:[1:$1](.*@place)s/@place//
      RULE:[2:$1/$2@$0](.*/node1.place@place)s/^([a-zA-Z]*).*/$1/
      RULE:[2:$1/$2@$0](.*/node2.place@place)s/^([a-zA-Z]*).*/$1/
      RULE:[2:$1/$2@$0](.*/node3.place@place)s/^([a-zA-Z]*).*/$1/
      RULE:[2:$1/$2@$0](.*/node4.place@place)s/^([a-zA-Z]*).*/$1/
      RULE:[2:$1/$2@$0](.*/node5.place@place)s/^([a-zA-Z]*).*/$1/
      RULE:[2:$1/$2@$0](.*/node6.place@place)s/^([a-zA-Z]*).*/$1/
      RULE:[2:$1/$2@$0](.*/node7.place@place)s/^([a-zA-Z]*).*/$1/
      RULE:[2:nobody]
      DEFAULT
    </value>
  </property>
  <property>
    <name>net.topology.node.switch.mapping.impl</name>
    <value>org.apache.hadoop.net.TableMapping</value>
  </property>
  <property>
    <name>net.topology.table.file.name</name>
    <value>/etc/hadoop/conf/topology.table.file</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://server.place:9000/</value>
  </property>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hive.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hive.groups</name>
    <value>*</value>
  </property>
</configuration>
Can someone point out what I am missing?
After creating my own Hadoop cluster in order to better understand how Hadoop works, I fixed it.
You have to provide Spark with a valid .keytab file that has been generated for an account with at least read access to the Hadoop cluster.
Also, you have to provide Spark with the hdfs-site.xml of your HDFS cluster.
So in my case I had to create a keytab file which, when you run
klist -k -e -t
on it, gives you entries like the following:
host/fully.qualified.domain.name@REALM.COM
In my case the host was the literal word host and not a variable.
Also, in your hdfs-site.xml you have to provide the path of the keytab file and specify that
host/_HOST@REALM.COM
will be your account.
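A quick sanity check of the keytab before handing it to Spark (a sketch; the keytab path is hypothetical):
klist -k -e -t /etc/security/keytabs/spark.keytab    # list the principals and encryption types stored in the keytab
kinit -kt /etc/security/keytabs/spark.keytab <principal-shown-by-klist>
hadoop fs -ls hdfs://hadoop_node.place:9000/         # verify the resulting ticket actually grants HDFS access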
Cloudera has a pretty detailed writeup on how to do it.
Edit
After playing a little bit with different configurations, I think the following should be noted:
you have to provide Spark with the exact hdfs-site.xml and core-site.xml of your Hadoop cluster. Otherwise it won't work.
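One simple way to do that is to point Spark at the cluster's configuration directory before launching it (a sketch; /etc/hadoop/conf is a common default and may differ on your nodes):
export HADOOP_CONF_DIR=/etc/hadoop/conf    # directory containing the cluster's core-site.xml and hdfs-site.xml
./bin/pyspark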