Apache Phoenix UDF not working on server side

1. I have created a jar with a custom UDF function and copied the jar into dynamic.jar.dir, so when I use my UDF function as part of a SELECT I get the result without issues.
2. But when the function is part of a WHERE clause, I get an error that the class of my custom function is not found.
select PK FROM "my.custom.view" where MY_FUN(ARRAY["COLF"."COL1"], 'SOMEPARAM') limit 1;
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: BooleanExpressionFilter failed during reading: java.lang.ClassNotFoundException: com.myCompany.phoenix.MyCustomFunction
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
at org.apache.phoenix.filter.BooleanExpressionFilter.readFields(BooleanExpressionFilter.java:109)
at org.apache.phoenix.filter.SingleKeyValueComparisonFilter.readFields(SingleKeyValueComparisonFilter.java:133)
at org.apache.hadoop.hbase.util.Writables.getWritable(Writables.java:131)
at org.apache.hadoop.hbase.util.Writables.getWritable(Writables.java:101)
at org.apache.phoenix.filter.SingleCQKeyValueComparisonFilter.parseFrom(SingleCQKeyValueComparisonFilter.java:50)
... 16 more
hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:57000/user/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>21081</value>
</property>
<property>
<name>hbase.client.keyvalue.maxsize</name>
<value>0</value>
</property>
<!-- SEP is basically replication, so enable it -->
<property>
<name>hbase.replication</name>
<value>true</value>
</property>
<property>
<name>hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily</name>
<value>128</value>
</property>
<property>
<name>hbase.fs.tmp.dir</name>
<value>/tmp/hbase</value>
</property>
<property>
<name>phoenix.functions.allowUserDefinedFunctions</name>
<value>true</value>
</property>
<property>
<name>hbase.dynamic.jars.dir</name>
<value>${hbase.rootdir}/lib/</value>
</property>
<property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
</configuration>
I manually add the jar with:
hdfs dfs -copyFromLocal -f /my.jar hdfs:///user/hbase/lib/my.jar
For the function creation I use:
CREATE FUNCTION MY_FUN(BINARY[], VARCHAR) RETURNS BOOLEAN as 'com.myCompany.phoenix.MyCustomFunction' using jar 'hdfs://localhost:57000/user/hbase/lib/my.jar';
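(As a sanity check that the registration itself went through, Phoenix keeps UDF metadata in a system table when user-defined functions are enabled; a sketch of that lookup, assuming the stock SYSTEM namespace:)
-- list registered UDFs and the jar paths Phoenix recorded for them
SELECT * FROM SYSTEM."FUNCTION";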

I ran into something similar when I upgraded to Phoenix 5.0 from 4.7. I got an exception stating that I now needed to place my UDF .jar into /apps/hbase/data/lib due to permission issues. In the old environment I was able to get away with using the /apps/hbase/lib directory. Maybe this is happening to you as well, but it's not alerting you to the new path change.
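A sketch of what that move looks like; the /apps/hbase/data/lib path is the HDP-style location from this answer and the jar/class names are the ones from the question, so adjust both to your environment:
# copy the UDF jar to the lib dir under the HBase data root
hdfs dfs -mkdir -p /apps/hbase/data/lib
hdfs dfs -copyFromLocal -f /my.jar /apps/hbase/data/lib/my.jar
-- re-register the function against the new jar location
DROP FUNCTION IF EXISTS MY_FUN;
CREATE FUNCTION MY_FUN(BINARY[], VARCHAR) RETURNS BOOLEAN as 'com.myCompany.phoenix.MyCustomFunction' using jar 'hdfs://localhost:57000/apps/hbase/data/lib/my.jar';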

Related

Getting 'org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family table does not exist in region hbase:meta'

I'm trying to integrate Hive and HBase, but when I create an (external) table in Hive with the HBase handler:
create external table entity_hbase(id bigint, value string, ts bigint, entity_type tinyint)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ('hbase.columns.mapping'=':key,f1:value,f1:timestamp,f1:entity_type')
tblproperties('hbase.table.name'='entity');
I get this error:
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException:
Column family table does not exist in region hbase:meta
First of all, I don't understand why the error says that the column family table (not f1) does not exist. Even if I create the table in HBase first and then try to create the external table in Hive, I get the same error.
Before all of this, my steps were (roughly the commands sketched right after this list):
1. start dfs
2. start yarn
3. start metastore db for hive
4. start metastore service
5. start hbase
6. using hive shell try to create table with hbase handler
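(For reference, those steps map to commands like the ones below; this is a sketch assuming Apache tarball installs and PostgreSQL for the metastore DB, so paths and service names will differ on your machine:)
start-dfs.sh                              # 1. start dfs
start-yarn.sh                             # 2. start yarn
pg_ctl -D /usr/local/var/postgres start   # 3. start the metastore db (data dir is an assumption)
hive --service metastore &                # 4. start the metastore service
start-hbase.sh                            # 5. start hbase
hive                                      # 6. open the hive shell and run the create table statement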
hive-site.xml
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:postgresql://localhost:5432/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.postgresql.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>postgres</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value></value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/Users/home/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://0.0.0.0:9083</value>
</property>
</configuration>
hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8020/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/Users/home/hbase/zookeeper</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/Cellar/hadoop/3.1.1/hdfs/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:8020</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:8021</value>
</property>
</configuration>
Hadoop version: 3.1.1
Hive version: 3.1.1
Hbase version: 1.2.9

hive - use external or local s3 instead of aws s3

I have my own S3 running locally instead of AWS S3. Is there a way to override s3.amazonaws.com?
I have created hive-site.xml and put it in ${HIVE_HOME}/conf/.
This is what I have got in .xml:
<configuration>
<property>
<name>fs.s3n.impl</name>
<value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
</property>
<property>
<name>fs.s3n.endpoint</name>
<value>local_s3_ip:port</value>
</property>
<property>
<name>fs.s3n.awsAccessKeyId</name>
<value>VALUE</value>
</property>
<property>
<name>fs.s3n.awsSecretAccessKey</name>
<value>VALUE</value>
</property>
</configuration>
Now I want to create a table, and if I put:
LOCATION('s3n://hive/sample_data.csv')
I have an error:
org.apache.hadoop.hive.ql.exec.DDLTask. java.net.UnknownHostException: hive.s3.amazonaws.com: Temporary failure in name resolution
It doesn't work for either s3 or s3n.
Is it possible to override the default s3.amazonaws.com and use my own S3?
Switch to the S3A Connector (and Hadoop 2.7+ JARs)
set "fs.s3a.endpoint" to the hostname of your server
and "fs.s3a.path.style.access" = true (rather than expect every bucket to have DNS)
Expect to spend time working on authentication options, as signing is always a trouble spot in third-party stores.
With this configuration I am able to reach my own s3 endpoint.
<configuration>
<property>
<name>fs.s3a.impl</name>
<value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
<name>fs.s3a.endpoint</name>
<value> <ip>:<port> </value>
</property>
<property>
<name>fs.s3a.path.style.access</name>
<value>true</value>
</property>
<property>
<name>fs.s3a.access.key</name>
<value> <ak> </value>
</property>
<property>
<name>fs.s3a.secret.key</name>
<value> <sk> </value>
</property>
<property>
<name>fs.s3a.awsAccessKeyId</name>
<value> <ak> </value>
</property>
<property>
<name>fs.s3a.awsSecretAccessKey</name>
<value> <sk> </value>
</property>
<property>
<name>fs.s3a.connection.ssl.enabled</name>
<value>false</value>
</property>
</configuration>
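With the endpoint configured, the table location just uses the s3a scheme. A minimal sketch; the table name, columns, and bucket/path here are made up for the example, and LOCATION should point at a directory rather than a single file:
CREATE EXTERNAL TABLE sample_data (col1 STRING, col2 STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3a://hive/sample_data/';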

test to see if HiveServer2 metastore is working correctly

I recently upgraded our cluster's HiveServer to HiveServer2. I also set up the Hive Metastore (in remote mode) and moved away from embedded mode (which we were previously running).
I want to test that things are properly configured and that the metadata is actually being stored in the remote metastore. What would be the easiest way to do this? Are there certain logs I could check to verify this behavior?
I am afraid things are not configured correctly, and I am still running my metastore in local mode, as when I query the postgresql database on the machine hosting the metastore, there are no rows in the metastore DB (despite the fact that I have created test tables through beeline).
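(For what it's worth, that check boils down to a query like the one below against the metastore database; a sketch that assumes the stock Hive schema on PostgreSQL, where registered tables land in "TBLS":)
-- psql -h w7 -U hiveuser -d metastore
SELECT "TBL_NAME", "TBL_TYPE" FROM "TBLS";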
It might be worth mentioning that the end goal of this is to be able to query data stored in HDFS via SparkSQL. Do I need HiveServer2 to accomplish this? Apologies, I am new to a lot of this technology.
Here is my hive-site.xml:
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:postgresql://w7/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.postgresql.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://w7:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.warehouse.subdir.inherit.perms</name>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>mn</value>
</property>
<property>
<name>hive.zookeeper.client.port</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>mn</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hive.zookeeper.namespace</name>
<value>hive_zookeeper_namespace_hive</value>
</property>
<property>
<name>hive.cluster.delegation.token.store.class</name>
<value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
</property>
<property>
<name>hive.server2.use.SSL</name>
<value>false</value>
</property>
<property>
<name>hive.support.concurrency</name>
<description>Enable Hive's Table Lock Manager Service</description>
<value>true</value>
</property>
</configuration>

getting error while submitting HIVE query through oozie

I'm totally new to Oozie and I'm creating a workflow to run a Hive query that simply displays a table's data using a SELECT statement, but once I submit the job it gives the error below.
JA017: Unknown hadoop job [job_local1866275230_0001] associated with action [0000000-150519212325700-oozie-oozi-W#adstest]. Failing this action!
Below is my hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoStartMechanism</name>
<value>SchemaTable</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost.localdomain:9083</value>
</property>
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>localhost</value>
</property>
<!-- workaround for https://issues.cloudera.org/browse/IMPALA-1416 -->
<property>
<name>hive.metastore.try.direct.sql</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.try.direct.sql.ddl</name>
<value>false</value>
</property>
</configuration>
Below is the workflow.xml
<workflow-app name="adstest" xmlns="uri:oozie:workflow:0.4">
<start to="adstest"/>
<action name="adstest">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<job-xml>hive-conf.xml</job-xml>
<script>adstest.hql</script>
<file>hive-conf.xml#hive-conf.xml</file>
</hive>
<ok to="end"/>
<error to="kill"/>
</action>
<kill name="kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
I didn't select any parameters as it's just a simple SELECT query for displaying the first 20 results from the table.
Let me know if I have to make any changes in any conf file.
When an Oozie workflow is executed, Oozie checks the status of the job. While the job is running, Oozie reports the status as RUNNING; after the job completes, it queries the job's data from the history server, and if the job id is not found at the history server, Oozie fails to get the status and marks the workflow as failed.
However, the workflow may have finished successfully and the output will be available. The Resource Manager will also report the status of the application as FINISHED / SUCCEEDED.
Ensure that the two parameters below are the same across all the nodes:
mapreduce.jobhistory.intermediate-done-dir
mapreduce.jobhistory.done-dir
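For reference, a minimal mapred-site.xml sketch with those two properties; the /mr-history paths are just placeholder values, the point is only that every node carries the same ones:
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/mr-history/tmp</value>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/mr-history/done</value>
</property>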
Restart the YARN services and the History Server. Please refer to this link for more details: https://support.pivotal.io/hc/en-us/articles/202530283-Oozie-logs-report-Unknown-hadoop-job-and-history-server-UI-not-populated

NoClassDefFoundError HBase with YARN

I know this is one of those topics that gets asked a lot. Still, after I dug into all of the topics I could find (most of them talking about CLASSPATH), I can't solve mine.
Examples of the topics I found and tried:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
java.lang.NoClassDefFoundError with HBase Scan
I'm using Hadoop 2.5.1 with HBase 0.98.11 on Ubuntu 14.04
I set up pseudo-distributed mode and ran Hadoop with HBase successfully. After that I wanted to set up fully-distributed mode, but jobs fail with a NoClassDefFound error. I tried adding "export HADOOP_CLASSPATH=/usr/local/hbase-0.98.11-hadoop2/bin/hbase classpath" into hadoop-env (also yarn-env), and it still doesn't work.
One thing I noticed is that if I comment out the
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
I can run the jobs SUCCESSFULLY. BUT it seems that they run on a single node, not on multiple nodes.
Here are some of the configs:
mapred-site
<property>
<name>mapred.job.tracker</name>
<value>hadoop1:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
hdfs-site
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
<property>
<name>dfs.datanode.use.datanode.hostname</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
yarn-site
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run
</description>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
In yarn-env and hadoop-env everything is left at the defaults except HADOOP_CLASSPATH (which doesn't change anything whether I add it or not...).
Here is the error trace:
2015-04-25 23:29:25,143 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
at apriori2$FrequentItemsReduce.reduce(apriori2.java:550)
at apriori2$FrequentItemsReduce.reduce(apriori2.java:532)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1651)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1611)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1462)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:700)
at org.apache.hadoop.mapred.MapTask.closeQuietly(MapTask.java:1990)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:774)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Thanks a lot for any help.
With YARN, you need to set the "yarn.application.classpath" property to the classpath for your MapReduce job. "export HADOOP_CLASSPATH" will not work with YARN.
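A sketch of how that can look in yarn-site.xml; the stock Hadoop entries below are the usual defaults and the HBase lib path is taken from the install directory in the question, so adjust both to your layout:
<property>
<name>yarn.application.classpath</name>
<value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*,/usr/local/hbase-0.98.11-hadoop2/lib/*</value>
</property>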