How can I configure access to HDFS (NameNode HA) with the HiveConf class using hive-common-1.2.1.jar? - hive

Does anyone know why the HiveConf class no longer has a HADOOPCONF enum constant in the hive-common jar?
I wrote code that uses the HiveConf class from hive-common-1.2.1.jar to access HDFS (HA NameNode), and I get the error below.
I realized my code never configures HADOOPCONF, so it can't connect to HDFS, but HADOOPCONF no longer exists in hive-common-1.2.1.jar; earlier versions of hive-common did have it:
http://www.docjar.com/html/api/org/apache/hadoop/hive/conf/HiveConf.java.html
My question is: how can I configure access to HDFS (NameNode HA) with the HiveConf class using hive-common-1.2.1.jar?
Here is the error:
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: cluster
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
My code is:
hiveConf.setVar(HiveConf.ConfVars.HADOOPBIN, "/opt/modules/hadoop/bin");
hiveConf.setVar(HiveConf.ConfVars.HADOOPFS, "hdfs://cluster");
hiveConf.setVar(HiveConf.ConfVars.LOCALSCRATCHDIR, "/opt/modules/hive/temp");
hiveConf.setVar(HiveConf.ConfVars.DOWNLOADED_RESOURCES_DIR, "/opt/modules/hive/temp");
hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
hiveConf.setVar(HiveConf.ConfVars.METASTOREWAREHOUSE, "/warehouse");
hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://127.0.0.1:9083");
hiveConf.setVar(HiveConf.ConfVars.METASTORE_CONNECTION_DRIVER, "com.mysql.jdbc.Driver");
hiveConf.setVar(HiveConf.ConfVars.METASTORECONNECTURLKEY, "jdbc:mysql://192.168.5.29:3306/hive?createDatabaseIfNotExist=true");
hiveConf.setVar(HiveConf.ConfVars.METASTORE_CONNECTION_USER_NAME, "hive");
hiveConf.setVar(HiveConf.ConfVars.METASTOREPWD, "123456");
hiveConf.setVar(HiveConf.ConfVars.HIVEHISTORYFILELOC, "/opt/modules/hive/temp");

OK, I resolved this issue.
The HiveConf class in the hive-common jar loads Hadoop's hdfs-site.xml by default, as long as the classpath points to the directory containing hdfs-site.xml when you run the program:
CLASSPATH=$CLASSPATH:/opt/modules/hadoop/conf
$JAVA -cp $CLASSPATH com.baofeng.data.writer.HiveHcatalogWriter
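
Alternatively, since HiveConf extends Hadoop's Configuration, the HA settings that normally live in hdfs-site.xml can be set programmatically. A minimal sketch, assuming a nameservice named "cluster"; the NameNode host names and ports are placeholders, not values from the original setup:

// HiveConf extends org.apache.hadoop.conf.Configuration, so set() is available
hiveConf.set("fs.defaultFS", "hdfs://cluster");
hiveConf.set("dfs.nameservices", "cluster");
hiveConf.set("dfs.ha.namenodes.cluster", "nn1,nn2");
hiveConf.set("dfs.namenode.rpc-address.cluster.nn1", "namenode1.example.com:8020"); // placeholder host
hiveConf.set("dfs.namenode.rpc-address.cluster.nn2", "namenode2.example.com:8020"); // placeholder host
hiveConf.set("dfs.client.failover.proxy.provider.cluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");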

Related

After infinispan 10.1.8 update: class file for org.infinispan.factories.scopes.Scopes not found

I just tried to update my application to Infinispan 10.1.8.Final. I am using Infinispan as a second-level Hibernate (5.4.18.Final) cache via this dependency in build.gradle:
compile group: 'org.infinispan', name: 'infinispan-hibernate-cache-v53', version: '10.1.8.Final'
The application compiles and starts, but the following is logged when I run the test suite:
warning: unknown enum constant Scopes.GLOBAL
reason: class file for org.infinispan.factories.scopes.Scopes not found
warning: unknown enum constant DataType.TRAIT
reason: class file for org.infinispan.jmx.annotations.DataType not found
Why is this happening? Do I need to include another dependency?
Try adding compileOnly 'org.infinispan:infinispan-component-annotations:10.1.8.Final' to the dependencies in your build.gradle file.
Neither enum is required at runtime; they are used at compile time to generate metadata required by Infinispan.
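Put together, the relevant block of build.gradle would look roughly like this:

dependencies {
    compile group: 'org.infinispan', name: 'infinispan-hibernate-cache-v53', version: '10.1.8.Final'
    // compile-time only: supplies the annotation enums the compiler warns about
    compileOnly 'org.infinispan:infinispan-component-annotations:10.1.8.Final'
}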

new HiveConf exception NoClassDefFoundError: com/ctc/wstx/io/InputBootstrapper

I'm running HiveConf tests and always get an exception when constructing a new HiveConf: "java.lang.NoClassDefFoundError: com/ctc/wstx/io/InputBootstrapper".
I tried to explicitly add this jar as a dependency in my Maven pom.xml, but it had no effect.
Any idea how to solve this? I'm using Hive 2.3.5.
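For what it's worth, the com.ctc.wstx package ships in the Woodstox library, so one thing to try is declaring it explicitly in pom.xml. This is only a sketch; the woodstox-core version below is an assumption, not a verified match for Hive 2.3.5:

<dependency>
    <groupId>com.fasterxml.woodstox</groupId>
    <artifactId>woodstox-core</artifactId>
    <version>5.0.3</version> <!-- assumed version -->
</dependency>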

Ignite configuration with absolute path

I downloaded Ignite 2.5.0 (I use Maven dependencies in Eclipse on a Mac for my Java class), and I tried to start Ignite with a configuration file given as an absolute path:
public static void main(String[] args) throws Exception {
    try (Ignite ignite = Ignition.start("/Users/ahajnal/Documents/git/ignite/target/classes/default-config.xml")) {}
}
but I got an exception:
Exception in thread "main" class org.apache.ignite.IgniteException: Failed to find configuration in: file:/Users/ahajnal/Documents/git/ignite/target/classes/default-config.xml
at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
at org.apache.ignite.Ignition.start(Ignition.java:355)
at hu.sztaki.lpds.ml.ignite.WekaIgnite.main(WekaIgnite.java:82)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to find configuration in: file:/Users/ahajnal/Documents/git/ignite/target/classes/default-config.xml
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:116)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:744)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:945)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693)
at org.apache.ignite.Ignition.start(Ignition.java:352)
... 1 more
The config file is there:
$ cat /Users/ahajnal/Documents/git/ignite/target/classes/default-config.xml
<?xml version="1.0" encoding="UTF-8"?>...
and:
new File("/Users/ahajnal/Documents/git/ignite/target/classes/default-config.xml").exists() is true
According to the docs, this path can be absolute.
What am I doing wrong?
Thank you.
I think the problem is that the default-config.xml file contains only an abstract IgniteConfiguration. This is the case in the default configuration file shipped with the examples.
Check whether the configuration bean's definition has an abstract="true" attribute, and remove it if it does.
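As a sketch, assuming a Spring-style bean definition in default-config.xml, the fix looks like this:

<!-- an abstract bean is only a template and cannot be instantiated: -->
<!-- <bean class="org.apache.ignite.configuration.IgniteConfiguration" abstract="true"> ... </bean> -->
<!-- remove abstract="true" so Ignite can instantiate the configuration: -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
</bean>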
P.S.
Creating Ignite as the resource of a try-with-resources block is a pretty bad idea, since the node will stop as soon as execution of the try block finishes.
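A minimal sketch of the alternative (the class name is made up for illustration): start the node outside try-with-resources so it keeps running, and stop it explicitly when done.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class StartNode {
    public static void main(String[] args) {
        // started outside try-with-resources, so the node stays up after this line
        Ignite ignite = Ignition.start("/Users/ahajnal/Documents/git/ignite/target/classes/default-config.xml");
        // ... do work with the node ...
        ignite.close(); // stop explicitly when finished
    }
}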

IllegalAccessError for RequestHedgingRMFailoverProxyProvider while launching Apache Twill Application in hadoop cluster after HDP upgrade

I'm trying to launch an Apache Twill application from a Hadoop cluster that was recently upgraded from HDP 2.2 to HDP 2.5, but I'm getting an IllegalAccessError for the RequestHedgingRMFailoverProxyProvider class, which is part of the org.apache.hadoop.yarn.client package. I'm getting this error in the Application Master. The job status goes directly to the 'not running' state right after the 'accepted' state.
Exception in thread "Hadoop22YarnAMClient STARTING" Exception in thread "YarnAMClientService STARTING" java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:93)
at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
at org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.serviceStart(AMRMClientImpl.java:186)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.startUp(Hadoop21YarnAMClient.java:77)
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
at java.lang.Thread.run(Thread.java:745)
com.google.common.util.concurrent.ExecutionError: java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1008)
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
at com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
at org.apache.twill.internal.appmaster.ApplicationMasterMain$YarnAMClientService.startUp(ApplicationMasterMain.java:221)
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:93)
at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
at org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.serviceStart(AMRMClientImpl.java:186)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.startUp(Hadoop21YarnAMClient.java:77)
... 2 more
In general, when you see an IllegalAccessError it means you have a runtime incompatibility between compiled and runtime code. In this case, the getProxyInternal() method of ConfiguredRMFailoverProxyProvider is now private. You need to recompile your client code and/or use updated Hadoop client libraries to connect to your cluster.
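A minimal illustration of how this kind of IllegalAccessError arises, using hypothetical classes rather than the YARN code itself:

// Api.java, version 1 -- the client is compiled against this:
public class Api {
    public static String value() { return "ok"; }
}

// Client.java, compiled against version 1:
public class Client {
    public static void main(String[] args) {
        System.out.println(Api.value());
    }
}

// If value() is later made private and only Api is recompiled, running the old
// Client.class fails at the call site with:
// java.lang.IllegalAccessError: tried to access method Api.value()Ljava/lang/String; from class Client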

Hive setup issue: "SINGLETON from class org.slf4j.LoggerFactory"

My Hadoop single-node cluster is working fine: JPS and the Hadoop web interface both work.
I have set up Hive.
When I enter Hive from Hadoop, it gives me the error below:
Exception in thread "main" java.lang.IllegalAccessError: tried to access field org.slf4j.impl.StaticLoggerBinder.SINGLETON from class org.slf4j.LoggerFactory
Can someone help me with this?
If you get the slf4j IllegalAccessError shown above, then you are using an older version of slf4j-api, e.g. 1.4.3, with a newer version of an slf4j binding, e.g. 1.5.6. As mentioned in the slf4j FAQ on IllegalAccessError, this typically occurs when your Maven pom.xml incorporates Hibernate 3.3.0, which declares a dependency on slf4j-api version 1.4.2. If your pom.xml then declares a dependency on an slf4j binding such as slf4j-log4j12 version 1.5.6, you will get illegal access errors.
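A hedged sketch of the usual fix in pom.xml: pin slf4j-api and the binding to the same version (1.5.6 is used here only because the answer above mentions it):

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.5.6</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.5.6</version>
</dependency>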
Also refer to the solution here: IllegalAccessError slf4j