new HiveConf exception NoClassDefFoundError: com/ctc/wstx/io/InputBootstrapper - hive

I'm running HiveConf tests and always get an exception when constructing a new HiveConf, saying "java.lang.NoClassDefFoundError: com/ctc/wstx/io/InputBootstrapper".
I tried to explicitly add this jar dependency in my Maven pom.xml, but it had no effect.
Any idea how to solve this? I'm using Hive 2.3.5.
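The missing class com.ctc.wstx.io.InputBootstrapper comes from the Woodstox StAX parser (woodstox-core), which recent Hadoop releases use to read configuration XML. Assuming that jar is what is absent from the test classpath, a sketch of the explicit dependencies would look roughly like this; the versions shown are only examples, so align them with what mvn dependency:tree reports for your Hadoop/Hive stack:
<!-- hypothetical fix: put the Woodstox StAX implementation and its StAX2 API on the classpath -->
<dependency>
    <groupId>com.fasterxml.woodstox</groupId>
    <artifactId>woodstox-core</artifactId>
    <version>5.0.3</version>
</dependency>
<dependency>
    <groupId>org.codehaus.woodstox</groupId>
    <artifactId>stax2-api</artifactId>
    <version>3.1.4</version>
</dependency>
If the class is already on the classpath and the error persists, check the dependency tree for a conflicting or excluded Woodstox version.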

Related

Exception while running Seedstack Application

I have used SeedStack dependencies for Hibernate and JPA to create DAO services that perform CRUD operations on a database.
I am trying to launch this SeedStack application module through the Java application launcher in Eclipse, via the SeedMain class.
In pom.xml, the dependency for Undertow is declared:
<dependency>
    <groupId>org.seedstack.seed</groupId>
    <artifactId>seed-web-undertow</artifactId>
</dependency>
When executing the SeedMain class, I get the snakeyaml error below:
Exception in thread "main" java.lang.NoSuchMethodError: org.yaml.snakeyaml.DumperOptions.setSplitLines(Z)V
at com.fasterxml.jackson.dataformat.yaml.YAMLGenerator.buildDumperOptions(YAMLGenerator.java:259)
at com.fasterxml.jackson.dataformat.yaml.YAMLGenerator.<init>(YAMLGenerator.java:232)
at com.fasterxml.jackson.dataformat.yaml.YAMLFactory._createGenerator(YAMLFactory.java:447)
at com.fasterxml.jackson.dataformat.yaml.YAMLFactory.createGenerator(YAMLFactory.java:397)
at org.seedstack.seed.core.internal.diagnostic.DefaultDiagnosticReporter.writeDiagnosticReport(DefaultDiagnosticReporter.java:75)
at org.seedstack.seed.core.internal.diagnostic.DefaultDiagnosticReporter.writeDiagnosticReport(DefaultDiagnosticReporter.java:67)
at org.seedstack.seed.core.internal.diagnostic.DiagnosticManagerImpl.dumpDiagnosticReport(DiagnosticManagerImpl.java:70)
at org.seedstack.seed.core.SeedMain.handleException(SeedMain.java:68)
at org.seedstack.seed.core.SeedMain.main(SeedMain.java:61)
As per my understanding, the error is due to some version inconsistency for snakeyaml. But since SeedStack resolves dependency versions via the seedstack-bom dependency, where exactly should I make the changes to resolve the error?
Thanks in advance!
From reading the stacktrace, it seems that you have some error on startup which is handled by the handleException() method. This method then tries to write a YAML diagnostic report but ultimately fails due to the snakeyaml version issue you mentioned.
You should do two things:
Fix the snakeyaml dependency issue by looking into the dependency tree. This kind of problem is often caused by some library that makes Maven choose an older version. SeedStack needs at least jackson-dataformat-yaml version 2.9.4 which in turn needs at least snakeyaml 1.18.
Fix the other error by looking at the full stacktrace. When a diagnostic report cannot be written, the original exception is still printed on the console (on stderr).
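As a sketch of the first step (not part of the original answer), you can run mvn dependency:tree -Dincludes=org.yaml:snakeyaml to see which library drags in the old snakeyaml, and then pin the version SeedStack needs in your own pom.xml, for example:
<!-- illustrative only: force the snakeyaml version required by jackson-dataformat-yaml 2.9.4+ -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.yaml</groupId>
            <artifactId>snakeyaml</artifactId>
            <version>1.18</version>
        </dependency>
    </dependencies>
</dependencyManagement>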

Hive setup issue: "SINGLETON from class org.slf4j.LoggerFactory"

My Hadoop single-node cluster is working fine; JPS and the Hadoop web interface both look good.
I have completed the Hive setup.
When I enter Hive from Hadoop, it gives me the error below:
Exception in thread "main" java.lang.IllegalAccessError: tried to access field org.slf4j.impl.StaticLoggerBinder.SINGLETON from class org.slf4j.LoggerFactory
Can someone help me with this case?
If you get the slf4j IllegalAccessError shown above, then you are using an older version of slf4j-api, e.g. 1.4.3, with a newer version of an slf4j binding, e.g. 1.5.6. As mentioned in the slf4j FAQ entry on IllegalAccessError, this typically occurs when your Maven pom.xml file incorporates hibernate 3.3.0, which declares a dependency on slf4j-api version 1.4.2. If your pom.xml also declares a dependency on an slf4j binding, say slf4j-log4j12 version 1.5.6, then you will get illegal access errors.
Also refer to the solution here: IllegalAccessError slf4j
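As an illustrative sketch using the versions mentioned above (adjust to whatever your dependency tree actually shows), declaring the API and the binding explicitly at the same version keeps Maven from inheriting the older slf4j-api from hibernate:
<!-- example only: align slf4j-api with the slf4j-log4j12 binding instead of
     inheriting slf4j-api 1.4.2 transitively from hibernate 3.3.0 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.5.6</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.5.6</version>
</dependency>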

How can I config to access HDFS(namenode HA) with HiveConf class by using hive-common-1.2.1.jar?

Does anyone know why the HiveConf class has no HADOOPCONF enum value in the hive-common jar now?
I wrote code using the HiveConf class from hive-common-1.2.1.jar to access HDFS (HA namenode), and I get the error below.
I realized my code didn't configure HADOOPCONF, so it can't connect to HDFS, but there is no HADOOPCONF in hive-common-1.2.1.jar any more; I found that a previous version of hive-common had it:
http://www.docjar.com/html/api/org/apache/hadoop/hive/conf/HiveConf.java.html
My question is: how can I configure access to HDFS (namenode HA) with the HiveConf class by using hive-common-1.2.1.jar?
Here is the error:
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: cluster
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
My code is:
hiveConf.setVar(HiveConf.ConfVars.HADOOPBIN, "/opt/modules/hadoop/bin");
hiveConf.setVar(HiveConf.ConfVars.HADOOPFS, "hdfs://cluster");
hiveConf.setVar(HiveConf.ConfVars.LOCALSCRATCHDIR, "/opt/modules/hive/temp");
hiveConf.setVar(HiveConf.ConfVars.DOWNLOADED_RESOURCES_DIR, "/opt/modules/hive/temp");
hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
hiveConf.setVar(HiveConf.ConfVars.METASTOREWAREHOUSE, "/warehouse");
hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://127.0.0.1:9083");
hiveConf.setVar(HiveConf.ConfVars.METASTORE_CONNECTION_DRIVER, "com.mysql.jdbc.Driver");
hiveConf.setVar(HiveConf.ConfVars.METASTORECONNECTURLKEY, "jdbc:mysql://192.168.5.29:3306/hive?createDatabaseIfNotExist=true");
hiveConf.setVar(HiveConf.ConfVars.METASTORE_CONNECTION_USER_NAME, "hive");
hiveConf.setVar(HiveConf.ConfVars.METASTOREPWD, "123456");
hiveConf.setVar(HiveConf.ConfVars.HIVEHISTORYFILELOC, "/opt/modules/hive/temp");
OK, I resolved this issue.
The HiveConf class in the hive-common jar loads "hdfs-site.xml" from Hadoop by default, as long as you set the classpath to point to the folder containing "hdfs-site.xml" when you run the program:
CLASSPATH=$CLASSPATH:/opt/modules/hadoop/conf
$JAVA -cp $CLASSPATH com.baofeng.data.writer.HiveHcatalogWriter
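For context, the UnknownHostException: cluster arises because, without the HA definitions from hdfs-site.xml on the classpath, the HDFS client treats "cluster" as a literal hostname rather than a nameservice. A minimal sketch of the hdfs-site.xml entries that need to be visible (hostnames and ports are placeholders) looks like this:
<!-- illustrative hdfs-site.xml excerpt defining the "cluster" nameservice -->
<property>
    <name>dfs.nameservices</name>
    <value>cluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.cluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.cluster.nn1</name>
    <value>namenode1.example.com:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.cluster.nn2</name>
    <value>namenode2.example.com:8020</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>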

ERROR 1066: Unable to open iterator for alias - Pig

Just started with Pig; I'm trying to load data from a file and then dump it. Loading seems to be proper and no error is thrown. Below is the query:
NYSE = LOAD '/root/Desktop/Works/NYSE-2000-2001.tsv' USING
PigStorage() AS (exchange:chararray, stock_symbol:chararray,
date:chararray, stock_price_open:float, stock_price_high:float,
stock_price_low:float, stock_price_close:float, stock_volume:int,
stock_price_adj_close:float);
When I try to do the Dump, it throws the following error:
Pig Stack Trace
ERROR 1066: Unable to open iterator for alias NYSE
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias NYSE
at org.apache.pig.PigServer.openIterator(PigServer.java:857)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:682)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:303)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:490)
at org.apache.pig.Main.main(Main.java:111)
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
at org.apache.pig.PigServer.openIterator(PigServer.java:849)
Any idea what's causing the issue?
Are you running a Pig 0.12.0 or earlier jar against Hadoop 2.2? If that is the case, then:
I managed to get around this error by recompiling the Pig jar from source. Here is a summary of the steps involved on a Debian-type box:
download pig-0.12.0.tar.gz
unpack the archive and set permissions
then inside the unpacked directory compile the source with 'ant clean jar -Dhadoopversion=23'
then you need to get the jar onto your classpath in Maven, for example, in the same directory:
mvn install:install-file -Dfile=pig.jar -DgroupId={set a groupId} -DartifactId={set a artifactId} -Dversion=1.0 -Dpackaging=jar
or, if you are in Eclipse, add the jar as an external library/dependency.
I was getting your exact trace trying to run Pig 0.12 on Hadoop 2.2.0, and the above steps worked for me.
UPDATE
I posted my issue on the Pig JIRA and they responded. They already have a Pig jar compiled for Hadoop 2 (pig-h2.jar) here: http://search.maven.org/#artifactdetails|org.apache.pig|pig|0.12.0|jar
The Maven coordinates for this jar are:
<dependency>
    <groupId>org.apache.pig</groupId>
    <artifactId>pig</artifactId>
    <classifier>h2</classifier>
    <version>0.12.0</version>
    <scope>provided</scope>
</dependency>
This could be due to a change in Pig starting with version 0.12. The specific change is that Pig used to be permissive: it would automatically ignore the first line in the data file, or interpret that line as column names. In the newer versions of Pig this permissiveness was removed. The workaround is to delete the column-name row from the input file, and this should solve the problem.
Kapil
I also met this problem, and then I found this link: http://www.fanli7.net/a/JAVAbiancheng/ANT/20140325/441264.html
I just replaced the Pig version 0.12.0 with 0.13.0 and the problem was solved. (Here, my Hadoop version is 2.3.0.)
You can place a breakpoint in the PigServer class, in the store() method:
for (JobStats js : stats.getJobGraph()) {
    if (js.getException() != null) {
        ex = js.getException();
    }
}
Inside the js object there is a field errorMessage that may contain a description of the problem.

NullPointerException org.gradle.wrapper.BootstrapMainStarter.findLauncherJar(BootstrapMainStarter.java:37)

Got the following stacktrace when launching Gradle 1.1; does anyone know how to resolve it?
Exception in thread "main" java.lang.NullPointerException
at org.gradle.wrapper.BootstrapMainStarter.findLauncherJar(BootstrapMainStarter.java:37)
at org.gradle.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:28)
at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:130)
at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:47)
I think the automatic unzip of the dists/gradle-1.1-bin/13d7lnhcrghv2i5e54el41jpgr/gradle-1.1-bin.zip might be failing. I checked permissions and that I have access to that directory.
If I unzip manually, then I get the following error:
Exception in thread "main" java.lang.RuntimeException: Gradle distribution 'http://services.gradle.org/distributions/gradle-1.1-bin.zip' contains too many directories. Expected to find exactly 1 directory.
at org.gradle.wrapper.Install.createDist(Install.java:73)
at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:129)
at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:47)
I did a Google search for the Gradle NullPointerException, and it mentioned that JAVA_HOME needs to be set for compiling, but I've already checked that it is set correctly and I have been able to compile things with Ant in that environment.
I was getting exactly the same error, and I changed the version of Gradle that I was using. Inside my gradle-wrapper.properties I changed the version from 2.4 to 2.2.1 and the error was gone.
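For reference, the wrapper's Gradle version is controlled by the distributionUrl entry in gradle/wrapper/gradle-wrapper.properties; a minimal sketch of that change (the exact URL in your project may differ) is:
distributionUrl=https\://services.gradle.org/distributions/gradle-2.2.1-bin.zip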