I want to start an Ignite node with a configuration file named example-igfs.xml. I have altered this configuration to use IGFS as a caching layer for HDFS, but when I execute the command below to start the Ignite node, I encounter this error:
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:361)
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:374)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:456)
at org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.safeCreateConfiguration(HadoopUtils.java:334)
at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.start(HadoopBasicFileSystemFactoryDelegate.java:129)
A java.lang.NoClassDefFoundError usually means that Ignite cannot find the required libraries (JARs) on its classpath.
In your case, you have to move the JARs into the $IGNITE_HOME\libs folder.
Create a subfolder in the libs directory, say hadoop-libs, and move all the required JARs into it.
I am not a Hadoop expert, but it seems that you are missing the Hadoop client and its Google Guava dependency.
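As a rough sketch, assuming a standard Hadoop layout under $HADOOP_HOME (both paths are assumptions; adjust them to your installation):

mkdir -p $IGNITE_HOME/libs/hadoop-libs
# Guava provides the missing com/google/common/base/Preconditions class
cp $HADOOP_HOME/share/hadoop/common/lib/guava-*.jar $IGNITE_HOME/libs/hadoop-libs/
# Hadoop common/client JARs needed by the IGFS integration
cp $HADOOP_HOME/share/hadoop/common/*.jar $IGNITE_HOME/libs/hadoop-libs/
cp $HADOOP_HOME/share/hadoop/common/lib/*.jar $IGNITE_HOME/libs/hadoop-libs/

The startup script builds the classpath from the subfolders of libs, so after restarting the node the classes should be found.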
I have a problem with checkpointing via s3p in Flink on EMR.
When creating the EMR cluster, I ticked Presto and added the JAR file as instructed at https://ci.apache.org/projects/flink/flink-docs-stable/ops/plugins.html.
But when checkpointing via s3p in Flink, it still reports:
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3p'. The scheme is directly supported by Flink through the following plugin: flink-s3-fs-presto. Please ensure that each plugin resides within its own subfolder within the plugins directory. See https://ci.apache.org/projects/flink/flink-docs-stable/ops/plugins.html for more information. If you want to use a Hadoop file system for that scheme, please add the scheme to the configuration fs.allowed-fallback-filesystems. For a full list of supported file systems, please see https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
Can you help me get s3p checkpointing working in Flink on EMR?
Thanks.
Presto in EMR has nothing to do with the flink-s3-fs-presto plugin in Flink. You can leave it unticked in the future (ticking it doesn't hurt either, beyond bloating the cluster).
The most likely reason is that you forgot to create a subfolder in the plugins folder. Could you give me an ls of your Flink distribution?
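For reference, a minimal sketch of the expected layout, assuming the Flink distribution lives at /usr/lib/flink on the EMR master (the path and the exact JAR version are assumptions):

cd /usr/lib/flink
mkdir -p plugins/s3-fs-presto
cp opt/flink-s3-fs-presto-*.jar plugins/s3-fs-presto/
ls plugins/s3-fs-presto/
# should list exactly one flink-s3-fs-presto-<version>.jar

The important part is that the JAR sits in its own subfolder under plugins/, not directly in plugins/ or in lib/.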
I am trying to start an Ignite cluster from the command line on Windows.
This is what I did:
Downloaded the Ignite binary distribution and kept it on the C drive.
Set the environment variable IGNITE_HOME to that folder location.
On the command line I opened the directory:
C:\apache-ignite-fabric-2.2.0-bin\bin
Then, from that directory:
C:\apache-ignite-fabric-2.2.0-bin\bin>sh ignite.sh examples/config/example-ignite.xml
I am getting the following error:
Failed to create Ignite component (consider adding ignite-spring module to classpath) [component=SPRING, cls=org.apache.ignite.internal.processors.spring.IgniteSpringProcessorImpl]
What can be the reason for this error?
I found the solution:
It needs to be run with the .bat file, not the .sh file:
C:\apache-ignite-fabric-2.2.0-bin\bin>ignite.bat examples/config/example-ignite.xml
If you're on Windows I imagine you should try ignite.bat?
ignite.sh might have problems with the classpath when run on Windows, which would explain it.
I did all the setup for Oozie 4.3.0 on an Apache Hadoop single-node cluster. When I try running any of the standard example workflow.xml files that come with Oozie, it throws the error below.
WARN ActionStartXCommand:523 - SERVER[data01.teg.io] USER[hadoop] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-161215143751620-oozie-hado-W] ACTION[0000000-161215143751620-oozie-hado-W#mr-node] Error starting action [mr-node]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.]
I looked at the parameter "mapreduce.framework.name" and it is set to yarn in every config file. I checked that the sharelib was created properly and I can see it when querying with the shareliblist command, so I don't see where exactly the problem is. I have tried every solution that came up on Google and could not solve it, even after struggling with it for two days.
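For reference, the entry I am checking is the standard one in mapred-site.xml:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>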
I can start and stop the Oozie daemon without any problem.
Any insights are greatly appreciated.
I figured out the solution. Unlike prior Oozie versions before 4.x.x, 4.3.0 does not generate a hadoop-libs.jar file when we run the build command.
In the beginning, I copied JAR files only from my Hadoop's
/srv/hadoop-2.7.3/share/hadoop/common into Oozie's libext folder. After I copied the JAR files from all of the paths below into Oozie's libext folder (see the sketch after the list), I was able to successfully set up Oozie.
/srv/hadoop-2.7.3/share/hadoop/common/*.jar
/srv/hadoop-2.7.3/share/hadoop/common/lib/*.jar
/srv/hadoop-2.7.3/share/hadoop/hdfs/*.jar
/srv/hadoop-2.7.3/share/hadoop/hdfs/lib/*.jar
/srv/hadoop-2.7.3/share/hadoop/mapreduce/*.jar
/srv/hadoop-2.7.3/share/hadoop/mapreduce/lib/*.jar
/srv/hadoop-2.7.3/share/hadoop/yarn/*.jar
/srv/hadoop-2.7.3/share/hadoop/yarn/lib/*.jar
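A minimal sketch of that copy step, assuming Oozie is installed at /srv/oozie-4.3.0 (the Oozie path is an assumption; the Hadoop paths are the ones listed above):

OOZIE_HOME=/srv/oozie-4.3.0
HADOOP_HOME=/srv/hadoop-2.7.3
mkdir -p $OOZIE_HOME/libext
for d in common common/lib hdfs hdfs/lib mapreduce mapreduce/lib yarn yarn/lib; do
  cp $HADOOP_HOME/share/hadoop/$d/*.jar $OOZIE_HOME/libext/
done
# rebuild the Oozie WAR so the newly added JARs are bundled
$OOZIE_HOME/bin/oozie-setup.sh prepare-war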
I am relatively new to Apache Oozie and did an installation on Ubuntu 14.04 with Hadoop 2.6.0 and JDK 1.8. I was able to install Oozie, and the web console is visible on port 11000 of my server.
Now, after copying the examples bundled with Oozie and trying to run them, I am running into an error which says no sharelib exists.
I installed the sharelib as below:
bin/oozie-setup.sh sharelib create -fs hdfs://localhost:54310
(my NameNode is running on localhost:54310 and the JobTracker on localhost:54311)
hadoop fs -ls /user/hduser/share/lib shows the shared library created as per the oozie-site.xml file. However, when I check the shared library using the command
oozie admin -oozie http://localhost:11000/oozie -shareliblist
the list is blank, and jobs are also failing for the same reason.
Any clues on how I should approach this problem?
Thanks.
The sharelib create command looks fine.
If you haven't done so already, copy the core-site.xml from your Hadoop installation folder into $OOZIE_HOME/conf/hadoop-conf/.
There might already be a "placeholder" core-site.xml in the hadoop-conf folder; delete or rename that one. Oozie doesn't get its Hadoop configuration directly from your Hadoop install (like Hive does, for example) but from the core-site.xml you place in that hadoop-conf folder.
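A sketch of those two steps, assuming the Hadoop config directory is /usr/local/hadoop/etc/hadoop (both paths are assumptions; adjust them to your install):

mv $OOZIE_HOME/conf/hadoop-conf/core-site.xml $OOZIE_HOME/conf/hadoop-conf/core-site.xml.placeholder
cp /usr/local/hadoop/etc/hadoop/core-site.xml $OOZIE_HOME/conf/hadoop-conf/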
Okay, I got a solution for this.
When I was creating the sharelib directory, it was being created on HDFS, but a local path was being referred to while running the job. So I extracted the oozie-sharelib tar.gz file into my local /user/hduser/share/lib directory, and it is working now.
But I did not find the reason, so it is still an open question.
I have encountered the same issue, and it turned out that
Oozie was not able to communicate with HDFS because it could not find the location of core-site.xml, or any other Hadoop configuration, which has to be declared inside oozie-site.xml.
The corresponding property in oozie-site.xml is oozie.service.HadoopAccessorService.hadoop.configurations.
This property was defined wrongly in my case.
I changed it to point to where my Hadoop configuration XMLs are present, and then it started communicating with HDFS and was able to locate the sharelib on HDFS.
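For illustration, a sketch of that oozie-site.xml property; the /etc/hadoop/conf value is an assumption and should point at the directory that actually holds your core-site.xml and the other Hadoop config files:

<property>
  <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
  <!-- *= maps all clusters to this local Hadoop configuration directory -->
  <value>*=/etc/hadoop/conf</value>
</property>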
I have a Maven project which generates a 413.06 KB JAR file, and I have to deploy it to an Apache Archiva-based managed repository. I have tried to deploy different versions; it created the required layout and structure and uploaded some files, and it even uploaded that JAR at around 200 KB. The size of the uploaded JAR changes every time, but it always fails to upload the full 413.06 KB JAR file.
Information:
I am running standalone Archiva.
I have given the guest account Global Repository Manager and "Repository Manager - MYREPO" rights.
I have also tried a separate account in Archiva with "Repository Manager - MYREPO" rights and configured it in Maven's settings.xml file to set a custom timeout.
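The server entry in my settings.xml looks roughly like this; the id, credentials, and timeout value are placeholders, and the timeout parameter follows the older Wagon-style per-server configuration:

<server>
  <id>myrepo</id>
  <username>archiva-user</username>
  <password>********</password>
  <configuration>
    <!-- Wagon transfer timeout in milliseconds -->
    <timeout>120000</timeout>
  </configuration>
</server>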
I am getting the following error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy
(default-deploy) on project SharedshelfRepository: Error deploying artifact: Transfer error:
The server did not respond within the configured timeout. -> [Help 1]
That might be a maven-deploy-plugin issue; the resources plugin itself needs several dependencies. Try manually jar and p
What version of Maven are you using? You might try 3.0.4, as it has a different HTTP library. I'm also not sure if there's more context for what was happening when it timed out (it seems more request-oriented than deploy-oriented, and deploy does request some metadata).
I can't see that you'd need to alter the timeout, as none of the defaults should apply to such a small file. How long does it take to fail?