I installed Cloudera CDH3 on my machine. Then I tried to use the Eclipse plugin (JIRA MAPREDUCE-1280) to run some MR tasks. However, the plugin does not seem to work with CDH3 for some reason; it cannot connect to the DFS.
Has anyone gotten the plugin working?
CDH3 is incompatible with Apache Hadoop 0.20.2.
The Eclipse plugin from JIRA MAPREDUCE-1280 is built against Apache Hadoop, so it is not compatible with CDH3.
Since Zeppelin 0.7, Zeppelin supports Helium plugins/packages through the Helium Framework. However, I am not able to view any of the plugins on the Helium page (localhost:8080/#/helium). As per this JIRA, I placed the sample helium.json (available on S3) under /local-repo/helium-registry-cache. However, after that I got an NPE while restarting the Apache Zeppelin service.
I have tried Zeppelin 0.7 as well as Zeppelin 0.8.0 snapshot versions. In particular, I want to use the Helium map package (Helium-Map) in a Zeppelin note.
Can someone point me to a guide or documentation with detailed steps for using a Helium package in Zeppelin? Any help would be greatly appreciated!
Zeppelin 0.7.x
Zeppelin 0.7.x doesn't support the online registry. In other words, Zeppelin doesn't use helium.json, so you need to install each package yourself:
1. Clone the Helium package that you want to install.
2. Modify the artifact value in helium-xxx.json to the absolute path on your local machine (a sketch follows after this list).
3. Copy helium-xxx.json into the $ZEPPELIN_HOME/helium directory (create it if it doesn't exist yet).
4. Restart Zeppelin, go to the localhost:8080/#/helium page, and install the package.
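For reference, a minimal helium-xxx.json might look roughly like the sketch below; the name, description, and artifact path are placeholders, and the exact fields depend on the package you cloned:
{
  "type": "VISUALIZATION",
  "name": "helium-map",
  "description": "Map visualization for Zeppelin (example entry)",
  "artifact": "/home/user/helium-map",
  "license": "Apache-2.0"
}
The artifact value is what step 2 changes: it must point at the absolute path of your local clone so Zeppelin can load the package without an online registry.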
Zeppelin 0.8.0-SNAPSHOT
Zeppelin 0.8.0-SNAPSHOT supports the online registry, so you can install packages without any preparation.
However, the NPE you ran into was fixed by https://github.com/apache/zeppelin/pull/2380.
So please git pull origin master and rebuild it :)
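A rough sketch of the rebuild, assuming you already have a source checkout of Zeppelin; build profiles may differ for your environment:
# update to the latest master, which includes the NPE fix
git pull origin master
# rebuild the distribution (add profiles as needed for your setup)
mvn clean package -DskipTests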
FYI, Zeppelin now provides proxy functionality for Helium. Refer to https://github.com/apache/zeppelin/pull/2363.
I work with Cloudera Manager CDH 5.7.1, which supports only Hive 1.1.0.
NiFi 1.0.0-BETA uses Hive 1.2.1.
When I try to use the SelectHiveQL processor, I get the following error: Required field 'client_protocol' is unset!, which means there's a version mismatch between the Hive client and server.
Any suggestions to solve this problem?
I thought about building NiFi with the hive-jdbc dependency at version 1.1.0 instead of the default 1.2.1, but I hope there's a better solution.
Since NiFi is an Apache project, it builds with Apache JARs (such as Hive and Hadoop). However, there are vendor-specific profiles and build properties you can use to build NiFi for a particular Hadoop distribution.
For example you could try the following to build a NiFi distro for CDH 5.7.1:
mvn clean install -DskipTests -Pcloudera -Dhadoop.version=2.6.0-cdh5.7.1 -Dhive.version=1.1.0-cdh5.7.1 -Dhbase.version=1.2.0-cdh5.7.1
The Hive processors use Hadoop libraries provided by the NiFi Hadoop Libraries NAR, and other NARs (like the Hadoop/HDFS processors) use that same NAR, so the best approach is to build the whole distribution. Otherwise you can try to replace just the Hadoop/Hive/HBase-related NARs in an existing installation and see if that works (a sketch follows below).
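If you go the NAR-swap route, a rough sketch is shown below; the NAR file names and paths are assumptions and depend on your build output and NiFi install location, so check them against your own tree:
# stop NiFi before replacing any NARs
$NIFI_HOME/bin/nifi.sh stop
# copy the rebuilt NARs from the build output into the existing install (paths/names assumed)
cp nifi-nar-bundles/nifi-hive-bundle/nifi-hive-nar/target/nifi-hive-nar-*.nar $NIFI_HOME/lib/
cp nifi-nar-bundles/nifi-hadoop-libraries-bundle/nifi-hadoop-libraries-nar/target/nifi-hadoop-libraries-nar-*.nar $NIFI_HOME/lib/
# start NiFi again and watch the logs for NAR loading errors
$NIFI_HOME/bin/nifi.sh start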
Because NiFi expects the newer version of Hive, it is necessary to remove the unsupported newer features (such as Hive Streaming and ORC support), support the older version of Thrift, and build against the Cloudera-specific libraries.
I have created a branch of the current NiFi 1.1.x release with the necessary changes to get the PutHiveQL and SelectHiveQL processors working, which you can build as follows:
git clone https://github.com/Chaffelson/nifi.git
cd nifi
git checkout nifi-1.1.x-cdhHiveBundle
mvn -T 2.0C clean install -Pcloudera -Dhive.version=1.1.0-cdh5.10.0 -Dhive.hadoop.version=2.6.0-cdh5.10.0 -Dhadoop.version=2.6.0-cdh5.10.0 -DskipTests
I have posted more complete coverage of this on the Hortonworks Community forum: https://community.hortonworks.com/articles/93771/connecting-nifi-to-cdh-hive.html
I am trying to install the Spring IDE plugin (3.4.0) using the Eclipse Marketplace for Eclipse Kepler (4.3). It gives the error 'Will not be installed (Spring IDE Roo Support)'.
I also checked the compatibility of Spring IDE 3.4.0 with Eclipse Kepler 4.3, and they are compatible. Any ideas on how to complete the installation?
Thank you
Prachi
It seems there are some known issues with versions 3.4.0 and 4.3.0 (https://marketplace.eclipse.org/content/spring-tool-suite-sts-eclipse-kepler-43).
You could try the latest releases from http://spring.io/tools/eclipse, or install STS from http://spring.io/tools/sts (IMHO, STS is the best option).
I had the same problem. It seemed to hang at the provisioning step (83%). I installed STS and had the same problem... but I waited a little longer this time and it was successful. This gave me everything I was expecting in the IDE plugin.
I was just trying the instructions from https://github.com/spring-projects/sts4/wiki/Installation on Neon 2. I have had to stop and restart it over and over.
I am using:
Apache Hadoop 1.0.4
JDK 1.7
CentOS 5.6
When I install a Hadoop cluster with 2 nodes, it runs fine. I am able to see the status through http://<namenode-host>:50070 and http://<jobtracker-host>:50030. However, when I build the Hadoop native libraries from source, the cluster continues to work fine, but the monitoring web UI (Jetty) stops working; the above links no longer respond.
I compile the native libraries after the Hadoop installation (which is basically just tar -zxvf). The namenode is already formatted before the compilation, but the cluster is not brought up until the native libraries are compiled. Any ideas on how to troubleshoot this? I am using the simple ant -Dcompile.native=true compile-native command to build the native libraries.
I just followed the Hadoop (0.20.2) installation tutorial and did the setup. I can run a MapReduce program on the cluster through Eclipse. Now my problem is how to connect to the Hadoop cluster from my local system. The local system is Windows 7 and I have installed the Eclipse plugin for Hadoop. When I try to connect to Hadoop from my local system (which is on the same subnet as the Hadoop system), I get a connection timed out error.
In the Hadoop configuration files I have specified actual IP addresses.
Not sure which step I have missed?
I recently read that the Eclipse plugin won't work at all. But you can simply connect to your cluster with the configuration keys below:
mapred.job.tracker
fs.default.name
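For illustration, a minimal Java sketch of connecting with those two keys might look like the following; the host names and ports are placeholders, not values from your cluster:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteClusterCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode-host:9000");  // placeholder namenode address
        conf.set("mapred.job.tracker", "jobtracker-host:9001");    // placeholder jobtracker address

        // Listing the HDFS root is a quick way to verify the connection works
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}
If this times out from the Windows machine, it usually points at a firewall or at the daemons binding to localhost rather than the actual IP addresses, so check what the namenode and jobtracker are listening on.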
EDIT: here is a working version on the Apache JIRA: Eclipse Plugin does not work with Eclipse Ganymede (3.4)