Accessing Hadoop clusters from Eclipse - eclipse-plugin

I just followed the Hadoop (0.20.2) installation tutorial and completed the setup. I can run MapReduce programs on the cluster through Eclipse. My problem now is how to connect to the Hadoop cluster from my local system, a Windows 7 machine on which I have installed the Eclipse plugin for Hadoop. My local system and the Hadoop cluster are on the same subnet, but I get a "connection timed out" error when connecting to the Hadoop server.
In the Hadoop configuration files I have given the actual IP addresses.
Which step have I missed?

I recently read that the Eclipse plugin won't work at all. But you can simply connect to your cluster with the configuration keys:
mapred.job.tracker
fs.default.name
EDIT: a working version is available in the Apache JIRA issue "Eclipse Plugin does not work with Eclipse Ganymede (3.4)".
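
Independent of the plugin, it is worth checking that the two endpoints those keys point at are actually reachable from the Windows machine, since that is usually what a "connection timed out" error means. A minimal sketch, assuming a hypothetical master at 192.168.1.10 with the common default ports 9000 (NameNode) and 9001 (JobTracker); both the address and the ports must match what is configured in core-site.xml and mapred-site.xml:

# From the Windows machine: if either of these times out, the cause is
# a firewall or a daemon bound to 127.0.0.1, not the Eclipse plugin.
telnet 192.168.1.10 9000
telnet 192.168.1.10 9001

# From any machine with a Hadoop 0.20 client, the same two keys can be
# passed as generic options to test the values outside of Eclipse:
hadoop fs -D fs.default.name=hdfs://192.168.1.10:9000 -ls /
hadoop job -D mapred.job.tracker=192.168.1.10:9001 -list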

Related

Weblogic Adapter in Eclipse Neon missing

I have successfully installed WebLogic 12.2.1 on my Mac OS Sierra 10.12.6 using the following tutorial.
I am now trying to configure WebLogic in my Eclipse Neon, but I cannot see any of the server adapters that others see in various tutorials.
After some reading online I installed the OEPE 12.2.1.6 software and restarted Eclipse, but it was of no help.
List of server adapters I see in tutorials:
List I am seeing after installing OEPE:
I need those adapters to add WebLogic to my Eclipse.
I had the same question some days ago after checking some online tutorials.
In later versions of OEPE, you only see a "unified" Oracle WebLogic Server adapter.
Once you have selected this option, you will be able to choose the desired server runtime environment from a list of the runtimes configured on your machine (separate download).
If you see none, then you will have to configure some before trying to add a weblogic server.

Does the Cloudera distribution of Hadoop use the control scripts?

Are the control scripts (start-dfs.sh and start-mapred.sh) used by CDH to start daemons on the fully distributed cluster?
I downloaded and installed CDH5 but could not find the control scripts in the installation, and I am wondering how CDH starts the daemons on the slave nodes.
Or, since the daemons are installed as services, do they simply start at system start-up, so that CDH has no need for control scripts, unlike Apache Hadoop?
Not as such, no. If I recall correctly, Cloudera Manager invokes the daemons using Supervisor (http://supervisord.org/), so you manage the services in CM. CM itself runs agent processes as a service on each node, and you can find its start script in /etc/init.d. There is no need for you to install, start, or stop anything yourself: you install, deploy configuration, and control and monitor services in CM.
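
You can see this layering for yourself on a CDH node managed by CM. A small sketch; the service name below is the one shipped with CM's packages, but exact paths can vary between releases:

# The only conventional init script belongs to the CM agent itself:
ls /etc/init.d/cloudera-scm-agent
sudo service cloudera-scm-agent status

# The Hadoop daemons run as children of the agent's supervisord
# process, not of start-dfs.sh / start-mapred.sh:
ps -ef | grep supervisord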

How to connect JProfiler to a Virgo Server running on a remote Linux machine

Please help me connect JProfiler from a Windows machine to a remote Virgo Jetty Server that is running on a Linux server.
Below are the steps I am following:
From "Choose Integration Wizard" I select Eclipse Virgo (Next).
Then I select the option "On remote computer" with the Linux platform (Next).
Then I select the JVM vendor, version, etc. (Next).
I select the option "Wait for a connection from JProfiler GUI" (Next).
I provide the remote hostname:port (Next).
I am stuck at specifying the remote installation directory.
We did not install JProfiler in our remote Linux environment, but we have the server running there. I have seen an option saying that if JProfiler is not installed, you can create an archive that contains the profiling agent and extract it in the above directory. It asks for the folder in which to create the archive.
Can you please explain what exactly this means and what I need to do to create the archive? The only thing I have done so far is install a JProfiler evaluation version on my local machine and profile a local server.
Please help, and let me know if any additional information is required. Thanks in advance.
If you select the option to create an archive in the integration wizard, JProfiler will create a .tar.gz file that contains the libraries for the profiling agent. You transfer that archive to the Linux server and extract it somewhere, e.g. to /home/myname/jprofiler, by calling:
mkdir /home/myname/jprofiler
cd /home/myname/jprofiler
tar xzvf /path/to/jprofiler_agent_linux-x64.tar.gz
In the integration wizard, specify /home/myname/jprofiler as the remote installation directory.
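
To make the last step concrete: after the extraction, the wizard prints a VM parameter that has to be added to the Virgo server's JVM options. A sketch of what it looks like, assuming the /home/myname/jprofiler directory from above, a 64-bit Linux JVM, and JProfiler's default agent port 8849 (the wizard shows the exact library path for your platform):

# VM parameter to add to the server's start script (64-bit Linux):
-agentpath:/home/myname/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849

With the "Wait for a connection from JProfiler GUI" option, the server JVM then blocks at startup until you attach from the JProfiler GUI on the Windows machine, using the remote host name and port 8849.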

How to run OpenLaszlo 4.9.0 in Ubuntu 12.04?

I am using Ubuntu 12.04 and I want to run OpenLaszlo 4.9.0 on my system. I have read many tutorials, e.g.
http://wiki.openlaszlo.org/Installing_OpenLaszlo#Installing_the_DevKit_on_Unix.2FLinux
which say to put the server in JAVA_HOME, but I do not know where JAVA_HOME is in Ubuntu 12.04.
I also have OpenLaszlo, but I do not know how to start the OpenLaszlo server, where to put it, or what else is required for it. Please tell me. I have a Red5 server, and I have installed java-7-openjdk.
Thanks in advance.
JAVA_HOME is an environment variable that stores the path to the Java runtime environment (JRE). You can, of course, have several JVMs installed on your system; JAVA_HOME defines the default one.
Setting this variable after installing the Ubuntu package from the repository is a little tricky. It is discussed, for example, here:
Jenkins, specifying JAVA_HOME,
What is the correct target for the JAVA_HOME environment variable for a Linux OpenJDK Debian-based distribution?
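
For the java-7-openjdk package mentioned in the question, a minimal sketch (the path below is the usual one on 64-bit Ubuntu 12.04; on 32-bit systems it is java-7-openjdk-i386 instead):

# Set JAVA_HOME for the current shell:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

# Make it permanent for your user:
echo 'export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64' >> ~/.bashrc

# Verify:
echo $JAVA_HOME
$JAVA_HOME/bin/java -version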
OpenLaszlo is a web application that should be run under an application server (usually Apache Tomcat or its derivatives, such as IBM WebSphere Application Server Community Edition).
It is available on the official site as a bundle that includes Tomcat, and also as a .war file (a web application archive) that should be deployed under your application server.
In the first case you can extract the archive wherever you want (read carefully about file permissions). But at the moment the server starts, it needs the Java system files, so JAVA_HOME should already be defined.
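
As a sketch of the first case, with JAVA_HOME set as above (the archive and directory names below are illustrative; the exact bundle layout can differ between OpenLaszlo releases):

# Extract the Tomcat-bundled OpenLaszlo server, e.g. into your home:
tar xzvf openlaszlo-4.9.0-unix.tar.gz -C ~

# Start the bundled Tomcat with its usual scripts:
cd ~/lps-4.9.0/Server/tomcat-5.0.24/bin
./startup.sh

Tomcat listens on port 8080 by default, so once it is up the OpenLaszlo pages should be reachable under http://localhost:8080/.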

Does the Eclipse plugin for Hadoop work with CDH3?

I installed Cloudera CDH3 on my machine. Then I tried to use the Eclipse plugin (JIRA MAPREDUCE-1280) to do some MR tasks. However, the plugin does not seem to work with CDH3 for some reason; it cannot connect to the DFS.
Has anyone gotten the plugin working?
CDH3 is incompatible with Apache Hadoop 0.20.2.
The Eclipse plugin from JIRA MAPREDUCE-1280 is built against Apache Hadoop, so it is not compatible with CDH3.