How to monitor a Hadoop cluster using Ambari on CentOS 7

I have a small Hadoop cluster, i.e. one master and three slave nodes, and I need to monitor it. I have found that Ambari can be used for this. CentOS 7 is installed on all machines. Please provide complete details on how I can do that. I have also found that Ambari is intended for setting up a new cluster, i.e. you have to install a new cluster with it. Does it not work with an already running cluster?

At the moment Ambari does not support CentOS 7, so that's not going to work.
However, Ambari does not perform cluster monitoring on its own; it uses Nagios for that purpose. Nagios is an independent software project that you can set up on its own, though doing so is rather painful.

ambari-server for Ambari 2.2+ can be installed and works well on CentOS 7.
You have to install ambari-server on one of the hosts (the master node) and can then use the web UI at hostname:8080 to install the Ambari agents on the other hosts. Alternatively, the Ambari agents can be installed manually on the other hosts and configured to communicate with the ambari-server.
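A minimal sketch of that flow on CentOS 7, assuming Ambari 2.2.x from the public Hortonworks yum repository; the repository URL, version, and the <master-host> placeholder are illustrative, so adjust them to the release you actually use:

    # On the master node: register the Ambari repo, then install and set up the server
    wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo \
         -O /etc/yum.repos.d/ambari.repo
    yum install -y ambari-server
    ambari-server setup          # interactive; defaults give you a JDK and embedded PostgreSQL
    ambari-server start          # web UI becomes available at http://<master-host>:8080

    # On each slave node: manual agent install, instead of letting the web UI push agents
    wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo \
         -O /etc/yum.repos.d/ambari.repo
    yum install -y ambari-agent
    # point the agent at the server host in its config, then start it
    sed -i 's/^hostname=.*/hostname=<master-host>/' /etc/ambari-agent/conf/ambari-agent.ini
    ambari-agent start

Once the agents report in, the registered hosts show up in the cluster install wizard at port 8080.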

Related

Redis "This instance has cluster support disabled" error?

I'm using WSL. I installed Redis through apt-get, built it from source, and installed it from a PPA. In all 3 cases, I get "This instance has cluster support disabled". I have cluster-enabled yes in the config file. I restarted everything. How do I enable cluster support?
I think that, in addition to setting cluster-enabled to yes, you have to actually create a cluster. See the Redis cluster tutorial, specifically the section "Creating the cluster".
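A rough sketch of what creating the cluster involves, assuming Redis 5+ (where redis-cli absorbed the old redis-trib.rb tooling) and six local instances on ports 7000-7005; the ports and file names are placeholders taken from the tutorial layout, not something your setup must match:

    # Each instance's redis.conf must at least contain:
    #   port 7000                        # unique per instance
    #   cluster-enabled yes
    #   cluster-config-file nodes-7000.conf
    #   cluster-node-timeout 5000

    # Start the six instances (one per config), then form the cluster:
    redis-cli --cluster create \
        127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
        127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
        --cluster-replicas 1

    # Verify that the cluster actually formed:
    redis-cli -p 7000 cluster info       # should report cluster_state:ok

Until the create step runs, a single instance with cluster-enabled yes will still refuse cluster commands, which matches the error in the question.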

Redis HA using Docker & Kubernetes (1 Master 2 slaves) Ubuntu 16.04

I'm trying to find a viable solution using Redis in a master/slave (at least 2 slaves) configuration. I have Docker containers with the Ubuntu 16.04 OS and the Redis server/Sentinel installed (latest stable).
I'm not looking for a clustered setup. I would like to have the master Redis DB on one pod and each slave on its own pod (all three will be on separate VMs or physical boxes). I want to use YAML/Kubernetes nodeSelector to assign where they can spin up.
From my research, it appears I also want to run a Redis Sentinel service on each pod. The key here is that I want to specify where each master/slave pod can run. I've investigated https://github.com/kubernetes/kubernetes/tree/master/examples/redis, but that does not give me the control I want. Maybe Redis 4.x helps, but I can't find any examples. Any pointers would be appreciated; I've searched all over this site for an answer without any luck.
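The question is left open here, but the nodeSelector piece it asks about might look roughly like the sketch below; the node name, label, and image tag are placeholders, and a real HA setup would also need the slave pods and the Sentinel containers:

    # Label the node that should host the Redis master (node name is a placeholder)
    kubectl label nodes worker-1 redis-role=master

    # Pin the master pod to that node with nodeSelector
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      nodeSelector:
        redis-role: master
      containers:
      - name: redis
        image: redis:4.0
        ports:
        - containerPort: 6379
    EOF

The slave pods would use the same pattern with their own labels (e.g. redis-role=slave-1), which gives the per-host placement the question asks for without a clustered setup.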

Is it possible to install Apache Ambari on top of an existing cluster

We have an existing Hadoop cluster that is not managed by Ambari. Is it possible to install Apache Ambari on top of an existing Hadoop cluster?
No, Ambari must provision the cluster it's monitoring.
Ambari is designed around a Stack concept where each stack consists of several services. A stack definition is what allows Ambari to install, manage and monitor the services in the cluster.

Change install directory for Apache Ambari nodes/hosts

I have 3 virtual machines that I want to add to an Ambari cluster.
I'm going through the setup wizard to do this.
My VMs have less than 2 GB of space on the drive mounted at /.
Ambari complains about this.
Is there any way to tell Ambari that I want it to use a different location?
I would like to tell Ambari to use the /data location for each host.
Any help will be greatly appreciated.
Assuming you're attempting to install an HDP stack: no, you will need more space on the root partition. All of the services (Spark, HBase, HDFS, YARN, Oozie, etc.) are installed into /usr/hdp/<version>. This is not overridable, since each of these services is installed from an RPM, and the location those RPMs install into is hard-coded throughout several of the service descriptors that help provision the cluster.
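The path itself cannot be overridden, as the answer says; if the constraint is only free space on /, one workaround sometimes used is to back /usr/hdp with a bind mount from the larger /data volume before installing. Whether this satisfies Ambari's host checks is an assumption you would need to verify for your version:

    # Check how much space the root partition actually has
    df -h /

    # Back the hard-coded install prefix with space from /data (illustrative workaround)
    mkdir -p /data/usr-hdp /usr/hdp
    mount --bind /data/usr-hdp /usr/hdp
    echo '/data/usr-hdp /usr/hdp none bind 0 0' >> /etc/fstab   # persist across reboots

The RPMs still think they are writing to /usr/hdp/<version>, but the bytes land on the /data volume.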

Hadoop Cluster deployment using Apache Ambari

I have listed a few queries related to Ambari, as follows:
Can I configure more than one Hadoop cluster via the Ambari UI?
(Note: I am using Ambari 1.6.1 for Hadoop cluster deployment, and I am aware that this can be done via the Ambari API, but I am not able to find it in the Ambari portal.)
We can check the status of services on each node with the "jps" command if we have configured a Hadoop cluster without Ambari.
Is there any similar way to check from the back end whether the Hadoop cluster setup was successful?
(Note: I can see that the services are showing as up on the Ambari portal.)
Help appreciated!
Please let me know if any additional information is required.
Thanks
The Ambari UI is served by the Ambari server for a specific cluster. To configure another cluster, you need to point your browser to the URL of that other cluster's Ambari server. So you can't see the configuration of multiple clusters on the same web page, but you can set browser bookmarks to jump from configuration to configuration.
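For the second query, a few back-end checks work regardless of whether Ambari did the provisioning; the hostnames, cluster name, and admin:admin credentials below are placeholders (admin:admin is only the default login):

    # On any node: list the running Hadoop JVM daemons, same as on a hand-rolled cluster
    jps

    # Ask HDFS itself whether the datanodes registered correctly
    sudo -u hdfs hdfs dfsadmin -report

    # Query the Ambari REST API for service state instead of reading the portal
    curl -u admin:admin http://<ambari-host>:8080/api/v1/clusters/<cluster-name>/services/HDFS

If jps shows the expected daemons (NameNode, DataNode, ResourceManager, NodeManager, etc.) and dfsadmin reports all datanodes live, the setup was successful from the back end as well.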