Can my requirements be met with JMX? - weblogic

I am completely new to JMX. I have a specific requirement and wanted to know if it is possible to accomplish within the scope of JMX.
Requirements:
I have a set of resources that includes many WebLogic, JBoss, and Tomcat instances running across many servers. I need a one-stop UI to monitor these resources and check their current status, and if they are down, to start and stop them from that web page.
Is this possible using JMX?

You could use Nagios combined with check_jmx to monitor the resources (and collect statistics), and possibly trigger a restart of a resource. (I'm not sure whether a restart can be triggered directly via JMX.)
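Whether a restart can be triggered directly depends on the server exposing a start/stop operation as an MBean; the invocation mechanics themselves are generic. A minimal sketch, using the standard platform Memory MBean's `gc` operation as a stand-in, since the object name and operation of a real restart hook vary by server and are not shown here:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class JmxInvokeExample {
    // Invokes a no-argument JMX operation; a container restart hook (if the
    // server exposes one as an MBean operation) would be called the same way.
    static void invokeOperation(MBeanServerConnection conn,
                                String objectName, String operation) throws Exception {
        conn.invoke(new ObjectName(objectName), operation, new Object[0], new String[0]);
    }

    public static void main(String[] args) throws Exception {
        // Stand-in: trigger a GC via the standard platform Memory MBean.
        invokeOperation(ManagementFactory.getPlatformMBeanServer(),
                "java.lang:type=Memory", "gc");
        System.out.println("gc operation invoked via JMX");
    }
}
```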

Check out Jopr, http://www.jboss.org/jopr/

jmx4perl comes with a full-featured Nagios plugin, check_jmx4perl, for accessing JMX information. It includes a set of preconfigured checks for various resources, currently for JBoss, Tomcat, and Jetty (more are in the pipeline).

Related

Mule ESB Instance Monitoring

What is the best way to monitor Mule ESB instances? Is there a way I can get alerted when a Mule instance goes down for some reason? I have 4 instances of Mule running; how will I know if one of them goes down?
Thanks!
I assume you are running the Community Edition? (The Enterprise Edition provides a Management Console which allows you to define alerts.) If you are using CE, you can enable JMX monitoring on the instances and then use one of many ways to verify, based on the JMX info, whether your server is running. One way is to write your own application that retrieves JMX data programmatically and acts accordingly.
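To sketch what such a home-grown checker looks like: the `MBeanServerConnection` API shown below works identically against the local platform MBean server and against a remote JVM reached via `JMXConnectorFactory.connect(...)`. The object name used here is the standard JVM Runtime MBean; a real Mule health check would also query Mule-specific MBeans, whose names are not shown here.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class JmxHealthCheck {
    // Reads a single attribute; the same call works over a remote
    // MBeanServerConnection obtained from JMXConnectorFactory.connect(...).
    static Object readAttribute(MBeanServerConnection conn,
                                String objectName, String attribute) throws Exception {
        return conn.getAttribute(new ObjectName(objectName), attribute);
    }

    public static void main(String[] args) throws Exception {
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        // Standard JVM MBean: if this call answers, the JVM is up.
        long uptime = (Long) readAttribute(conn, "java.lang:type=Runtime", "Uptime");
        System.out.println("Uptime (ms): " + uptime);
    }
}
```

If the attribute read throws (connection refused, timeout), the instance is down and your application can raise an alert.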
HTH
If you are using Mule EE, you can use MMC to monitor all your instances, as Gabriel has already suggested. My suggestion would be to install MMC inside Tomcat on a separate server. This ensures that even if your Mule server crashes or goes down, your MMC is still running and can send you alerts about your Mule server's downtime. You can refer to the link below for details on how to set up server-down and server-up alerts.
https://developer.mulesoft.com/docs/display/current/Working+With+Alerts
Additionally, I would recommend using MMC with database persistence so that you can recover the MMC workspace even if your MMC server crashes. You can read about MMC setup with DB persistence at the link below.
https://developer.mulesoft.com/docs/display/current/Configuring+MMC+for+External+Databases+-+Quick+Reference
If you don't have Mule EE, you may want to explore other tools or custom alerting applications, as suggested by Gabriel.
HTH
You can set up a JMX agent by adding the following lines to your conf/wrapper.conf file:
wrapper.java.additional.19=-Dcom.sun.management.jmxremote
wrapper.java.additional.20=-Dcom.sun.management.jmxremote.port=10055
wrapper.java.additional.21=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.22=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.23=-Djava.rmi.server.hostname=127.0.0.1
Don't forget to change the values accordingly. You can also implement SSL and authentication with a few extra lines.
Once your monitoring platform is set up, you can activate the Java pollers and start the server.
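With the wrapper properties above in place, a client reaches the agent through the standard RMI service URL. A sketch, assuming the agent listens on 127.0.0.1:10055 as configured:

```java
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MuleJmxClient {
    // Standard RMI URL form used by com.sun.management.jmxremote agents.
    static JMXServiceURL serviceUrl(String host, int port) throws Exception {
        return new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
    }

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = serviceUrl("127.0.0.1", 10055);
        System.out.println(url);
        // Once the wrapper has started the agent, you can connect:
        // try (JMXConnector c = JMXConnectorFactory.connect(url)) {
        //     System.out.println(c.getMBeanServerConnection().getMBeanCount() + " MBeans");
        // }
    }
}
```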

ElasticSearch: Is there any application that enables access management for ElasticSearch?

I'm running an ElasticSearch cluster in development mode and want it to be production ready.
For that, I want to block all the unnecessary ports, one in particular is port 9200.
The problem is that I will not be able to monitor the cluster with the HEAD or Marvel plugin.
I've searched around and saw that Elasticsearch's recommendation is to put the entire cluster behind an application that manages access to the cluster.
I saw some solutions (e.g., Elasticsearch HTTP basic authentication), but they are insufficient for this purpose.
Is there any application that can do it?
Elasticsearch actually has a product for this very purpose called Shield. You can find it here.

Running multiple projects on Mule

Is it possible to run multiple Mule projects on the same port, and if so, how? At the moment the only thing I can do is run multiple flows in one project. The idea is to have multiple projects running on the same port with different paths, so that a bad configuration (causing an undeploy) in one project still leaves the others running when it goes down.
Yes, it has been possible since Mule 3.5.0-M4, but you'll have to wait a few days to try it on a production version like 3.5.0. You need to create a shared HTTP connector in a domain and reference that connector from your apps.
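For reference, the domain approach looks roughly like this: a mule-domain-config.xml in a domain project declares the shared connector, and each app references it by name. This is a sketch based on the Mule 3.5 milestone feature; the connector name, namespaces, and attributes shown are illustrative, so check the official domain documentation for the exact schema in the version you deploy.

```xml
<!-- domain project: mule-domain-config.xml (sketch; namespace declarations abbreviated) -->
<mule-domain xmlns:http="http://www.mulesoft.org/schema/mule/http">
    <http:connector name="sharedHttpConnector"/>
</mule-domain>

<!-- each app: same port, different path, referencing the shared connector -->
<http:inbound-endpoint host="localhost" port="8081" path="app1"
                       connector-ref="sharedHttpConnector"/>
```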
Yes, you can run multiple projects on the same port number, provided that the paths are different.

Detect cluster node failure Jboss AS 7.1.1-Final

I have configured a 2-node cluster in JBoss AS 7.1.1-Final. I am planning to use sticky sessions. Meanwhile, I am also recording the number of active online users in an Infinispan cache, along with the IP of the node where each user's session was created, for reporting purposes.
I have taken care of the login/logout scenarios, where I clear out the cache entries. The problem is that if one of the server nodes goes down, I need a cleanup routine to clear that node's records from the cache too.
One option is to write a client that checks at a specific interval whether the server is alive, and otherwise triggers a cleanup routine. This approach would work, but I am looking for something cleaner: if a server node failure could be detected and the other live nodes notified, I could run the cleanup then.
From the console I know it shows when a server goes down or comes up. But what would be the listener for such events? Any thoughts?
If you just need to know when a node leaves, within a server module (inside the JBoss server) you can use the ViewChanged listener.
You cannot get this information on clients connected via the REST or memcached protocols. With the Hot Rod protocol it is doable but pretty hackish: you'd have to override TransportFactory.updateServers (probably just extend TcpTransportFactory; see the configuration property infinispan.client.hotrod.transport_factory).
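The core of the cleanup inside such a view-change callback is just a set difference between the old and new membership views. A minimal sketch of that logic in plain Java (the listener wiring itself, e.g. Infinispan's @Listener/@ViewChanged annotations, is omitted, and the member names here are illustrative):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.Set;

public class NodeFailureCleanup {
    // Given the membership view before and after a change, return the nodes
    // that left -- these are the nodes whose cache entries need cleanup.
    static Set<String> departedNodes(Collection<String> oldView, Collection<String> newView) {
        Set<String> departed = new LinkedHashSet<>(oldView);
        departed.removeAll(newView);
        return departed;
    }

    public static void main(String[] args) {
        Set<String> gone = departedNodes(
                Arrays.asList("nodeA", "nodeB"), Arrays.asList("nodeA"));
        System.out.println("Clean up sessions for: " + gone);  // prints [nodeB]
    }
}
```

In a view-change event handler you would call this with the old and new member lists the event provides, then delete the cache records keyed by the departed nodes.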

What are the most effective tools to manage multiple apache httpd instances?

We have many Apache instances all over our intranet. Some instances run on the same machine. Some instances run on different machines.
I need a tool that can manage these instances from one central location.
Get CPU stats
Get Connection stats
Stop/start Apache instances
Get access to error log
I looked at Webmin, but the documentation isn't too clear about how it works, and without installing it I'd have trouble getting it going.
Any recommendations?
I've never used it myself, but I've seen people with monitoring requirements be very happy with Cacti. Besides general health monitoring like CPU stats it has an extremely simple Apache stats plugin that might do what you need:
Script to get the requests per second and the requests currently being processed from
an Apache webserver.
Maybe you can put something together with that.
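If you end up scripting it yourself, mod_status's machine-readable endpoint (/server-status?auto, available once mod_status is enabled) emits simple "Key: value" lines, so a collector only needs trivial parsing. A sketch, with the HTTP fetch left out and a hard-coded sample body standing in for the real response:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ApacheStatusParser {
    // Parses the machine-readable output of mod_status (/server-status?auto)
    // into a key -> value map. Lines look like "BusyWorkers: 3".
    static Map<String, String> parse(String statusBody) {
        Map<String, String> stats = new LinkedHashMap<>();
        for (String line : statusBody.split("\n")) {
            int colon = line.indexOf(": ");
            if (colon > 0) {
                stats.put(line.substring(0, colon), line.substring(colon + 2).trim());
            }
        }
        return stats;
    }

    public static void main(String[] args) {
        // Sample body; in practice you would fetch it from each instance's
        // /server-status?auto URL over HTTP.
        String sample = "Total Accesses: 12\nReqPerSec: .0628\nBusyWorkers: 3\nIdleWorkers: 7";
        Map<String, String> stats = parse(sample);
        System.out.println("Busy workers: " + stats.get("BusyWorkers"));  // prints 3
    }
}
```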