Configure VisualVM to visualize multiple JVMs

I would like to use VisualVM to monitor a cluster of JVMs, say 50-100 processes.
Is there a way to configure VisualVM to monitor a specified list of JVMs on startup, without adding them manually?

You can run jstatd on each of the machines; then you only need to add each machine as a remote host in VisualVM, and all Java processes on that host will be discovered and added automatically.
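A minimal sketch of starting the daemon on one host, assuming a JDK 8-style layout where jstatd needs a security policy granting permissions to tools.jar (the policy file path is an arbitrary choice). Contents of /tmp/jstatd.policy:

    grant codebase "file:${java.home}/../lib/tools.jar" {
        permission java.security.AllPermission;
    };

Then start the daemon on each machine (1099 is the default RMI registry port VisualVM expects):

    jstatd -J-Djava.security.policy=/tmp/jstatd.policy -p 1099

With jstatd running, adding the host once in VisualVM (right-click the Remote node, "Add Remote Host...") is enough; the individual JVMs show up on their own.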

Related

How to run multiple ignite clusters on same network

I have several machines on my intranet. If I start Ignite on two of them, they automatically discover each other and become part of a single cluster. If I start Ignite on a third machine, it automatically connects to the cluster.
How can I prevent this?
Basically, I want to run two Ignite clusters on a single network. I have two testing environments, and I want a separate Ignite cluster for each of them.
I suppose that you're using TcpDiscoveryMulticastIpFinder in TcpDiscoverySpi configuration.
It's possible to achieve network isolation, but you should use TcpDiscoveryVmIpFinder instead of TcpDiscoveryMulticastIpFinder. An example configuration can be found here: https://apacheignite.readme.io/docs/tcpip-discovery#section-static-ip-finder.
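For illustration, a minimal sketch of the same static-IP setup configured through the Java API rather than Spring XML (the addresses are placeholders for the hosts of one environment; the second environment would list only its own hosts):

    import java.util.Arrays;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class Env1Node {
        public static void main(String[] args) {
            // Static IP finder: discovery only ever contacts the listed hosts,
            // so nodes from the other environment can never join this cluster.
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            ipFinder.setAddresses(Arrays.asList(
                    "10.0.1.1:47500..47509",    // placeholder: env-1 host A
                    "10.0.1.2:47500..47509"));  // placeholder: env-1 host B

            TcpDiscoverySpi spi = new TcpDiscoverySpi();
            spi.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(spi);

            Ignite ignite = Ignition.start(cfg);
        }
    }

Environment 2 would run the same code with its own address list, and the two clusters would never see each other.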

NiFi Site-to-Site Data Flow is Slow

I have multiple standalone NiFi instances (approx. 10) that I want to use to send data to a NiFi cluster (3 NiFi instances) using an RPG (Site-to-Site). But the flow from the standalone instances to the cluster seems to be slow.
Is this the right approach?
How many Site-to-Site Connections does NiFi allow?
Are there any best practices for Site-to-Site NiFi Data Flow?
You may want to first rule out your network. You could ssh to one of the standalone nodes and then try to SCP a large file from the standalone node to one of the nodes in the NiFi cluster. If that is slow then it is more of a network problem and there won't be much you can do to make it go faster in NiFi.
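For example, a quick way to run that test (the host name, user, and file size are placeholders):

    # on one of the standalone NiFi nodes: create a ~1 GB test file and copy it
    dd if=/dev/urandom of=/tmp/s2s-test.bin bs=1M count=1024
    scp /tmp/s2s-test.bin user@nifi-cluster-node1:/tmp/

If the copy runs well below your expected bandwidth, the network is the bottleneck rather than NiFi.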
In NiFi, you can tune each side of the site-to-site config...
On the central cluster, you can right-click on the remote Input Port and configure the number of concurrent tasks, which defaults to 1. This is the number of threads that can concurrently process data received on the port.
On the standalone NiFi instances, you can also configure the concurrent tasks used to send data to a given port. Right-click on the RPG, select "Manage remote ports", and then change the concurrent tasks for the port in question.

How can I configure Apache Zookeeper with redundancy on only two physical frames?

I would like to have a high-availability/redundant installation of Zookeeper running in my production environment. The problem is that I only have 2 physical frames available, which rules out a standard Zookeeper cluster/ensemble, since I'd only keep redundancy if the frame holding the minority of servers goes down. What is the best practice in this situation? Is it possible to have a separate standalone install running on each frame, connected to the same set of SOLR nodes, or to use one server as primary and one as backup?
A fault-tolerant Zookeeper ensemble requires at least 3 nodes. In your scenario, if you cannot get another machine, you can set up multiple Zookeeper nodes on the same machine, in different directories and using different ports.
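A sketch of what that could look like with three instances on one frame (the paths and ports here are arbitrary choices; each instance gets its own config file and data directory):

    # conf/zoo1.cfg -- repeat for zoo2/zoo3 with their own dataDir and clientPort
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/zookeeper/node1
    clientPort=2181
    # peer and election ports must also differ for instances sharing a machine
    server.1=localhost:2888:3888
    server.2=localhost:2889:3889
    server.3=localhost:2890:3890

Each dataDir also needs a myid file containing that instance's id, e.g. echo 1 > /var/zookeeper/node1/myid.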

DC/OS has three roles, they are master, slave, slave_public, why can't put them on one host?

I've just been investigating DC/OS, and I see that it has three roles: master, slave, and slave_public. I want to deploy a cluster in which a single host can carry the master, slave, or slave_public roles together, but currently I can't do that.
I want to know why it was designed so that these roles can't be put on one host. And if I do it anyway, could I get some suggestions?
If I can't do this, I'll quit using DC/OS and use Mesos and Marathon instead.
Does anyone have the same question? I look forward to the replies.
This is by design, and work is actually underway to enforce that a machine is installed with only one role, because things break with more than one.
If you're trying to demo / experiment with DC/OS and you only have one machine, you can use virtual machines or Docker to partition that one machine into multiple smaller machines on which you can install DC/OS. dcos-vagrant and dcos-docker can help you there.
As far as installing goes, the configuration for each of the three roles is incompatible with the others. The "master" role causes a whole bunch of pieces of software to be installed and started on a host (Mesos-DNS, Mesos master, Marathon, Exhibitor, ZooKeeper, 3dt, Admin Router, REX-Ray, Spartan, and Navstar, among others), which listen on various ports. The "slave" role causes a machine to have a mesos-agent (Mesos renamed mesos-slave to mesos-agent, hence the disconnect) configured and started on the host. The mesos-agent is configured to hand out most ports greater than 1024 to tasks launched by Mesos frameworks on the agent. Several of those ports are used by services which run on masters, resulting in odd conflicts and hard-to-fix bad behavior.
In the case of running the "slave" and "slave_public" roles on the same host, those two conflict even more directly, because both of them cause a mesos-agent to be run on the host with slightly different configuration. Both mesos-agents (the one configured with the "slave" role and the one with the "slave_public" role) are configured to listen on port 5051. Only one of them can bind the port, so you end up with one of the agents being non-functional.
DC/OS only supports running a node as either a master or an agent (slave). You are correct that Mesos does not have this limitation, but DC/OS is more than just Mesos/Marathon. To enable all the additional features of DC/OS, there are various components built around Mesos and Marathon. At times these components behave differently depending on whether they are running on a master or an agent, and at other times the components that exist on a master may not exist on an agent, or vice versa. So running a master and an agent on the same node would lead to conflicts/issues.
If you are looking to run a small development setup before scaling the solution out to a bigger distributed system, DC/OS Vagrant might be a good starting point.

2 ActiveMQ Servers different versions same machine

I want to know if it is possible to run two versions (5.5 and 5.10) of ActiveMQ on the same machine. I simply assume that all I need to do is reconfigure the ports on one of them to something different from the other.
The reason for this is that we are using Informatica B2B, which uses ActiveMQ 5.5 with a 3rd-party (Fuse) addition for its internal messaging. We would also like to run a separate JMS server on the same machine for various reasons, using 5.10 or 5.11.
I have found lots of examples of creating multiple instances, but they apply to using the same installation.
If it is as simple as just changing the ports, can they also share the same JVM or not?
You can run multiple instances on the same machine by changing the ActiveMQ configuration. You should assign each broker a unique name and configure the transport connectors to listen on different ports. You also want to ensure that they are configured with different data directories, and so on.
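For example, in the activemq.xml of the second installation (the broker name, port, and path here are placeholder values; the 5.5 broker keeps its own defaults):

    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="broker-510"
            dataDirectory="/opt/activemq-5.10/data">
        <transportConnectors>
            <!-- moved off the default 61616 that the 5.5 broker is using -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61617"/>
        </transportConnectors>
    </broker>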
You cannot run the two versions in the same JVM, however.