I have 3 virtual machines, each running ZooKeeper and ActiveMQ.
Every time I start ActiveMQ, the ActiveMQ WebConsole starts on a different server. I want to start the ActiveMQ WebConsole on the same server every time, so I don't have to dig through the logs to figure out which node is running the web console.
This is how my jetty.xml is configured:
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
<!-- the default port number for the web console -->
<property name="host" value="0.0.0.0"/>
<property name="port" value="8161"/>
</bean>
This is not possible, as the embedded web console runs on whichever broker is currently the master.
You can look at alternative web consoles that allow remote management, such as hawtio, which can connect to remote brokers. You can start hawtio on your local computer, have it run on some other host, or start it separately on one of those 3 nodes, etc.
http://hawt.io/
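If you want to try the local option, hawtio also ships as a standalone runnable jar; a minimal sketch (the exact jar file name depends on the release you download, and the port is arbitrary):

java -jar hawtio-app.jar --port 8090

From there you can point it at whichever broker is currently the master.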
Running a local Hawt.io instance as Claus advises is a great option.
If you want to stick with the web console, you can actually have it connect to the current master broker.
You will need to run the console in non-embedded mode and set (at least) three system properties. Typically this means deploying the web-console .war inside Tomcat or a similar servlet container.
webconsole.jms.url=failover:(tcp://serverA:61616,tcp://serverB:61616)
webconsole.jmx.url=service:jmx:rmi:///jndi/rmi://serverA:1099/jmxrmi,service:jmx:rmi:///jndi/rmi://serverB:1099/jmxrmi
webconsole.type=properties
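One way to supply these when the console runs inside Tomcat (a sketch; the host names and ports are the placeholders from above, adjust them to your three nodes) is to append the entries to conf/catalina.properties, since on a stock Tomcat the entries in that file also end up as Java system properties:

# appended to conf/catalina.properties -- Tomcat registers these as system properties on startup
webconsole.type=properties
webconsole.jms.url=failover:(tcp://serverA:61616,tcp://serverB:61616)
webconsole.jmx.url=service:jmx:rmi:///jndi/rmi://serverA:1099/jmxrmi,service:jmx:rmi:///jndi/rmi://serverB:1099/jmxrmi

Passing them as -D flags via CATALINA_OPTS works as well.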
An old article also discusses using the embedded web console for failover; I don't know whether it applies in all details to current versions.
Related
We're using Apache James 3.0-beta4, which uses embedded ActiveMQ 5.5.0 as a FIFO message queue, and sometimes messages get stuck, so we need to monitor it. Is there any way to monitor an ActiveMQ queue, such as the queue size and, if possible, the most recent message-id in the queue?
In the James spring-server.xml I found this:
<amq:broker useJmx="true" persistent="true" brokerName="james"
            dataDirectory="filesystem=file://var/store/activemq/brokers"
            useShutdownHook="false" schedulerSupport="false" id="broker">
    <amq:destinationPolicy>
        <amq:policyMap>
            <amq:policyEntries>
                <!-- Support priority handling of messages -->
                <!-- http://activemq.apache.org/how-can-i-support-priority-queues.html -->
                <amq:policyEntry queue=">" prioritizedMessages="true"/>
            </amq:policyEntries>
        </amq:policyMap>
    </amq:destinationPolicy>
    <amq:managementContext>
        <amq:managementContext createConnector="false"/>
    </amq:managementContext>
    <amq:persistenceAdapter>
        <amq:amqPersistenceAdapter/>
    </amq:persistenceAdapter>
    <amq:plugins>
        <amq:statisticsBrokerPlugin/>
    </amq:plugins>
    <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:0" />
    </amq:transportConnectors>
</amq:broker>
There is also an old part of the README:
- Telnet Management has been removed in favor of JMX with client shell
- More metrics counters available via JMX
...
* Monitor via JMX (launch any JMX client and connect to URL=service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi)
which is confusing as to how it should be used.
This is part of a bigger "monolith" project which is now being recreated as microservices but still needs to be supported ;) Everything was fine until mid-March.
It looks like ActiveMQ management and monitoring is not possible here because JMX is effectively disabled: the managementContext above is configured with createConnector="false", so the broker does not open a remote JMX connector for a client to attach to.
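If editing spring-server.xml is an option, a minimal sketch of how the connector could be enabled (port 9999 is only a placeholder, chosen to match the URL in the README excerpt):

<amq:managementContext>
    <!-- open a remote JMX connector so external JMX clients can attach -->
    <amq:managementContext createConnector="true" connectorPort="9999"/>
</amq:managementContext>

With that in place, a JMX client such as jconsole should be able to connect to service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi and read queue attributes such as QueueSize on the queue MBeans.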
We have a clustered server setup for Mule. All the applications are deployed correctly and working perfectly, but the deployments show in red on the on-premise MMC.
Each deployment says "Deployment unreconciled" when I hover over it. Mule_ee.logs doesn't show anything specific. Where can I find the logs that explain why MMC is giving this message?
Which version of Mule MMC are you using? We had the same issue, and MuleSoft suggested upgrading MMC to the latest version, i.e. 3.8.2. This seems to be a known bug in MMC version 3.5.2.
The deployment can show red for the following reasons:
- Servers are shown as down in MMC but they're up and running.
- Alerts for server down are triggered even though the server was never down.
- Any operation over the servers is throwing an error as they're detected as being down.
- Applications may appear as offline when they're actually deployed.
Most of the problems listed above are caused by issues in the network between MMC and the Mule instances. MMC pings the instances, and that ping is what updates the server statuses. To avoid these problems, the ping can be tuned to a value that suits the current network.
To do this, edit the file /WEB-INF/classes/META-INF/applicationContext.xml and modify the bean with id="pinger" to include a new constructor argument of type int (as the last argument). This value defines the socket timeout of the server status call, expressed in milliseconds:
<bean id="pinger" class="com.mulesoft.mmc.heartbeat.Pinger">
<constructor-arg ref="serverManager" />
<constructor-arg ref="statusServiceAdaptor" />
<constructor-arg ref="eventManager" />
<constructor-arg ref="pingServerExecutor" />
<constructor-arg type="int"><value>10000</value></constructor-arg>
</bean>
After making this change, please restart MMC and check if the servers still experience the same problem.
Note: the default value of this ping timeout is 1000 ms (one second) for MMC version 3.6.x and 5000 ms for MMC 3.7.x.
Regards,
Sanjeet Pandey
The yellow icon (for unreconciled) means the deployment has not been reconciled yet; wait until it is fully deployed.
See the documentation below:
https://docs.mulesoft.com/tcat-server/v/7.1.0/deploying-applications
I have been curious about something and have been searching for an answer without any result. The GlassFish documentation says:
If the GlassFish Server instance on which the application client is
deployed participates in a cluster, the GlassFish Server finds all
currently active IIOP endpoints in the cluster automatically. However,
a client should have at least two endpoints specified for
bootstrapping purposes, in case one of the endpoints has failed.
but I am wondering how this list is created.
I've done some tests with a stand-alone client that runs in its own JVM and makes some RMI calls on an application deployed in a GlassFish cluster. I can see from the logs that the IIOP endpoint list is completed automatically and set as the com.sun.appserv.iiop.endpoints system property, but if I stop a server instance or start another one while the client is running, the list remains the one that was built when the JVM started.
GlassFish clustering is managed by the GMS (Group Management Service), which usually uses UDP multicast but can use TCP where multicast is not available.
See section 4 "Administering GlassFish Server Clusters" in the HA Administration Guide (PDF)
The Group Management Service (GMS) enables instances to participate in a cluster by
detecting changes in cluster membership and notifying instances of the changes. To
ensure that GMS can detect changes in cluster membership, a cluster's GMS settings
must be configured correctly.
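On the client side, the docs' advice about listing at least two endpoints for bootstrapping can be followed by setting the same system property you see in the logs yourself when launching the stand-alone client; a sketch (host names, ports, and the jar name are placeholders):

java -Dcom.sun.appserv.iiop.endpoints=instance1:3700,instance2:3700 \
     -jar standalone-client.jar

That bootstrap list is only read at JVM startup, which would be consistent with what you observe: the property is not refreshed when instances are stopped or started afterwards.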
I am running the GemFire HTTP session management module within my application in peer-to-peer (P2P) mode on WebSphere. I can see the session logs on WAS. However, I could not find a way to connect to it through gfsh from my desktop. I am using the default settings without a locator. How can I monitor the GemFire status?
Cache_Peer.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE cache PUBLIC
"-//GemStone Systems, Inc.//GemFire Declarative Caching 6.5//EN"
"http://www.gemstone.com/dtd/cache6_6.dtd">
<cache>
<!-- This is the definition of the default session region -->
<region name="gemfire_modules_sessions">
<region-attributes scope="distributed-ack" enable-gateway="false" data-policy="replicate" statistics-enabled="false">
</region-attributes>
</region>
</cache>
As mentioned by Jens, a locator is a JMX Manager by default. Any locator can become a JMX Manager when started: when you start a locator, if no other JMX Manager is detected in the distributed system, the locator starts one automatically. If you start a second locator, it detects the current JMX Manager and will not start another one unless the second locator's gemfire.jmx-manager-start property is set to true.
To turn any other member (a P2P server) into a JMX Manager, set jmx-manager=true and jmx-manager-start=true in the server's gemfire.properties file.
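A minimal gemfire.properties sketch for that (the port entry is optional and shown only for clarity, 1099 being the default):

# gemfire.properties on the member that should act as JMX Manager
jmx-manager=true
jmx-manager-start=true
jmx-manager-port=1099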
To start the member as a JMX Manager node from the command line instead, provide --J=-Dgemfire.jmx-manager-start=true and --J=-Dgemfire.jmx-manager=true as arguments to the start server command.
For example, to start a server as a JMX Manager on the gfsh command line:
gfsh>start server --name=<server-name> --J=-Dgemfire.jmx-manager=true \
--J=-Dgemfire.jmx-manager-start=true
Refer to http://gemfire80.docs.pivotal.io/7.0.2/userguide/index.html#managing/management/jmx_manager_operations.html for more details.
By default, the locator in a client-server environment would be a JMX Manager. In a P2P setup you need to enable the JMX Manager in one of your servers. You can do this by setting the GemFire properties jmx-manager=true and jmx-manager-start=true. It is also possible to have multiple JMX Managers; if your P2P setup only consists of 2 servers, having both be JMX Managers would be fine.
You can use the connect command from gfsh; it connects to the JMX Manager.
If you have a locator, connect to it using the connect --locator=host[port] command; the JMX Manager starts automatically on the locator. However, if you don't have a locator, you need to explicitly start a JMX Manager on one of the servers and connect to it using the connect --jmx-manager=host[port] command.
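For example (host names are placeholders; 10334 and 1099 are the usual default ports):

gfsh>connect --locator=locator-host[10334]
gfsh>connect --jmx-manager=server-host[1099]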
Refer to http://gemfire.docs.pivotal.io/latest/userguide/index.html#tools_modules/gfsh/command-pages/connect.html for more details.
If the GemFire cluster is running behind a firewall, you can connect over HTTP instead; see http://gemfire.docs.pivotal.io/latest/userguide/index.html#deploying/gfsh/gfsh_remote.html
I'm working on a project where several WAR files inside a Tomcat 7 instance have to communicate with a single embedded ActiveMQ (5.5.1) broker inside the same Tomcat.
I'm wondering what the best practice is to manage this, and how to start and stop the broker properly.
Currently I try to use a global JNDI entry in server.xml, and each WAR gets its ActiveMQ connection with a lookup. The first connection to the broker implicitly starts it. But with this method I run into various problems, such as the broker instance already existing or locks on the data store.
Should I instead use an additional WAR which uses a BrokerFactory to start the broker explicitly? In that case, how do I make sure this WAR starts first in Tomcat? And how and where do I stop my broker?
Thanks for the help.
From the docs...
If you are using the VM transport and wish to explicitly configure an
Embedded Broker there is a chance that you could create the JMS
connections first before the broker starts up. Currently ActiveMQ will
auto-create a broker if you use the VM transport and there is not one
already configured. (In 5.2 it is possible to use the waitForStart and
create=false options for the connection uri)
So to work around this if you are using Spring you may wish to use the
depends-on attribute so that your JMS ConnectionFactory depends on the
embedded broker to avoid this happening. e.g.
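A minimal Spring sketch of that depends-on arrangement (the bean ids and the classpath:activemq.xml location are placeholders, not something from your setup):

<!-- embedded broker started from an ActiveMQ XML configuration on the classpath -->
<bean id="broker" class="org.apache.activemq.xbean.BrokerFactoryBean">
    <property name="config" value="classpath:activemq.xml"/>
    <property name="start" value="true"/>
</bean>

<!-- the connection factory is only created after the broker bean, thanks to depends-on -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory" depends-on="broker">
    <property name="brokerURL" value="vm://localhost?create=false"/>
</bean>

Outside Spring, the same ordering concern applies: whatever starts the broker (for example a ServletContextListener in a dedicated WAR) has to run before the first vm:// connection is made, and stopping the broker belongs in the matching shutdown callback.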
See these pages for more information...
http://activemq.apache.org/vm-transport-reference.html
http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
http://activemq.apache.org/how-do-i-restart-embedded-broker.html