About Worklight 6.2 Analytics.
https://www-01.ibm.com/support/knowledgecenter/api/content/SSZH4A_6.2.0/com.ibm.worklight.monitor.doc/monitor/t_setting_up_production_cluster.html
There are several JNDI properties to configure, but it is not explained how to configure them in WAS ND, nor at which scope they must be configured (if that even makes sense).
For example, the worklight.properties values are configured as application properties during the application installation.
How are the Analytics JNDI properties configured on WAS?
Also, at which scope should they be configured? This is puzzling me as well. For example, the documentation says that properties like "analytics/shards" or "analytics/replicas_per_shard" must be configured on the first node, but to me these look like properties that should be configured at cluster level, not at node level.
A WAS ND topology is also completely dynamic and flexible; what happens if I remove that "first" node?
OK, now I understand that when the Worklight Analytics documentation talks about a cluster, it is not talking about a WAS cluster but about an Elasticsearch cluster.
Taking this into account, configuring a cluster for Analytics does not mean installing analytics.war in a WAS cluster; it means installing the analytics.war file on a number of WAS servers (not WAS clusters, not WAS nodes) and configuring the Elasticsearch cluster through the Elasticsearch properties.
Is this correct?
The specific answer to my question is that the values of the properties are set during the detailed installation of the analytics.war file, just as it is done for the application project WAR file, worklightadmin.war or worklightconsole.war.
Those properties only need to be set if you are configuring Analytics on more than one server.
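For reference, a minimal sketch of what such entries can look like. In WAS they are supplied as environment entries for the analytics.war web module during installation; the entry names below come from the documentation quoted above, while the values are purely illustrative assumptions:

```xml
<!-- Illustrative only: environment entries for the analytics.war web module.
     The names analytics/shards and analytics/replicas_per_shard are taken from
     the documentation; the values shown here are assumptions. -->
<env-entry>
    <env-entry-name>analytics/shards</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>2</env-entry-value>
</env-entry>
<env-entry>
    <env-entry-name>analytics/replicas_per_shard</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>1</env-entry-value>
</env-entry>
```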
I'm looking for advice on how to manually (i.e. without using Runtime Manager, RM) deploy a Mule application package on an on-premises Mule cluster. The official documentation suggests using RM for this purpose, either via the GUI, CLI, or API. However, RM is not available in our environment.
I can manually deploy the package on a single node by copying it to the /apps folder, but that way the application is deployed only on that node, not on the cluster.
I've also tried using the AMC agent REST API, with the same result: it only deploys on a single node.
So, what is the correct way to manually deploy a Mule application on a Mule server cluster without using Anypoint RM?
We are on Mule 4.4 EE.
Copy the application jar file into the apps directory of every node. Mule clusters do not transfer applications between nodes.
Alternatively, you can use the Runtime Manager agent, but it also works on a per-node basis: you need to send the same deployment request to each node.
Each connector may or may not be cluster aware. Read each connector's documentation to understand how it behaves. In particular, the documentation of the VM connector states:
When running in cluster mode, persistent queues are instead backed by the memory grid. This means that when a Mule flow uses VM Connector to publish content to a queue, Mule runtime engine (Mule) decides whether to process that message in the same origin node or to send it out to the cluster to be picked up and processed by another node.
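As an illustration of that behaviour, here is a minimal sketch of a persistent VM queue in Mule 4 XML; the config and queue names are assumptions:

```xml
<!-- Sketch only: a persistent VM queue. In a cluster this queue is backed by the
     memory grid, so a message published here may be processed on any node. -->
<vm:config name="vmConfig">
    <vm:queues>
        <vm:queue queueName="ordersQueue" queueType="PERSISTENT" />
    </vm:queues>
</vm:config>

<!-- Publisher flow (invoked from elsewhere, e.g. via flow-ref). -->
<flow name="publishOrder">
    <vm:publish config-ref="vmConfig" queueName="ordersQueue" />
</flow>

<!-- Consumer flow; in cluster mode this listener may fire on a different node. -->
<flow name="consumeOrder">
    <vm:listener config-ref="vmConfig" queueName="ordersQueue" />
    <logger message="#[payload]" />
</flow>
```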
You can register the nodes through the AMC agent on the CloudHub control plane, create a server group, and deploy through the control plane's Runtime Manager; it takes care of deploying the same application to all n nodes.
I want to create two broker clusters connected by a network of brokers in JBoss Fuse 6.2; each cluster has two master/slave pairs.
It's a small cluster, so we don't intend to use Fabric/Zookeeper; everything will be statically configured, no auto discovery.
Questions
Is it possible to use fabric profiles to build the topology, but avoid using fabric at runtime?
Can we use Git, or something similar, for centrally managing container config files, again, without fabric?
We tried creating profiles using fabric:mq-create, but the command is not available unless a fabric is first created, which defeats the purpose.
No, fabric profiles require using fabric. You can use Git to store the files, but you cannot have JBoss Fuse use it automatically the way it does with fabric; you would need to use Git manually.
The A-MQ broker in JBoss Fuse is just standard Apache ActiveMQ, so you can configure it manually/statically as a network of brokers. It is just not very easy to do if you haven't done it before.
See the JBoss A-MQ documentation, as it covers the broker: http://www.jboss.org/products/amq/overview/
For example: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.2/html/Using_Networks_of_Brokers/index.html
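To give an idea of what a static setup looks like, here is a minimal sketch of a networkConnector fragment from an ActiveMQ broker configuration (activemq.xml); the broker and host names are assumptions, and the rest of the broker configuration (persistence, etc.) is omitted:

```xml
<!-- Sketch only: one broker of cluster A with a statically configured network
     connector to the master/slave pair of cluster B (hostnames are assumptions). -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA1">
    <networkConnectors>
        <networkConnector name="to-clusterB"
                          uri="static:(tcp://brokerB1:61616,tcp://brokerB2:61616)"
                          duplex="true" />
    </networkConnectors>
    <transportConnectors>
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616" />
    </transportConnectors>
</broker>
```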
I have a question I could not find an answer to. In the GlassFish documentation it is written:
If the GlassFish Server instance on which the application client is deployed participates in a cluster, the GlassFish Server finds all currently active IIOP endpoints in the cluster automatically. However, a client should have at least two endpoints specified for bootstrapping purposes, in case one of the endpoints has failed.
but I am asking myself how this list is created.
I've done some tests with a stand-alone client that runs in its own JVM and makes some RMI calls to an application deployed in a GlassFish cluster. I can see from the logs that the IIOP endpoint list is completed automatically and is set as the com.sun.appserv.iiop.endpoints system property. However, if I stop a server instance or start another one while the client is running, the list remains the one that was created when the JVM was started.
GlassFish clustering is managed by the GMS (Group Management Service) which usually uses UDP Multicast, but can use TCP where that is not available.
See section 4 "Administering GlassFish Server Clusters" in the HA Administration Guide (PDF)
The Group Management Service (GMS) enables instances to participate in a cluster by detecting changes in cluster membership and notifying instances of the changes. To ensure that GMS can detect changes in cluster membership, a cluster's GMS settings must be configured correctly.
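On the client side, the documentation's advice to specify at least two endpoints for bootstrapping can be followed explicitly. A minimal sketch, assuming two instances on host1 and host2 listening on the default IIOP port 3700: for a client run under the application client container the endpoints go into glassfish-acc.xml, while a plain JVM client can pass the same list via the com.sun.appserv.iiop.endpoints system property.

```xml
<!-- Sketch only (glassfish-acc.xml): two bootstrap endpoints for the application
     client container. Host names are assumptions; 3700 is the default IIOP port.
     A plain JVM client can pass the same hosts with
     -Dcom.sun.appserv.iiop.endpoints=host1:3700,host2:3700 -->
<client-container>
    <target-server name="instance1" address="host1" port="3700" />
    <target-server name="instance2" address="host2" port="3700" />
</client-container>
```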
I have my web application currently hosted on WebSphere Application Server 7.0.0.0.
Now I want to migrate my application to JBoss EAP 6.2.0 GA.
Has anyone done this before? I need help with the issues below.
I want to create the following equivalent components in JBoss:
1) Oracle data source
--> To create an Oracle data source in WAS, we first need to create an Oracle JDBC provider, so I also need to know how to create the equivalent of this in JBoss.
2) Queue
3) Activation Specification
4) Shared library to contain configuration files and third-party JARs.
Knowledge of how to deploy applications on JBoss would be an added advantage.
Yeah, I have done some googling and found the links below:
http://www.redhat.com/f/pdf/jboss/JBoss_WebSphereMigrationGuide.pdf
https://docs.jboss.org/author/display/AS72/How+do+I+migrate+my+application+from+WebSphere+to+AS+7
But the links don't contain much practical guidance.
I have tried migrating from WebSphere to JBoss/WildFly 10.
I'm not sure about other versions of JBoss, but WildFly 10 has a configuration XML file which you can use to configure your server.
I configured the database, queues, queue connection factories, and namespace bindings using this configuration XML, which is available as part of the server installation itself.
The file is present in this location
YOUR_SERVER_INSTALLATION_HOME/opt/jboss/wildfly/standalone/configuration/standalone.xml
Multiple configurations are possible, and you can customize them to your needs. You can refer to the documentation below for customization:
https://docs.jboss.org/author/display/WFLY10/Subsystem+configuration
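To make this more concrete, here is a minimal sketch of the relevant standalone.xml fragments, assuming WildFly 10. The JNDI names, module name, and connection details are assumptions; the fragments correspond roughly to the WAS JDBC provider/data source and the queue from the list above:

```xml
<!-- Sketch only: Oracle JDBC driver (the equivalent of the WAS JDBC provider) and
     data source in the datasources subsystem. Names, URL and credentials are
     assumptions; the driver module must be installed separately under modules/. -->
<subsystem xmlns="urn:jboss:domain:datasources:4.0">
    <datasources>
        <datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS">
            <connection-url>jdbc:oracle:thin:@//dbhost:1521/ORCLPDB</connection-url>
            <driver>oracle</driver>
            <security>
                <user-name>appuser</user-name>
                <password>secret</password>
            </security>
        </datasource>
        <drivers>
            <driver name="oracle" module="com.oracle.ojdbc">
                <driver-class>oracle.jdbc.OracleDriver</driver-class>
            </driver>
        </drivers>
    </datasources>
</subsystem>

<!-- Sketch only: a JMS queue in the messaging-activemq subsystem. MDBs bound to
     this queue take the place of the WAS activation specification. -->
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
    <server name="default">
        <jms-queue name="OrdersQueue" entries="java:/jms/queue/OrdersQueue" />
    </server>
</subsystem>
```

For the shared library (item 4), the closest JBoss/WildFly equivalent is a custom module under the modules directory that your deployment references via jboss-deployment-structure.xml or a manifest Dependencies entry.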
We create and publish a service through the web console (/publisher). The environment is a 2-node WSO2 API Manager cluster with a load balancer (HAProxy) on top of it.
When we invoke the service (from SoapUI) via this load balancer, one request succeeds, the next one fails, and so on.
IMHO the cluster configuration should be correct: I can see the published service on both nodes if I open the /publisher app on each node.
axis2.xml:
- Hazelcast clustering is enabled
- using multicast
master-datasources.xml:
- pointing to the Oracle database
api-manager.xml:
- pointing to the JDBC string in master-datasources.xml
Does anyone have any tips?
It seems that the Synapse artifact is not deployed correctly on both gateways. Can you go to /repository/deployment/server/synapse-config/default/api and check whether both nodes have an XML file for the published API? If you haven't enabled the Deployment Synchronizer, the artifact will only be created on one node.
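For reference, a minimal sketch of enabling SVN-based deployment synchronization in repository/conf/carbon.xml on both gateway nodes, assuming an SVN repository is available; the repository URL and credentials are assumptions:

```xml
<!-- Sketch only: Deployment Synchronizer section of carbon.xml. The SVN URL and
     credentials are assumptions. Enable AutoCommit on the node that creates the
     artifacts and leave it false on the other node. -->
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/wso2/depsync</SvnUrl>
    <SvnUser>wso2</SvnUser>
    <SvnPassword>wso2pass</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```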