Let's assume I have a cluster of 2 Worklight servers sharing the same WL runtime.
On that runtime, I've installed an application whose adapter creates an event source (via createEventSource), just like in this IBM article:
https://www.ibm.com/developerworks/community/blogs/worklight/entry/configuring_a_polling_event_source_to_send_push_notifications?lang=en
My question is: what will happen in a cluster environment?
Will repeated work ensue?
In other words, will both of my WL Servers be polling for events?
Or perhaps that functionality writes a task to the WL DB, which the WL Servers poll regularly to check for work that no other instance is taking care of, so that only one server at a time would be "the event source"?
I'm working with IBM Worklight 6.2 and WebSphere Liberty Profile 8.5.5.
Thanks in advance!
Here's my attempt to answer this after some consultation:
My question is: what will happen in a cluster environment? Will
repeated work ensue? In other words, will both of my WL Servers be
polling for events?
While the Worklight Servers share the same runtime, they are still considered 2 separate instances. This means that each of them will attempt to perform the polling action. This is considered OK.
However, it is important to note that the backend system being polled should be smart enough to handle the situation where 2 polling attempts are made for the same message.
If the backend doesn't handle polling properly, the same message can be pulled more than once. This is true even if you have a single event source running, so it is something to keep in mind.
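For illustration only (none of these names come from Worklight itself), a backend that remembers which message IDs it has already handed out could de-duplicate overlapping polls along these lines:

```java
// Hypothetical backend-side de-duplication sketch, assuming each message
// carries a unique ID. Class and method names are illustrative only.
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class PolledMessageStore {

    // IDs of messages that have already been handed out to a poller.
    private final Set<String> delivered =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    /**
     * Returns true only for the first caller that claims a given message,
     * so a second WL server polling for the same message gets nothing new.
     */
    public boolean claim(String messageId) {
        return delivered.add(messageId);
    }
}
```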
Related
I feel like I am missing something very fundamental here.
I can bring up a RabbitMQ cluster with three nodes (rabbit1, rabbit2 and rabbit3) without an issue. Then when I start writing my microservices it seems like each client connects to only one rabbit instance. So let's say I have all my services connect to rabbit1.
If rabbit1 then goes down will my entire infrastructure blow up? Do the services have a way of switching to another rabbit node? It seems like they cannot, in which case, what is the point of having a cluster?
In case someone else runs into this and has trouble finding it in the documentation (as I did): RabbitMQ does not manage client connection auto-recovery. From the docs:
Some client libraries provide a mechanism for automatic recovery from
network connection failures... Other clients may consider network
failure recovery to be a responsibility of the application.
So first check whether your library offers auto-recovery; if not, you'll have to implement it yourself.
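For example, with the RabbitMQ Java client (host names and port below are just the rabbit1-3 from the question), you can enable automatic recovery and hand the factory every node, so the client can reconnect to a surviving node if the one it is using goes down:

```java
// Rough sketch using the RabbitMQ Java client: enable automatic recovery and
// pass all cluster nodes so the client can fail over if rabbit1 goes down.
import com.rabbitmq.client.Address;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ClusterAwareClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setAutomaticRecoveryEnabled(true);   // reconnect after network failure
        factory.setNetworkRecoveryInterval(5000);    // retry every 5 seconds

        // List every node; the client picks one and tries the others on failure.
        Address[] nodes = {
            new Address("rabbit1", 5672),
            new Address("rabbit2", 5672),
            new Address("rabbit3", 5672)
        };
        Connection connection = factory.newConnection(nodes);
        // ... create channels, declare queues, consume as usual ...
    }
}
```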
A decade and more in the past I studied and used IBM's MQSeries and Websphere MQ software. It seemed to be a great solution for connecting two applications in different companies at different locations: the app at each company could drop off a message with MQSeries on the local machine, MQSeries would transport it to the machine in the other company, and the app on that side would pick up the message locally.
Fast forward to today: I no longer work for IBM, but I'm trying to solve a similar problem. My app needs to send a few messages a day, each a few MB or less, to an app at a remote company, and receive a similar number of somewhat smaller replies.
Is message queuing middleware still a good solution to this architectural need? I've been trying to prototype this with RabbitMQ, but the above seems to be an abnormal thing to do with RabbitMQ. Am I barking down the wrong rabbit hole?
Sure it can be - if the remote company intends to provide the same service for others and not just you.
Perhaps WMQ Low Latency is what you need, since no server is required.
- WebSphere MQ Low Latency Messaging varies from conventional WebSphere products such as WebSphere MQ, WebSphere Message Broker, and WebSphere Application Server in that there is no installed and configured product infrastructure such as queue managers, message brokers, or application servers. Thus, there are no specific product components to be monitored, measured, and managed.
http://www-03.ibm.com/software/products/en/wmq-llm
http://pic.dhe.ibm.com/infocenter/wllm/v2r5/index.jsp?topic=%2Fcom.ibm.wllm.doc%2Fintroductiontowebspheremqlowlatencymessaging.html
Of course, a simple RESTful web interface might be able to provide the same functionality.
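For the RESTful alternative, a minimal JAX-RS sketch (the /messages path and the payload handling are hypothetical) would cover the same request/reply exchange:

```java
// Hypothetical JAX-RS resource illustrating the "simple RESTful web interface"
// alternative: the remote company exposes an endpoint, your app POSTs a message
// and reads the reply from the response body.
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/messages")
public class MessageResource {

    @POST
    @Consumes(MediaType.APPLICATION_OCTET_STREAM)
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    public byte[] exchange(byte[] request) {
        // Hand the payload to the back-end application and return its reply.
        return handleRequest(request);
    }

    private byte[] handleRequest(byte[] request) {
        // Placeholder for the real business processing.
        return request;
    }
}
```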
I do not recommend writing TCP socket applications - why do all that mid-weight lifting when there are so many products out there that will do the heavy and mid-weight lifting for you? You want to do only the lightweight lifting: send the request, get the response - 6, 2 and even, over and out.
You need to draw up your list of requirements regarding:
- reliability - how critical is it if a request or response gets lost?
- recoverability - can in-flight messages be recovered and resent if the application(s) crash?
- round trip time - one side of latency
- 1:1 service? many-to-one?
I hope this helps.
Dave
WebSphere MQ can still solve your problem... I think your scenario is point-to-point communication with a request-response pattern. You can use something relatively newer like JMS, which integrates well with your application (a minimal sketch follows below).
But if you are very sure that only these 2 applications will ever communicate with each other and no network issues will crop up, you can go for simple socket communication.
The other way to solve the problem is to share a common database between the 2 applications.
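As mentioned above, a minimal JMS request-reply sketch could look like this; the JNDI names and the payload are placeholders, and the actual provider (WebSphere MQ or anything else) is configured outside the code:

```java
// Minimal JMS request-reply sketch. The JNDI names and queue are placeholders;
// plug in whatever your provider (e.g. WebSphere MQ) actually exposes.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class RequestReplyClient {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue requestQueue = (Queue) ctx.lookup("jms/RequestQueue");

        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Send the request and tell the other side where to reply.
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        TextMessage request = session.createTextMessage("a few MB of payload here");
        request.setJMSReplyTo(replyQueue);
        MessageProducer producer = session.createProducer(requestQueue);
        producer.send(request);

        // Wait for the reply (30 second timeout).
        MessageConsumer consumer = session.createConsumer(replyQueue);
        Message reply = consumer.receive(30000);
        if (reply != null) {
            System.out.println("Reply: " + ((TextMessage) reply).getText());
        }
        connection.close();
    }
}
```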
I need to run a component using Apache Camel (or Spring Integration) under a WAS ND 8.0 cluster. Both frameworks run some threads on startup and stop them on shutdown normally. Supplying a WAS managed thread pool is no problem. But those threads must run on only one cluster node at a time. Moreover, the component must be highly available, i.e. switch to another node when the active node fails.
The solution I found is the WAS Partitioning Facility, but it requires additional Extended Deployment licenses. Is it the only way, or is there some way to implement this using a Network Deployment license only?
Thanks in advance.
I think that there is no feature that addresses this interesting requirement.
I can imagine a "trick":
A Timer EJB sends a message to a queue (let's say 1 per minute).
Configure a Service Integration Bus (SIB) with High Availability and No Scalability, so the HA Manager ensures that only one messaging engine (ME) is alive.
Create a non-reliable queue for high performance and low resource consumption.
The Activation Spec should be configured to listen only to the local ME.
An MDB implements the following logic: when the message arrives, it checks whether the singleton thread is alive; if not, it starts the thread (sketched below).
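Sketched in Java EE terms - the queue JNDI names and the worker class are made up for illustration, and each bean would live in its own source file:

```java
// Illustrative sketch of the trick: a timer bean sends a heartbeat message once
// a minute, and the MDB (listening only on the one live messaging engine)
// starts the singleton worker thread if it is not already running.
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.ejb.Schedule;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class HeartbeatTimerBean {

    @Resource(lookup = "jms/HeartbeatCF")
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/HeartbeatQueue")
    private Queue heartbeatQueue;

    // Fires once a minute on every cluster member.
    @Schedule(minute = "*", hour = "*", persistent = false)
    public void sendHeartbeat() throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(heartbeatQueue);
            producer.send(session.createTextMessage("tick"));
        } finally {
            connection.close();
        }
    }
}

// Activation spec bound to jms/HeartbeatQueue and to the local ME only.
@MessageDriven
public class SingletonGuardMdb implements MessageListener {

    // Only the node hosting the live messaging engine receives these messages,
    // so the worker thread ends up running on exactly one node.
    private static volatile Thread worker;

    @Override
    public void onMessage(Message message) {
        if (worker == null || !worker.isAlive()) {
            worker = new Thread(new CamelRouteRunner(), "singleton-worker");
            worker.start();
        }
    }
}

// Placeholder for the Camel (or Spring Integration) startup logic.
class CamelRouteRunner implements Runnable {
    @Override
    public void run() {
        // start routes here
    }
}
```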
Does it make sense?
What is the best way to combine a single instance WCF service that uses ActiveMQ and runs within IIS/AppFabric?
Our services need to support both HTTP transports and ActiveMQ (listening and sending messages). We've elected not to use MSMQ, and will use Spring.Net.NMS. The fundamental issue I have now is that ActiveMQ needs to connect to the queue(s) at startup and remain connected, but WAS is getting in the way with its message-activation feature. If the service is not activated until a message arrives (HTTP/MSMQ, etc.), then there is no trigger for the connection to AMQ to be made.
I know I can disable the recycling behavior, and I know I can do self-hosting with a Windows Service. But I want to take advantage of the monitoring and other features in AppFabric. I've already been down the route with IServiceBehavior and will use that for other nice things. But that interface is not called until a (non-AMQ) message arrives. So it won't work for this. What I was hoping for was something along the line of how ServletContextListeners work in Java, where you get both the start up and shutdown events. But it seems no such thing exists in WAS... it is driven only by messages arriving.
I've scoured every inch of the web for 3 days and the only thing I came across was to use a static class construction (C#) trick as the trigger. That's a hack, but I can live with it. It still leaves the issue of cleanly shutting down, which I can figure out later.
Anyone have a solid solution to this?
The direct WCF support for ActiveMQ that Ladislav mentions is still being supported. There just hasn't been an official release for the module in a while. However, you can still get the latest version of it from the 1.5.x branch or trunk and compile it yourself.
1.5.x branch for use with Apache.NMS 1.5.0:
https://svn.apache.org/repos/asf/activemq/activemq-dotnet/Apache.NMS.WCF/branches/1.5.x/
Checkout instructions:
http://activemq.apache.org/nms/source.html
There was direct WCF support for ActiveMQ, but I guess it is not developed anymore. Your problem is actually the IIS / WAS (which provides hosting for non-HTTP protocols) hosting architecture. Services in WAS are always activated when a message arrives - there is no global startup. The reason for this is that WAS hosting expects a separate process (a Windows service) running the listener all the time, and this process has an adapter which calls WAS and uses message-level activation. I guess you don't have such a process for ActiveMQ, and because of that you will have trouble using an ActiveMQ endpoint hosted in WAS. Developing such a listener can be a challenging task (example for UDP).
Creating a custom listener can probably be avoided by using the IIS 7.5 / AppFabric auto-start feature. There is also a not-very-well-documented way to run code when the application starts.
I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show themselves in a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS Server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. Messages sent to either server all flow as expected. I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below in weblogic-application.xml:
<start-mdbs-with-application>false</start-mdbs-with-application>
Setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot process.
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This is applicable to versions up through 12c and later.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in warning state. Once the JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.