Here's my scenario.
Client devices send upstream messages to CCS, which forwards them, as intended, to my app server (C libstrophe + stunnel), which acknowledges and processes them.
If I launch 2 instances of my app server (2 persistent XMPP channels), I see upstream messages from the devices dispatched to server instance #1 or server #2, never both, and the load appears well balanced.
From a scalability standpoint, this would be a nice feature to use in production, but as I found no validation of this behavior in the docs, I wonder whether I can rely on it.
Thanks for your help.
Currently, we plan to upgrade our product to use a message queue (RabbitMQ or ActiveMQ) for message transfer between server and client. Right now we are using a network library (evpp) for this.
Because I haven't used MQ before, beyond MQ's long list of features I can't figure out the essential difference between the two approaches, and I don't know exactly when and where we should use MQ and when a plain network library is fine.
The reason we want to use MQ is to address unreliable communication, such as message loss and other problems caused by an unstable network environment.
I hope someone familiar with both can clear up my confusion. Thanks in advance.
Message queuing systems (MQ, Qpid, RabbitMQ, Kafka, etc.) are higher-layer systems purpose-built for handling messages reliably and flexibly.
Network programming libraries/frameworks (ACE, asio, etc.) are helpful tools for building message queueing (and many other types of) systems.
Note that in the case of ACE, which encompasses much more than just networking, you can use a message queuing system like the above and drive it with a program that also uses ACE's classes for thread management, OS abstraction, event handling, etc.
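To make the contrast concrete, here is a minimal sketch of what a networking layer alone gives you: bytes over a socket, with retries, persistence, and acknowledgements left entirely to the application. (TypeScript with Node's built-in net module; the port is arbitrary.)

```typescript
import net from "node:net";

// A bare TCP echo exchange: if the peer is down or the connection drops,
// the bytes are simply gone; nothing is queued or redelivered.
const server = net.createServer((socket) => {
  socket.on("data", (chunk) => socket.write(`echo: ${chunk}`));
});

server.listen(4000, () => {
  const client = net.connect(4000, "127.0.0.1", () => client.write("hello"));
  client.on("data", (chunk) => {
    console.log(chunk.toString()); // "echo: hello"
    client.end();
    server.close();
  });
});
```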
As in any network programming, when a client sends a request to the server, the server sends back a response. But for this to happen, the following conditions must be met:
The server must be UP and running
The client should be able to make some sort of connection between them
The connection should not break while the server is sending the response to the client or vice-versa
But with a message queue, whatever the server wants to tell the client is placed in a message queue, i.e., a separate server/instance. The client listens on the message queue and processes each message. On a positive acknowledgement from the client, the message is removed from the message queue. Obviously a connection has to be made by the server to push a message to the message-queue instance, but even if the client is down, the message stays in the queue.
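To illustrate that pattern, here is a rough sketch using RabbitMQ through the amqplib Node package; the URL and the queue name "tasks" are placeholders. A durable queue plus explicit acknowledgements gives exactly the store-until-acknowledged behaviour described above.

```typescript
import amqp from "amqplib";

// Sketch only: "amqp://localhost" and the queue name "tasks" are placeholders.
const conn = await amqp.connect("amqp://localhost");
const ch = await conn.createChannel();
await ch.assertQueue("tasks", { durable: true });

// Producer side: enqueue and move on; the consumer need not be up right now.
ch.sendToQueue("tasks", Buffer.from("hello"), { persistent: true });

// Consumer side: the broker keeps (and redelivers) anything never acked.
await ch.consume("tasks", (msg) => {
  if (msg !== null) {
    console.log(msg.content.toString());
    ch.ack(msg); // the positive ack removes the message from the queue
  }
});
```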
Hi, I am new to MarkLogic and Apache. I have been given the task of using Apache as a load balancer for our MarkLogic cluster of 3 machines. The MarkLogic cluster is currently running on Linux servers.
How can we achieve this? Any information regarding this would be helpful.
You could use mod_proxy_balancer. How you configure it depends on which MarkLogic client you would like to use. If you would like to use the Java Client API, please follow the second example here to allow Apache to generate stickiness cookies. If you would like to use XCC, please configure it to use the ML-Server-generated or backend-generated "SessionID" cookie.
The difference here is that XCC uses sessions whereas the Java Client API builds on the REST API which is stateless, so there are no sessions. However, even in the Java Client API when you use multi-request transactions, that imposes state for the duration of that transaction so the load balancer needs a way to route requests during that transaction to the correct node in the MarkLogic cluster. The stickiness cookie will be resent by the Java Client API with every request that uses a Transaction so the load balancer can maintain that stickiness for requests related to that transaction.
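For reference, here is a minimal mod_proxy_balancer sketch along those lines; the hostnames, ports, and the ROUTEID cookie name are placeholders, and mod_headers must be enabled for the Header directive:

```apache
# Sticky load balancing across a 3-node MarkLogic cluster (hosts are placeholders).
<Proxy "balancer://mlcluster">
    BalancerMember "http://ml1.example.com:8000" route=1
    BalancerMember "http://ml2.example.com:8000" route=2
    BalancerMember "http://ml3.example.com:8000" route=3
    ProxySet stickysession=ROUTEID
</Proxy>

# Issue the stickiness cookie that the client will resend on each request.
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

ProxyPass        "/" "balancer://mlcluster/"
ProxyPassReverse "/" "balancer://mlcluster/"
```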
As always, do some testing of your configuration to make sure you got it right. Properly configuring Apache plugins is an advanced skill. Since you are new to Apache, your best hope of ensuring you got it right is to check with an HTTP monitoring tool like Wireshark and look at the HTTP traffic from your application to MarkLogic Server, to make sure requests are going to the correct node in the cluster as expected.
Note that even with the client APIs (Java, Node.js), it's not always obvious or explicit at the language-API layer what might cause a session to be created. Explicitly creating multi-statement transactions definitely will, but other operations may do so as well. If you are using the same connection for UI (browser) and API (REST or XCC), then the browser app is likely to be doing things that create session state.
The safest, but least flexible, configuration is "TCP Session Affinity". If it is supported, it will eliminate most concerns related to load balancing. Cookie Session Affinity relies on guaranteeing that the load balancer uses the correct cookie. Not all code is equal: I have had cases where the load balancer didn't always use the cookie provided. Changing the configuration to "Load Balancer provided Cookie Affinity" fixed that.
None of this is needed if all your communications are stateless at the TCP layer, the HTTP layer, and the app layer. The latter cannot be inferred by the server.
Another concern is if your app or middle tier is co-resident with other apps, or the same app, connecting to the same load balancer and port. It can be difficult to make sure there are no 'crossed wires'. When ML gets a request, it associates its identity with the client IP and port. Even without load balancers, most modern HTTP and TCP client libraries implement socket caching. That is a great performance win, but a hidden source of subtle, random, severe errors if the library or app shares "cookie jars" (not uncommon). A TCP and cookie-jar cache used by different application contexts can end up sending state information from one unrelated app in the same process to another. Mostly this happens in middle-tier app servers that simply pass on requests from the first tier without domain knowledge, relying on the low-level TCP libraries to "do the right thing". They are doing the right thing -- for the use case the library programmers had in mind -- so don't assume that your case is the one the library authors assumed. The symptoms tend to be very rare but catastrophic: transaction failures, possibly data corruption, and security problems (at the application layer), because the server cannot tell the difference between 2 connections from the same middle tier.
Sometimes a better strategy is to load balance between the first tier and the middle tier, and directly connect from the middle tier to MarkLogic.
Especially if caching is done at the load balancer: it's more common for caching to be useful between the middle tier and the client than between the middle tier and the server. This is also more analogous to the classic 3-tier architecture used with RDBMSs, where load balancing sits between the client and business-logic tiers, not between the business-logic and database tiers.
Let's assume I have a cluster of 2 Worklight servers sharing the same WL runtime.
On that runtime, I've installed an application with an adapter that creates an event source.
Just like in this IBM article:
https://www.ibm.com/developerworks/community/blogs/worklight/entry/configuring_a_polling_event_source_to_send_push_notifications?lang=en
My question is: what will happen in a cluster environment?
Will repeated work ensue?
In other words, will my two WL Servers both be polling for events?
Or perhaps that functionality writes a task to the WL DB, which the WL Servers poll regularly to check for work that no instance is handling yet, so that only one server at a time would be "the event source"?
I'm working with IBM Worklight 6.2 and WebSphere Liberty Profile 8.5.5.
Thanks in advance!
Here's my attempt to answer this after some consultation:
My question is: what will happen in a cluster environment? Will repeated work ensue? In other words, will my two WL Servers both be polling for events?
While the Worklight Servers share the same runtime, they are still considered 2 separate instances. This means that each of them will attempt to perform the polling action. This is considered OK.
However, it is important to note that the backend system being polled should be smart enough to handle the situation where 2 polling attempts are made for the same message.
If the backend doesn't know how to handle polling properly, the same message can be pulled more than once. This is true even if you have a single event source running, so it is something to keep in mind.
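As a hypothetical illustration of "smart enough": the polled backend can remember which message ids it has already handed out, so a second poll (whether from the other WL instance or a repeat poll from the same one) skips them. A minimal TypeScript sketch with invented types:

```typescript
// Hypothetical backend-side deduplication: track which message ids have
// already been handed to a poller and filter them out of later poll results.
interface PendingMessage {
  id: string;
  payload: string;
}

const handedOut = new Set<string>();

function messagesForPoll(pending: PendingMessage[]): PendingMessage[] {
  return pending.filter((m) => {
    if (handedOut.has(m.id)) return false; // already delivered to a poller
    handedOut.add(m.id);
    return true;
  });
}
```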
So I've read some articles about scaling Socket.IO. For various reasons I don't want to use the built-in Socket.IO scaling mechanism (mostly, it seems inefficient, since it publishes a lot more to Redis than is required from my point of view).
So I came up with this simple idea:
Each Socket.IO server creates Redis pub/sub/store clients, connects to Redis and subscribes to a channel. Now, when I want to broadcast data I just publish it to Redis and all other Socket.IO servers get it and push it to users.
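A minimal sketch of that idea, assuming socket.io v4 and node-redis v4; the channel name "broadcast" and the port are arbitrary:

```typescript
import { createServer } from "node:http";
import { Server } from "socket.io";
import { createClient } from "redis";

const io = new Server(createServer().listen(3000));

const pub = createClient();
const sub = pub.duplicate();
await pub.connect();
await sub.connect();

// Every Socket.IO server subscribes once...
await sub.subscribe("broadcast", (raw) => {
  const { event, data } = JSON.parse(raw);
  io.emit(event, data); // ...and pushes to its own local clients.
});

// ...and any server can broadcast cluster-wide by publishing.
function broadcastEverywhere(event: string, data: unknown): void {
  pub.publish("broadcast", JSON.stringify({ event, data }));
}
```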
There is a problem, though (which I think is also a problem for Socket.IO's built-in mechanism). Let's say I want to know the number of all connected users. There are at least two ways of doing that:
Server A publishes give_me_clients to Redis. Then each Socket.IO server counts connections and publishes number_of_clients. Server A grabs this data, combines it and sends it to the client.
Each server updates number_of_clients_for::ID_HERE in Redis whenever a user connects to or disconnects from the server. Then Server A just fetches the data and combines it. This might be more efficient.
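A sketch of this second approach (again node-redis v4; the unique SERVER_ID per instance is an assumption, here taken from the environment):

```typescript
import { createClient } from "redis";
import { Server } from "socket.io";

// Assumption: each Socket.IO server is started with a unique SERVER_ID.
const SERVER_ID = process.env.SERVER_ID ?? "server-1";
const KEY = `number_of_clients_for::${SERVER_ID}`;

const redis = createClient();
await redis.connect();
await redis.set(KEY, 0);

const io = new Server(3000);
io.on("connection", (socket) => {
  redis.incr(KEY);
  socket.on("disconnect", () => void redis.decr(KEY));
});

// Server A: sum the per-server counters instead of asking every server.
async function totalClients(): Promise<number> {
  let total = 0;
  for await (const key of redis.scanIterator({ MATCH: "number_of_clients_for::*" })) {
    total += Number(await redis.get(key));
  }
  return total;
}
```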
There are problems with these solutions though:
Server A is not aware of the other servers. Therefore it does not know when it should stop listening for number_of_clients. One could fix this by making Server A aware of the other servers: whenever a server connects to Redis, it publishes new_server (Server A grabs the data and stores it in memory). But what do we do when the connection between Redis and a Socket.IO server breaks? Is there a way for Redis to notify clients that one of the clients disconnected?
Actually, the same problem as above: when a Socket.IO server crashes, how do we clear its number_of_clients data?
So the real question is: can Redis notify (publish to a channel) clients that the connection to one of them has just ended?
After a lot of testing, it seems that Redis does not have such functionality. I've also found that scaling Socket.IO is a real pain.
So I've switched from Socket.IO to ws (see this link). It is low level (but perfect for my use) and it only supports WebSockets (in all major versions). But then again, I only want to support WebSockets and FlashSocket (which I have to implement manually, but that's fine).
The advantage is that I can easily create a cluster with such servers. HAProxy works with them almost out of the box (some minor tuning). Servers can easily communicate on a local network (with UDP, or a central TCP server if the cluster is big).
The disadvantage is that one has to manually implement some cool features like heartbeats, broadcasting, rooms, etc. Also, you won't have a long-polling fallback, but that's fine in my case. Scaling is still more important, imho.
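For example, the ping/pong heartbeat one typically ends up writing by hand with ws looks roughly like this (ws v8 API; the 30-second interval is arbitrary):

```typescript
import { WebSocketServer, WebSocket } from "ws";

// Ping every client periodically and terminate any client that never
// answered the previous ping.
const wss = new WebSocketServer({ port: 8080 });
const alive = new WeakSet<WebSocket>();

wss.on("connection", (ws) => {
  alive.add(ws);
  ws.on("pong", () => alive.add(ws));
});

const interval = setInterval(() => {
  for (const ws of wss.clients) {
    if (!alive.has(ws)) {
      ws.terminate(); // no pong since the last ping: assume the peer is dead
      continue;
    }
    alive.delete(ws);
    ws.ping();
  }
}, 30_000);

wss.on("close", () => clearInterval(interval));
```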
Why can't the application server send messages directly to the application? Why do you need the C2DM service in the middle?
To send a message from the server side you have two possibilities:
The client polls for new messages at certain intervals. Downside: not a real-time solution. If you poll too frequently, it will drain the battery and consume your data quota (if you don't have an unlimited plan). Generally you do a lot of unnecessary work and generate needless traffic, as most polls will return no messages.
Stay connected all the time. Downside: technically hard to deliver, as phones can close connections when going into sleep mode (at least nothing guarantees that they won't). Also, you are running a background application 24/7.
The current state of C2DM will give you:
The ability to get messages even when your application is not running as Android will start your application (the part of it you configured, not necessarily the whole UI) when a message arrives.
A central, shared channel to deliver such messages. If 10 applications need real-time notifications on your phone this is one single facility, not 10 applications running and polling in parallel.
The promise: as this is the API sanctioned by Google for push messaging, you can expect it to be optimized in the future. One improvement could be carrier-level messaging to initiate a C2DM session. That would mean you could put 100% of the "smart" part of your phone to sleep.
Because the application can't (or isn't supposed to) act as a server.
If you would like to send messages to your app directly, then your application would need to have some sort of server listening on some port. This is bad because:
connections are usually firewalled, so you can't just listen on some port,
your device can be turned off or without connectivity (then your app server would need to retry),
the app server would need to know the address of your device,
the app would need to be running (at least the server module) all the time, which isn't battery friendly.