Using BOSH or a similar technique for an existing application/system (Apache)

We have an existing system whose clients connect to the back end via HTTP (Apache/SSL) and poll the server for new messages; needless to say, we have scalability issues.
I'm researching how to remove this polling and have come across BOSH/XMPP, but I'm not sure how we should adopt the BOSH technique (using long-lived HTTP connections).
I've seen there are a few libraries available, but the entire thing seems bloated, since we do not need buddy lists etc. and simply want to notify clients of available messages.
The client is written in C/C++ and runs on most operating systems, so that is an important factor. The server is in Java.
Does BOSH result in a huge number of httpd processes? Since it has to keep all the clients connected, what would be the limit on that? We are also planning to move to a 64-bit JVM/Apache; what would be the maximum number of clients in that case?
Any hints?

I would note that BOSH is separate from XMPP, so there are no "buddy lists" involved; XMPP-over-BOSH is what you're thinking of there.
Take a look at collecta.com and associated blog posts (probably by Jack Moffitt) about how they use BOSH (and also XMPP) to deliver real-time information to large numbers of users.
As for the scaling issues with Apache, I don't know; presumably each connection uses few resources, so you can increase the number of connections per Apache process. But you could also check out some of the connection manager technologies (like punjab) mentioned on the BOSH page above.
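To give a feel for the server side of a BOSH-style long-lived HTTP request in Java, here is a minimal long-poll sketch using the Servlet 3.0 async API; it parks the request without pinning a container thread. The MessageBus hand-off is a hypothetical stub standing in for however your server delivers messages.

import java.io.IOException;
import java.util.function.Consumer;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal long-poll endpoint: the request is suspended until a message
// arrives or the timeout fires, instead of the client re-polling.
@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);  // client re-polls after 30s of silence
        // Hypothetical hand-off: called when a message for this client arrives.
        MessageBus.subscribe(req.getParameter("clientId"), message -> {
            try {
                ctx.getResponse().getWriter().write(message);
            } catch (IOException ignored) {
                // client likely disconnected; nothing sensible to do here
            } finally {
                ctx.complete();  // releases the parked request
            }
        });
    }
}

// Stub only, so the sketch compiles: wire this to your real message source.
class MessageBus {
    static void subscribe(String clientId, Consumer<String> onMessage) { }
}

Whatever ends up holding the connections (Apache, a connection manager, or a Java container) has to do it without one thread or process per client, which is exactly where a prefork Apache struggles.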

Related

ISO-8583 message processing (defining priority of messages)

I need to get an understanding of the ISO-8583 message platform. Let's say I want to perform authorization of a card transaction, and in real time, at a particular instant, I get 100,000 requests from the network (VISA/MASTERCARD), all for authorization. How do I define the priority of these requests and their responses? Can the connection pool handle it (in my case it's Hikari)? How is it done at banks/financial institutions when authorizing a request? Please give me some insight into how to manage all these requests. Should I go for an MQ?
Tech used: Spring Boot, Hibernate, spring-tcp-starter
Your question doesn't seem to be very well researched, as there are a ton of switch platforms out there that do this today, and many of their technology guides can be found on the web, including those of major vendors like ACI, FIS, AJB, etc., if you look hard enough.
I have worked with several ISO-interface specifications, commercial switches, and home-grown platforms, and they are actually pretty consistent in how they do the core real-time processing.
Information on prioritization is generally in each ISO-8583 message processing specification, and it is made explicitly clear in almost every specification I've ever read that was written by someone familiar with ISO-8583 and not just making up their own variant or copying someone else's.
That said, in general, at a high level, authorization/financial requests (0100, 0200) always have higher priority than force-post (0x20) messages.
Administrative messages in the 05xx, 06xx, and 08xx ranges sometimes also get bumped above other advices, but these are still advices; auths/financials are almost always processed first, as they (a) impact the customer and (b) have much tighter timers than any advice, usually by double or more.
Most switches I have seen handle the core authorization flow entirely in memory, without going to an MQ or some other disk-backed queue, though that is not to say there is never some home-grown middleware involved. Non-real-time flows, by contrast, regularly use an MQ or disk queue to push work into processes that sit outside the approval path, e.g. for store-and-forward (SAF) processing, and even many of those still use memory-only processing for the front of their queue.
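To make that concrete, a minimal sketch of such an in-memory priority queue in Java could look like the following; Iso8583Message is a hypothetical holder for a parsed message, not from any library.

import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical parsed-message holder; only the MTI matters for ranking.
record Iso8583Message(String mti, long arrivalNanos) {
    int priorityRank() {
        // Auths/financials first; advices, force posts and 05xx/06xx/08xx
        // admin traffic queue behind them, each class in arrival order.
        return ("0100".equals(mti) || "0200".equals(mti)) ? 0 : 1;
    }
}

class AuthQueue {
    private final PriorityBlockingQueue<Iso8583Message> queue =
        new PriorityBlockingQueue<>(1024,
            Comparator.comparingInt(Iso8583Message::priorityRank)
                      .thenComparingLong(Iso8583Message::arrivalNanos));

    void offer(Iso8583Message msg) { queue.put(msg); }

    Iso8583Message take() throws InterruptedException { return queue.take(); }
}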
It is also important to differentiate between 100,000 requests and 100,000 transactions. The various exchanges, both internal and external, make a big difference in the number of actual requests/responses in flight at any given time: a basic transaction can be accomplished in about two messages, but some of the more complex ones can easily exceed 20 messages just for a pre-authorization or completion component.
If you are dealing largely with batch transaction bursts, I can see the challenge of queuing, but almost every application I have seen has separate max-in-flight limits for advices and requests, sometimes even with different timers, and the apps pumping the transactions almost always wait for the response before sending more. This tends to work fine for just about everyone, including big posting batches from retailers and card networks. So if your app doesn't have such limits, you probably need to add them.
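A sketch of such separate in-flight windows, with the sender blocking when its window is full; the limits here are illustrative, not from any specification.

import java.util.concurrent.Semaphore;

// Separate max-in-flight caps for auth/financial requests vs. advices.
// beforeSend() blocks when the window is full; onResponse() frees a slot.
class InFlightWindow {
    private final Semaphore requestSlots = new Semaphore(32); // 0100/0200
    private final Semaphore adviceSlots = new Semaphore(8);   // 0x20 advices

    void beforeSend(boolean isAdvice) throws InterruptedException {
        (isAdvice ? adviceSlots : requestSlots).acquire();
    }

    void onResponse(boolean isAdvice) {
        (isAdvice ? adviceSlots : requestSlots).release();
    }
}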
In fact, your 100,000 requests should be sorted by (terminal ID and/or merchant ID) + (transmission/local timestamp) + (STAN and/or RRN).
Duplicate transaction requests are expected to be rejected.
If you are simulating multiple requests from a single terminal (or host) with the same test card details, incrementing the STAN/RRN for each request would be the way to go.
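For illustration, that sort key could be expressed in Java as below; AuthRequest and its field names are assumptions, not a standard type.

import java.util.Comparator;

// Hypothetical parsed-request holder; field names are assumptions.
record AuthRequest(String terminalId, String localTimestamp, String stan) {}

class RequestOrdering {
    // Terminal, then timestamp, then STAN: each terminal's traffic stays
    // in generation order, and duplicates (same key) land next to each other.
    static final Comparator<AuthRequest> KEY =
        Comparator.comparing(AuthRequest::terminalId)
                  .thenComparing(AuthRequest::localTimestamp)
                  .thenComparing(AuthRequest::stan);
}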
Please refer to previous answers about the STAN and RRN fields in ISO 8583:
In ISO message, what's the use of stan and rrn ?

LDAP Server side sorting - really a good idea?

I'm toying with using server-side sorting in my OpenLDAP server. However, as I also get to write the client code, I can see that all it buys me in this case is one line of sorting code at the client. And as the client is one of presently 4, soon to be 16, Tomcats, maybe hundreds if usage balloons, sorting at the client actually makes more sense to me. I'm wondering whether SSS is really considered much of an idea. My search results in this case aren't large: dozens rather than hundreds. Just wondering whether it might be more of a weapon than a tool.
In OpenLDAP it is bundled with VLV (Virtual List View), which I will need some day, so it is already installed. So it's really a programming question, not just a configuration question, hence SO rather than SF.
Server-side sorting is intended for use by clients that are unable or unwilling to sort results themselves; this might be useful in hand-held clients with limited memory and CPU mojo.
The advantages of server-side sorting include, but are not limited to:
the server can enforce a time limit on the processing of the sorting
clients can specify an ordering rule for the server to use
professional-quality servers can be configured to reject requests with sort controls attached if the client connection is not secure
the server can enforce resource limits, for example, the aforementioned time limit, or administration limits
the server can enforce access restrictions on the attributes and on the sort request control itself; this may not be that effective if the client can retrieve the attributes anyway
the server may indicate it is too busy to perform the sort or simply unwilling to perform the sort
professional-quality servers can be configured to reject search requests for all clients except for clients with the necessary mojo (privilege, bind DN, IP address, or whatever)
The disadvantages include, but are not limited to:
servers can be overwhelmed by sorting large result sets from multiple clients if the server software is unable to cap the number of sorts to process simultaneously
client-side APIs have to support the server-side sort request control and response
it might be easier to configure clients to sort by their own 'ordering rules'; although these can be added to professional-quality, extensible servers
To answer my own question, and not to detract from Terry's answer, use of the Virtual List View requires a Server Side Sort control.
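For completeness, here is roughly what requesting server-side sorting looks like from a Java client via JNDI; the host, base DN, and attribute are placeholders.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.Control;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import javax.naming.ldap.SortControl;

// Attach a server-side sort control (RFC 2891) to an LDAP search.
public class SortedSearch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
        LdapContext ctx = new InitialLdapContext(env, null);

        // CRITICAL: fail rather than return unsorted results if the
        // server will not honor the control.
        ctx.setRequestControls(new Control[] { new SortControl("cn", Control.CRITICAL) });

        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> results =
            ctx.search("ou=people,dc=example,dc=com", "(objectClass=person)", sc);
        while (results.hasMore())
            System.out.println(results.next().getNameInNamespace());
        ctx.close();
    }
}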

ZMQ device queue does not load balance properly

I know that ZMQ offers all the flexibility to do your own load balancing. However, I would expect the out-of-the-box broker, about four lines of code using the line
zmq_device (ZMQ_QUEUE, frontend, backend);
to load-balance quite well, as the documentation says it does:
ZMQ_QUEUE creates a shared queue that collects requests from a set of clients, and distributes these fairly among a set of services. Requests are fair-queued from frontend connections and load-balanced between backend connections. Replies automatically return to the client that made the original request.
I have an army of back-end services, and yet I find that my front-end clients often have to wait several seconds for something that takes less than a tenth of a second in a 1:1 setting (there is the same number of client and service machines). I suspect that ZMQ is not load balancing properly out of the box: it's sending too many requests to the same service even though that service doesn't have the bandwidth, etc.
I think this is partly because the services are multithreaded in a way that lets them take up to 10 concurrent requests, yet they slow down greatly near the 10th request even though they can still accept more. Random distribution would be ideal. Is there an out-of-the-box way to do this, can it be done in a few lines of code, or do I have to write my own broker from scratch?
FWIW, the issue was that the workers were taking on work when they didn't have room for it; the issue was not in the ZMQ layer per se.
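For anyone hitting the same thing: the usual cure is the load-balancing ("LRU") broker pattern from the ZeroMQ guide, where workers explicitly announce readiness and the broker only forwards a request to a worker with room for it. A condensed sketch using the JeroMQ Java binding, with placeholder endpoints:

import java.util.ArrayDeque;
import java.util.Deque;
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Condensed "lbbroker" from the ZeroMQ guide: workers (REQ sockets) send a
// READY frame when idle, so a busy worker is never handed more work.
public class LbBroker {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket frontend = ctx.createSocket(SocketType.ROUTER);
            ZMQ.Socket backend = ctx.createSocket(SocketType.ROUTER);
            frontend.bind("tcp://*:5559");  // clients (REQ) connect here
            backend.bind("tcp://*:5560");   // workers (REQ) connect here

            Deque<byte[]> readyWorkers = new ArrayDeque<>();
            ZMQ.Poller both = ctx.createPoller(2);
            both.register(backend, ZMQ.Poller.POLLIN);
            both.register(frontend, ZMQ.Poller.POLLIN);
            ZMQ.Poller backendOnly = ctx.createPoller(1);
            backendOnly.register(backend, ZMQ.Poller.POLLIN);

            while (!Thread.currentThread().isInterrupted()) {
                // Ignore the frontend while no worker is free: requests
                // queue in ZMQ rather than piling onto a busy worker.
                ZMQ.Poller poller = readyWorkers.isEmpty() ? backendOnly : both;
                if (poller.poll() < 0) break;

                if (poller.pollin(0)) {            // worker READY or reply
                    byte[] workerId = backend.recv(0);
                    backend.recv(0);               // empty delimiter frame
                    byte[] head = backend.recv(0); // "READY" or client id
                    readyWorkers.addLast(workerId);
                    if (!"READY".equals(new String(head, ZMQ.CHARSET))) {
                        backend.recv(0);           // empty delimiter frame
                        byte[] reply = backend.recv(0);
                        frontend.send(head, ZMQ.SNDMORE);  // route to client
                        frontend.send("", ZMQ.SNDMORE);
                        frontend.send(reply, 0);
                    }
                }
                if (poller == both && poller.pollin(1)) {  // client request
                    byte[] clientId = frontend.recv(0);
                    frontend.recv(0);              // empty delimiter frame
                    byte[] request = frontend.recv(0);
                    byte[] workerId = readyWorkers.pollFirst();
                    backend.send(workerId, ZMQ.SNDMORE);
                    backend.send("", ZMQ.SNDMORE);
                    backend.send(clientId, ZMQ.SNDMORE);
                    backend.send("", ZMQ.SNDMORE);
                    backend.send(request, 0);
                }
            }
        }
    }
}

The matching worker uses a REQ socket, sends "READY" once at startup, and then replies to each request it receives; each reply doubles as the next readiness signal, so a worker at capacity simply isn't in the ready queue.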

Exactly how many users can the BlazeDS messaging service support? What do we need to do to support more users (polling)?

I designed an online trading application, which uses BlazeDS and Jetty.
In it I used AMF long-polling as the channel, with the following parameters.
Here is the problem: each message is not reaching all the users who are connected; messages are missed by some users (300 receiving out of 600).
What do we need to do to provide instant messages to everyone online?
Please help me with this one.
Your question is too generic; it's not possible to give an answer, because it depends on too many things: the network, the size of the messages, your system architecture, etc. My suggestion is to invest heavily in reading the BlazeDS developer guide and to turn the debug messages on (there is a lot of useful information displayed by BlazeDS). It would also help to study the BlazeDS source code.
In the case of AMF long-polling, the request is parked on the server, and if too many requests are parked at a time, they will consume all the threads available to the server, and the next client won't be able to connect.
In your case I am assuming the message size is not very big, so the solution can be one of the following:
Increase the number of available threads. For that you can have multiple server instances and distribute your clients over them.
Make use of LCDS.
You don't get that problem in LCDS, as it makes use of NIO endpoints that don't block a thread. I have come to know that this thread restriction is also not a problem with Servlet 3.0, and in that case you can support more clients with BlazeDS itself. You can check more about it HERE.
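On the "increase the number of available threads" option: since the stack here is Jetty and each parked long-poll holds a thread on a blocking connector, raising the thread pool ceiling buys headroom. A sketch for embedded Jetty; in a standalone deployment the same settings would normally live in jetty.xml, and the numbers are illustrative only.

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

// Raise Jetty's thread pool so parked long-poll requests cannot exhaust it.
public class JettyBoot {
    public static void main(String[] args) throws Exception {
        QueuedThreadPool pool = new QueuedThreadPool();
        pool.setMaxThreads(2000); // must exceed peak concurrently parked polls
        pool.setMinThreads(64);

        Server server = new Server(pool);
        // ... add connectors and the BlazeDS MessageBrokerServlet here ...
        server.start();
        server.join();
    }
}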

what are some good "load balancing issues" to know?

Hey there guys, I am a recent grad, and looking at a couple of jobs I am applying for, I see that I need to know things like runtime complexity (straightforward enough), caching (memcached!), and load balancing issues (no idea on this!).
So, what kind of load balancing issues and solutions should I try to learn about, or at least be vaguely familiar with, for .NET or Java jobs?
Googling around gives me things like network load balancing, but wouldn't that usually not be administered by a software developer?
One thing I can think of is session management. By default, whenever you get a session ID, that session ID points to some in-memory data on the server. However, when you use load balancing, there are multiple servers. What happens when data is stored in the session on machine 1, but for the next request the user is redirected to machine 2? Their session data would be lost.
So, you'll have to make sure that either the user gets back to the same machine for every subsequent request (a 'sticky connection', also called session affinity) or you do not use in-proc session state but out-of-proc session state, where session data is stored in, for example, a database.
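As a toy illustration of the sticky approach, routing on a hash of the session ID keeps each session on one machine; the server list is made up.

import java.util.List;

// Hash the session ID so a given session always lands on the same server.
class StickyRouter {
    private final List<String> servers;
    StickyRouter(List<String> servers) { this.servers = servers; }

    String route(String sessionId) {
        // Math.floorMod avoids a negative index for negative hash codes.
        return servers.get(Math.floorMod(sessionId.hashCode(), servers.size()));
    }
}

Note that this naive modulo scheme reshuffles most sessions whenever the server list changes; real balancers typically pin via a cookie or use consistent hashing instead.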
There is a concept of load distribution, where requests are sprayed across a number of servers (usually with session affinity). Here there is no feedback on how busy any particular server may be; we just rely on statistical sharing of the load. You could view the WebSphere HTTP plugin in WAS ND as doing this. It actually works pretty well, even for substantial web sites.
Load balancing tries to be cleverer than that: some feedback on the relative load of the servers determines where new requests go (even then, session affinity tends to be treated as a higher priority than balancing the load). The WebSphere On Demand Router that was originally delivered in XD does this. If you read this article you will see the kind of algorithms used.
You can achieve balancing with network spraying devices; they can consult 'agents' running on the servers which give feedback to the sprayer as a basis for deciding where a request should go. Hence even this hardware-based approach can have a software element. See the Dynamic Feedback Protocol.
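And as a contrast to blind spraying, a toy feedback-based picker that chooses the backend with the fewest requests in flight; the names and the load metric are illustrative, not from any particular product.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Pick the backend reporting the lowest in-flight count instead of
// spraying round-robin with no feedback.
class Backend {
    final String host;
    final AtomicInteger inFlight = new AtomicInteger();
    Backend(String host) { this.host = host; }
}

class LeastLoadBalancer {
    private final List<Backend> backends;
    LeastLoadBalancer(List<Backend> backends) { this.backends = backends; }

    Backend pick() {
        Backend best = backends.get(0);
        for (Backend b : backends) {
            if (b.inFlight.get() < best.inFlight.get()) best = b;
        }
        best.inFlight.incrementAndGet(); // caller decrements when done
        return best;
    }
}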