Backplane vs sticky load balancer - Redis

I am developing a SignalR application. There will be multiple instances of my application running on different servers behind a load balancer. I read about the backplane and found that it mainly serves to handle server failure and requests hopping between servers (there may be other benefits).
Please consider the scenario below and suggest whether I still need a backplane.
I am using sticky load balancing (i.e. all subsequent requests from a client go to the same server), so there is no chance of a request hopping between servers in the normal case.
How I handle the server-down scenario: when a server goes down, the client tries to reconnect and gets a "404 Not Found" error. At that point the client starts a new connection, and it works.

The main reason for having a backplane when developing a SignalR application comes from the following scenario:
Let's say you have 2 web servers hosting your application, serverA and serverB.
You have 2 clients connecting to your application: client1, served by serverA, and client2, served by serverB.
A fair assumption when developing a SignalR application is that you want these 2 clients to communicate with one another. So client1 sends a message to client2.
The moment client1 sends a message, its request is completed by serverA. But serverA keeps its mapping of connected users in memory. It looks for client2, but client2 lives in the memory of serverB, so the message will never get there.
By using a backplane, essentially every message that comes in on one server is broadcast to all the other servers.
One solution is to forward messages between servers, using a component called a backplane. With a backplane enabled, each application instance sends messages to the backplane, and the backplane forwards them to the other application instances.
Taken from the SignalR documentation, Introduction to Scaleout in SignalR.
Be sure to also check the SignalR documentation on using Redis as a backplane.
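For reference, enabling the Redis backplane in classic ASP.NET SignalR is essentially a one-liner in the OWIN startup class. A minimal sketch, assuming the Microsoft.AspNet.SignalR.Redis NuGet package; the server name, port, and app name below are placeholders:

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Route every SignalR message through Redis so all web servers see it.
            // "redis-server", 6379, and "MyApp" are placeholder values.
            GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "", "MyApp");
            app.MapSignalR();
        }
    }

With this in place, a message received on serverA is published to Redis and replayed on serverB, so serverB can deliver it to client2.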
Hope this helps. Best of luck!

Related

Websocket connection over Microsoft network load balancer (NLB)

I have an on premise NLB with 4 nodes, and on each of those nodes there is an app that communicates with the client over websockets using SignalR.
I have noticed through testing that at some point the client stops receiving messages over websocket, there are no errors, the socket just stops receiving messages.
My suspicion is that the client connects to node 1 through the NLB, but when node 1 has too much traffic the NLB switches the client to node 2, and since the client isn't registered on node 2 the messages stop coming and the client doesn't notice the change.
My questions are:
Am I correct to assume that, in order to keep the default NLB configuration, I would have to add a Redis backplane that keeps the list of all connections and allows each node to communicate with the client regardless of which node first received the connection? (Microsoft's suggested solution: https://learn.microsoft.com/en-us/aspnet/core/signalr/scale?view=aspnetcore-6.0)
Is there a way to change the NLB configuration so that specific apps behave differently from others? For example, can I set up the NLB so that all of the SignalR websocket traffic goes to a specific node, while the rest of the apps on the servers behave as they do with the default configuration?
If I were to move to the cloud, would I face the same issue, or is this something that happens only with the Microsoft on-premises NLB? Could it also happen in Kubernetes or AWS? Does the cloud NLB use L7 while the on-premises one uses L3?
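For what it's worth, the backplane from the linked docs is wired up like this in ASP.NET Core. A minimal sketch assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package; the connection string is a placeholder and ChatHub is an illustrative hub name:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.SignalR;
    using Microsoft.Extensions.DependencyInjection;

    var builder = WebApplication.CreateBuilder(args);

    // Relay SignalR messages through Redis so any node can reach any client,
    // even after the NLB moves the client's connection to another node.
    builder.Services.AddSignalR()
        .AddStackExchangeRedis("redis-server:6379");   // placeholder connection string

    var app = builder.Build();
    app.MapHub<ChatHub>("/chat");                      // illustrative hub

    app.Run();

    public class ChatHub : Hub { }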

How to make server-sent-event (SSE) messages work in a multi-server environment

I have a question on how to make SSE work in a multi-server environment.
In the UI, there are two steps:
1. Subscribe to the event stream:

    // Open the SSE stream and show the connection state (jQuery for the DOM update).
    var source = new EventSource('http://localhost:3000/stream');
    source.addEventListener('open', function (e) {
        $("#state").text("Connected");
    }, false);

2. The user in the UI can post to an API to update data; after the post, the server sends an event to the UI to update it.
In a single-server environment this works perfectly fine, no problem at all.
But in a multi-server environment this won't work. For example, I have two server instances, and the UI subscribed to server 1, so server 1 remembers the connection; but the data update happens on server 2, and when the data changes there is no SSE connection on server 2. In this scenario, how can server 2 send an SSE event to the UI?
In order to make SSE work in a multi-server environment, do we need some kind of storage solution to save the connection information, so that any server instance can accurately send SSE events to the UI?
Let me clarify this more:
Yes, both server 1 and server 2 are behind a load balancer; they do not have to have the same URL. The UI is a pure frontend application, and could even be a mobile app. So if the UI sends an EventSource request through the load balancer to server 1, only that instance can use the connection to send events back to the UI, right? But if we have multiple instances of server 1, that means any server 1 instance other than the current one can NOT send events back to the UI.
I believe this is a limitation of SSE unless the connection can be shared among all the instances. But how?
Thanks
If you have two servers, with different URLs, make one SSE connection (from each client) to each server.
Be aware of CORS restrictions, i.e. the same origin policy. (It works identically to xhr2 CORS, so fairly easy to google; my book also covers it in detail, chapter 9.)
If you have two servers behind a load balancer, presenting a single URL to the clients, then you just have to make sure the load balancer is configured correctly, i.e. to always pass that socket through to the same back-end server. If a back-end server dies and needs replacing, the load balancer should close the SSE socket; the client will then auto-reconnect and get a new back-end server.
The multiple servers behind the load balancer should either have their own data-push socket connections to a master data source, or should all poll the master data source.
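To make the "master data source" idea concrete, here is a minimal sketch of one way to share updates across instances, using Redis pub/sub in an ASP.NET Core minimal API. The channel name, paths, and the StackExchange.Redis usage are all assumptions rather than anything from the original post (which appears to use Node), but the pattern is the same in any stack:

    using System.Collections.Concurrent;
    using StackExchange.Redis;

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Each instance keeps only ITS OWN open SSE connections in memory.
    var clients = new ConcurrentDictionary<Guid, HttpResponse>();

    // Every instance subscribes to the same Redis channel...
    var redis = await ConnectionMultiplexer.ConnectAsync("localhost");
    await redis.GetSubscriber().SubscribeAsync("updates", (_, message) =>
    {
        // ...and relays each published message to its local SSE clients
        // (fire-and-forget here for brevity).
        foreach (var response in clients.Values)
        {
            _ = response.WriteAsync($"data: {message}\n\n");
        }
    });

    // The stream endpoint the UI's EventSource connects to.
    app.MapGet("/stream", async context =>
    {
        context.Response.ContentType = "text/event-stream";
        var id = Guid.NewGuid();
        clients[id] = context.Response;
        try { await Task.Delay(Timeout.Infinite, context.RequestAborted); }
        finally { clients.TryRemove(id, out _); }
    });

    // Whichever instance handles the POST publishes to Redis,
    // so the instance actually holding the SSE connection also sees the change.
    app.MapPost("/api/update", async () =>
    {
        await redis.GetDatabase().PublishAsync("updates", "data-changed");
        return Results.Ok();
    });

    app.Run();

The key point is that the POST handler never needs a direct reference to the SSE connection; it only publishes to the shared channel, and whichever instance holds the connection relays the event.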

Best Practice for setting up RabbitMQ cluster in production with NServiceBus

Currently we have 2 load-balanced web servers. We are just starting to expose some functionality over NSB. If I create two "app" servers, would I create a cluster between all 4 servers, or should I create 2 clusters?
i.e.
Cluster1: Web Server A, App Server A
Cluster2: Web Server B, App Server B
If it is one cluster, how do I keep a published message from being handled more than once by the same logical subscriber when that subscriber is deployed to both app server A and app server B?
Is message durability the only reason I would put RabbitMQ on the web servers (assuming none of the app services run on the web servers as well)? In that case my assumption is that I would then be using cluster mirroring to get the message to the app server. Is this correct?
Endpoints vs Servers
NServiceBus uses the concept of endpoints. An endpoint is tied to a queue on which it receives messages. If this endpoint is scaled out for either high availability or performance, then you still have one queue (with RabbitMQ). So if you have an instance running on server A and another on server B, they both (with RabbitMQ) get their messages from the same queue; see the sketch below.
I wouldn't think in terms of app servers, but in terms of endpoints and their non-functional requirements regarding deployment, availability, and performance.
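A minimal sketch of that scale-out behaviour, assuming the classic NServiceBus.RabbitMQ API (the endpoint name and connection string are placeholders): starting this same endpoint on server A and server B makes both instances compete for messages from the single "Sales" queue.

    using NServiceBus;

    // Two processes that start an endpoint with the same name share one
    // "Sales" queue, so RabbitMQ delivers each message to only one instance.
    var endpointConfiguration = new EndpointConfiguration("Sales");

    var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
    transport.ConnectionString("host=localhost");   // placeholder
    transport.UseConventionalRoutingTopology();

    endpointConfiguration.EnableInstallers();

    var endpointInstance = await Endpoint.Start(endpointConfiguration);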
Availability vs Performance vs Deployment
It is not required to host all endpoints on both server A and server B. You can also run services X and Y on server A and services U and V on server B. You then scale out for performance but not for availability; however, availability is already less of an issue because of the async nature of messaging. This can make deployment easier.
Pubsub vs Request Response
If the same logical endpoint has multiple instances deployed, then it should not matter which instance processes an event. If it does matter, then it probably isn't pub/sub but async request/response. NServiceBus handles the latter by creating a queue per instance (with RabbitMQ) on which the response can be received when that response requires affinity to the requesting instance.
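As an illustration of the pub/sub side (the event and handler names are made up), a subscriber looks like this; whichever instance of the logical endpoint dequeues the event handles it, and it is handled only once:

    using System.Threading.Tasks;
    using NServiceBus;

    // Illustrative event, published by some other endpoint.
    public class OrderPlaced : IEvent
    {
        public string OrderId { get; set; }
    }

    public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
    {
        public Task Handle(OrderPlaced message, IMessageHandlerContext context)
        {
            // Process the event; any one instance of this endpoint may run this.
            return Task.CompletedTask;
        }
    }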
Topology
You have:
- a load-balanced web farm cluster
- a load-balanced RabbitMQ cluster
- NServiceBus endpoints, deployed as either:
  - highly available multiple instances on different machines
  - endpoints spread over various machines (could even be a machine per endpoint)
  - a combination of both
Infrastructure
You could choose to run the RabbitMQ cluster on the same infrastructure as your web farm or on separate infrastructure. It depends on your requirements and available resources. If the web farm and RabbitMQ cluster are separate, then you can more easily scale them out independently.

SignalR Hub Acts as TCP Client

Here is my situation:
I have a Windows service that runs constantly and receives data from connected hardware on a TCP/IP port. This data needs to be pushed to an ASP.NET website for real-time display, as well as stored in a database. The Windows service, ASP.NET website, and database are all located on the same server.
For the Windows service: I can't use WCF because it only supports net.tcp, which is not going to work with raw socket communication over TCP. So I have to use plain TCP sockets for the server/client communication.
For real-time updates to the website: I am planning to use the SignalR library. I can create a hub which sends new data to clients whenever it becomes available.
My problem is: what's the best way for the SignalR hub to retrieve data from the TCP server/client located in the Windows service? One simple solution is to first store the data in the database and read it from there, but I am not sure whether that will slow down the whole system, since data arrives every second.
Please advise on the best solution for this problem.
Thanks.
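One common pattern (a sketch of one option, not necessarily the best answer; the hub and method names are assumptions) is for the Windows service to act as a SignalR .NET client, pushing each TCP reading into the hub, which then broadcasts it to the browsers while the database write happens separately:

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR.Client;

    // Lives inside the Windows service: connect once, then push readings as they arrive.
    class DataPusher
    {
        private readonly HubConnection _connection;
        private readonly IHubProxy _hub;

        public DataPusher(string siteUrl)                    // e.g. "http://localhost/"
        {
            _connection = new HubConnection(siteUrl);
            _hub = _connection.CreateHubProxy("DataHub");    // hub name is an assumption
        }

        public Task ConnectAsync() => _connection.Start();

        // Call from the TCP receive loop for every new reading.
        public Task PushAsync(string payload) =>
            _hub.Invoke("BroadcastData", payload);           // server method is an assumption
    }

The hub's BroadcastData method would simply call Clients.All to fan the data out, so nothing has to round-trip through the database just for display.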

WCF and duplex communication

I have a lot of client programs and one service.
These client programs communicate with the server over an HTTP channel with WCF.
The clients have dynamic IPs.
They are online 24 hours a day.
I need the following:
The server should notify all the clients at a 3-minute interval. If a client is new (just started), it should be notified immediately.
But because the clients have dynamic IPs, run 24 hours a day, and sometimes have unstable connections, is it a good idea to use WCF duplex?
What happens when the connection goes down? Will it automatically recover?
Is it a good idea to use remote MSMQ for this type of notification?
Regards,
WCF duplex is very resource-hungry, and as a rule of thumb you should not use it with more than about 10 clients. There is a lot of overhead involved with duplex channels. Also, there is no auto-recovery.
If you know the interval is 3 minutes, and you want a client to get information as soon as it starts, why not let the client poll the server for the information?
When the connection goes down, the callback will throw an exception and the channel will close.
I am not sure MSMQ will work for you unless each client creates an MSMQ queue and you push messages to each of them. Again, with an unreliable connection it will not help. I don't think you can "push" the data if you lose the connection to a client, or if the client goes offline or changes IP without notifying your system.
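A minimal sketch of the polling alternative (the endpoint URL is hypothetical; the 3-minute interval comes from the question):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class NotificationPoller
    {
        private static readonly HttpClient Http = new HttpClient();

        static async Task Main()
        {
            // Poll immediately on startup (covers the "new client" case),
            // then repeat every 3 minutes.
            while (true)
            {
                try
                {
                    // Hypothetical endpoint exposing the latest notification.
                    var payload = await Http.GetStringAsync("http://server/notifications/latest");
                    Console.WriteLine($"Received: {payload}");
                }
                catch (HttpRequestException ex)
                {
                    // An unstable connection just means this poll is skipped;
                    // the next attempt recovers automatically.
                    Console.WriteLine($"Poll failed: {ex.Message}");
                }
                await Task.Delay(TimeSpan.FromMinutes(3));
            }
        }
    }

Polling inverts the connectivity problem: the client with the dynamic IP always initiates, so the server never needs a route back to it.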