We are working on a SignalR client that needs to connect both to a local Hub, to which other local users connect, and to a cloud Hub. The connection to the local Hub will receive messages from the local users and, after applying some logic, retransmit them to the cloud Hub clients. What would be the right way to implement this functionality? Thanks.
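For context, what we have in mind is roughly the following (a minimal sketch, assuming ASP.NET Core SignalR; the hub URLs, the "LocalMessage" and "Relay" method names, and the string payload are all hypothetical placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

class RelayClient
{
    static async Task Main()
    {
        var localHub = new HubConnectionBuilder()
            .WithUrl("http://localhost:5000/localHub")   // hypothetical local hub
            .WithAutomaticReconnect()
            .Build();

        var cloudHub = new HubConnectionBuilder()
            .WithUrl("https://example.cloud/hub")        // hypothetical cloud hub
            .WithAutomaticReconnect()
            .Build();

        // When a local user sends a message, apply some logic and forward it to the cloud hub.
        localHub.On<string>("LocalMessage", async payload =>
        {
            var transformed = payload.ToUpperInvariant(); // placeholder for the real logic
            await cloudHub.InvokeAsync("Relay", transformed);
        });

        await cloudHub.StartAsync();
        await localHub.StartAsync();

        Console.ReadLine(); // keep the process alive
    }
}
```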
I have an on-premises NLB with 4 nodes, and on each of those nodes there is an app that communicates with the client over websockets using SignalR.
I have noticed through testing that at some point the client stops receiving messages over the websocket; there are no errors, the socket just stops receiving messages.
My suspicion is that the client connects to node 1 through the NLB, but when node 1 has too much traffic the NLB switches the client to node 2, and since the client isn't registered on node 2 the messages stop coming and the client doesn't notice the change.
My questions are:
Am I correct to assume that, in order to keep the default NLB configuration, I would have to add a Redis backplane that keeps the list of all connections and allows each node to communicate with a client regardless of which node first received the connection? (Microsoft's suggested solution: https://learn.microsoft.com/en-us/aspnet/core/signalr/scale?view=aspnetcore-6.0 — a minimal configuration sketch follows this list.)
Is there a way to change the NLB configuration so that specific apps behave differently than others? For example, can I set up the NLB so that all of the SignalR websocket traffic goes to a specific node, while the rest of the apps on the servers behave as they do with the default configuration?
If I were to move to the cloud, would I face the same issue, or is this something that happens only with the Microsoft on-premises NLB, or can it also happen in Kubernetes or AWS? Does the cloud load balancer operate at L7 while the on-premises one operates at L3?
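For reference, the Redis backplane from the linked doc would be wired up roughly like this (a minimal sketch, assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package; the Redis endpoint and ChatHub are placeholders):

```csharp
// Program.cs, .NET 6 minimal hosting model.
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);

// Every node publishes its SignalR messages to Redis, so any node can reach any client,
// regardless of which node first accepted the connection.
builder.Services.AddSignalR()
    .AddStackExchangeRedis("redis-server:6379");

var app = builder.Build();
app.MapHub<ChatHub>("/chat");
app.Run();

public class ChatHub : Hub { }
```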
I'm trying to develop a high-frequency message dispatching application, and I'm observing the behavior of the SDK when reading messages from the ModuleClient connected to the edgeHub using the "MQTT on TCP Only" transport settings.
It seems that there is no way to read multiple messages at a time (in a batch) from the edgeHub (I think this is related to the underlying protocol).
So the result is that one must sequentially read a message, process it, and send the ack to the hub.
Is there a way to process multiple messages at a time without waiting for the previous one to finish processing?
Is this "limitation" tied to the underlying protocol?
I'm using Microsoft.Azure.Devices.Client 1.37.2 in a .NET Core 3.1 application deployed on Azure Kubernetes Service (AKS) using the Azure IoT Edge on Kubernetes workload.
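For context, the per-message pattern I'm describing looks roughly like this (a sketch; the input name "input1" is a placeholder from my deployment):

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class Program
{
    static async Task Main()
    {
        // Connect to the edgeHub over "MQTT on TCP Only", as described above.
        ModuleClient moduleClient = await ModuleClient.CreateFromEnvironmentAsync(TransportType.Mqtt_Tcp_Only);
        await moduleClient.OpenAsync();

        // The handler is invoked one message at a time; returning Completed acks the message.
        await moduleClient.SetInputMessageHandlerAsync("input1", (message, userContext) =>
        {
            string body = Encoding.UTF8.GetString(message.GetBytes());
            // ... process the message here ...
            return Task.FromResult(MessageResponse.Completed); // ack to the edgeHub
        }, null);

        await Task.Delay(-1); // keep the module running
    }
}
```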
You are correct: you cannot batch messages over the MQTT protocol. This is a limitation of IoT Hub when using MQTT.
IoT Hub only supports batch send over AMQP and HTTPS at the moment. The MQTT implementation loops over the batch and sends each message individually.
Ref: https://github.com/Azure/azure-iot-sdk-csharp
I suggest you open a new feature request if you need IoT Hub to support batching when connecting over MQTT: https://feedback.azure.com/forums/321918-azure-iot-hub-dps-sdks
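If you do need a true batch send, a minimal sketch of the AMQP path looks like this (the payloads are placeholders; over MQTT this same call would just loop internally and send each message individually, as noted above):

```csharp
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class BatchSender
{
    static async Task Main()
    {
        // Switch the module to AMQP; only AMQP/HTTPS honor a real batch send.
        ModuleClient client = await ModuleClient.CreateFromEnvironmentAsync(TransportType.Amqp_Tcp_Only);

        var batch = new List<Message>
        {
            new Message(Encoding.UTF8.GetBytes("{\"reading\":1}")),
            new Message(Encoding.UTF8.GetBytes("{\"reading\":2}"))
        };

        // Sends the whole batch in one operation over AMQP.
        await client.SendEventBatchAsync(batch);
    }
}
```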
Using the azure-iot-sdk for Python, I have a program that opens a connection to the IoT Hub and continually listens for direct methods, using the MQTT protocol. This is working as expected. I have a second Python program, invoked from cron hourly, that connects to the IoT Hub and updates the device twin for my device. Again, this uses MQTT. Everything is working fine.
However, I've come across a statement in the documentation that a device can only have one MQTT connection at a time, and that a second connection will cause the first to drop. I'm not seeing this happen; however, is what I'm doing unsupported?
Should I have a single program doing both tasks and sharing a single connection?
Yes, that is correct: you can't have more than one connection to the IoT Hub with the same device ID. Eventually you will see inconsistent behavior, and that scenario is unsupported. You should use a single program with a unique device ID doing both tasks.
Depending on the scenario, you may want to consider using an iothubowner connection string to do service-side operations such as managing your IoT hub and, optionally, sending messages, scheduling jobs, invoking direct methods, or sending desired property updates to your IoT devices or modules.
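To illustrate the single-program approach (your code is Python, but the shape is the same; this C# sketch uses a placeholder connection string and keeps both responsibilities on one MQTT connection):

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Shared;

class SingleConnectionDevice
{
    static async Task Main()
    {
        // One MQTT connection shared by both responsibilities.
        var client = DeviceClient.CreateFromConnectionString(
            "HostName=...;DeviceId=...;SharedAccessKey=...", TransportType.Mqtt);

        // 1) Listen for direct methods.
        await client.SetMethodDefaultHandlerAsync((request, context) =>
        {
            var payload = Encoding.UTF8.GetBytes("{\"status\":\"ok\"}");
            return Task.FromResult(new MethodResponse(payload, 200));
        }, null);

        // 2) Periodically update reported twin properties on the same connection
        //    (replacing the hourly cron job).
        while (true)
        {
            var reported = new TwinCollection();
            reported["lastHeartbeatUtc"] = DateTime.UtcNow;
            await client.UpdateReportedPropertiesAsync(reported);
            await Task.Delay(TimeSpan.FromHours(1));
        }
    }
}
```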
I am developing a SignalR application. There will be multiple instances of my application running on different servers behind a load balancer. I read about the backplane and found that it mainly addresses server failure and requests hopping between multiple servers (there might be other benefits).
Please consider the scenario below and suggest whether I still need a backplane.
I am using sticky load balancing (i.e. all subsequent requests from a client go to the same server), so there is no chance of a request hopping between servers in the normal case.
How I handled the server-down scenario: when a server goes down, the client tries to reconnect and gets a "404 Not Found" error. At this point the client starts a new connection, and it works.
The main reason for having a backplane when developing a SignalR application comes from the following scenario:
let's say you have 2 web servers hosting your application, serverA and serverB
you have 2 clients connecting to your application, client1 who is served by serverA and client2 who is served by serverB
A good assumption when developing a SignalR application is that you want these 2 clients to communicate with one another. So client1 sends a message to client2.
The moment client1 sends a message, his request is completed by serverA. But serverA only keeps a mapping of its own connected users in memory. It looks for client2, but client2 is kept in the memory of serverB, so the message will never get there.
By using a backplane, essentially every message that comes in on one server is broadcast to all other servers.
One solution is to forward messages between servers, using a component called a backplane. With a backplane enabled, each application instance sends messages to the backplane, and the backplane forwards them to the other application instances.
Taken from SignalR Introduction to scaleout
Be sure to check this backplane with Redis from the SignalR documentation.
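For classic ASP.NET SignalR, the Redis backplane wiring from that documentation looks roughly like this (a sketch; the server, port, password, and event key are placeholders):

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every message published on one server is forwarded to the others via Redis.
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "password", "MyApp");
        app.MapSignalR();
    }
}
```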
Hope this helps. Best of luck!
Here is my situation:
I have a Windows service that constantly runs and receives data from connected hardware on a TCP/IP port. This data needs to be pushed to an ASP.NET website for real-time display, as well as stored in a database. The Windows service, ASP.NET website, and database are all located on the same server.
For the Windows service: I can't use WCF because it only supports netTCP, which is not going to work with raw socket communication over TCP. So I have to use TCP socket communication for the TCP server/client.
For real-time updates to the website: I am planning to use the SignalR library. I can create a hub which should send new data to clients whenever it becomes available.
My problem is: what's the best way for the SignalR hub to retrieve data from the TCP server/client located in the Windows service? One simple solution is to first store the data in the database and retrieve it from there, but I am not sure whether that will slow down the whole system, since data is received every second.
Please advise on the best solution for this problem.
Thanks.
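One possible approach (a sketch only; the hub name "DataHub", the method name "BroadcastData", and the URL are hypothetical): let the Windows service act as a SignalR client and push each TCP reading to the website's hub, which can then broadcast it to browsers and queue it for database insertion.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class ServicePusher
{
    private HubConnection _connection;
    private IHubProxy _hub;

    public async Task StartAsync()
    {
        // The website is on the same server; the client appends "/signalr" to this base URL.
        _connection = new HubConnection("http://localhost/");
        _hub = _connection.CreateHubProxy("DataHub");
        await _connection.Start();
    }

    // Call this from the TCP receive loop each time a reading arrives.
    public Task PushAsync(string reading)
    {
        return _hub.Invoke("BroadcastData", reading);
    }
}
```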