How to open a connection to Redis from a SignalR hub

I am using a Redis server as the backplane for my SignalR application.
I fully understand how to read from and write to the Redis server, but what is the best practice for connecting to it from my hub class?
For example, I can reference the server with StackExchange.Redis as shown below, but where and when should I call this in my hub class? Obviously I want to persist this connection; I'm totally stuck on this right now.
Private redis As StackExchange.Redis.ConnectionMultiplexer = ConnectionMultiplexer.Connect("127.0.0.1:6379")
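
One common way to persist the connection (a sketch, not the only option) is to keep a single shared ConnectionMultiplexer for the whole application, created lazily, and let every hub instance reuse it instead of connecting inside the hub. Below is a minimal C# sketch of that idea, assuming ASP.NET SignalR 2.x and StackExchange.Redis; RedisConnection, ChatHub, Send and broadcastMessage are placeholder names, and the endpoint is taken from the question (the same shape works in VB.NET):

using System;
using Microsoft.AspNet.SignalR;
using StackExchange.Redis;

public static class RedisConnection
{
    // Lazy<T> makes sure Connect runs only once, on first use; the
    // multiplexer is then reused for the lifetime of the application.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("127.0.0.1:6379"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

public class ChatHub : Hub
{
    public void Send(string message)
    {
        // Hub instances are created per invocation, so do not connect here;
        // just grab the shared multiplexer and ask it for a database.
        IDatabase db = RedisConnection.Connection.GetDatabase();
        db.StringSet("lastMessage", message);

        Clients.All.broadcastMessage(message);
    }
}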

Related

Socket.IO multiple nodes with clustered Redis adapter, is it possible?

Yesterday I built a realtime app with Socket.IO running on multiple nodes. Following the reference, I use the Redis adapter, which is configured with an IP address and port.
The nodes sit behind an nginx load balancer and share one Redis adapter, and it is running well.
My question is: if hundreds of clients hit the app, and the app hits a single Redis adapter, I think this could slow the server down. Can I load-balance/cluster the Redis adapter too?
I tried to find the answer and found ioredis, but I'm not sure.

Scoped vs singleton Ignite client node in .NET Web API

I am planning to introduce Apache Ignite into an existing, older .NET Web API project and use it as a key/value store for detecting duplicate requests sent to the load-balanced API.
I would like to add as little overhead as possible to each request.
As I understand it, the client node communicates with the server over TCP.
My current plan is to create a singleton object that establishes the connection to the remote cache and register it in my DI container.
Is it OK to leave the node running and the TCP connection open, or should I make the Ignite object scoped so that it starts and closes on each request/response cycle?
Keep it open, as a singleton:
The Ignite object is thread safe.
It is expensive to create and connect to the cluster (in the case of the classic "thick" client).
There is also a "thin" client, which is very lightweight and can be created and disposed often. Note that the thin client is also thread safe.
Also, you can try to use REST:
https://apacheignite.readme.io/docs/rest-api
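
If the thin-client route is taken, here is a rough sketch of what a process-wide singleton plus duplicate detection could look like (Apache.Ignite.Core thin client; the endpoint, cache name and helper class names are placeholders, not part of Ignite):

using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;
using Apache.Ignite.Core.Client.Cache;

public static class IgniteClientHolder
{
    // One thin client per process, created lazily and kept open.
    private static readonly Lazy<IIgniteClient> LazyClient =
        new Lazy<IIgniteClient>(() => Ignition.StartClient(new IgniteClientConfiguration
        {
            Endpoints = new[] { "ignite-server:10800" }   // default thin-client port
        }));

    public static IIgniteClient Client => LazyClient.Value;
}

public static class DuplicateRequestGuard
{
    // Returns true if this request id has already been seen on any API node.
    public static bool IsDuplicate(string requestId)
    {
        ICacheClient<string, bool> cache =
            IgniteClientHolder.Client.GetOrCreateCache<string, bool>("seen-requests");

        // PutIfAbsent is atomic cluster-wide: only the first caller stores the
        // entry (and gets true back); every later caller is a duplicate.
        return !cache.PutIfAbsent(requestId, true);
    }
}

Register IgniteClientHolder.Client (or the holder itself) as a singleton in the DI container so the TCP connection stays open across requests.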

Failover and client timeout

I am using ServiceStack 5.0.2 with Redis Sentinel (3 + 3) and having issues in case of a failover: commands issued during or after a failover fail with a timeout.
I have come up with the idea of implementing a retry pattern via a custom IRedisClient, but there is probably a better strategy to employ in this case.
The answer given in the post "How does ServiceStack PooledRedisClientManager failover work?" does not seem to be the right way to go.
Thank you,
Redis clients wrap a TCP connection to a Redis server. A Redis client that was connected to the instance that failed over will fail, but any new Redis clients retrieved from the pool after the failover will be connected to the newly promoted instance.
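
In code, that answer boils down to: dispose the failed client and retry the operation with a fresh client from the pool, which will point at the new master. A rough sketch, assuming a RedisSentinel-backed pool; the sentinel hosts, master name, retry count and wrapper class are placeholders:

using System;
using System.Threading;
using ServiceStack.Redis;

public class RedisRetryExample
{
    private readonly IRedisClientsManager _manager;

    public RedisRetryExample()
    {
        var sentinel = new RedisSentinel(
            new[] { "sentinel1:26379", "sentinel2:26379", "sentinel3:26379" },
            "mymaster");
        _manager = sentinel.Start();
    }

    public T Execute<T>(Func<IRedisClient, T> action, int retries = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                // Each attempt takes a client from the pool; after a failover
                // the pool hands out clients connected to the new master.
                using (var client = _manager.GetClient())
                {
                    return action(client);
                }
            }
            catch (RedisException) when (attempt < retries)
            {
                Thread.Sleep(200 * attempt);   // simple backoff before retrying
            }
        }
    }
}

Usage would be something like: new RedisRetryExample().Execute(c => c.GetValue("some-key")).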

Backplane vs Sticky Load balancer

I am developing a SignalR application. There will be multiple instances of my application running on different servers behind a load balancer. I read about the backplane and found that it mainly covers server failure and handles requests hopping between multiple servers (there may be other benefits).
Please consider the scenario below and suggest whether I still need a backplane.
I am using sticky load balancing (i.e. all subsequent requests from a client go to the same server), so there is no chance of a request hopping to another server in the normal case.
How I handle the server-down scenario: when a server goes down, the client tries to reconnect, gets a "404 Not Found" error, starts a new connection, and it works.
The main reason for having a backplane when developing a SignalR application comes from the following scenario:
let's say you have 2 web servers hosting your application, serverA and serverB
you have 2 clients connecting to your application, client1 who is served by serverA and client2 who is served by serverB
A good assumption when developing a SignalR application is that you want these 2 clients to communicate with one another. So client1 sends a message to client2.
The moment client1 sends a message, the request is handled by serverA. But serverA only keeps a mapping of its own connected users in memory. It looks for client2, but client2 lives in the memory of serverB, so the message will never get there.
By using a backplane, every message that comes in on one server is broadcast to all the other servers.
One solution is to forward messages between servers, using a component called a backplane. With a backplane enabled, each application instance sends messages to the backplane, and the backplane forwards them to the other application instances.
Taken from the SignalR Introduction to Scaleout documentation.
Be sure to also check the Redis backplane section of the SignalR documentation.
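
For reference, enabling that Redis backplane in ASP.NET SignalR 2.x looks roughly like the following (a sketch based on the documented UseRedis overload; server address, password and event key are placeholders):

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every message published on one web server is relayed through Redis
        // to the other servers, so clients on serverA and serverB can talk.
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "", "MySignalRApp");

        app.MapSignalR();
    }
}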
Hope this helps. Best of luck!

SignalR Hub Acts as TCP Client

Here is my situation:
I have a Windows service that runs constantly and receives data from connected hardware on a TCP/IP port. This data needs to be pushed to an ASP.NET website for real-time display as well as stored in a database. The Windows service, ASP.NET website and database are all located on the same server.
For the Windows service: I can't use WCF because it only supports net.tcp, which is not going to work with raw socket communication over TCP, so I have to use plain TCP sockets for the server/client.
For real-time updates to the website: I am planning to use the SignalR library. I can create a hub that sends new data to clients whenever it becomes available.
My problem is: what's the best way for the SignalR hub to retrieve data from the TCP server/client located in the Windows service? One simple solution is to first store the data in the database and read it from there, but I am not sure whether that will slow down the whole system, since new data arrives every second.
Please advise on the best solution to this problem.
Thanks.
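
One option the question does not mention, sketched here only as an assumption, is to skip the database hop for the real-time path: the Windows service acts as a SignalR client and forwards each reading to the hub, which broadcasts it to the browsers (persisting to the database can happen separately). A rough C# sketch assuming ASP.NET SignalR 2.x and the Microsoft.AspNet.SignalR.Client package; all class, method and URL names are placeholders:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Client;

// Web side: hub that broadcasts each reading to connected browsers.
public class TelemetryHub : Hub
{
    public void PushReading(string reading)
    {
        Clients.All.newReading(reading);   // browsers handle the "newReading" event
    }
}

// Service side: forwards data from the TCP receive loop to the hub.
// (Shown in the same file only for brevity; it lives in the Windows service.)
public class HubForwarder
{
    private readonly HubConnection _connection = new HubConnection("http://localhost/");
    private readonly IHubProxy _hub;

    public HubForwarder()
    {
        _hub = _connection.CreateHubProxy("TelemetryHub");
    }

    public Task StartAsync() => _connection.Start();

    // Call from the socket receive loop whenever a new reading arrives.
    public Task ForwardAsync(string reading) => _hub.Invoke("PushReading", reading);
}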