Integrating Redis into serverless

I am looking at integrating a caching service with my serverless app, and I have decided to go with Redis. However, reading through the npm redis client, it seems that you are required to call client.quit() after completing each request.
The way serverless seems to work is that an instance is spawned when needed and then deleted when no longer in use. So I was wondering whether there is a way to quit the Redis connection when the serverless instance is being deleted.
Or do I just have to start a connection on every request and quit it before each request finishes?
I was hoping I could hold the connection in app state instead of request state, so that I won't have to spawn so many connections. The per-request option I'm trying to avoid is sketched below.
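For reference, the per-request pattern would look roughly like this (a sketch assuming the node_redis v3 callback API; the Express route and key are made-up examples):

```js
// A sketch of the per-request option: open a connection, use it,
// and quit before the response goes out.
const express = require("express");
const redis = require("redis");

const app = express();

app.get("/item/:id", (req, res) => {
  const client = redis.createClient(); // one new connection per request
  client.get(req.params.id, (err, value) => {
    client.quit(); // closed before the request finishes
    if (err) return res.status(500).end();
    res.send(value || "not found");
  });
});

app.listen(3000);
```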

No, a connection can be reused; you do not need to start a new one on every request.
If you use redis.createClient() to create a connection, you can use that connection throughout your app. It also has a reconnect mechanism for when the connection is broken, so in your application code you do not need to worry about connection state: just create one global connection and always use it.
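As a minimal sketch (assuming the node_redis v3 callback API; v4 requires an explicit connect() call, and the env-var config here is an assumption):

```js
// redis-client.js: one client created at module load and shared app-wide.
const redis = require("redis");

const client = redis.createClient({
  host: process.env.REDIS_HOST || "127.0.0.1",
  port: process.env.REDIS_PORT || 6379,
});

client.on("error", (err) => {
  // Log and move on; node_redis keeps retrying in the background.
  console.error("Redis error:", err);
});

module.exports = client;
```

Because Node caches modules, every require("./redis-client") in the app returns this same client, so all requests served by the same process share one connection.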

Related

How to properly connect and disconnect a Redis client from Cloud Functions/Cloud Run services

I'm trying Redis for my project backend, which relies on Cloud Functions with an HTTP trigger and Express routing, and I have a connection leak because the Redis client is not disconnected when the function finishes. I'm going to use Cloud Run for microservices in the future, and I think it might have the same problem, so I decided to ask first.
I tried process.on("exit", () => {}); but it doesn't work.
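One common pattern (a sketch, not an official recommendation; the handler name, REDIS_HOST variable, and "hits" key are assumptions) is to create the client lazily at module scope, so each function instance reuses a single connection across invocations instead of leaking one per request:

```js
// index.js: lazily create one client per function instance and reuse it
// across invocations of an HTTP-triggered function (node_redis v3 API).
const redis = require("redis");

let client; // lives in module scope, so it survives between invocations

function getClient() {
  if (!client) {
    client = redis.createClient({ host: process.env.REDIS_HOST });
  }
  return client;
}

exports.counter = (req, res) => {
  getClient().incr("hits", (err, hits) => {
    if (err) return res.status(500).send(err.message);
    res.send(`hits: ${hits}`);
  });
};
```

This keeps each instance at one connection rather than one per request; when the platform eventually recycles the instance, the connection goes with it, and a server-side idle timeout on Redis is a common safety net for connections that are never closed explicitly.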

How to handle a connection reset by the client in Tomcat 8

I have a simple webapp deployed in Tomcat 8, but some HTTP requests require DB access with slow queries. Sometimes the HTTP client resets the connection. When that happens, I'd like to handle it in my webapp so I can cancel the slow query (whose result is no longer interesting).
The main question: how do I catch a connection reset from the client side while the server is still preparing its response? Is it possible? Interrupting the thread would be the best way, because I can easily handle that.
When the connection is broken from the client side, Tomcat does not interrupt the http-nio-X thread. Why? How can I make it do so?

Redis connection settings for an app "surviving" Redis connectivity issues

I'm using Azure Redis Cache for certain performance-monitoring services. Basically, when events like page loads occur, I send a fire-and-forget command to Redis to record the event. My goal is for my app to function fine whether or not it can contact the Redis server, and I'm looking for a best practice for this scenario; I would be OK with losing some events if necessary. I've been finding that even though I'm using fire-and-forget, the app staggers when the web server runs into high latency or connectivity issues with Redis.
I'm using StackExchange.Redis. Any best-practice configuration options or programming practices for this scenario?
Update: the way I was implementing the singleton pattern on the connection turned out to be blocking requests. Once I fixed this, my app behaves as I want (i.e. it still functions when the Redis connection dies).

How to maintain WCF services in production?

What is the proper way to push out WCF service updates? Right now we just use the file-system publish method, which deletes all existing files prior to publishing. This has to be done at, say, 2 a.m. so we don't interrupt end users. However, what if we HAD to push an update out in the middle of the day?
Is this where wrapping ClientBase with timed retries comes in handy? The client's call would initially fail while we're deploying, but it would retry and succeed a second or so later (in theory)? Thanks in advance.
You can hook into the client's Open() method, or any business method the client calls on the service. There you can, for example, build a request channel (implementing IRequestChannel) and a ChannelFactory to create a simple channel to the service, and then try to establish a connection. If the service is not reachable, an exception is thrown. You can repeat this kind of "probing" of the service in a while loop. The effect is that the client's Open() method, or the business method, waits until the service is reachable and then continues. Once the service is reachable, you jump out of the behavior and continue in the business code.
So the key is to implement a client behavior, and in its validate method build a ChannelFactory with a request channel and try to connect to the service. The connection is retried in a while loop; if it is accepted on, say, the 4th attempt, the loop ends, the behavior finishes, and the business code continues.
If you have any further questions, feel free to ask.

Detect cluster node failure in JBoss AS 7.1.1.Final

I have configured a 2-node cluster in JBoss AS 7.1.1.Final and am planning to use sticky sessions. I am also recording the number of active online users in an Infinispan cache, along with the IP of the node where each user's session was created, for reporting purposes.
I have taken care of the login/logout scenarios, where I clear out the cache entries. The problem is that if one of the server nodes goes down, I also need a cleanup routine to clear that node's records from the cache.
One option is to write a client that checks at a specific interval whether the server is alive and otherwise triggers a cleanup routine. This approach would work, but I am looking for a cleaner one: if a server node failure could be detected and the other live nodes notified, I could run the cleanup from there.
From the console I can see when a server goes down or comes up. But what would be the listener for such events? Any thoughts?
If you just need to know when a node leaves from within a server module (inside the JBoss server), you can use the ViewChanged listener.
You cannot get this information on clients connected via the REST or memcached protocols. With the Hot Rod protocol it is doable but pretty hackish: you'd have to override TransportFactory.updateServers (probably just extend TcpTransportFactory; see the configuration property infinispan.client.hotrod.transport_factory).