Execute CLIENT LIST command even if max connections reached - Redis

I have an issue with my Redis instance often reaching its maximum number of connections, so I'm trying to investigate the problem. But I can't execute the CLIENT LIST command, because I can't connect to Redis once it has reached the maximum number of connections.
Is there a way to force the CLIENT LIST command to run even when the maximum number of connections has been reached?
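For context, this is the kind of inspection I want to run once I can get a connection; a minimal sketch with the redis-py client (connection details are placeholders for my setup):

    import redis

    # Hypothetical connection details; adjust host/port/auth to your deployment.
    r = redis.Redis(host="localhost", port=6379)

    # CLIENT LIST: one entry per connected client (address, age, idle time, last command, ...).
    for client in r.client_list():
        print(client["addr"], client.get("age"), client.get("cmd"))

    # INFO clients: summary counters such as connected_clients and blocked_clients.
    print(r.info("clients"))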

Related

How to check average number of requests per second hitting Redis

I have a single redis pod running in my k8s cluster, and I would like to get an idea of how many requests per second my redis server is currently handling in my production environment. I have tried redis-cli monitor, which prints out live requests on the console, but I cannot seem to find a way to get a numerical measure that simply tells me something like "redis server is handling x requests per second on average in the past 24 hours". Any pointers would be highly appreciated.
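One possibility seems to be sampling Redis's own INFO counters; a rough sketch with redis-py (connection details and the sampling window are placeholders), though I'm not sure this is the right approach:

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)  # placeholder connection details

    # Redis's own short-window estimate of operations per second.
    print(r.info("stats")["instantaneous_ops_per_sec"])

    # For an average over a longer window, sample total_commands_processed twice.
    window = 60  # seconds; 24 * 3600 would give a past-24-hours average
    before = r.info("stats")["total_commands_processed"]
    time.sleep(window)
    after = r.info("stats")["total_commands_processed"]
    print((after - before) / window, "commands/sec averaged over the window")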

Does UniData only allow one command or query per connection at a time?

Does UniData (Rocket U2) only allow one query or command to run at a time per connection?
I know each connection has a process or two (udapi_server/slave/etc., I believe), and we pool connections via UniObjects connection pooling. I know there have been optimizations to the UniRPC service in recent releases allowing threaded connection acceptance, but my suspicion is that even with that, only one query is executed at a time on each connection, synchronously.
i.e. if you have a maximum of 10 pooled connections and 10 long-running queries, nothing else even starts until one process completes its query, even if they are all I/O bound and the CPU is idle.
With connection pooling, the process using the connection must finish before another request can be accepted by the same CP process. Connection pooling is ideal for small requests; large queries that return the entire result set at once will tie up the CP process and cause other requests for a connection to queue up.
Ian provided one possible solution, and I expect there are many ways you could break the request up into smaller requests that would not tie up the connection pooling licenses for prolonged periods of time.
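As a rough illustration of that queueing behaviour, here is a generic sketch (plain Python, not UniObjects specific; pool size and query durations are made up):

    import queue
    import threading
    import time

    POOL_SIZE = 10
    pool = queue.Queue()
    for i in range(POOL_SIZE):
        pool.put(f"connection-{i}")   # stand-ins for pooled CP processes

    def run_query(name, duration):
        conn = pool.get()             # blocks until a pooled connection is free
        try:
            time.sleep(duration)      # the connection is tied up for the whole query
            print(f"{name} finished on {conn}")
        finally:
            pool.put(conn)            # only now can another request reuse it

    # Ten long queries occupy all ten connections; the small request waits,
    # even though the long queries are effectively idle (sleeping / I/O bound).
    jobs = [threading.Thread(target=run_query, args=(f"long-{i}", 30)) for i in range(10)]
    jobs.append(threading.Thread(target=run_query, args=("small", 1)))
    for t in jobs:
        t.start()
    for t in jobs:
        t.join()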

Parse LiveQuery + Redis Scalability

I want to use Live Query on a separate server on Heroku. I am looking at the Redis add-on and the number of connections. Can someone explain how the number of connections relates to how many users can subscribe to Live Query?
The actual use case would be to announce to users who is active online in the app. The add-on runs $200 per month to support 1024 connections. That sounds expensive, and I don't understand whether that means 1024 users subscribing to a class, or whether there is some kind of sharing going on between the 1024 connections and the number of users.
Lastly, what would happen if I exceed the connection limit? Would it just time out with a Parse timeout error?
Thanks
The Redis connections are only used to connect your Parse Servers to the LiveQuery servers. Usually you would have them on the same instance, listening on the same port. So let's say you have 10 dynos: you need 20 connections, 1 per publisher (parse-server) + 1 per subscriber (LiveQuery server).
How many users can be connected to a single dyno is another story in itself, but you can have a look at other websocket + nodejs + heroku literature available on the internet. It's unlikely you'll need 1024 Redis connections unless you plan on having that many dynos.
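A quick sketch of that arithmetic (plain Python; the dyno count is just an example):

    def redis_connections_needed(dynos, publishers_per_dyno=1, subscribers_per_dyno=1):
        # One Redis connection for the parse-server publisher and one for the
        # LiveQuery subscriber on each dyno.
        return dynos * (publishers_per_dyno + subscribers_per_dyno)

    print(redis_connections_needed(10))  # 20 connections for 10 dynos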

Is it possible to query the max number of channels available in an ssh connection?

I'm using ssh to connect to a server (really a cluster, but that's not important) and to transfer files to it. There seems to be a hard limit (9 in this case) on the number of channels that can be used in one ssh connection.
Does the ssh protocol have a mechanism for querying the max number of channels available per ssh connection?
No, there is no way to query the maximum number of channels available. The protocol is described in PROTOCOL.mux.
The server can limit the number of multiplexed sessions (MaxSessions in sshd_config), but not the number of channels.
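Since there is nothing to query, about the only option is to probe it empirically. A rough sketch with paramiko (the host, credentials, and the exception you hit are assumptions; some servers may simply stall rather than refuse the channel open):

    import paramiko

    # Hypothetical host and credentials; adjust for your cluster.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("example.com", username="user")

    transport = client.get_transport()
    channels = []
    try:
        while len(channels) < 64:                      # safety cap for the probe
            channels.append(transport.open_session())  # one new channel per loop
    except paramiko.SSHException:
        pass                                           # the server refused the next channel open
    finally:
        print(f"opened {len(channels)} concurrent channels")
        for ch in channels:
            ch.close()
        client.close()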

Is there a limit with the number of SSL connections?

Is there a limit with the number of SSL connections?
We are trying to connect through SSL with 2000 sessions. We have tried it a couple of times, but it always dies at the 1062nd. Is there a limit?
Your operating system will have a limit on the number of open files if you are on Linux.
ulimit -a will show your various limits.
I imagine yours is set to 1024, and some of the sessions just happened to have closed, allowing the figure of 1062 (this last bit is a guess).
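If you want to check that limit from inside the client process rather than the shell, a small Python sketch (Linux assumed; it reports the same numbers ulimit does):

    import resource

    # Soft and hard limits on open file descriptors for this process;
    # each SSL/TCP connection needs at least one descriptor.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"open-file limit: soft={soft}, hard={hard}")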
Yes, everything has a limit. As far as I'm aware, there is no inherent limit with "SSL"; it is, after all, just a protocol.
But there is a limited amount of memory, ports, and CPU on the machine you are connecting to, the machine you are connecting from, and every machine in between.
The actual server you are connected to may have an arbitrary limit set too.
This question doesn't have enough information to answer beyond "YES".
SSL itself doesn't have any limitations, but there are some practical limits you may be running into:
SSL connections require more resources on both ends of the connection, so you may be hitting some built-in server limit.
TCP/IP uses a 16-bit port number to identify connections, only some of which (around 16,000) are used for dynamic client connections. This would limit the number of active connections a single client could make to the same server.
On Linux, each process has a maximum number of file descriptors that it can have open, and each network connection uses one file descriptor. I imagine Windows has a similar limit.
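For the port-number point, on Linux you can also read the dynamic client port range directly; a small sketch (the /proc path is Linux specific):

    # Dynamic (ephemeral) port range used for outgoing client connections;
    # this caps simultaneous connections from one client address to a single
    # server address and port.
    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())
    print(f"ephemeral ports: {low}-{high} ({high - low + 1} available)")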