Parse LiveQuery + Redis Scalability

I want to use LiveQuery on a separate server on Heroku. I am looking at the Redis add-on and its number of connections. Can someone explain how the number of connections relates to how many users can subscribe to a Live Query?
The actual use case is announcing to users who is currently online in the app. The add-on runs $200 per month to support 1024 connections. That sounds expensive, and I don't understand whether it means 1024 users can subscribe to a class, or whether there is some kind of sharing between the 1024 Redis connections and the number of users.
Lastly, what would happen if I exceeded the connection limit? Would subscriptions just time out with a Parse timeout error?
Thanks

The Redis connections are only used to connect your Parse Servers to the LiveQuery servers. Usually you would have them on the same instance, listening on the same port. So if you have 10 dynos, you need 20 connections: 1 per publisher (Parse Server) + 1 per subscriber (LiveQuery server).
How many users can be connected to a single dyno is another story in itself, but you can look at the websocket + Node.js + Heroku literature available on the internet. It's unlikely you'll need 1024 Redis connections unless you plan on running that many dynos.
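For reference, here is a minimal sketch of that topology, assuming the pre-5.x parse-server Express mounting API and the REDIS_URL config var that the Heroku add-on sets; the class name and environment variable names are placeholders, not anything prescribed by Parse:

```js
// Each dyno runs one Parse Server (publisher) and one LiveQuery server (subscriber),
// so it consumes roughly 2 Redis connections no matter how many users subscribe.
const express = require('express');
const http = require('http');
const { ParseServer } = require('parse-server');

const app = express();

const api = new ParseServer({
  databaseURI: process.env.DATABASE_URI,
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  serverURL: process.env.SERVER_URL,
  liveQuery: {
    classNames: ['OnlineStatus'],      // hypothetical class used for "who is online"
    redisURL: process.env.REDIS_URL,   // publisher side
  },
});

app.use('/parse', api);

const httpServer = http.createServer(app);
httpServer.listen(process.env.PORT || 1337);

// The LiveQuery server shares the same HTTP server and port, and subscribes via Redis.
ParseServer.createLiveQueryServer(httpServer, {
  redisURL: process.env.REDIS_URL,     // subscriber side
});
```

The users' websocket subscriptions terminate on the LiveQuery server itself; Redis only carries the publish/subscribe traffic between your server processes, which is why the connection count scales with dynos rather than with users.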

Related

Winsock2 receiving data from multiple clients without iterating over each client

I use winsock2 with C++ to connect thousands of clients to my server using the TCP protocol.
The clients rarely send packets to the server; there is about a minute between two packets from a given client.
Right now I iterate over each client with non-blocking sockets to check whether it has sent anything to the server.
I think a much better design would be for the server to wait until it receives a packet from one of the thousands of clients and then be told which client's socket it arrived on. That would be better because the server wouldn't have to iterate over every client when, 99.99% of the time, nothing has happened; with a million clients, a complete iteration might take seconds.
Is there a way to receive data from each client without iterating over each client's socket?
What you want is to use I/O completion ports, I think. See https://learn.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports. Microsoft even has an example on GitHub for this: https://github.com/microsoft/Windows-classic-samples/blob/main/Samples/Win7Samples/netds/winsock/iocp/server/IocpServer.Cpp
BTW, do not expect to serve a million connections on a single machine. Generally, there are limits: available memory (both user space and kernel space), handle limits, etc. If you are careful, you can probably serve tens of thousands of connections per process.
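This is not a winsock2 answer, but as an illustration of the same idea: readiness/completion-based APIs let the runtime tell you which socket has data instead of you polling every socket. A minimal Node.js sketch of that model follows (under the hood, Node's libuv uses IOCP on Windows and epoll on Linux); the port number is arbitrary.

```js
// Event-driven TCP server: there is no per-client polling loop. The 'data'
// callback fires only for the socket that actually received bytes, so idle
// clients cost nothing per "iteration".
const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    // Handle this client's packet.
    console.log(`received ${chunk.length} bytes from ${socket.remoteAddress}`);
  });
  socket.on('error', () => socket.destroy());
});

server.listen(9000, () => console.log('listening on :9000'));
```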

In-memory Database that Sync with a Central Database

Scenario: hundreds of nodes running an HTTP server responding to time-critical requests (requests must be processed and a response sent back within milliseconds, e.g. 50 ms max). Each server will serve about 500 requests per second, so all nodes collectively serve more than 100,000 qps. To avoid connecting to a central (remote) database for each request, each node will have an in-memory replica of the database (the database only needs to hold a few hundred megabytes of data).
Question 1: Are there any database technologies that implement this kind of multiple (hundreds of) in-memory replicas that synchronize in realtime (or near-realtime) with a central database?
Question 2: Are there any architectural patterns that address this scenario?
Oracle's GoldenGate is a potential solution. I can't speak to the systems you tagged, redis and couchdb. I have heard positive things about Redis, but it's purely anecdotal; I don't have any direct, hands-on experience.
My own company's eXtremeDB can also accommodate this type of workload. There are a couple of configurations to choose from, but this isn't an appropriate venue to explore them with you. Please reach out to us if you'd like to get into it.
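For Question 2, one common pattern, sketched below and not specific to either product mentioned above, is to keep a local in-memory copy on each node and subscribe to a change feed from the central store. This sketch assumes Redis pub/sub with the node-redis v4 client; the channel and key names are made up:

```js
// Each node holds the working set in process memory and applies change events
// published by whatever component writes to the central database.
const { createClient } = require('redis');

const localReplica = new Map();   // in-memory copy; request handlers read from this only

async function start() {
  const client = createClient({ url: process.env.REDIS_URL });
  await client.connect();

  // 1. Bootstrap: load the current snapshot (a few hundred MB fits in memory).
  //    KEYS is fine for a sketch; production code would use SCAN.
  for (const key of await client.keys('record:*')) {
    localReplica.set(key, await client.get(key));
  }

  // 2. Stay in sync: apply change events as they are published.
  const subscriber = client.duplicate();
  await subscriber.connect();
  await subscriber.subscribe('record-changes', (message) => {
    const { key, value, deleted } = JSON.parse(message);
    if (deleted) localReplica.delete(key);
    else localReplica.set(key, value);
  });
}

start();

// Because reads never leave the process, the 50 ms budget does not include a
// round trip to the central database; only writes and change propagation do.
```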

Persistent connections to 100K devices

The server needs to push data to 100K clients that cannot be connected to directly, since the machines are inside a private network. I'm currently thinking of using RabbitMQ: each client subscribes to a separate queue, and when the server has data to push to a client, it publishes the data to the corresponding queue. Are there any issues with this approach? The number of clients may go up to 100K. From a spike, I expect the memory footprint for maintaining the connections to be around 20 GB. We can still go ahead with this approach as long as memory does not exceed 30 GB.
The question is very generic.
I suggest reading this: RabbitMQ - How many queues can RabbitMQ handle on a single server?
Then you should consider using a cluster to scale the number of queues.
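To make the queue-per-client idea concrete, here is a minimal sketch using amqplib in Node.js; the queue naming scheme, payload, and AMQP_URL variable are hypothetical, and it deliberately says nothing about the memory question:

```js
const amqp = require('amqplib');

// Server side: publish a message to the queue that belongs to one client.
// (For a sketch this opens a connection per publish; in practice you would
// keep one long-lived connection and channel and reuse them.)
async function pushToClient(clientId, payload) {
  const conn = await amqp.connect(process.env.AMQP_URL);
  const channel = await conn.createChannel();
  const queue = `client.${clientId}`;   // one queue per client

  await channel.assertQueue(queue, { durable: true });
  channel.sendToQueue(queue, Buffer.from(JSON.stringify(payload)), { persistent: true });

  await channel.close();
  await conn.close();
}

// Client side (inside the private network, so it dials out to the broker):
async function consume(clientId) {
  const conn = await amqp.connect(process.env.AMQP_URL);
  const channel = await conn.createChannel();
  const queue = `client.${clientId}`;

  await channel.assertQueue(queue, { durable: true });
  await channel.consume(queue, (msg) => {
    if (msg) {
      console.log('got', msg.content.toString());
      channel.ack(msg);
    }
  });
}
```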

How many Azure instances to support 1000 connections

If I have a WCF service hosted in an Azure webrole, how many small machine instances would I need to spin up so that potentially 1000 clients could be connected at once?
Processing power is not the issue I'm concerned about, just the maximum number of active connections that Azure will allow me to have open at any given moment.
We have a service method that will take some time to complete (say 20-30 seconds), and we need to know roughly how many open connections Azure will allow us per small instance so we can ensure 1000 people can connect at once.
Thanks!
The limit @Jordan refers to is the number of IIS threads that can be active. Following @Jordan's link, you will see that the IIS threads get passed off to .NET threads while .NET is handling the requests.
Your .NET threads are effectively limited by the resources on the system, although 1000 might be OK. Better would be to pass the requests off to asynchronous handlers (if you can; I don't know what you are trying to do), which leaves only the maximum number of open TCP connections Windows Server 2008 R2 will allow, and that should not be a problem for 1000 connections.
Existing answers mostly cover it, but a different type of answer is that Windows Azure doesn't care how many connections you have. Your question then becomes one about Windows and IIS/.NET/WCF or whatever technology you choose to use.
Looks like for .NET 3.5 the default limit was 12 concurrent requests per CPU, and in .NET 4.0 it's 5,000. Not sure how they decided on those numbers.
source: http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/a6a4213b-b402-4a6c-940c-10937e34d9b5/
There is no connection limit imposed by an Azure web role; the only limits are whatever you've purchased, i.e. things like CPU, RAM, and bandwidth.

Is there a limit on the number of SSL connections?

Is there a limit on the number of SSL connections?
We are trying to connect with 2000 SSL sessions. We have tried a couple of times, but it always dies at the 1062nd. Is there a limit?
If you are on Linux, your operating system will have a limit on the number of open files.
ulimit -a will show your various limits.
I imagine yours is set to 1024, and some of the sessions happened to have already closed, allowing the figure of 1062 (this last bit is a guess).
Yes, everything has a limit. As far as I'm aware, there is no inherent limit in "SSL"; it is, after all, just a protocol.
But there is a limited amount of memory, ports, and CPU on the machine you are connecting to, the machine you are connecting from, and every machine in between.
The actual server you are connected to may have an arbitrary limit set, too.
This question doesn't have enough information to answer beyond "YES".
SSL itself doesn't have any limitations, but there are some practical limits you may be running into:
SSL connections require more resources on both ends of the connection, so you may be hitting some built-in server limit.
TCP/IP uses a 16-bit port number to identify each end of a connection, and only part of that range (around 16,000 ports by default) is used for dynamic (ephemeral) client connections. This limits the number of active connections a single client can make to the same server.
On Linux, each process has a maximum number of file descriptors that it can have open, and each network connection uses one file descriptor. I imagine Windows has a similar limit.
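If you want to see which of these limits you are actually hitting, a small probe can help. Here is a sketch using Node's built-in tls module against a hypothetical test endpoint that you control; the host name is a placeholder, and rejectUnauthorized is disabled only because it is assumed to be a throwaway test server.

```js
// Opens TLS connections until one fails and reports the error. On Linux, EMFILE
// usually means the process file-descriptor limit (ulimit -n, often 1024);
// EADDRNOTAVAIL suggests ephemeral-port exhaustion; a reset from the far end
// suggests a limit configured on the server itself.
const tls = require('tls');

const HOST = 'test.example.com';   // hypothetical server you control
const PORT = 443;
const sockets = [];

function openNext(n) {
  const socket = tls.connect({ host: HOST, port: PORT, rejectUnauthorized: false }, () => {
    sockets.push(socket);
    if (n % 100 === 0) console.log(`${n} connections open`);
    openNext(n + 1);
  });
  socket.on('error', (err) => {
    console.log(`failed at connection #${n}: ${err.code || err.message}`);
    sockets.forEach((s) => s.destroy());
  });
}

openNext(1);
```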