Daphne processes run by Supervisor don't share memory - python-multiprocessing

I have a Django app that uses Daphne behind Nginx as an app server.
I first used Gunicorn + Uvicorn as an app server, but I ran into a problem that I thought I could avoid by simply switching to Daphne. Unfortunately, after the switch, the same problem remained.
Description of the problem:
I have an app that has users, and I save the active users in an array. Everything is fine while I have only one Gunicorn+Uvicorn worker or one Daphne+Supervisor process. The problem starts when I introduce multiple workers/processes. I didn't know that each worker/process has its own memory, so when I log in, I am logged in on only one of the workers. For example, say I have 4 workers. I log in, and the first response from the app is okay, but all the others are random; the more workers I have, the lower the chance that I'll get a correct answer. In other words, the chance is 1/n, where n is the number of workers/processes, because the logged-in user is saved in the memory of only one of those workers.
So, I need some tips on how to solve this problem and make the workers/processes share the array of logged-in users.
I saw that Gunicorn has a --preload flag, but workers get copy-on-write memory, so if I understand it correctly, when I log in on one worker, the master process's memory is copied only for that specific worker, and I'll still be logged in on only one instance of the app.
What I need is an array that all processes can access.
At the moment I am trying to make it work using the multiprocessing module, but if someone has a nicer solution, I'd really appreciate it.
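For illustration, here is a minimal sketch of that multiprocessing route: a tiny manager server owns the dict, and every worker connects to it over a local socket. The address, authkey and helper names are placeholders, not from the original post.

    # shared_users.py
    from multiprocessing.managers import BaseManager, DictProxy

    _active_users = {}            # lives only inside the manager server process

    def _get_active_users():
        return _active_users

    class StateManager(BaseManager):
        pass

    # Workers receive a DictProxy, so item access works across processes.
    StateManager.register("active_users", callable=_get_active_users,
                          proxytype=DictProxy)

    ADDRESS, AUTHKEY = ("127.0.0.1", 50000), b"change-me"

    if __name__ == "__main__":
        # Run this once, e.g. as one more Supervisor program.
        server = StateManager(address=ADDRESS, authkey=AUTHKEY).get_server()
        server.serve_forever()

Each Daphne/Gunicorn worker then connects instead of keeping its own local array:

    from shared_users import StateManager, ADDRESS, AUTHKEY

    manager = StateManager(address=ADDRESS, authkey=AUTHKEY)
    manager.connect()
    active = manager.active_users()   # proxy to the one shared dict
    active["some-user-id"] = True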
All advice is welcome! Thank you! :D

Related

How does performing processing server-side affect the overall performance of a site?

I'm working on an application that will process data submitted by the user and compare it with past logged data. I don't need to return or respond to the POST straight away; I just need to process it. This "processing" involves logging the response (in this case a score from 1 to 10) that's submitted by the user every day, then comparing it against the previous scores they submitted. Then, if something is found, do something (not sure yet, maybe send an email).
However, I'm worried about the effectiveness of doing this and how it could affect the site's performance. I'd like to keep it server-side so the calculation script isn't exposed. The site is only dealing with 500-1500 responses (users) per day, so it isn't a massive amount, but I'm just interested to know whether this route of processing will work. The server the site will be hosted on won't be anything special, probably a small(/est) AWS instance.
Also, I will be using Node.js and a SQL/PostgreSQL database.
It depends on how you implement this processing algorithm and how heavy it is on resources.
If your task is completely synchronous, it is obviously going to block any incoming requests to your application until it has finished.
You can make this "processing application" a separate Node process and send it only the data it needs, as sketched below.
If this is a heavy task and you are worried about performance, making it a separate Node process is a good idea, so it does not impact serving the users.
I recommend googling "node js asynchronous" to better understand the subject.
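As a sketch of that separate-process pattern (shown in Python for consistency with the rest of this page; in Node, child_process or worker_threads play the same role, and the scoring logic here is a placeholder):

    import multiprocessing as mp

    def score_response(job):
        # Placeholder for the real comparison against past logged scores.
        user_id, today = job
        return user_id, today >= 9      # e.g. flag unusually high scores

    def worker(queue):
        while True:
            job = queue.get()
            if job is None:             # sentinel: shut down
                break
            user_id, flagged = score_response(job)
            if flagged:
                print(f"user {user_id}: would send the email here")

    if __name__ == "__main__":
        queue = mp.Queue()
        proc = mp.Process(target=worker, args=(queue,))
        proc.start()
        # The request handler only enqueues and returns immediately:
        queue.put(("user-42", 9))
        queue.put(None)                 # demo only: tell the worker to stop
        proc.join()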

Updating workflow - new functionality not being used

I'm busy playing around with various things, and am making changes a fair bit for educational purposes.
However, now any changes I make are not taking effect, and the old behaviour is still happening. In this case, I had an email watcher set up to write a file to our domain controller and send an SMS.
I changed it to do something different, but no number of stops and restarts helps; it continues to perform the first action.
Pointers welcome.
You can try using Stop All on the Run Now screen. This will stop all the workflow instances.
However, if the workflow is set to Always On, it will start up again automatically after a few minutes.
It is best to disable Always On, make your change, and then set it back to Always On.
Hope this helps.

Philips Hue command limitation

First of all, I'm developing my own C# library for controlling Philips Hue, which means I'm not using the official SDK. (I'm guessing the SDK makes sure you won't run into these problems.)
I'm a little confused about the limitation described on the Core Concepts page of the API documentation, which states:
We can’t send commands to the lights too fast. If you stick to around 10 commands per second to the /lights resource as maximum you should be fine. For /groups commands you should keep to a maximum of 1 per second.
I intend to respect this limitation, but does the limitation still apply when you are performing GET requests on the /lights resource, or is it only for sending actual commands with PUT requests to /lights/<id>/state that change the state of the light? Same question goes for the /groups resource.
Also, is it even possible to damage anything by sending too many requests, or will it just take longer to get all the responses?
Edit:
My overall question is: How should I understand the API limitation?
A more specific sub-question is: Should I wait 100 ms before sending another /lights command, relative to when I received a response, or relative to when I sent the previous command?
Another sub-question is: should I consider this limitation only when using PUT requests on e.g. /lights/<id>/state, or on all request types (GET/PUT/POST/DELETE)?
I don't know if anything has changed in firmware updates, but I have discovered that the bridge might not be as simple as you would think, and that the API description isn't very clear.
I've done a little testing while running firmware 01009914.
The bridge seems to have some kind of queue of incoming commands. I sent {"bri":254} to a group 9 times, followed by 1 final command of {"bri":1}. From the first command until the light is actually dimmed takes roughly 3-4 seconds. Each time I sent a command, the bridge replied almost instantly with a success token.
I ran the same small test with other commands, sending 10 of each JSON object:
{"bri":254} 3-4 seconds
{"on":true, "bri":254} 6-7 seconds
{"on":true, "bri":254, "alert":"none", "effect":"none"} 12-13 seconds
This suggests that each attribute change takes the bridge roughly 0.3 seconds to handle: 10 commands with 1, 2 or 4 attributes at about 300 ms per attribute matches the timings above (3, 6 and 12 seconds).
So I would claim that for each attribute we change, the bridge takes about 300 ms to finish, and the command limitation should be understood as: as long as you stick to changing one attribute of a group per second, you should be fine.
Note: I only tried this with one group consisting of three lights. I don't know whether the bridge actually has a queue of incoming commands, and if it does, I don't know what its capacity is.
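If anyone wants to reproduce the test, something like this is enough (Python for brevity, since the library above is the author's own C#; BRIDGE, USER and GROUP are placeholders for your bridge IP, whitelisted username and group id):

    import json, time, urllib.request

    BRIDGE, USER, GROUP = "192.168.1.2", "your-username", "1"
    URL = f"http://{BRIDGE}/api/{USER}/groups/{GROUP}/action"

    def put(body):
        req = urllib.request.Request(URL, data=json.dumps(body).encode(),
                                     method="PUT")
        return urllib.request.urlopen(req).read()

    start = time.time()
    for _ in range(9):
        put({"bri": 254})   # the bridge acknowledges each almost instantly
    put({"bri": 1})
    print(f"all 10 acknowledged after {time.time() - start:.2f}s")
    # The light itself keeps changing for a few more seconds, which is
    # the queueing behaviour described above.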
Edit:
Now we have some official clarification of the Hue System Performance.
I'm fairly certain that the 10 commands per second figure is a guideline to prevent failure of the bridge, and that it reflects a technical limitation of the hardware. Any more than that and you're apt to overload the bridge. I believe this applies to plain requests as well as commands.
Both approaches are reasonable. For laziness' sake, you could wait 100 ms before sending the next command, but I would only rely on this method if you don't plan on any other interactions with the bridge.
I would apply this limitation to all request types.
You won't damage anything by sending commands too fast. However, if you send commands too fast, the bridge might become unresponsive and/or some messages may be ignored.
When it comes to the bridge, the way I think of it is that it is more or less single-threaded, so it works best if you make sure you don't send the next command before the previous one has returned.
In practice we've found that this works much better than waiting a fixed time between each request. In fact, you can pretty much send commands as fast as you want, as long as you wait for the previous one to finish.
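A sketch of that "never more than one command in flight" approach (Python for brevity; the class and names are illustrative, not from the official SDK): a lock serialises callers, so the next PUT starts only after the previous response has arrived.

    import json, threading, urllib.request

    class HueClient:
        def __init__(self, bridge_ip, username):
            self.base = f"http://{bridge_ip}/api/{username}"
            self._lock = threading.Lock()

        def set_state(self, light_id, **state):
            url = f"{self.base}/lights/{light_id}/state"
            req = urllib.request.Request(url, data=json.dumps(state).encode(),
                                         method="PUT")
            with self._lock:              # one request at a time
                return urllib.request.urlopen(req).read()

    # client = HueClient("192.168.1.2", "your-username")
    # client.set_state(1, on=True, bri=128)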
When you send a command to the bridge, the bridge then has to forward it to the lamps over Zigbee. Since it's a mesh network, in some cases the message has to make a couple of hops from lamp to lamp before it reaches its target. Depending on how many lamps you have and how many hops the signal needs to take, this can take a while. Also, it's possible that some messages randomly take much longer than others.
In general the system is not designed to handle very fast changes, but if you keep the above in mind you can make many cool effects :)

Long polling blocking multiple windows?

Long polling has solved 99% of my problems. There is now just one problem left. Imagine a penny-auction site where people bid. On the front page, there are several auctions.
If the user opens three of these auctions, then, since JavaScript is not multithreaded, how would the other pages ever load? Won't they always get bogged down and fail to load because they are waiting for the long polling to end? In practice I've experienced this, and I can't think of a way around it. Any ideas?
There are two ways that JavaScript gets around some of this.
While JavaScript is conceptually single-threaded, it does its I/O in separate threads using completion handlers. This means other pieces of JavaScript can run while you are waiting for your network request to complete.
The JavaScript for each page (or even each frame in each page) is isolated from the JavaScript on other pages/frames. This means that each copy of JavaScript can run in its own thread.
A bigger issue for you is likely to be that browsers often limit the number of concurrent connections to a given site, and it sounds like you want to make many concurrent connections to the same site. In that case you will get a lock-up.
If you control both the server and the client, you will need to combine the multiple long-poll requests from the client into a single long-poll request to the server, as sketched below.
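A server-side sketch of that combining idea (Python; the names and framework-agnostic shape are illustrative): the page opens one long poll that covers every auction it is watching, and the server wakes it as soon as any of them changes.

    import threading

    class AuctionHub:
        def __init__(self):
            self._cond = threading.Condition()
            self._versions = {}     # auction_id -> change counter

        def bid(self, auction_id):
            with self._cond:
                self._versions[auction_id] = self._versions.get(auction_id, 0) + 1
                self._cond.notify_all()

        def long_poll(self, auction_ids, known, timeout=30.0):
            # Block until any watched auction moves past the version the
            # client already knows, then return the fresh versions.
            def changed():
                return {a: v for a, v in self._versions.items()
                        if a in auction_ids and v > known.get(a, 0)}
            with self._cond:
                self._cond.wait_for(changed, timeout=timeout)
                return changed()

One HTTP handler calls hub.long_poll(...) with all the auction ids shown on the page, so the browser holds a single connection no matter how many auctions are open.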

Design pattern for a background-working app

I have created a web-service app, and I want to populate my view controllers according to the response I fetch (via GET) on the main thread. But I also want to create a scheduled timer that will go and check my server; if there is any difference (let's say the count of an array has changed), I will create a local notification. As far as I've read here and in some Google results, I can't run my app in the background for more than ten minutes, except in some special situations (audio, VoIP, GPS). But I need to check the server at least once per minute. Can anyone offer an idea or a link, please?
EDIT
I will not sell the app in the App Store; it's just for a local area network. Let's say that from the server I send some text messages to the users; if a new message arrives, the count of the messages array will increment, and in that situation I will create a notification. I need to keep this 'checking' routine alive forever, whether in the foreground or the background. Does GCD offer such a solution? Does anyone have any idea?
Simply play a muted audio file in a loop in the background, or ping the user's location in the background. Yes, that will drain the battery a bit, but it's a simple hack for in-home applications. Just remember to enable the background modes in your Info.plist!
Note: "[...] I fetch (via GET) on the main thread." This is not a good approach. You should never fetch any network resources on the main thread. Why? Because your GUI, which is maintained by the main thread, will become unresponsive whenever a fetch isn't instantaneous. Any lag spike on the network results in a less-than-desirable user experience.
Answer: Aside from the listed special situations, you can't run background apps. The way I see it:
Don't put the app in the background. (crappy solution)
Try putting another "entity" between the app and the server. I don't know why you "need to check the server at least once per minute", but perhaps you can delegate this "control" to another process outside the device?
iOS app -> some form of proxy server -> server which requires "babysitting" every minute