Multiuser system

Hello, I have a question.
I am writing a requirement specification.
There is a system that more than one person interacts with,
and I want to describe what happens when one person quits the interaction.
I don't know exactly how a multiuser system works;
I guess the system creates an instance (?), and when the user finishes, it gets closed again?
But the system as such keeps running (for sure).
How can I describe that correctly? (Does it work the way I guessed?)
Thanks in advance

I'd say that the system listens for two different things:
a message (interaction) from one of the connected users
a login message from a new user
This behavior is parametrized by the number of connected users. In the limit case where all connected users disconnect, the system no longer expects anything from connected users, but it keeps listening for new connections.
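That dispatch behavior can be sketched as a simple event loop. This is only an illustration of the description above, and the event names (`login`, `logout`, `message`) are assumptions, not part of any particular framework:

```python
class MultiUserSystem:
    """Sketch of a system that keeps running regardless of how many users are connected."""

    def __init__(self):
        self.connected = set()  # IDs of currently connected users

    def handle_event(self, event, user_id):
        if event == "login":
            self.connected.add(user_id)        # a new user joins
        elif event == "logout":
            self.connected.discard(user_id)    # one user quits; the system itself keeps running
        elif event == "message" and user_id in self.connected:
            self.process(user_id)
        # With zero connected users the system still accepts "login" events.

    def process(self, user_id):
        pass  # application-specific interaction handling
```

The point for the specification: a user quitting removes only that user's session state; the system object itself never terminates.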


Daphne processes run by supervisor don't share memory

I have a Django app that uses Daphne behind Nginx as an app server.
I have first used Gunicorn + Uvicorn as an app server but I ran into a problem that I thought I could avoid by simply switching to Daphne. Unfortunately, when I switched to Daphne, the same problem remained.
Description of the problem:
I have an app that has users. I save active users in an array. It's all good while I have only one Gunicorn+Uvicorn worker or one Daphne+Supervisor process. The problem starts when I introduce multiple workers/processes. I didn't know that each worker/process has its own memory, so when I log in, I am logged in on only one of the workers. For example, let's say that I have 4 workers. I log in. The first response in the app is okay, but all the others are random, and the more workers I have, the lower the chance that I'll get the right answer. That is, the chance is 1/n, where n is the number of workers/processes. That's because the logged-in user is saved in an array in only one of those workers' memory.
So, I need some tips on how to solve that problem and make the workers/processes share the array of logged-in users.
I saw that there is a --preload flag in Gunicorn, but it uses copy-on-write for workers, so if I understand it correctly, when I log in on the current worker, the main process's memory will be copied only for that specific worker, and I'll still be logged in on only one instance of the app.
What I need is an array to which all processes have an access.
I am currently trying to make it work using the multiprocessing module, but if someone has a nicer solution, I'd really appreciate it.
All advice is welcome! Thank you! :D

Can more than one application claim an interface in libusb?

I am working on a hardware/software application where a device, connected via USB, does some off-board processing on some data. The application is meant to be open multiple times, and which device needs which data is identified by an in-stream parameter. My question is: can more than one application claim an interface? My first implementation used WinUSB, but I quickly realized that limits me to only one instance. The libusb documentation claims that this limitation is removed in their driver.
My concern is that, because I intend to have far more than 8 instances running, having only the 8 interfaces allotted will not be sufficient. If I cannot, in fact, claim an interface more than once, is there a method where I could have the applications call a shared library that claims the interface and manages and routes traffic between the applications?
As far as I know you can only have one handle open to a device in either implementation.
I think you are on track in terms of how to handle this problem. The way I have done something like this in the past is to create a service that runs in the background. This service should be launched by the first instance of the application and can keep a reference count of its clients. Each subsequent instance of the application increments the reference count, and whenever a client application closes, it decrements the reference count. When the last application closes, the service can close too.
The service would have the job of opening the device and reading all data into a buffer. From there you can either put smarts into the service to process the data and load it into different shared buffers that are each individually accessible by your other client application instances, or you could simply make one huge buffer available to everyone (but this is a riskier solution).
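The reference-counted service described above might be outlined like this; `open_device` and `close_device` are hypothetical stand-ins for the real libusb open/claim and release/close calls, injected so the skeleton stays language- and driver-neutral:

```python
class DeviceBroker:
    """Single owner of the USB device handle; clients attach/detach
    instead of claiming the interface themselves."""

    def __init__(self, open_device, close_device):
        self._open = open_device     # e.g. wraps libusb_open + libusb_claim_interface
        self._close = close_device   # e.g. wraps libusb_release_interface + libusb_close
        self._handle = None
        self._refcount = 0

    def attach(self):
        if self._refcount == 0:
            self._handle = self._open()   # first client: actually open the device
        self._refcount += 1
        return self._handle

    def detach(self):
        self._refcount -= 1
        if self._refcount == 0:
            self._close(self._handle)     # last client gone: release the interface
            self._handle = None
```

In the real service this sits behind an IPC boundary (named pipe, socket, or shared memory) so each application instance talks to the broker rather than to libusb directly.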

Grails test JMS messaging

I've got a JMS messaging system implemented with two queues. One is used as a standard queue; the second is an error queue.
This system was implemented to handle database concurrency in my application. Basically, there are users, and users have assets. One user can interact with another user, and as a result of this interaction their assets can change. One user can interact with a single user at a time, so they cannot start another interaction before the first one finishes. However, one user can be in interactions with other users multiple times [as long as they started the interaction].
What I did was: created an "interaction registry" in Redis, where I store the IDs of users who begin an interaction. During the interaction I gather all the changes that should be made to the second user's assets, and after the interaction is finished I send those changes to the queue [the user who started the interaction is saved within the original transaction]. After the interaction is finished I clear the ID from the registry in Redis.
The listener on my queue will receive a message with information about the changes that need to be made to the user. The listener will get all objects that require a change from the database and update them. Before each update, the listener will check whether there is an interaction started by the user being updated. If there is, the listener will roll back the transaction and put the message back on the queue. However, if something else is wrong, the message will be put on the error queue and retried several times before it is logged and marked as failed. Phew.
Now I'm at the point where I need to create a proper integration test, so that I make sure no future changes will screw this up.
Positive testing is easy; unfortunately, I have to test scenarios where, during updates, there's an OptimisticLockFailureException, my own UserInteractingException, and some other exceptions [catch (Exception e), that is].
I can simulate my UserInteractingException by creating a payload with hundreds of objects to be updated by the listener and changing one of them in the test. Same thing with OptimisticLockFailureException. But I have no idea how to simulate anything else [I can't even think of what it could be].
Also, this testing scenario is based on a fluke [well, the chance that the presented scenario will not trigger an error is very low], which is not something I like. I would like to have something more concrete.
Is there any other good way to test these scenarios?
Thanks
I did as I described in the original question, and it seems to work fine.
Any delays I can test with Camel.
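One way to avoid the fluke-based scenario is to inject the failure deterministically: stub the update step so it raises exactly the exception under test. The idea, sketched here in Python with `unittest.mock` (in Grails you would stub the service bean the same way, e.g. with Spock or metaClass); `update_assets`, `requeue`, and `error_queue` are hypothetical names for the listener's collaborators:

```python
from unittest import mock

class OptimisticLockFailureException(Exception):
    pass

def handle_message(message, update_assets, requeue, error_queue):
    """Simplified listener logic: retry on lock failure, error-queue everything else."""
    try:
        update_assets(message)
    except OptimisticLockFailureException:
        requeue(message)        # put the message back on the queue for a retry
    except Exception:
        error_queue(message)    # anything else goes to the error queue

# Force the exception instead of hoping a concurrent write triggers it:
failing_update = mock.Mock(side_effect=OptimisticLockFailureException())
requeued, errored = [], []
handle_message({"user": 42}, failing_update, requeued.append, errored.append)
```

Because the stub raises on command, the "some other exception" branch can be exercised the same way with `side_effect=RuntimeError()`, with no hundred-object payloads and no chance involved.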

WiFi communication to embedded display

I'm trying to create an embedded outdoor display of bus arrival times at my university. I'd like the device to utilize my school's secured WiFi network to show arrival time updates determined from a server script I have running.
I was hoping to get some advice on the high-level operation of this thing: would it be better for the display board to poll a hosted database via the WiFi network, or should I have a script try to communicate with the board directly over 802.11? (Push or pull?)
I was planning to use a WiFly or WIZnet Ethernet board in combination with a wireless access hub, mostly inspired by this project: http://www.circuitcellar.com/Wiznet/winners/001166.html Would anyone recommend something else over one of the WIZnet boards? I saw SPI/UART options and thought these boards could work with an AVR platform.
And out of curiosity: if you were to 'cold start' this device (i.e., request a bus arrival time by pushing the display's on button), you might expect it to take 10-20 seconds to get assigned an IP and successfully connect to the database. Does that sound right?
I'd go pull. In fact, I'd have the outdoor display make HTTP or HTTPS requests to the server. That way the server could tell it how long to show a given set of data before polling for a new one, using standard HTTP page expiration.
I think pull would make it easier to have multiple displays, and to test your server as well. I've also got a gut feeling that this would make your display more secure. Someone would have to hack your server to hijack your display.
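The pull-with-expiration approach amounts to the display polling the server and the server saying how long the data stays fresh. A rough sketch, assuming the server sends a standard `Cache-Control: max-age=...` header; the URL and the header parsing are simplified illustrations:

```python
import time
import urllib.request

def poll(url, render, fetch, cycles):
    """Poll the server `cycles` times; sleep until the data expires between polls."""
    for _ in range(cycles):
        body, max_age = fetch(url)
        render(body)           # update the display board
        time.sleep(max_age)    # server-controlled refresh interval

def http_fetch(url):
    """Fetch the page and read the freshness lifetime from Cache-Control (naive parse)."""
    with urllib.request.urlopen(url) as resp:
        cache_control = resp.headers.get("Cache-Control", "max-age=30")
        max_age = int(cache_control.split("max-age=")[-1].split(",")[0])
        return resp.read(), max_age
```

On the device `poll(url, display.show, http_fetch, ...)` would run indefinitely; making the fetcher injectable also means the loop can be exercised in tests without a network, which supports the point about pull being easier to test.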
There's a very cool-looking Arduino-targeted product called the WiShield. It seems super easy to use, and he provides some source code. It uses SPI for host communication. If you're not interested in going the Arduino route, I'm sure the source code wouldn't be too hard to port to something like avr-gcc. Check it out; it might save you some time and headaches for $55. Worth a look anyway.

Grid Computing and Logged out

Does grid computing continue when the user is not logged in? For instance, on an educational system where students must log in, does the CPU continue the grid computing when they log out? Or, in another instance, if I use my home computer for something like superdonate.com, does the processor keep working if I log out?
It depends on the client and how it is set up, but I think most clients continue to work when you log off.
The whole purpose is to use the computer when it is idle, after all.
Your question is very generic. Technically, if you have delegation of credentials, yes. In Globus you delegate an authentication credential to a third party, and it will continue acting on your behalf even if you log out.