Not sure if this is possible, but here it goes. I have a simple server set up where multiple clients can execute a program. Each time a client runs the script to start the program, a new instance of the program starts. When the client stops the program through another script, the instance of the program is killed. The problem is that if another client is on the server at the same time running the program, that instance gets killed as well. Is there any way to tie a particular instance to a particular client?
Here is more detail.
The server is used to stream media from the internet. I have streaming devices attached to TVs. When a particular channel is selected, the device sends a signal to the server, which in turn runs several scripts, one being a script that starts a video conversion process through a program called ffmpeg. ffmpeg converts the stream and saves it to a folder on the server, making it available to the streaming device/TV. Each time a user starts a channel, a new instance of ffmpeg starts, because it's converting a different stream.

Once the user stops viewing, the device sends a signal back to the server through a PHP script, which in turn runs a script called cleanup. The cleanup script is a bat file that kills ffmpeg and deletes the files that are no longer needed. All works great except when one individual elects to stop viewing while another continues. I don't know how to tell the difference between the instances of ffmpeg, and I don't want to kill all of them, just the one tied to the particular stream that needs to end. I do have the capability of obtaining each device's IP address when the user first selects the channel. Is there any way to link the IP to the particular instance?
I actually took the advice from above and renamed each instance after the client's IP, stored the IP in a temp file, and was then able to match it to the IP making the cancel call.
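For anyone after the same thing, here is a minimal sketch of the idea in Python: a variant that records each ffmpeg PID in a file named after the client's IP instead of renaming the executable. The paths, stream URL, and output names are placeholders; the same bookkeeping works from a bat/PHP wrapper.

    import os
    import signal
    import subprocess

    def start_stream(client_ip, stream_url, out_dir):
        # Launch a dedicated ffmpeg instance for this client's stream.
        proc = subprocess.Popen(
            ["ffmpeg", "-i", stream_url, os.path.join(out_dir, f"{client_ip}.m3u8")]
        )
        # One PID file per client IP; cleanup can then target just this instance.
        with open(f"/tmp/ffmpeg_{client_ip}.pid", "w") as f:
            f.write(str(proc.pid))

    def stop_stream(client_ip):
        # Kill only the instance tied to the IP that made the cancel call.
        with open(f"/tmp/ffmpeg_{client_ip}.pid") as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGTERM)
        os.remove(f"/tmp/ffmpeg_{client_ip}.pid")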
Ok, kinda weird question, but where are processes started over ssh by a remote user actually run? I had thought it was on the server itself, but I was testing some code on my home network and found that running a particular script directly on the server (a Raspberry Pi 3) takes about twice as long (60.36 s) as when I run the same process after sshing into the server (31.7 s) from a MacBook Pro.
The script simply runs a large for loop that performs a basic arithmetic operation and prints the result to the console. Additionally, the script runs about 10x faster if the print statement is left out, and the runtimes for local and remote become about the same.
Any ideas what is going on here? If anything, I figured it might be the other way around, with packets bottlenecking each iteration, but now it seems more dependent on where the script is initiated and the respective processing power.
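For reference, the script is essentially of this shape (a hypothetical reconstruction in Python; the loop bound and the arithmetic are placeholders):

    import sys
    import time

    start = time.time()
    for i in range(5_000_000):  # large loop, basic arithmetic
        print(i * 3 + 7)        # leaving this out makes local/remote times converge
    print(f"elapsed: {time.time() - start:.2f} s", file=sys.stderr)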
Can someone help me figure out how to make the service selected in the image go into wait mode after starting the server?
Please let me know if a developer trace is required to resolve this issue.
That particular process is a BATCH process, i.e. a process that runs scheduled background jobs (maintained through transactions SM36/SM37). If the process is busy right after starting the server, that means there were scheduled jobs with status 'released' waiting for execution, and as soon as the server was up, it started those jobs.
If you want to make sure the system doesn't immediately start released background jobs, you'll have to set their status back to 'scheduled' (which, thanks to a bit of odd translation, means they won't be executed, because they are not released).
If you want to start the server without having a chance to first change the job status in SM37, you would either have to reset the status at the database level (likely not officially supported by SAP) or first start the server without any BATCH processes (which will give you a number of big warning messages upon login), change the job status, and then restart the server with the BATCH processes. You can set the number of processes of each type in the profile of your instance (parameter rdisp/wp_no_btc).
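For the second approach, the instance profile change could look like this (a hypothetical excerpt; rdisp/wp_no_btc is the parameter named above, and the restored value is just an example):

    # Instance profile excerpt: bring the server up with no BATCH work processes
    rdisp/wp_no_btc = 0
    # After changing the job status in SM37, restore the old value and restart, e.g.
    # rdisp/wp_no_btc = 2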
I have a LAMP stack set up on DigitalOcean (Ubuntu 12.04) that is pretty stable. The only time we have had a crash is when we sent out a mass email to about 30,000 people. We are not using the server to send the message, but a third-party email service (iContact). I watch the server with top and can see it fill up with Apache entries (each taking about 20 MB) for a short while, then drop back down after the mail has finished being sent.
I have successfully adjusted the Apache settings so it no longer crashes; it just slows down for a bit. These are not hits to the pages, but something is making Apache ramp up and spin off a ton of workers during the email send.
My question is: where do I look to get some idea of what is happening? Unfortunately, iContact has been no help, and the log files I've looked at aren't telling me much, so I think I'm likely looking in the wrong place.
I used to send emails to over 200,000 people directly from a single machine. Trying to do it from a webpage is pretty crazy, so I wrote a command-line script that first writes everything into a database and then sends ~50 at a time from the database, deleting as it goes.
With Symfony/Swiftmailer it is pretty easy these days: the sending part is just a shell script that keeps running 'app/console swiftmailer:spool:send', sleeping and restarting until the database is empty.
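A minimal sketch of that sender loop (written in Python here for illustration; the console command is the one named above, --message-limit is a standard option of the spool:send command, and the batch size and sleep interval are arbitrary):

    import subprocess
    import time

    # Flush the spool in batches of ~50, then pause and poll again
    # until the spool/database is empty.
    while True:
        subprocess.run(
            ["php", "app/console", "swiftmailer:spool:send", "--message-limit=50"],
            check=True,
        )
        time.sleep(5)  # arbitrary pause between batches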
I was reading about processes. I want to know what really happens. My situation:
"I opened an application. That creates a process, say process1. I have other applications interfaced with this one, and all of them open up when I click a button inside my running application. Does process1 create new processes as needed and then IPC happens, or are processes for all the linked applications created at once and then IPC happens?"
Obviously, a running application is a bunch of processes, or maybe a single process with multiple threads acting within it.
So your activity decides the creation and deletion of processes. Say you are running an application such as a media player and you suddenly start searching for related info about the album: a totally new process is created to handle the interaction with the web, and after returning its output it may die, or may not, but the process was created on your request. Also, IPC mostly happens between processes, exactly as per your thinking, but shared-memory communication is also an option, which is more complicated and less common.
One more thing to point out is that there are several 'daemon processes' running in the background that don't die before the shutdown instruction! These processes are also sometimes related to the running application and serve its requests. But mostly, new processes are created when we switch tasks or perform certain actions in the application.
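As a concrete illustration of on-demand process creation plus IPC, here is a small Python sketch: the parent spawns a child only when the "search" action is requested and reads the result back over a pipe (the names and the fake result are made up for the example):

    from multiprocessing import Pipe, Process

    def web_search(conn, query):
        # Child process: pretend to fetch album info, then send the result back.
        conn.send(f"results for {query!r}")
        conn.close()

    if __name__ == "__main__":
        # The child is created only when the user triggers the action...
        parent_conn, child_conn = Pipe()
        p = Process(target=web_search, args=(child_conn, "album info"))
        p.start()
        print(parent_conn.recv())  # ...IPC happens over the pipe...
        p.join()                   # ...and the child dies after returning its output.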
I want to open an ephemeral port, use it for a bit, close it (while not allowing any other process to grab it), and then open it again. This sequence of events lets me test one network endpoint going down and coming back up. In real life the port number would not be ephemeral.
I want to do this to test my network connectivity code. I'm running a large number of tests in parallel, and they use ephemeral ports to avoid conflicting with each other (and I'd rather not have a central directory of ports). My test is occasionally failing because it is unable to reacquire the same port, presumably because another test has grabbed it.
Just remember what it was and explicitly bind to it the second time. But you can't prevent anyone else grabbing it in between. Be quick ;-)
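A minimal sketch of that in Python: bind to port 0 so the OS picks an ephemeral port, remember the number, close the socket, and bind to the same number again. SO_REUSEADDR avoids the TIME_WAIT "address in use" error on the second bind, but nothing stops another process from taking the port in the gap.

    import socket

    # First bind: port 0 asks the OS for an ephemeral port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]  # remember what it was
    s.close()

    # Second bind: explicitly reuse the remembered port.
    # Another process can still grab it in between, so be quick.
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s2.bind(("127.0.0.1", port))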