You can get a list of running processes with
ps ux
I am looking for a way to find processes that I have not touched for 30 minutes.
How can you find processes that have been unused for half an hour?
Define "untouched" and "unused". You can find out lots of things using the f parameter on ps(1) in BSD-like systems, the -o on Solaris and Sys/V-like systems.
Update
Responding to the comment:
Well, you can do it. Consider, for example, something that runs ps periodically and stores the CPU time used along with the sample time. (Actually, you could do this better with a C program calling the appropriate system calls, but that's really an implementation detail.) Store the sample time and PID, and watch for PIDs whose CPU time has not changed over the appropriate interval. This could even be implemented with an awk or perl program like
while true; do
    ps _flags_
    sleep 30
done | awk -f myprog | tail -f
so that every time awk gets a ps output, it mangles it, identifies candidates, and sends them out to be displayed through tail -f.
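To make that concrete, here is a minimal sketch of what the hypothetical myprog might do, written as an inline awk program instead of a separate file and assuming a procps-style ps: it flags any PID whose cumulative CPU time has not changed for 60 consecutive 30-second samples (i.e. 30 minutes).
while true; do
    ps -eo pid=,time=    # PID and cumulative CPU time, headers suppressed
    sleep 30
done | awk '
{
    pid = $1; cpu = $2
    if (cpu == last[pid]) {
        unchanged[pid]++
        # 60 samples x 30 seconds = 30 minutes with no CPU time accrued
        if (unchanged[pid] == 60)
            print "no CPU time for 30 min: PID " pid
    } else {
        unchanged[pid] = 0
    }
    last[pid] = cpu
}'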
But then you may well have daemon processes that don't get called often; it's not clear to me that CPU time alone is a good measure.
That's the point about defining what you really want to do: there's probably a way to do it, but I can't think of a combination of ps flags alone that will do it.
Related
I am working on a block in GNU Radio. I have come across a strange performance improvement: when I print out some huge amount of data to the terminal the performance is good, and it degrades when I do not print to the terminal.
I infer that by printing to the terminal I am giving the GNU Radio block extra processing time to do its work. This is just my hunch and might not be the exact reason; kindly correct me if this is not right.
So, is there a way to add a specific amount of processing delay within a block (like the one I got while printing data to the terminal) in GNU Radio?
Thanks in advance
First of all, the obvious: don't print large amounts of data to a terminal. Terminals aren't meant for that, and your CPU will pretty quickly be the limiting factor, as it tries to render all the text.
I infer that by printing to the terminal I am giving the GNU Radio block extra processing time to do its work. This is just my hunch and might not be the exact reason; kindly correct me if this is not right.
Printing to the terminal is an I/O operation. That means that whatever handles the data (typically the Linux kernel handling the PTY, or the data might be handed directly to your terminal emulator's process) sets a limit on how fast it accepts data from the printing program.
Your GNU Radio block's work function is simply blocked, because the resources you're trying to use are limited.
So, can I add a specific amount of processing delay within a block (like the one I got while printing data to the terminal) in GNU Radio?
Yes, but it doesn't help in any way here.
You're I/O bound. Do something other than printing to the terminal; that makes a lot of sense anyway, because you can't read that fast.
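If you do need the data, write it somewhere other than the terminal, for example by redirecting the flowgraph's stdout (my_flowgraph.py is just a placeholder name here):
./my_flowgraph.py > samples.log    # keep the data without rendering it in a terminal
./my_flowgraph.py > /dev/null      # or discard it entirely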
It is well known that the Callgrind analysis tool of the Valgrind suite provides the possibility to start and stop the collection of data via the command-line instructions callgrind_control -i on and callgrind_control -i off. For instance, the following code will collect data only after the first hour.
(sleep 3600; callgrind_control -i on) &
valgrind --tool=callgrind --instr-atstart=no ./myprog
Is there a similar option for the Cachegrind tool? If so, how can I use it (I cannot find anything in the documentation)? If not, how can I start collecting data after a certain amount of time with Cachegrind?
As far as I know, there is no such function for Cachegrind.
However, Callgrind is an extension of Cachegrind, which means that you can use Cachegrind features on Callgrind.
For example:
valgrind --tool=callgrind --cache-sim=yes --branch-sim=yes ./myprog
This will measure your program's cache and branch performance as if you were using Cachegrind.
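If you also need the delayed start from the question, the two things should combine: keep the callgrind_control trick and just add the cache/branch simulation flags (untested sketch, same options as above):
(sleep 3600; callgrind_control -i on) &
valgrind --tool=callgrind --instr-atstart=no --cache-sim=yes --branch-sim=yes ./myprog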
I'm doing some work on profiling the behavior of programs. One thing I would like to do is get the amount of time that a process has run on the CPU. I am accomplishing this by reading the sum_exec_runtime field in the Linux kernel's sched_entity data structure.
After testing this with some fairly simple programs which simply execute a loop and then exit, I am running into a peculiar issue, being that the program does not finish with the same runtime each time it is executed. Seeing as sum_exec_runtime is a value represented in nanoseconds, I would expect the value to differ within a few microseconds. However, I am seeing variations of several milliseconds.
My initial reaction was that this could be due to I/O waiting times; however, it is my understanding that the process should give up the CPU while waiting for I/O. Furthermore, my test programs are simply executing loops, so there should be very little to no I/O.
I am seeking any advice on the following:
Is sum_exec_runtime not the actual time that a process has had control of the CPU?
Does the process not actually give up the CPU while waiting for I/O?
Are there other factors that could affect the actual runtime of a process (besides I/O)?
Keep in mind, I am only trying to find the actual time that the process spent executing on the CPU. I do not care about the total execution time including sleeping or waiting to run.
Edit: I also want to make clear that there are no branches in my test program aside from the loop, which simply loops for a constant number of iterations.
Thanks.
Your question is really broad, but you can incur context switches for various reasons. Calling most system calls involves at least one context switch. Page faults cause context switches. Exceeding your time slice causes a context switch.
sum_exec_runtime is equal to utime + stime from /proc/$PID/stat, but sum_exec_runtime is measured in nanoseconds. It sounds like you only care about utime which is the time your process has been scheduled in user mode. See proc(5) for more details.
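For example, assuming the command name in field 2 contains no spaces or parentheses, utime and stime are fields 14 and 15 of /proc/$PID/stat (per proc(5)) and can be pulled out from a shell; the values are in clock ticks, so divide by getconf CLK_TCK (usually 100) to get seconds:
awk '{print "utime:", $14, "stime:", $15, "(clock ticks)"}' /proc/$PID/stat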
You can look at nr_switches, both voluntary and involuntary, which are also part of sched_entity. That will probably account for most of the variation, but I would not expect successive runs to be identical. The exact time that you get for each run will be affected by all of the other processes running on the system.
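On kernels that expose /proc/$PID/sched (scheduler debug statistics enabled), you can watch both the runtime and the switch counters directly, for example:
grep -E 'sum_exec_runtime|nr_switches' /proc/$PID/sched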
You'll also be affected by the amount of file system cache used on your system and how many file system cache hits you get in successive runs if you are doing any IO at all.
To give a very concrete and obvious example of how other processes can affect the run time of the current process, think about what happens if you exceed your physical RAM constraints. If your program asks for more RAM, then the kernel is going to spend more time swapping. That time spent swapping will be accounted in stime but will vary depending on how much RAM you need and how much RAM is available. There are lots of other ways that other processes can affect your process's run time; this is just one example.
To answer your 3 points:
sum_exec_runtime is the actual time the scheduler ran the process including system time
If you count switching to the kernel as the process giving up the CPU, then yes; but it does not necessarily mean a different user process runs, since your process may get the CPU back once the kernel is done.
I think I've already answered this above: there are lots of factors.
I have a program that takes about 1 second to run and takes a file as input and produces another file as output. Problem is I have to be able to process about 30 files a second. The files to process will be available as a queue (implemented over memcached) and don't have to be processed exactly in order, so basically an instance of the program checks out a file to process and does so. I could use a process manager that automatically launches instances of the program when system resources are available.
At the simple end, "system resources" will simply mean "up to two processes at a time," but if I move to a different machine this could be 2 or 10 or 100 or whatever. I could use a utility to handle this, at least. And at the complex end, I would like to bring up another process whenever CPU is available, since these machines will be dedicated. CPU time seems to be the constraining resource - the program isn't memory intensive.
What tool can accomplish this sort of process management?
Storm - Without knowing more details, I would suggest Backtype Storm. But it would probably mean a total rewrite of your current code. :-)
More details in the Tutorial, but it basically takes tuples of work and distributes them through a topology of worker nodes. A "spout" emits work into the topology and a "bolt" is a step/task in the graph where some bit of work takes place. When a bolt finishes its work, it emits the same or a new tuple back into the topology. Bolts can do work in parallel or in series.
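If a full Storm topology is overkill, the simple end of the question ("up to two processes at a time") can also be sketched at the shell level with xargs -P; list_queued_files and ./process_file below are hypothetical stand-ins for pulling names from your memcached queue and for your one-second program:
list_queued_files | xargs -n 1 -P 2 ./process_file    # at most 2 instances at a time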
The BOINC client (which does distributed processing jobs like SETI@home does) is able to turn processing on or off based on whether other processes are using a certain percentage of CPU time. That is, if the user starts to do some work and their processes start using 60% CPU, BOINC can pause to avoid interfering with the user's work.
I would like to do the same thing (monitor CPU usage by other processes). The difficulty as I see it is not monitoring CPU usage, but rather making sure that the information isn't skewed by my own usage. For example, if my process is using a ton of CPU time it may prevent another process from using enough to trigger the pause.
Can someone point me in the right direction? Even a suggestion for what to search for would be useful. I'm not really sure what this feature would be called.
You can use NSTask to set the 'nice' value of the process when your process starts.
Also [[NSThread mainThread] setThreadPriority:0.0]
where the priority value is between 0.0 and 1.0. This is a Cocoa API which may save you frakking about with sudo.
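The shell-level analogue of the same idea is simply to start or re-prioritise the heavy work with nice/renice so the scheduler favours everything else (./my_worker and the PID below are placeholders, not part of the answer above):
nice -n 19 ./my_worker &    # start the worker at the lowest scheduling priority
renice -n 19 -p 12345       # or lower the priority of an already-running PID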