I'm trying to profile an application with the massif tool. I run it with this command:
./valgrind.bin --tool=massif --stacks=yes --heap=yes --trace-children=yes --vgdb=yes /usr/bin/agl_proxy
The application runs for a very long time. Generally, when the application receives an interrupt signal, it finishes cleanly and massif generates a profile file with many snapshots:
desc: --stacks=yes --heap=yes
cmd: /usr/bin/app
time_unit: i
#-----------
snapshot=0
#-----------
time=0
mem_heap_B=0
mem_heap_extra_B=0
mem_stacks_B=0
heap_tree=empty
#-----------
snapshot=1
#-----------
time=4501979
mem_heap_B=0
mem_heap_extra_B=0
mem_stacks_B=1480
heap_tree=empty
However, this particular application just dumps a stack trace and hangs without exiting properly, and thus without generating the result file. I can use vgdb to get a snapshot while the application is running, but that generates only one snapshot.
Is there any way to combine the snapshots? I've tried appending a snapshot to the file under a snapshot=# header, but Massif Visualizer complains about the format. Perhaps there is an option to combine them, or some tool.
Using vgdb, you can ask massif to dump all snapshots with the below monitor request:
all_snapshots [<filename>] requests to take all captured snapshots so far and save them in the given <filename> (default massif.vgdb.out).
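For example, from another shell while the program is still running (the PID 1234 and the output filename here are just placeholders for your setup):
vgdb --pid=1234 all_snapshots massif.vgdb.out
The resulting file is in the normal massif output format, so ms_print and Massif Visualizer should be able to read it.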
Our client uses an automation software called ActiveBatch (by Advanced Systems Concepts, Inc.). They're currently using ActiveBatch v8 and are now in the process of migrating the automated jobs to the newer ActiveBatch v11.
Most of the jobs have no problem coping with the newer software and are running OK as of this writing. However, there is one job that is unable to run, or rather, to initialize in the first place. This job runs OK on v8. Whenever this job is run on v11, it produces an error message:
%ABAT-W-CREPRCERR, error creating batch process for job %1
Quite self-explanatory; it means the process for the particular job was not created. As per the user manual, the job's log file might explain more about why the error occurred. The problem is, the log file is not very helpful, as it only shows the magic characters below:

Further reading states that these are the Byte Order Mark for UTF-8. I don't know much about this stuff, but since the log file only contains those characters, I'm not sure they're helpful at all.
Another thing: if I run the job manually (running the EXE via Windows Explorer), no problems are encountered and it succeeds. The job, by the way, is a PowerBuilder 9 application.
I use the command
valgrind --tool=massif --threshold=1 <bin>
The command only generates a massif.out.<pid> file after I close the test program. Is there a way to let massif dump the file incrementally while the test program runs?
The file produced at the end contains the status of the memory at different moments of the program run. The output file can then be visualised in various ways, e.g. using ms_print or massif-visualizer.
These will show the evolution of memory over time, and so should correspond to your request of seeing 'incremental' dumps.
You can also, if you want, trigger a massif dump yourself during execution, typically using vgdb from a shell window. See http://www.valgrind.org/docs/manual/ms-manual.html#ms-manual.monitor-commands for more information.
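For example, a minimal way to grab a detailed snapshot mid-run from another shell (assuming the program running under massif has PID 1234; the output filename is just a placeholder):
vgdb --pid=1234 detailed_snapshot massif.mid-run.out
You can repeat this as often as you like, or use all_snapshots to write out everything massif has recorded so far.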
I'd like to automate a very simple task.
All I'm doing is exporting project files from an application called "Solmetric PV Analyzer". But I need to do it about 100 times.
Not sure if this information will help, but in order to export the projects, I need to load them into the program and then use File -> export traces for entire system.
I'd use something like AutoHotkey, but the sizes of the files vary greatly, so the export time does as well, and I don't want it to have to wait a long, fixed time to do each file.
On top of that, I'm stuck on Windows XP on a computer with limited processing power:
Windows XP SP2
1 GB RAM
Looking at the flow, if I had to do it, I would use Sikuli. It is quite user-friendly and automates anything you see on the screen. It uses image recognition to identify and control GUI components, which is useful when there is no easy access to a GUI's internals or source code.
And it does fit well within your hardware constraints
Windows XP SP2, 1 GB RAM
since it only needs about 200 MB of memory to start. Once you create your script, I'm sure the execution will take even less than that.
Aiming at a full answer: you can even schedule the execution of the scripts via PowerShell/batch files. Here are the CLI arguments that you can use (see the example invocation after the usage listing):
usage:
Sikuli-IDE [--args <arguments>] [-h] [-r <sikuli-file>] [-s] [-t <sikuli-file>]
--args <arguments> specify the arguments passed to Jython's sys.argv
-h,--help print this help message
-r,--run <sikuli-file> run .sikuli or .skl file
-s,--stderr print runtime errors to stderr instead of popping up a message box
-t,--test <sikuli-file> run .sikuli as a unit test case with junit's text UI runner
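For example, a scheduled run could be as simple as this one-liner in a batch file (the script path is a placeholder; point it at wherever you saved your .sikuli project):
Sikuli-IDE -r "C:\Scripts\export_traces.sikuli" -s
The -s flag sends runtime errors to stderr instead of popping up a message box, which is handy for unattended runs.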
I have a very large tar file (>1 GB) that needs to be checked out and is a precondition for executing any tests.
I cannot have a dedicated build server for my tests, since the tests are going to be executed on slave machines which are disposable.
Checking out a file of that size before every run is not optimal, since the test execution time would increase because of this precondition. What is the optimal way of solving this problem?
I would dedicate a location on the slaves for that file.
Then in your tests, check if the file is in that location. If not, check it out and move it there. Since this location is outside your normal work area it won't get cleaned, and the file will stay there for the next test execution to use, and you won't need to check it out again.
Of course, if the file changes you have to clear that cache. A first option would be to do this manually; alternatively, you can create a hash of the file and keep that hash both in the cache and in your version control. You would then compare only the hashes, and only if they differ would you check out the file again.
Of course this requires that you are able to check out the rest of your code without the big file. How to do that obviously depends on the version control system in use.
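A rough shell sketch of that check (the cache path, file name, hash tool and the fetch_big_file.sh checkout step are all placeholders for whatever your environment actually uses):
#!/bin/sh
CACHE_DIR=/opt/test-cache                      # dedicated location outside the normal workspace
EXPECTED_HASH=$(cat big_file.sha256)           # hash kept in version control next to the tests
ACTUAL_HASH=$(sha256sum "$CACHE_DIR/big_file.tar" 2>/dev/null | cut -d' ' -f1)

if [ "$ACTUAL_HASH" != "$EXPECTED_HASH" ]; then
    # Cache is missing or stale: check the big tar out again and move it into place
    ./fetch_big_file.sh "$CACHE_DIR/big_file.tar"
fi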
There is a set of log files that follow the pattern xxxxYYY, where xxxx is some text and YYY is a sequence number that increases by one and wraps around. Only the last n files are available at any given time.
I would like to write a foolproof script that makes sure all the log files are backed up to another server (via ssh/scp).
Can somebody please suggest the logic or a code snippet (perl or shell) for it?
=> The script can run every few minutes, so that bursts of traffic do not cause log files to be missed.
=> Rollover needs to be detected so that files are not overwritten on the destination server/directory.
=> I do not have superuser access on either the source or destination boxes. The destination box does not have rsync installed, and it would take too long to get it installed.
=> Only one log file gets updated at a time.
I would look at having cron run an rsync --backup command.
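A sketch of what that could look like in the crontab (the schedule, paths and host are placeholders, and it assumes an rsync binary is available on both ends of the ssh connection):
*/10 * * * * rsync --backup -e ssh /var/log/app/xxxx* backupuser@backuphost:/backups/logs/
The --backup option renames an existing destination file (default suffix ~) before overwriting it, so a wrapped-around sequence number does not silently clobber the earlier copy.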