Can massif dump log incrementally - valgrind

I use the command
valgrind --tool=massif --threshold=1 <bin>
The command only generates a massif.out.<pid> file after I close the test program. Is there a way to make massif dump the file incrementally while the test program runs?

The file produced at the end contains the status of the memory at different moments of the program's run. The output file can then be visualised in various ways, e.g. using ms_print or massif-visualizer.
These will show the evolution of the memory over time, and so should correspond to your request for 'incremental' dumps.
You can also, if you want, trigger a massif dump yourself during execution, typically using vgdb from a shell window. See http://www.valgrind.org/docs/manual/ms-manual.html#ms-manual.monitor-commands for more information.
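For example, a minimal sketch of triggering snapshots from a second terminal (the binary name is illustrative; the monitor commands are the ones documented at the link above):

# Terminal 1: run the program under massif (vgdb support is on by default)
valgrind --tool=massif --threshold=1 --vgdb=yes ./my_program

# Terminal 2: ask massif for snapshots while the program is still running
vgdb snapshot                          # save a snapshot to massif.vgdb.out
vgdb detailed_snapshot massif.mid.out  # save a detailed snapshot to massif.mid.out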

Valgrind massif combine snapshots

I'm trying to profile an application with the massif tool. I run it with this command:
./valgrind.bin --tool=massif --stacks=yes --heap=yes --trace-children=yes --vgdb=yes /usr/bin/agl_proxy
The application runs for a very long time. Normally, when the application receives an interrupt signal, it finishes cleanly and massif generates a profile file with many snapshots:
desc: --stacks=yes --heap=yes
cmd: /usr/bin/app
time_unit: i
#-----------
snapshot=0
#-----------
time=0
mem_heap_B=0
mem_heap_extra_B=0
mem_stacks_B=0
heap_tree=empty
#-----------
snapshot=1
#-----------
time=4501979
mem_heap_B=0
mem_heap_extra_B=0
mem_stacks_B=1480
heap_tree=empty
However, this particular application just dumps a stack trace and hangs without exiting properly, and thus without generating the result file. I can use vgdb to take a snapshot while the application is running, but that generates only one snapshot.
Is there any way to combine the snapshots? I've tried appending a snapshot to the file under a snapshot=# header, but Massif Visualizer complains about the format. Perhaps there is an option to combine them, or some tool.
Using vgdb, you can ask massif to dump all snapshots with the following monitor request:
all_snapshots [<filename>]: takes all the snapshots captured so far and saves them in the given <filename> (default massif.vgdb.out).
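For example (the pid is an illustrative placeholder; use the one valgrind prints at startup):

# Write every snapshot captured so far, without stopping the program
vgdb --pid=12345 all_snapshots massif.vgdb.out

The resulting file has the same format as a normal massif output file, so ms_print and Massif Visualizer can read it directly.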

Automating a task

I'd like to automate a very simple task.
All I'm doing is exporting project files from an application called "Solmetric PV Analyzer". But I need to do it about 100 times.
Not sure if this information will help, but in order to export the projects, I need to load them into the program and then use File -> Export traces for entire system.
I'd use something like AutoHotkey, but the sizes of the files vary greatly, so the export time does as well, and I don't want the script to sit through a long fixed wait for each file.
On top of that, I'm stuck on Windows XP on a computer with limited processing power.
Windows XP SP2
1 GB RAM
Looking at the flow, if I had to do it, I would use Sikuli. It is quite user friendly and automates anything you see on the screen: it uses image recognition to identify and control GUI components, which is useful when there is no easy access to a GUI's internals or source code.
It also fits well within your hardware constraints (Windows XP SP2, 1 GB RAM), since it only needs about 200 MB of memory to start. Once you create your script, I'm sure the execution will take even less than that.
Aiming at a full answer: you can even schedule the execution of the scripts via PowerShell/batch files. Here are the CLI arguments that you can use:
usage:
Sikuli-IDE [--args <arguments>] [-h] [-r <sikuli-file>] [-s] [-t <sikuli-file>]
--args <arguments> specify the arguments passed to Jython's sys.argv
-h,--help print this help message
-r,--run <sikuli-file> run .sikuli or .skl file
-s,--stderr print runtime errors to stderr instead of popping up a message box
-t,--test <sikuli-file> run .sikuli as a unit test case with junit's text UI runner
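For instance, a minimal batch sketch you could run from a scheduled task (the script path is illustrative, and I'm assuming the Windows launcher is Sikuli-IDE.bat, matching the usage text above):

REM Run the recorded export script unattended; -s sends errors to stderr instead of a popup
Sikuli-IDE.bat -r C:\scripts\export_traces.sikuli -s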

Dummy step does not work in Job

Each transformation creates a CSV file in a folder, and I want to upload all of them when the transformations are done. I added a Dummy step, but the process didn't work as I expected: each transformation executes the Hadoop Copy Files step. Why? And how should I design the flow? Thanks.
First of all, if possible, try launching the .ktr files in parallel (right-click on the START step > click on Launch Next Entries in Parallel). This will ensure that all the ktrs are launched in parallel.
Secondly, you can choose any of the steps below (instead of the Dummy step), depending on what is feasible for you:
"Checks if files exist" step: before moving to the Hadoop step, do a small check that all the files have been properly created, then proceed with the execution.
"Wait for" step: wait some time for all the transformations to complete before moving to the next entry. I don't suggest this, since the time to write a CSV file can vary, unless you are totally sure of the timing.
"Evaluate files metrics" step: check the count of the files before moving forward. In your case, check whether the file count is 9.
The idea is simply to do some sort of check on the files before you copy the data to HDFS.
Hope it helps :)
You cannot join the transformations the way you have.
Each transformation, upon success, proceeds to the Dummy step, so the Dummy step (and everything after it) is called for EVERY transformation.
If you want the Hadoop Copy Files step to run only once, after the last transformation finishes, you need to do one of two things:
Run the transformations in sequence, where each ktr is called upon success of the previous one (slower).
As suggested in another answer, launch the ktrs in parallel, but with one caveat: they need to be called from a sub-job (see the sketch after this list). Here's the idea:
Your main job has a Start entry, calls a sub-job and, upon success, calls the Hadoop Copy Files step.
Your sub-job has a Start entry from which all transformations are called in different flows. You use "Launch next entries in parallel", so all are launched at once.
The sub-job keeps running until the last transformation finishes, and only then is the flow passed to the Hadoop Copy Files step, which is therefore launched only once.
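A sketch of that layout (job and transformation names are illustrative):

main.kjb:
    START -> run_all_transforms.kjb -> Hadoop Copy Files

run_all_transforms.kjb:
    START -> transform1.ktr
          -> transform2.ktr
          ...
          -> transform9.ktr
    (START has "Launch next entries in parallel" enabled)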

IOMeter doesn't write log files due to full disk

Does anyone have a workaround for IOMeter not writing its logs to disk? I believe this happens because the iobw.tst file takes up the whole disk. With the test running, I have manually created a temporary 1 MB file while the disk was filling up, then deleted that file once the disk was full and while the reads and writes were being performed; this consistently produces the full log file for the test. Similarly, clearing the Recycle Bin or temporary files at that point produces the same result.
Does anyone know of a way to reserve this space for the log file using a configuration file or something along those lines? IOMeter is part of an automated test suite that I'm working on, and this issue is preventing full automation.
You have to compile Dynamo with the "DETAILS" and/or "DEBUG" flags on.
Dynamo will then store all the info in the ~/std.out log (if you're under Linux).
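A minimal sketch of such a build, assuming a GNU make setup where those flags become preprocessor defines (the variable name is an assumption, not something the IOMeter docs confirm):

# Rebuild Dynamo with the extra logging compiled in
make clean
make CFLAGS="-DDETAILS -DDEBUG"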

use Archive Utility.app from the command line (or with AppleScript)

I would like to use Archive Utility.app in an app I'm writing to compress one or more files.
Doing (from the command line):
/System/Library/CoreServices/Archive\ Utility.app/Contents/MacOS/Archive\ Utility file_to_zip
This does work, but it creates a .cpgz file. I could live with that (even though .zip would be better), but the main problem is that I am not able to compress 2 files into 1 archive:
/System/Library/CoreServices/Archive\ Utility.app/Contents/MacOS/Archive\ Utility ~/foo/a.txt ~/bar/b.txt
The above command will create 2 archives (~/foo/a.txt.cpgz and ~/bar/b.txt.cpgz).
I cannot get this to do what I want either:
open -a /System/Library/CoreServices/Archive\ Utility.app --args xxxx
I'd rather not use the zip command, because the files to be compressed are rather large, so it would be nice to have the built-in progress bar.
Or could I use Archive Utility programmatically?
Thanks.
Archive Utility.app uses the following to create its zip archives:
ditto -c -k --sequesterRsrc --keepParent Product.app Product.app.zip
Archive Utility.app isn't scriptable.
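Since it isn't scriptable, you can invoke ditto yourself. ditto archives a single source item, so to get two files into one zip you can stage them in a folder first; a sketch using the asker's paths (the staging directory is illustrative):

# Stage both files in one folder, then zip that folder with ditto
mkdir /tmp/bundle
cp ~/foo/a.txt ~/bar/b.txt /tmp/bundle/
ditto -c -k --sequesterRsrc /tmp/bundle ~/archive.zip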
The -dd/--display-dots option causes the command-line zip utility to display progress dots while compressing; you could parse zip's output to drive your progress bar. zip will also display dots if it takes more than five seconds to scan and open files.
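For example (the archive name is illustrative):

# Emit progress dots as data is compressed; parse this output for a progress bar
zip -dd ~/archive.zip ~/foo/a.txt ~/bar/b.txt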
Better would be to integrate a compression library, such as zlib or libbzip2. Both of those let you compress a portion of the data at a time. You'll need to handle progress bar updates, which you can do after compressing each block of data.
How about Automator? The "Create Archive" action would work.
I have used Archive Utility to decompress files from AppleScripts:
tell application "Archive Utility" to open filePath
However, this only tells Archive Utility to start decompressing. It will complete in its own time, and the AppleScript will continue to execute without waiting for the decompression to finish. Archive Utility will not tell the AppleScript when it is done.
You can use the above line if Archive Utility is already running. It will add the new file to its queue and decompress when ready.
Also, Archive Utility's preferences control how it accomplishes the goal. For example, it might decompress to a different folder or delete the original. You can run it as a regular application and change the preferences if that helps.
I have not used this to get Archive Utility to compress files.