Why is writing to a file from my python script getting overwritten [duplicate] - io-redirection

This question already has answers here:
How to redirect and append both standard output and standard error to a file with Bash
(8 answers)
Closed 6 years ago.
This might be a simple OS question rather than a programming question, so we might need to migrate it to another community.
I made a mistake and came across some weird behavior that I'd like to understand.
In my Python script, I call a Java program that someone else wrote. It uses a Logger to log messages, and I write the Java program's output to outfile. I also want to redirect the output of my own script to another file, because I don't want to search through the sea of messages from the Java program.
import subprocess
import sys

out = sys.argv[2]
...
...
with open(out, 'w') as outfile:
    # call some java programs via Popen, but log
    # some msgs after all the java programs have run
    cmd = ["java", "-cp", "blah.jar", "com.main.blah", "param1", "param2"]
    proc = subprocess.Popen(cmd, stdout=outfile)
    proc.wait()
....
....
print("got here to test where this msg gets written")
When running my script, I redirected stdout and stderr to the same file name by accident.
python script.py input output > output 2>&1
Because of this accident, I came across this weird behavior: the redirected stdout and stderr get written at the beginning of the file "output" rather than at the end, and thus overwrite the existing messages from the Java program.
I'm just curious why this is happening. My guess is that the file offset used for the Java program's output kept moving forward as it wrote its log messages, but the file offset behind the shell redirect never moved, because that file descriptor had no idea anything else was being written; so when the redirect's turn came, it wrote at what it thought was the end of the file. Is this correct? Hopefully someone can enlighten me on this situation. Thanks.
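To illustrate the guess, here is a minimal, made-up Python sketch (not the real script): two independent handles on the same file each keep their own offset, so the later writer overwrites bytes from its own position instead of appending.
# Two independent handles on the same file, each with its own file offset.
with open("demo.txt", "w") as writer_a, open("demo.txt", "w") as writer_b:
    writer_a.write("AAAAAAAAAA\n")   # writer_a's offset advances to byte 11
    writer_a.flush()
    writer_b.write("BB\n")           # writer_b's offset is still 0
    writer_b.flush()

with open("demo.txt") as f:
    print(f.read())   # "BB" then "AAAAAAA": the first bytes were clobbered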
P.S. When I wrote the command "correctly", the Java program's output did go to one file and the redirected script output did go to another file.
UPDATE:
Thanks for all the views and comments/answers. I think I didn't word my question correctly, so I might delete it. I don't think this is a duplicate question: it is not about appending, or about writing both stdout and stderr to the same file; I know how to do that. It is about two things trying to write to the same file and the output getting all mixed up. I put my guess as to what was happening in my original post, but it seems it wasn't clear.

In bash:
> will overwrite a file
>> will append to a file
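The same distinction in Python, just for comparison (the file name is only an example):
with open("output", "w") as f:    # like '>'  : truncate, then write from offset 0
    f.write("first run\n")
with open("output", "a") as f:    # like '>>' : always write at the current end of the file
    f.write("second run\n")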

Related

Giving unique names to GDB logging files in a script

I would like to log GDB command output to a log file.
This is done using the following command:
set logging file outfulfile.txt
But, instead of simple outfulfile.txt, I would like to give a unique name to the file; for example outfulfile-PID.txt. This is because I will have several processes simultaneously producing the output and I want each one to log to its own unique file.
How can I programmatically derive such a file name in a GDB script?
There are a few ways.
One relatively simple way is to use gdb's "eval" command. It substitutes arguments like printf, and then executes the result as a gdb command. This is a new-ish command.
If you don't have "eval" you might still have Python scripting. In this case you can write a short (one line) Python script like:
(gdb) python gdb.execute('set logging file ' + .. your logic here ..)
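For example, one possible way to fill in that logic (this uses gdb's own process ID via os.getpid(); if you want the debugged process's PID instead, gdb.selected_inferior().pid should work once the inferior is running):
(gdb) python import os; gdb.execute('set logging file outfulfile-%d.txt' % os.getpid())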
If you don't have Python scripting, then your gdb is really old and you ought to upgrade. However you can still maybe accomplish what you want, just with some awful gyrations using "shell" and writing out a script that you then "source" back into gdb. Though this technique seems somewhat hard in this case.

How to run a shell command in cocoa and get output?

After repeated searching I have not found an elegant solution to this issue: how to run a shell command in Objective-C and get its output. I have read many questions regarding this, but they all fail to answer my question.
Some get the exit value ( Get result from shell script objective-c ), others only run the command ( Cocoa/ Objective-C Shell Command Line Execution ), and others have me write the output to a file ( Execute a terminal command from a Cocoa app ).
I really would like to avoid writing and then reading a file, as that is not a very clean solution.
Is there no way to read the output directly in obj-c? If so how?
The code from "doshellscript" from the first link (Get result from shell script objective-c) actually does return an NSString with the output of the command. If it's not working for you, maybe the command is outputting over stderr rather than stdin? Have you tried this yet? The standard mechanism for running commands in Cocoa is NSTask, so definitely at least start there.
Look at the class PRHTask. It is a replacement for NSTask, with completion blocks. https://bitbucket.org/boredzo/prhtask
Extract from the header:
First, rather than having to set your own pipe for standard output and error, you can tell the task to accumulate the output for you, and retrieve it when the task completes.
Second, when the process exits, rather than posting an NSNotification, a PRHTask will call either of two blocks that you provide. You can set them both to the same block if you want.
If your task needs admin privileges, you might want to look at STPrivilegedTask.

Can I write IO statements inside a dll?

This is a newbie question. Can I write statements like printf or open a file inside a dll?
Opening a file is certainly possible in all cases.
However, using printf() depends on whether the executable calling your DLL is a console program or not. If it's a GUI program, then there is nowhere for the printf() output to go, so it will not appear. If it's a console program, you'll see the output on the console.
Your question and its title are asking two different questions. But the answer to the question body is yes -- libraries can certainly use those functions.
printf might not do anything though, depending on whether standard output has been closed by the program using the library.

Strange behavior

I wrote a simple script named test:
echo hello #<-- inside test
If I press Enter once after hello, my script will run; if I don't press it, it will not; and if I press it twice, I get my hello plus a "command not found" error. Can somebody please explain this behavior to me? Thanks in advance.
The "#<-- inside test" annotation is not part of the code; the echo line is the actual code.
I run it in C shell, after writing the file with an editor on Windows, using the command:
source ./test
Some points:
You should not ask questions tagged with both the [csh] and [bash] tags - these are completely different programs and implement completely different script programming languages
You should never name a script (or any other program) test, as test is the name of a shell built-in (in bash and other shells) and of a standard utility
Post the actual code you are asking about, without annotations and show how you run it.
I have tried a similar case. I wrote a script like yours, saved it with Windows Notepad (with CRLF line terminators) and ran it in bash, with the same effect as yours in csh. The problem is that bash (and csh as well) does not understand Windows' two-byte line terminators; the stray carriage-return characters end up being treated as part of commands, which obviously do not exist.
The solution is to change your editor, or to configure your current editor to use Unix line terminators.
You can try, for example, Notepad++. Remember to change the line terminators to LF.
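If you cannot switch editors right away, here is a rough Python sketch of one way to strip the carriage returns yourself (the file name is just an example; a dos2unix-style tool does the same job):
# Rewrite a script in place with Unix (LF) line endings.
with open("myscript", "rb") as f:
    data = f.read()
with open("myscript", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))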

Best way to test command line tools? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
We have a large collection of command-line utilities that we write ourselves and use frequently. At the moment, testing them is very cumbersome and, consequently, we don't do as much testing as we ought to.
I am wondering if anyone can suggest good techniques or tools for doing a good job of this kind of thing.
This is UNIX.
I recommend structuring your command line tool's code so that the command line utility is a thin client of a library of functions and/or classes.
Rather than simply using std::cout to print output, have the library functions take an ostream reference that defaults to std::cout. When you are testing, pass in a std::stringstream to collect the output.
Finally, simply compare your utility's output with expected results using your favorite unit testing framework.
(I apologize for the C++ specific example... I'm sure there are ways to do similar things in other languages too).
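For instance, here is roughly the same idea sketched in Python (the greet function and its expected output are invented purely for illustration): the library function writes to a file-like object that defaults to sys.stdout, and the test captures the output in an io.StringIO instead.
import io
import sys

def greet(name, out=sys.stdout):
    # Library function: writes to any file-like object (stdout by default).
    out.write("Hello, %s!\n" % name)

def main(argv=None):
    # Thin command-line wrapper around the library function.
    argv = sys.argv if argv is None else argv
    greet(argv[1] if len(argv) > 1 else "world")
    return 0

def test_greet():
    # The test swaps in an in-memory buffer and checks the captured output.
    buf = io.StringIO()
    greet("tester", out=buf)
    assert buf.getvalue() == "Hello, tester!\n"

if __name__ == "__main__":
    sys.exit(main())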
You can write tests that resemble an interactive shell session using Cram. It has a flexible test specification format that allows you to match output using Perl regexes or shell-like wildcards. Cram will replay the commands from the test, compare the output to the reference, and report the differences.
Aruba is a Cucumber extension for testing command line applications written in any programming language.
To use it, you will need ruby to run the tests, but the purpose of aruba is to provide a library of pre-defined step definitions so that you won't need to write any ruby code to make a workable test suite. (Though at some point you probably will want to write a bit of ruby to make a few custom steps.)
You can see a sophisticated example of a command line tool tested with aruba here: jingweno/gh
You should be able to call them from a shell script (batch file, on MS operating systems), redirect the output to a file, then scan the file programmatically to ensure that it has the correct output. I'm not aware of a testing framework that automates this for you, but it should be fairly straightforward to set it up yourself.
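A rough sketch of that approach, here in Python rather than a shell script (the tool name, arguments, and fixture paths are invented for illustration):
import subprocess

def check_tool(cmd, expected_path):
    # Run the utility, capture its stdout, and compare it to a saved known-good file.
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    with open(expected_path) as f:
        expected = f.read()
    if result.stdout != expected:
        raise AssertionError("output of %r did not match %s" % (cmd, expected_path))

# Hypothetical example run:
check_tool(["./wordcount", "--lines", "fixtures/sample.txt"], "fixtures/sample.expected")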
Bats (Bash Automated Testing System) by Sam Stephenson. It is tiny, written purely in shell and has a nice set of features.
The previously suggested Aruba looks interesting, but in some cases it might be quite an overkill in terms of dependencies (Ruby, Cucumber).
I did a little bit of this (a loooong time ago hehe) using Expect to check that what happened was what I, umm, expected
I have developed a tool "Exactly"
https://github.com/emilkarlen/exactly
It executes the thing to test in a temporary sandbox directory.
The README contains a number of examples.
A test of a hypothetical program "classify-files-by-moving-to-appropriate-dir" can look like this:
[setup]
dir input
dir output/good
dir output/bad
file input/a.txt = <<EOF
GOOD contents
EOF
file input/b.txt = <<EOF
bad contents
EOF
[act]
classify-files-by-moving-to-appropriate-dir GOOD input/ output/
[assert]
dir-contents input empty
exists output/good/a.txt : type file
dir-contents output/good num-files == 1
exists output/bad/b.txt : type file
dir-contents output/bad num-files == 1
You can do this from a batch file or Windows Scripting Host.
But I would suggest using a task scheduler like WinCron (http://www.splinterware.com/products/wincron.htm) or other free/professional software.
There you can easily copy/paste the command-line parameters that you want to vary when you need to run your software hundreds of times.
You could use Perl with the Test::More library, which provides a great framework for testing CLIs.
Though primarily designed for unit testing, you can extend it to test user workflows.
Some of the methods:
use Test::More;    # import the testing functions

# Various ways to say "ok"
ok($got eq $expected, $test_name);
is($got, $expected, $test_name);
isnt($got, $expected, $test_name);

# Rather than print STDERR "# here's what went wrong\n"
diag("here's what went wrong");

like($got, qr/expected/, $test_name);
unlike($got, qr/expected/, $test_name);
cmp_ok($got, '==', $expected, $test_name);