How do I get the .write function to work with PyScripter?

To reproduce the problem:
In the PyScripter editor, write:
outf = open('output.txt', 'w')
outf.write('hello, world!')
Result:
For me at least, here is what happens when output.txt does not already exist:
output.txt is created.
output.txt contains no data or text at all when opened in any text editor.
So my question is, how do I make this work?
Other information:
I am using PyScripter 2.5.3.0 x64, with 64-bit Python 2.7.3 as the interpreter.
Printing to the console works fine, and all other functions and code work fine.
When I use Python from the Command Prompt, I can write to output files fine. My problem is only in PyScripter.
Thanks,
DS

The question was already resolved in comments, but I will post a complete answer regardless.
Writes to files are buffered, so they may not hit the disk right away. To make sure they do, flush the buffer by closing the file:
outf.close()
If you don't want to call close explicitly, use a with statement, which closes the file automatically when the block ends:
with open('output.txt', 'w') as outf:
    outf.write('hello, world!')
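If you want the text to appear on disk while the file is still open (for example, to watch output.txt while a long script runs), you can also flush the buffer explicitly; a minimal sketch:
outf = open('output.txt', 'w')
outf.write('hello, world!')
outf.flush()   # push the buffered data out so other programs can see it
# ... more writes ...
outf.close()   # still close the file when you are done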

Related

How do I tell Octave where to find functions without picking up other files?

I've written an Octave script, hello.m, which calls subfunc.m and takes a single input file, data.txt, as a command-line argument, which it loads with load(argv(){1}).
If I put all three files in the same directory, and call it like
./hello.m data.txt
then all is well.
But if I've got another data.txt in another directory, and I want to run my script on it, and I call
../helloscript/hello.m data.txt
this fails because hello.m can't find subfunc.m.
If I call
octave --path "../helloscript" ../helloscript/hello.m data.txt
then that seems to work fine.
The problem is that if I don't have a data.txt in the current directory, the script will pick up whatever data.txt is lying around in ../helloscript.
This seems a bit fragile. Is there any way to tell Octave, preferably in the script itself, to get subfunctions from the same directory as the script, but to get everything else relative to the current directory?
The most robust solution I can think of at the moment is to inline the subfunction in the script, which is a bit nasty.
Is there a good way to do this, or is it just a thorny problem that will cause occasional hard-to-find problems and can't be avoided?
Is this in fact just a general problem with scripting languages that I've never noticed before? How does e.g. Python deal with it?
It seems like there should be some sort of library-load-path that can be set without altering the data-load-path.
Adding all your subfunctions to your program file is not nasty at all. Why would you think so? It is perfectly normal to have function definitions in your script. The only language I know of that does not allow this is Matlab, but that's just braindead.
The other alternative you have is to check that the input file argument, data.txt, actually exists. Like so:
fpath = argv (){1};
[info, err, msg] = stat (fpath);
if (err)
error ("could not stat `%s' : %s", fpath, msg);
endif
## continue your script knowing the file exists
But really, I would recommend doing both: add your subfunctions to your main program (the only reason to keep them in a separate file is if you plan to share them with other programs), and always check your input arguments.
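As for the question's aside about how Python deals with this: the interpreter puts the script's own directory on sys.path, so helper modules that sit next to the script are importable no matter where it is invoked from, while ordinary file paths are still resolved relative to the caller's current directory. A minimal sketch, where hello.py and subfunc are hypothetical names mirroring the Octave files:
#!/usr/bin/env python
# hello.py -- hypothetical Python analogue of hello.m
import os
import sys

# Belt and braces: put the script's own directory on the module search
# path explicitly, so "import subfunc" finds the helper next to this file.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

import subfunc  # hypothetical helper module, the analogue of subfunc.m

# Data paths are resolved relative to the current working directory,
# i.e. wherever the user ran the script from.
with open(sys.argv[1]) as f:
    data = f.read()

subfunc.process(data)  # hypothetical call into the helper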

Output to stderr in REBOL2?

I am trying to get my CGI scripts running on my web host (which runs on FreeBSD). To debug why I keep getting the dreaded "premature end of script headers" error, their support recommended that I redirect all my output to stderr, rather than printing it. Looking up how to do this, I came across a very old RAMBO ticket about it, but it looks like it was never implemented.
Per some of the answers to this question, it seems like I should be able to do a call {echo Hello, world >&2} to achieve this, but it doesn't work.
How can I write to stderr in REBOL2?
For my CGI-specific scenario, I have a truly awful workaround. Since writing to stderr in Perl (with which I am entirely unfamiliar) is a one-liner, I'm currently calling the REBOL script from Perl and printing its output to stderr from there:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
# Note the backticks
my $the_string = `/home/public/rebol -csw test-reb.cgi`;
print STDERR $the_string;
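For what it's worth, the same wrapper can be sketched in Python instead of Perl (the rebol path and flags below are simply copied from the Perl version):
#!/usr/bin/env python
# Capture the REBOL CGI script's stdout and forward it to stderr,
# mirroring the Perl backtick wrapper above.
import subprocess
import sys

output = subprocess.check_output(["/home/public/rebol", "-csw", "test-reb.cgi"])
sys.stderr.write(output.decode("utf-8", errors="replace"))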
This webpage has some suggestions for solving your real problem: http://www.liquidweb.com/kb/apache-error-premature-end-of-script-headers/
Perhaps you did not print the headers as the very first thing in your script; that must come first. Maybe the permissions are not sufficient, or the .r file type was not added to your .htaccess as a CGI-capable type, or the (otherwise correct!) rebol core executable does not have the right permissions. Or perhaps your script ends up in an endless loop?
Some hints on redirecting errors for a REBOL CGI script:
http://www.rebol.com/docs/core23/rebolcore-2.html#section-6.2
Better late than never... I've just implemented it for Rebol3 in my Rebol fork.
https://github.com/Oldes/Rebol-issues/issues/2468
The syntax will probably change a little, because I don't like that the system console port is named input, although it is not just for input.
So far it is:
print 1 ;<- std_out
modify system/ports/input 'error on
print 2 ;<- std_err
modify system/ports/input 'error off
print 3 ;<- std_out

doxygen latex make fails for input encoding error

I have a git repo project in Eclipse which I have been documenting using doxygen (v1.8.4).
If I run the LaTeX make on a fresh clone of the project, it runs fine and the PDF is produced.
However, if I then run a doxygen build, which completes OK, and then attempt to run the LaTeX make again, it fails with:
! Package inputenc Error: Keyboard character used is undefined
(inputenc) in inputencoding `utf8'.
See the inputenc package documentation for explanation.
Type H <return> for immediate help.
...
I have tried switching the encoding of the Doxyfile by setting DOXYFILE_ENCODING to ISO-8859-1, with no change in the result. How can I fix this? Thanks.
EDIT: As far as I know, I have not used any non-UTF-8 characters in my files; the file referenced before the error is very short and definitely doesn't contain any. I've even tried clearing my LaTeX output directory and building from scratch, with no luck.
EDIT: I realised that the doxygen build only appears to run correctly. It doesn't show any errors, but it should, for example, run dot and build about 10 graphs. The console output says "Running dot", but it doesn't say "generating graph (n/x)" like it does when it actually makes the graphs...
Short answer: by a slow process of elimination, I found that this was caused by a single apostrophe in a file that had appeared to build and make without error!
Long answer: first I used the project properties to flip the encoding from the default Cp1252 to UTF-8. Then I started removing files one by one, rebuilding and remaking after each removal, until the make ran successfully. I re-added all the files but deleted the content of the most recently removed file and tested the make, to confirm that this file and only this file caused the issue; the make ran fine. So I pasted the content back into the empty file and started deleting smaller and smaller sections of it, again rebuilding and remaking each time, until I was left with a good make without the apostrophe and a bad one with it. I simply retyped the apostrophe (which forced it to be a UTF-8 character), and success! Such an annoying bug!
Dude, you did it the hard way. Why not use Python to do the work for you? For example, taking the file name to check from the command line:
import sys

fn = sys.argv[1]                 # file to check
with open(fn, "rb") as f:
    data = f.read()

for i, ch in enumerate(data):    # iterating bytes in Python 3 yields ints
    if ch > 0x7F:                # non-ASCII character
        print("char: %c, idx: %d, file: %s" % (ch, i, fn))
        print("txt: %s" % data[max(i - 30, 0):i + 30])

In Lua, how to print the console output into a file (piping) instead of using the standard output?

I am working with Torch7 and the Lua programming language. I need a command that redirects the output of my console to a file, instead of printing it in my shell.
For example, in Linux, when you type:
$ ls > dir.txt
The system will print the output of the command "ls" to the file dir.txt, instead of printing it to the default output console.
I need a similar command for Lua. Does anyone know it?
[EDIT] A user suggested to me that this operation is called piping. So the question should be: "How do I do piping in Lua?"
[EDIT2] This is the kind of command I would like to use:
$ torch 'my_program' > printed_output.txt
Have a look here -> http://www.lua.org/pil/21.1.html
io.write seems to be what you are looking for.
Lua has no default function to create a file from the console output.
If your application logs its output, which is probably what you're trying to do, it will only be possible to do this by modifying the Lua C++ source code.
If your internal system has access to the output of the console, you could do something similar to this (and set it on a timer, so it runs every 25 ms or so):
dumpoutput = function()
  -- io.open (not io.write) is what returns a file handle
  local file = io.open([path to file dump here], "w+")
  for i, line in ipairs([console output function]) do
    file:write("\n" .. line)
  end
  file:close()
end
Note that the console output function has to store the output of the console in a table.
To clear the console at the end, just do os.execute( "cls" ).

Exporting Mathematica Print[] Output to a .txt file

I have a large Mathematica notebook that uses Print[] commands periodically to output runtime messages. This is the only output (aside from exported files) that this notebook generates. Is there any way I can automate the export of this output to a .txt file without having to re-write the Print[] commands?
According to the documentation, Print outputs to the $Output channel, which is a list of streams. So, at the beginning of the notebook:
strm = OpenWrite["output.log"];
AppendTo[ $Output, strm ];
and at the end of the notebook
Close[strm];
Note: if execution is interrupted prior to closing the stream, you'll have to close it manually. Also, the above code will overwrite prior data in "output.log", so you may wish to use OpenAppend instead.
Edit: to guarantee that the stream gets closed even on an Abort, consider using one of the techniques outlined here.
You want the PutAppend command.