OPEN and WRITE to files in FORTRAN DLL - file-io

I am writing in Fortran and compiling with the g95 compiler.
I need to add log-file output to a DLL I am writing, which currently links and runs with the master program but produces incorrect results. I don't know much about FORTRAN, but I did get the following code to produce output in an EXE I compiled:
OPEN(UNIT=3, FILE='LOG.txt', STATUS='NEW')
WRITE(3,*) "the gospel of PTP is bestowed upon the file."
CLOSE(3)
This works in a standalone EXE: when I run it, it produces a file with the string inside. But when I try to include it in the DLL I am working on, it crashes everything. When I comment it back out, everything runs and works again, but obviously doesn't produce the desired output.
Any ideas? Any FORTRAN or g95 people?

A guess which might help, or might not (I have rarely used Fortran DLLs to write anything directly):
Where do you expect the DLL to write the file 'LOG.txt'? Is it perhaps trying to write to a location it is forbidden to write to? Why that would crash your program I'm not sure, but it's something for you to check. I expect that you ran the EXE version of your code from one of your user directories, where you have write permission.
And, a comment:
In general, avoid single-digit unit numbers in Fortran. Most runtimes preconnect some of them for standard I/O, and while there are conventional assignments (e.g. stdin is usually unit 5 and stdout unit 6), these are not defined in the Fortran standard, and compiler writers are free to use unit numbers as they see fit.
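As a sketch of both suggestions (the path here is just a placeholder for a directory you know is writable, and unit 10 avoids the preconnected single-digit units), you can also pass IOSTAT so a failed OPEN reports an error code instead of aborting the host program. Note too that STATUS='NEW' is a runtime error if LOG.txt already exists, which is worth ruling out:

INTEGER :: IOS
! 'C:\temp\LOG.txt' is a placeholder; use a directory the DLL can write to.
! STATUS='UNKNOWN' opens the file whether or not it already exists.
OPEN(UNIT=10, FILE='C:\temp\LOG.txt', STATUS='UNKNOWN', IOSTAT=IOS)
IF (IOS == 0) THEN
    WRITE(10,*) "log message"
    CLOSE(10)
END IF
! A nonzero IOS means the OPEN failed (bad path, no permission, etc.),
! and execution continues instead of crashing the master program.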

Related

What is an Interpreter, to be exact?

According to Wikipedia, an interpreter uses at least one of the following strategies:
Parse the source code and perform its behavior directly;
Translate source code into some efficient intermediate representation or object code and immediately execute that;
Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter Virtual Machine.
So is a program that reads code and executes it directly an interpreter? Does an interpreter need to convert code into binary? Does a compiler need to convert code into binary?
So is a program that reads code and executes it directly an interpreter?
Yes. By definition, an interpreter reads code and then performs what the code tells it to do. Unlike an interpreter, a compiler reads the code and then produces an executable file that can be run later.
Does an interpreter need to convert code into binary?
Not always. An interpreter may just read the input code and then perform what the code tells it to do, but another type of interpreter uses JIT compilation. Interpreters that use JIT compilation turn the input code into machine code, but do not make an executable file. Instead, they run the machine code in memory and throw it away after it has been run. JIT compilation can be faster than traditional interpretation.
Does a compiler need to convert code into binary?
Yes. In order to create an executable file, a compiler must first read the input code and then turn it into something the computer can understand (machine code). This first step is just like JIT compilation. Unlike a JIT compiler, though, a compiler does not run the machine code it produces, and does not throw it away. Instead, it writes it to a file (called an executable file, or just an executable) in a specific format for the OS it is being compiled for. This format difference is part of why Windows programs cannot run on Linux, and vice versa.
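To make the first strategy concrete, here is a minimal sketch in C (a toy, not any real interpreter): it reads a tiny "language" of single-digit additions and subtractions and performs each operation directly while parsing, with no intermediate representation or machine code involved.

#include <stdio.h>

/* Directly interpret expressions like "1+2-3": parse a character,
 * perform its behavior immediately, and move on. */
int interpret(const char *src) {
    int value = src[0] - '0';                 /* first operand */
    for (int i = 1; src[i] != '\0'; i += 2) {
        int operand = src[i + 1] - '0';
        if (src[i] == '+') value += operand;  /* act on the code right now */
        else               value -= operand;
    }
    return value;
}

int main(void) {
    printf("%d\n", interpret("1+2-3"));       /* prints 0 */
    return 0;
}

A compiler, by contrast, would emit machine code for the whole expression and save it to a file, without ever computing the value itself.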

How to compile a Perl 6 program to generate bytecode?

I am trying to understand Perl 6 and how it differs from Perl 5. I have read that Perl 6 is a compiled language, but I am not seeing how: it does not seem to generate any intermediate code (a directly executable file or JVM bytecode).
I cannot find any option to do this. How is it done?
Currently I am able to execute my code directly:
$ perl6-j hello.p6
Hello world
I am following https://github.com/rakudo/rakudo
You can use --target= on the perl6 command line to see a human-readable trace of each stage of the compiler. On the JVM, if you want a "compiled" bytecode output, you can use --target=jar and then take a look inside the result. But ultimately Perl 6 compiles on the fly unless asked otherwise.
It leaves a bytecode representation of each "CompUnit" cached in library path directories, so that the compile step is faster next time. This can be seen in .precomp directories. The precomp cache is very tricky to use by hand, because of how Perl 6 hashes and indexes all comp units; this is so that libraries with the same name but different version and author can sit side by side. On MoarVM there is no equivalent to --target=jar, but in the .precomp directory you can see the raw bytecode files, which can be directly executed by moar if you link the runtime setting.
Updating the answer, as this is now supported.
To generate the bytecode for a Perl 6 program, run perl6 --target=<backend> --output=foo foo.pl6. You can use mbc, jvm, or js as your target backend. The bytecode will be written to the file foo.
Previously, writing bytecode to a file, for both modules and programs, was not officially supported, hence the lack of documentation for --target.
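For example, to write MoarVM bytecode for a program (file names here are just placeholders):
$ perl6 --target=mbc --output=hello hello.p6
The bytecode for hello.p6 ends up in the file hello.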

Can executables with different file extensions be disassembled into the same instruction set opcodes?

This is a question from someone clueless about disassembly and decompiling in general, so bear with me. I am curious to know whether executables with different file extensions (for example, those listed in http://pcsupport.about.com/od/tipstricks/a/execfileext.htm) can be disassembled into assembly language so that I can analyze opcode patterns across files.
My reasoning is that once all these different file types are in opcode form, they are all on the same level, regardless of source language and so on, so it would be easier to analyze them.
How feasible is this?
EDIT: An example. I have an .exe file and an .app file. If I disassembled both, could I compare their opcodes on the same OS? If not, what about executables from the same OS: for example, if I disassembled two Windows executables, could I compare their opcodes?
EDIT2: How will obfuscators affect my efforts?
In short, no.
The problem is that there is no practical universal instruction set. In practice, every computer architecture has its own instruction set (or sometimes several instruction sets). A native executable format like .exe is compiled to the machine's instruction set, which will differ based on the ISA targeted.
I'm not familiar with the .app format, but it appears to be some sort of bundle containing executable code. So if you have an .exe and an .app targeting the same ISA, you could conceivably disassemble and compare them.
Obfuscation makes things much harder because it is difficult to get a reliable disassembly, let alone deal with stuff like self modifying code.
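If the files do target the same ISA, and your disassembler understands both container formats (a stock Linux objdump reads ELF, while a MinGW build of objdump reads PE files such as .exe), the comparison could look like this sketch (file names hypothetical):
$ objdump -d first_binary > first.asm
$ objdump -d second_binary > second.asm
$ diff first.asm second.asm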

Extract Objective-C binary

Is it possible to extract a binary, to get the code that is behind the binary? With class-dump you can see the implementation addresses, but is it possible to also see the code that's IN the implementation addresses? Is there ANY way to do it?
All your code is compiled down to individual machine instructions, placed in the text section of your executable. The compiler is responsible for translating your higher-level language into the processor-specific instructions, which are much simpler. Reversing this process is nearly impossible unless the code is quite simple; among the problems are the ambiguity of statements and overall readability: a local variable, for instance, is reduced to nothing but an offset address.
If you want to read the disassembled code (the instructions to which the higher-level code was compiled), use this command on an executable:
otool -tV file
You can decompile (more accurately, disassemble) a binary and get its assembly, but there is no way to get back the original Objective-C.
My curiosity begs me to ask why you want to do this!?
otx (http://otx.osxninja.com/) is a good tool for symbolicating the otool-based disassembly.
It will handle both x86_64 and i386 disassembly.
and
Mach-O-Scope (https://github.com/smorr/Mach-O-Scope) is a tool built on top of otx to dump it all into a sqlite3 database for browsing and annotating.
It won't give you the original source, but it will get you pretty close, providing you with the messages that are being sent around in methods.

How few files does it take to load a program on Linux?

The (hypothetical, for now) situation is that the user of my system will be given a chunk of C code, and my system needs to compile and run it in a chroot sandbox that is generated on the fly. I want to require as few files in the sandbox as possible. I'm only willing to play with compiler and linker settings (e.g. statically link everything I can expect to be able to find) and to make some moderate restrictions on what the code can expect to use (e.g. it can't use arbitrary libs).
The question is: how simple can I get the sandbox? Clearly I need the executable, but what about an ELF loader and a .so for the system calls? Can I drop either of them, and is there something else I'll need?
You don't need anything except the executable to run a statically-linked hello world. You will, of course, need a lot more to compile it.
You can test this fairly easily; I did so with the following trivial C code:
#include <stdio.h>
int main() {
    puts("Hello, world\n");
    return 0;
}
Compile it with gcc -static. Then make a new directory (I called it "chroot-dir") and move the output ("hello") into it, so the only file in the chroot is the executable. Then run chroot chroot-dir ./hello, and you'll get Hello, world.
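For reference, the whole test as a transcript (assuming the source is saved as hello.c; chroot itself typically requires root, hence sudo):
$ gcc -static -o hello hello.c
$ mkdir chroot-dir
$ mv hello chroot-dir/
$ sudo chroot chroot-dir ./hello
Hello, world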
Note that there are some things that cannot be compiled statically. For example, if your program does authentication (through PAM), the PAM modules are always loaded dynamically. Also note that various files in /etc are needed for certain calls: any of the getpw* and getgr* functions, the domain name resolution functions, etc. will require nsswitch.conf (and some shared objects, and maybe more config files, and sometimes even more executables, depending on the lookup methods configured). /etc/hosts, /etc/services, and /etc/protocols will probably be quite useful for any networking.
One easy way to figure out what files a program uses is to run it under strace. You must trust the program first, of course.
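For example, to list just the file-related system calls, and thus every path the program tries to touch:
$ strace -e trace=file ./hello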
There is no need for any ELF loader. To check what dynamic libraries you need, run ldd <executable>. If you manage to compile everything statically, it won't need any .so. Beyond that, it's only about the data and directory structure your program might need.
But all this applies only if you use the /usr/bin/chroot command; if you make your program call int chroot(const char *path); itself, after making sure all dynamic libraries are loaded, then you won't need anything in the sandbox directory, not even the executable itself.
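A rough sketch of that approach (error handling kept minimal; chroot() requires root privileges, and the jail path is hypothetical):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* ...load shared libraries, open files, etc. BEFORE this point... */
    if (chroot("/path/to/empty/jail") != 0) {  /* hypothetical path */
        perror("chroot");
        return 1;
    }
    if (chdir("/") != 0) {  /* move the working directory inside the jail */
        perror("chdir");
        return 1;
    }
    /* from here on, the process sees an empty filesystem root */
    puts("running inside the empty jail");
    return 0;
}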
edit: A different idea: use TCC (or rather, libtcc) to compile, link, load, and run the given C chunk. Run the whole process inside an 'outer' chroot jail, dropping to an 'inner' (empty) one just before execution. (Of course, execute in a fork(), or you won't be able to get back from the 'inner' jail to the 'outer' one.) You might also take advantage of libtcc's bounds-checked execution.