Does OCaml have a way to get the current file/module/script name? Something like:
C/C++'s argv[0]
Python's sys.argv[0]
Perl/Ruby's $0
Erlang's ?FILE
C#'s Environment.CommandLine
Factor's scriptname/script
Go's os.Args[0]
Haskell's System/getProgName
Java's System.getProperty("sun.java.command").split(" ")[0]
Node.js's __filename
etc.
I don't know anything about OCaml, but some googling turned up
Sys.argv.(0)
See http://caml.inria.fr/pub/docs/manual-ocaml/manual003.html#toc12
I presume you are scripting in OCaml. Then Sys.argv.(0) is the easiest way to get the script name. The Sys module also provides Sys.executable_name, but its semantics are slightly different:
let _ = prerr_endline Sys.executable_name; Array.iter prerr_endline Sys.argv;;
If I put the above line in test.ml and run it with ocaml test.ml hello world, I get:
/usr/local/bin/ocaml - executable_name
test.ml - argv.(0)
hello - argv.(1)
world - argv.(2)
So the OCaml toplevel does something clever with argv for you.
In general, obtaining the current module name in OCaml is not easy, for several reasons:
ML modules are so flexible that they can be aliased, included into other modules, and applied to module functors.
OCaml does not embed the module name into its object file.
One possible workaround is to add a variable for the module name yourself, like:
let ml_source_name = "foobar.ml"
This definition could probably be inserted automatically by some pre-processing. However, I am not sure whether CamlP4 has access to the name of the source file currently being processed.
If your main purpose is simple scripting, then this pre-processing is of course too complicated, I am afraid.
let _ =
  let program = Sys.argv.(0) in
  print_endline ("Program: " ^ program)
And posted to RosettaCode.
In OCaml >= 4.02.0, you can also use __FILE__ to get the filename of the current file, which is similar to Node's __filename and not the same as Sys.argv.(0).
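For example, here is a minimal sketch (assuming OCaml >= 4.02.0 and that the file is saved as test.ml) contrasting the two:
let () =
  (* __FILE__ is the source file name as seen by the compiler *)
  print_endline __FILE__;
  (* argv.(0) depends on how the program was invoked *)
  print_endline Sys.argv.(0)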
Related
I tried to import System.Directory in my Frege program (in Eclipse) in order to use functions such as getDirectoryContent, etc., and I get this error:
Could not import module frege.system.Directory (java.lang.ClassNotFoundException: frege.system.Directory)
What do I have to do?
That is because the module frege.system.Directory doesn't exist in Frege. A good way to find out about a module is to use Hoogle for Frege at http://hoogle.haskell.org:8081. If we search for that module there, no module is listed in the results, whereas if you search for, say, frege.data.List, the module does show up.
Now, for the functions you need like getDirectoryContent: if you look at the search results for frege.system.Directory, the first result is about processes, and the third and fourth results are about jars and zip files. If you click on the second result, it opens the module frege.java.IO, where you can see some relevant functions that might be useful to you (list, for example). The Haskell module you are trying to find has not yet been ported to Frege, but it should of course be possible to port it, backed by native Java implementations.
Update for OP's comment
Here is a simple snippet to return the files under a given directory:
ls :: String -> IO [String]
ls dir = do
    contents <- File.new dir >>= _.list
    maybe (return []) (JArray.fold (flip (:)) []) contents
Regarding createTempFile, the following works for me:
frege> File.createTempFile "test.txt"
String -> STMutable RealWorld File
I am working on a project where our verification test scripts need to locate symbol addresses within the build of software being tested. This might be used for setting breakpoints or reading static data from memory. What I am after is to create a map file containing symbol names, base address in memory, and size. Our build outputs an ELF file which has the information I want. I've been trying to use the readelf, nm, and objdump tools to try and to gain the symbol addresses I need.
I originally tried readelf -s file.elf and that seemed to access some symbols, particularly those which were written in assembler. However, many of the symbols that I wanted were not in there - specifically those that originated within our Ada code.
I used readelf --debug-dump file.elf to dump all debug information. In that output I do see all the symbols, including those from the Ada code. However, the output is in the DWARF format. Does anyone know why these symbols are not output by readelf when I ask it to list the symbolic information? Perhaps there is simply an option I am missing.
Now I could go to the trouble of writing a custom DWARF parser to get the information, but if I can get it using one of the Binutils (nm, readelf, objdump) then I'd really prefer a standard solution.
DWARF is the debug information and tries to reflect the structure of the original source code. Take the following code as an example:
static int one() {
    // something
    return 1;
}

int main(int ac, char **av) {
    return one();
}
After you compile it using gcc -O3 -g, the static function one will be inlined into main. So when you use readelf -s, you will never see the symbol one. However, when you use readelf --debug-dump, you can see one is a function which is inlined.
So, in this example, the compiler does not prevent you from using optimization together with -g, and you can still debug the executable. Even though the function is optimized and inlined, gdb can still use the DWARF information to identify the function and the source/line of the current code block inside the inlined function.
The above is just one case of compiler optimization. There are plenty of reasons that could lead to a mismatch between the symbol addresses reported by readelf -s and those in the DWARF information.
I've written an Octave script, hello.m, which calls subfunc.m and which takes a single input file, data.txt, as a command line argument, loading it with load(argv(){1}).
If I put all three files in the same directory, and call it like
./hello.m data.txt
then all is well.
But if I've got another data.txt in another directory, and I want to run my script on it, and I call
../helloscript/hello.m data.txt
this fails because hello.m can't find subfunc.m.
If I call
octave --path "../helloscript" ../helloscript/hello.m data.txt
then that seems to work fine.
The problem is that if I don't have a data.txt in the current directory, then the script will pick up any data.txt that is lying around in ../helloscript.
This seems a bit fragile. Is there any way to tell Octave, preferably in the script itself, to get subfunctions from the same directory as the script, but to get everything else relative to the current directory?
The best robust solution I can think of at the moment is to inline the subfunction in the script, which is a bit nasty.
Is there a good way to do this, or is it just a thorny problem that will cause occasional hard to find problems and can't be avoided?
Is this in fact just a general problem with scripting languages that I've just never noticed before? How does e.g. python deal with it?
It seems like there should be some sort of library-load-path that can be set without altering the data-load-path.
Adding all your subfunctions to your program file is not nasty at all. Why would you think so? It is perfectly normal to have function definitions in your script. The only language I know of that does not allow this is Matlab, but that's just braindead.
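For illustration, here is an untested sketch of a self-contained hello.m, with a made-up subfunc body standing in for whatever subfunc.m actually does:
#!/usr/bin/octave -qf
1;  # first statement makes this a script file, not a function file

function out = subfunc (x)
  ## hypothetical body; replace with the real contents of subfunc.m
  out = 2 * x;
endfunction

data = load (argv (){1});   # data.txt still resolves relative to the caller's cwd
disp (subfunc (data));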
The other alternative you have is to check that the input file argument, data.txt, exists, like so:
fpath = argv (){1};
[info, err, msg] = stat (fpath);
if (err)
error ("could not stat `%s' : %s", fpath, msg);
endif
## continue your script knowing the file exists
But really, I would recommend you use both: add your subfunctions to your main program (the only reason to keep them in a separate file is if you plan on sharing them with other programs), and always check input arguments.
Suppose I have the following code in Go:
foo, bar := someFunc(baz)
I would like to create a Vim function to check the type of foo or bar when editing a file.
Is there any tool or reliable source of information about functions from Go's packages that I could use? As for the functions declared in the file I'm editing, I was thinking about simply parsing all the function declarations in that file.
You are looking for something like godef:
If the -t flag is given, the type of the expression will also be
printed. The -a flag causes all the public members (fields and
methods) of the expression, and their location, to be printed also;
the -A flag prints private members too.
I know it is being used by various vim and emacs scripts.
The Go Oracle does this and much more.
vim-go is a complete vim setup for Go.
It includes integration with the previously mentioned godef (as :GoDef) and Go oracle (as :GoImplements, :GoCallees, :GoReferrers, etc) as well as other tools.
I am writing a macro to check for cython on the system where my program is about to be compiled.
I can use AC_PATH_PROG just fine to find cython when it is in the path, but if the user wants to specify it on the configure line like this:
./configure CYTHON=/home/user/cythonFoo
I just can't find the right way to check for it.
This is not working; it always passes the test whatever the value of CYTHON is:
AC_PATH_PROG( CYTHON, $CYTHON,"" )
This kind of works, but is not really usable, because it would require me to extract the filename and the path beforehand:
AC_PATH_PROG( CYTHON, cythonFoo,"", /home/user/ )
So I've written my own test, but I think there may be a standard way to do it:
AC_MSG_CHECKING([Checking Cython path $CYTHON is correct])
AS_IF($CYTHON -V > /dev/null 2>&1, , CYTHON="")
if test -z $CYTHON; then
  AC_MSG_RESULT([ no ])
else
  AC_MSG_RESULT([ yes ])
fi
You're observing the expected behavior of AC_PATH_PROG. If the user sets CYTHON, AC_PATH_PROG is going to treat it as the cython to use, even if it's bogus. As the first line of the linked page states:
If you need to check the behavior of a program as well as find out whether it is present, you have to write your own test for it
So what you've done is the "standard way".
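If it helps, here is an untested sketch of how your check could be folded into the usual macros; the -V probe is taken from your own snippet, and the error message is only a suggestion:
AC_ARG_VAR([CYTHON], [path to the cython executable])
AC_PATH_PROG([CYTHON], [cython])
AC_MSG_CHECKING([whether $CYTHON is usable])
AS_IF([test -n "$CYTHON" && $CYTHON -V >/dev/null 2>&1],
      [AC_MSG_RESULT([yes])],
      [AC_MSG_RESULT([no])
       AC_MSG_ERROR([could not find a working cython; set CYTHON=/path/to/cython])])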