How do I examine the tail of a growing file on WSL 2?

On WSL, tail -f file reports once and then does nothing, even though the file is growing.
Is there a workaround, short of writing my own?

Use:
tail -f ---disable-inotify file
Note the three hyphens before "disable".
This option makes tail use polling to detect file changes and know when to read more data, which is inefficient but works everywhere.
By default tail uses inotify, even on filesystems without inotify support, such as Windows (NTFS) drives mounted into WSL.
More background on this story is at https://github.com/microsoft/WSL/issues/925
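For example, following a log that lives on a Windows-mounted path, where inotify events never arrive (the path below is made up):
# Path is illustrative; the three hyphens are intentional.
tail -f ---disable-inotify /mnt/c/Users/me/app.log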

Not sure why tail isn't updating properly, but one workaround is to open the file in less and press Shift+G to jump to the (re-read) end.
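Relatedly, less has a follow mode that behaves like tail -f; since less re-reads the file, this should work even where inotify events are missing (file.log is a placeholder name):
less +F file.log
# Ctrl-C stops following so you can scroll; pressing F resumes following.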

Related

how to remove/delete random data from disk that was created using if=/dev/urandom

I think I did something stupid by creating a huge amount of random data with the command dd if=/dev/urandom of=20GB.bin bs=1GB count=16 iflag=fullblock
Actually, I was testing the behavior of something when the disk is full.
However, now I wish to delete this data. I deleted the dev/urandom folder hoping that would do something, but it seems nothing was deleted and it made no difference.
I see some commands online like wipe and shred; however, my dev/urandom folder is now deleted, so what exactly should I do now?
Any kind of help will be great.
You saved the random numbers to 20GB.bin, so you can do rm 20GB.bin to remove them. /dev/urandom is a special device file that does not store random data; it just generates random bytes on the fly. Other tools depend on /dev/urandom, so deleting this device file may make them crash.
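A minimal sketch of the cleanup, assuming the device node really was deleted. On Linux, /dev/urandom is a character device with major number 1 and minor number 9, and a reboot also restores it on systems with devtmpfs:
# Remove the file that actually holds the 20 GB of random data:
rm 20GB.bin
# Recreate the deleted device node (or simply reboot):
sudo mknod -m 666 /dev/urandom c 1 9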

How to disable Perl 6 REPL creating .precomp

Every time I run perl6 to enter the REPL mode, it creates a .precomp directory, which also slows down the appearance of the prompt. If the .precomp directory already exists, the prompt appears almost immediately, otherwise perl6 takes several seconds to create it.
Is there a way to disable this feature?
Check whether you have a PERL6LIB environment variable set, and whether it contains "." (the current directory). I can reproduce exactly the behavior you're encountering if I set that. The solution is to remove that entry from your PERL6LIB.
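For instance (the path is hypothetical; Rakudo reads PERL6LIB as a comma-separated list of directories):
echo "$PERL6LIB"
# e.g. prints: /home/me/perl6-libs,.
# Re-export the variable without the "." entry:
export PERL6LIB=/home/me/perl6-libs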

IPython QtConsole %edit

When using the magic function %edit from the QtConsole with IPython, the call does not block and does not execute the saved code. It does, however, save a temporary file...
I think this is intended behavior, given GUI editors and the uncertainty of communicating with the subprocess (pyZMQ?).
What do you suggest as the best way to mix the %edit/%run magics?
I would not mind calling two different commands (one to edit, and one after I have saved, when execution is safe). But those commands need a way to synchronize the target file location, or some persistent storage, and probably some crude form of predictably generating filenames so that you can edit more than one file at a time and execute them in arbitrary order. Session persistence is not a must.
Would writing my own magic do any good? I hope we can %edit macros soon; that would do well enough to make it work.
You should be able to do %edit filename.py and %run filename.py. The non-blocking behavior is expected and, IIRC, due to technical reasons; not insurmountable, but difficult.
You could define your own magic if you wish; improvements are welcome.
"Hope we can %edit macros soon, that would do well enough to make it work."
For that too, PRs are welcome. As a workaround, I guess you can %load the macro, which puts it on input n+1, then edit and redefine it; that might be a good extension for a cell magic %%macro macroname.
If you have some executable code on your input (from QtConsole), you can type
%edit 1-5
This fires the editor, creates a temporary file (automatically managed), and loads your input lines. This is nearly enough; now, how do I retrieve the name of that temp file programmatically?
I see the print statement on stdout, but it's not visible to the QtConsole, AFAIK. I could maybe redirect stdout to catch that line, but that may not be an option if you're doing something else with stdout.
If I could retrieve the full pathname that was just created, this would be cake: store it where some magics will know how to find it, then issue a follow-up command when ready that pops the name off the stack, loads the file into a macro, and runs it. All this with two input commands and no names to remember (unless you want to find and use that macro again, but for one-shot stuff...).
How do I catch or retrieve the path of that temporary file?

profile an awk command?

Probably a silly question, since awk commands are usually pretty compact and do just one or two operations...
Is there a way to profile an awk command? I.e., if it uses gsub, split, or sorting of associative arrays, is there an easy way to find out which part is bogging down the whole operation?
EDIT: Specifically, I am looking for the execution time of each subcommand, not how many times it was called. Is this possible?
From the gawk man page:
pgawk is the profiling version of gawk. It is identical in every way
to gawk, except that programs run more slowly, and it automatically
produces an execution profile in the file awkprof.out when done. See
the --profile option, below.
so the answer would be yes if you are using the GNU implementation.
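For example (script.awk and data.txt are placeholder names; note that newer gawk releases folded pgawk into the main binary as gawk --profile):
pgawk -f script.awk data.txt
cat awkprof.out    # the execution profile is written here on exit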
And to forestall your next question, the man page goes on to say:
dgawk is an awk debugger. Instead of running the program directly, it
loads the AWK source code and then prompts for debugging commands.
Unlike gawk and pgawk, dgawk only processes AWK program source provided
with the -f option. The debugger is documented in GAWK: Effective AWK
Programming.
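A hypothetical invocation; as with pgawk, newer gawk releases merged the debugger into the main binary as gawk --debug:
dgawk -f script.awk data.txt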
There's an awk implementation with a debugger similar to gdb, called dgawk.
You say you want execution time for each subcommand.
Here's how I do it, regardless of language:
Give it enough workload so it runs long enough, and time it with a watch (N seconds).
Then do it again, and while it's running, hit Ctrl-C.
Get a backtrace to examine the stack, and copy it into a text editor.
Do that several times, say 10.
Any subcommand will appear on the stack in a fraction of the samples that matches the fraction of time it spends.
So if sort is taking 50% of the time (N/2 seconds), it will appear on about 5 of those samples.
This tells you about big time-takers, not little ones. I assume you are looking for the big ones.
(Some people say this isn't accurate, which is baloney. Sure, the measured amount of time isn't very accurate; it doesn't need to be. The accuracy you need is in location: pinpointing where the problem is, and that's what it does.)
ADDED: You can almost do this with pgawk. If you run your program in profiling mode, each time you hit Ctrl-C (or whatever) it prints the call stack to the output file. The only problem is that it prints the function names, but not the lines they are called from, which you might actually need.
Here is the fine documentation about profiling gawk.
Build a profiling version of gawk for gprof, or use the kernel-based oprofile. You can then see in a lot of detail how much time is spent in various internal functions in gawk in response to your script and its data. Functions like gsub and split map to functions inside gawk.
For instance, gsub and other such functions are handled by the do_sub function in this source file:
http://git.savannah.gnu.org/cgit/gawk.git/tree/builtin.c
So you would look for how much time is spent in do_sub.
You want to compile and link gawk with GCC's -pg option. Successful runs of the program will then dump a profiling file, gmon.out, from which gprof will produce a report.
I also highly recommend oprofile, but going into it is a little out of scope for this answer.
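A sketch of the -pg route described above, with illustrative file names; the exact configure invocation may differ across gawk versions:
./configure CFLAGS=-pg LDFLAGS=-pg
make
./gawk -f script.awk data.txt    # a clean exit writes gmon.out
gprof ./gawk gmon.out > profile-report.txt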

Faster way of testing your prolog program

I am new to Prolog. Launching the Prolog interpreter from the terminal, typing consult('some_prolog_program.pl'), and then testing the predicate I just wrote is very time consuming. Is there a way to run a scripted test to speed up development?
For example in C I can write a main where I would use the functions I defined, I can then execute:
make && ./a.out
to test the code, can I do something similar with Prolog?
You can have the interpreter always open and then recompile the file.
You can auto-run a predicate after compiling the file:
:- foo(4,2).
This will run foo(4,2) when the line is encountered in the file.
There are flags that can be used while launching (most) Prolog interpreters that allow you to compile a file and run predicates (check the man page). This way you could make a Bash script. The following will consult file.pl and run foo/0 using SWI-Prolog:
#!/bin/sh
# Forward this script's arguments (quoted) to swipl after the -- separator.
exec swipl -q -f none -g "load_files([file],[silent(true)])" \
     -t foo -- "$@"
This predicate will unify Arguments with a list of the flags you gave at the command line:
current_prolog_flag(argv, Arguments)
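Assuming the script above is saved as run_foo.sh (a name made up here), everything you pass on its command line lands in that Arguments list (the details vary a bit across SWI-Prolog versions):
./run_foo.sh --verbose data.txt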
But unless you are going to run a lot of tests, I don't think that writing all this extra code will be faster.
Personally I really like the flexibility of testing any predicate at any time with or without tracing (see trace/0) without having to write extra code to call them (unlike in C).
P.S. About reloading the file without leaving the interpreter: you might have some problems if you have used dynamic predicates or global variables; you will have to do some cleaning.
You can invoke a test file from the command-line with prolog +l <file>
Also, you can build a single run_tests predicate that exercises a series of calls and validates the actual results against expected results. Here's an article with a good worked-out example: http://kenegozi.com/blog/2008/07/24/unit-testing-in-prolog
In SWI, you can load things as usual. Then, when you edit your files, you simply say make. at the toplevel and it checks all dependencies automatically and reloads only the modified files.
For bigger projects it does make a lot of sense to use makefiles. In particular to do unit testing. See SWI's package plunit.
For simple scripts in SWI-Prolog, using the REPL to test the code manually is usually good enough. Changed files can be reloaded via make/0 (?- make. at the toplevel). Just keep the Prolog REPL running while editing; then save the edits, run make. in the REPL, and hit ↑, ↑, Enter to re-run, from history, the last query before the make.
The main benefit of REPL is its interactivity:
You may fiddle with the arguments.
Transition to debugging or tracing (both command line and graphical) is easy.
You don't need to perform I/O to print the result. Output is handled by the toplevel, which prints the substitution. You see the whole substitution, not only the part of it you happen to print (possibly accidentally overlooking other parts).
You may interactively choose how many substitutions you want to see for a goal that succeeds multiple times.
It is obvious when a choice point is left after the last result returned by a non-deterministic predicate, which is hard to observe otherwise. In that case, false. is printed when backtracking past the last result.
If you need to preserve the test calls to repeat them later, create a protocol (a transcript or "log" of the interactive session) and edit it into a script, or even a test suite (see below). The protocol is a plain-text file with escape sequences for the terminal, containing a verbatim copy of what you saw during the interactive session. View the protocol using cat protocol.txt on Linux (and other *nixes) or type protocol.txt on Windows.
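If you prefer the terminal rather than Prolog to do the recording, the standard Unix script(1) utility produces the same kind of transcript (the file name is arbitrary):
script protocol.txt
# ... run swipl and test interactively; exit the shell to stop recording.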
If interactivity is not needed, perform the test calls from the command line non-interactively. Let's test the CLP(FD) factorial example n_factorial/2, saved in factorial.pl (don't forget to add :- use_module(library(clpfd)). when copying the code):
$ swipl -q -t "between(0, 9, N), n_factorial(N, F), format('~D ', F), fail." factorial.pl
1 1 2 6 24 120 720 5,040 40,320 362,880
On Windows, you may need to specify the full path to swipl.exe, as it is probably not on the PATH.
If the call is always the same, you may save it to a shell script or Makefile (run would be a good name for the target).
In your current workflow for testing functions in C, you create a new program and call the function under test from its entry point (the main function). Prolog scripts can have an entry point too; see library(main). Prolog does not require a compilation step, so you can call the script directly (./test.pl) without running make first.
For larger projects, you may want to create a less ad-hoc test suite. A unit testing framework like PlUnit is needed. Its use is beyond the scope of this answer; see the documentation.
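For instance, a PlUnit suite can be run non-interactively from the shell. Here tests.pl is a placeholder for a file that loads library(plunit) and defines the tests:
swipl -g run_tests -t halt tests.pl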