Key binding to interactively execute commands from Python interpreter history in order? - keyboard-shortcuts

I sometimes test Python modules as I develop them by running a Python interactive prompt in a terminal, importing my new module and testing out the functionality. Of course, since my code is in development there are bugs, and frequent restarts of the interpreter are required. This isn't too painful when I've only executed a couple of interpreter lines before restarting: my key sequence after the interpreter restarts looks like Up Up Enter Up Up Enter... but extrapolate it to 5 or more statements to be repeated and it gets seriously painful!
Of course I could put my test code into a script which I execute with python -i, but this is such a scratch activity that it doesn't seem quite "above threshold" for opening a text editor :) What I'm really pining for is the Ctrl-r behaviour from the bash shell: executing a sequence of 10 commands in bash involves finding the first command in history (repeated Up, or Ctrl-r for a search -- both work in the Python interpreter shell) and then just pressing Ctrl-o ten times. One of my favourite bash shell features.
The problem is that while lots of other readline binding functionality like Ctrl-a, Ctrl-e, Ctrl-r, and Ctrl-s work in the Python interpreter, Ctrl-o does not. I've not been able to find any references to this online, although perhaps the readline module can be used to add this functionality to the python prompt. Any suggestions?
Edit: Yes, I know that using the interactive interpreter is not a development methodology that scales beyond a few lines! But it is convenient for small tests, and IMO the interactiveness can help to work out whether a developing API is natural and convenient, or too heavy. So please confine the answers to the technical question of whether readline history-stepping can be made to work in python, rather than the side-opinion of whether one should or shouldn't choose to (sometimes) work this way!
Edit: Since posting I realised that I am already using the readline module to make some Python interpreter history functions work. But the Ctrl-o binding to the operate-and-get-next readline command doesn't seem to be supported, even if I put readline.parse_and_bind("Control-o: operate-and-get-next") in my PYTHONSTARTUP file.
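For reference, a stripped-down sketch of the sort of PYTHONSTARTUP file I mean (the history persistence works; the Control-o binding is the part that has no visible effect):

# ~/.pythonstartup -- pointed to by the PYTHONSTARTUP environment variable
import atexit
import os
import readline

histfile = os.path.expanduser("~/.python_history")

# Persistent history across interpreter sessions -- this part works.
try:
    readline.read_history_file(histfile)
except IOError:
    pass
atexit.register(readline.write_history_file, histfile)

# The binding I hoped would give bash-like Ctrl-o behaviour -- it parses
# without error but appears to do nothing.
readline.parse_and_bind("Control-o: operate-and-get-next")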

I often test Python modules as I develop them by running a Python interactive prompt in a terminal, importing my new module and testing out the functionality.
Stop using this pattern and start writing your test code in a file and your life will be much easier.
No matter what, running that file will be less trouble.
If you make the checks automatic rather than reading the results, it will be quicker and less error-prone to check your code.
You can save that file when you're done and run it whenever you change your code or environment.
You can perform metrics on the tests, like making sure you don't have parts of your code you didn't test.
Are you familiar with the unittest module?
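For illustration, here is a minimal sketch of such a test file; mymodule and its parse function are hypothetical stand-ins for whatever you are developing:

# test_mymodule.py
import unittest

import mymodule  # the module under development (hypothetical name)


class TestMyModule(unittest.TestCase):
    def test_parse_returns_expected_fields(self):
        # Replace with a real call and a real expectation for your module.
        result = mymodule.parse("some input")
        self.assertEqual(result["status"], "ok")

    def test_parse_rejects_empty_input(self):
        with self.assertRaises(ValueError):
            mymodule.parse("")


if __name__ == "__main__":
    unittest.main()

Run it with python test_mymodule.py (or python -m unittest) after every change; the checks are repeated for you instead of being re-typed at the prompt.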

Answering my own question, after some discussion on the python-ideas list: despite contradictory information in some readline documentation it seems that the operate-and-get-next function is in fact defined as a bash extension to readline, not by core readline.
So that's why Ctrl-o neither behaves as hoped by default when importing the readline module in a Python interpreter session, nor when attempting to manually force this binding: the function doesn't exist in the readline library to be bound.
A Google search reveals https://bugs.launchpad.net/ipython/+bug/382638, on which the GNU readline maintainer gives reasons for not adding this functionality to core readline and says that it should be implemented by the calling application. He also says "its implementation is not complicated", although it's not obvious to me how (or whether it's even possible) to do this as a pure Python extension to the readline module behaviour.
So no, this is not possible at the moment, unless the operate-and-get-next function from bash is explicitly implemented in the Python readline module or in the interpreter itself.

This isn't exactly an answer to your question, but if that is your development style you might want to look at DreamPie. It is a GUI wrapper for the Python terminal that provides various handy shortcuts. One of these is the ability to drag-select across the interpreter display and copy only the code (not the output). You can then paste this code in and run it again. I find this handy for the type of workflow you describe.

Your best bet will be to check out that project: http://ipython.org
IPython supports incremental history search with Ctrl+R, just like the bash shell.
EDIT
If you are running Debian or a derivative:
sudo apt-get install ipython
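Beyond Ctrl+R, IPython's history magics cover the "re-run several earlier lines in order" case directly. A sketch of a session (exact option syntax may vary between IPython versions):

# show this session's input with line numbers
%history -n
# re-execute input lines 3 through 7, in order
%rerun 3-7
# re-run the most recent line containing "mymodule" (a placeholder name)
%rerun -g mymodule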

Related

Perl 6: automated testing of terminal-based programs

How could I automate interactions with command line programs that expose a text terminal interface with Perl 6 for testing purposes?
If you want to use Perl 6 to automate execution or testing of console applications, I think you're going to use NativeCall to interact with the expect library. Once expect is installed, man libexpect will show its API documentation, though the way of accessing the documentation (such as the manpage name) may differ per package distribution.
Expect has APIs to launch a program, wait for text to appear on the (emulated) console (to "expect" text), and send text to the console (to emulate typing). The most common use case is to automate programs which require password input. Expect is often scripted--it is an interpreter--but there's no reason not to use it from a higher level programming language.
Edit: I somewhat answered the wrong question. The OP is interested in testing Perl 6 modules with Perl 6. That said, using expect to launch a second Perl 6 interpreter which uses the module is still the strongest, most strict way to test the application. You don't need to know what type of terminal library the module uses, because expect should be compatible with nearly all of them. You can send text to the STDIN pipe of a subprocess, but that's not as strong as the subprocess (console) communication you can get from expect. I don't know if there's a way to hijack whichever terminal library the module uses and communicate with it directly.
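(For comparison only, since the question is about Perl 6: the launch/expect/send workflow described above looks roughly like this with Python's pexpect binding. The program name and prompts are made up.)

import pexpect

# Launch the console program under test (hypothetical name).
child = pexpect.spawn("./my_terminal_app")

# Wait for a prompt to appear on the emulated terminal.
child.expect("Enter your name:")

# Emulate typing a response.
child.sendline("Alice")

# Check that the expected output shows up.
child.expect("Hello, Alice")

child.close()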
If it's just a plain interface, you could just run the program and collect output. The currently-experimental Testo module has an is-run routine. You could use that directly, or if its experimental status is bothersome, copy the guts of it into your own helper routine.
Take a look at Sparrow6 Task Check Language, a Perl 6 based DSL to verify text output. I've done a lot of terminal app testing using it.

Show me your ID(E)!

I often work on very small pieces of code, on the order of max 100 lines, especially in scenarios when I learn something new and just play with the code, or when I debug.
Because I frequently change code and want to see how that changes the contents of my variables and output, it is tedious to either
1) hit the debug button, wait for the debugger to start (in my case I use PyCharm as an IDE) and then inspect the output
or
2) insert some prints for the variables that I want to observe and compile the code (slightly faster than starting the debugger).
To eliminate this time-consuming workflow, where I constantly hit the compile or debug button every few seconds: is there an IDE where I can set a watch on a few variables, and then each time I change a single character in my source code (or, alternatively, every half a second) the IDE automatically compiles everything and I see the new values of my variables?
(Of course, while I am in the middle of changing the code the compilation will give errors, but that is OK. This feature would be a big time saver. Maybe PyCharm already has it implemented? If not, ideally I would hope for a language-agnostic IDE, similar to PyCharm, where variants for Java etc. also exist. If not, since I code in Python, a Python IDE would also be great.)
This might not be exactly what you are looking for, but PyCharm (and IntelliJ, and probably others) can run tests automatically when code changes.
In the PyCharm Run toolbar, look for the "Toggle auto-test" button.
For example, in PyCharm you can create test cases that just run the code you're interested in and print the variables you need.
Then create a run configuration that runs only those tests and set it to run automatically.
For more details see PyCharm documentation on rerunning tests.
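As a concrete sketch of such a "watch" test (the module, function, and value names are hypothetical), something like this re-prints the values you care about on every automatic rerun:

# test_watch.py -- run by the auto-test run configuration
import unittest

from mymodule import compute_stuff  # hypothetical code under inspection


class WatchValues(unittest.TestCase):
    def test_print_watched_values(self):
        result = compute_stuff(42)
        # Print the values you would otherwise watch in the debugger;
        # they appear in the test runner's console each time the tests rerun.
        print("result =", result)
        # An assertion is optional, but turns the watch into a real check.
        self.assertIsNotNone(result)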
The Scala plugin for IntelliJ has exactly what you need in the form of "worksheets," where every expression is automatically recompiled when its value or the value of anything it references is changed.
Since (based on your usage of PyCharm) I assume you're using Python primarily, I think Jupyter notebook is your best bet. Jupyter is language agnostic but began as specific to Python (it was called IPython notebook for this reason).
I have not tried it, but this guide purports to show how to get Jupyter to work with PyCharm.
EDIT: Here is another possibility called vim worksheet; I haven't tried it, but it purports to do the same thing as Scala worksheets, but in vim, and for a number of languages, including Python.
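If you go the Jupyter route, the autoreload extension is also worth knowing about for this workflow: it re-imports your edited modules automatically, so re-running a cell shows the new values without restarting the kernel. A sketch (mymodule is a placeholder name):

# In the first notebook cell: reload all modules before executing each cell
%load_ext autoreload
%autoreload 2

# In a later cell: edit mymodule.py, re-run this cell, see the new result
import mymodule
mymodule.compute_stuff(42)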
The python Spyder IDE (comes with Anaconda) has this feature. When you hit run, you can see all of the variables at the top right of the screen and you can click on them to see their values (this is very helpful with Numpy Arrays too!).
If your interest is in the actual workflow improvement:
I used to program like you, looking at what my variables changed to and designing or debugging my code based on those changes. However, it is far too inefficient and costly to set up the variables to watch over and over again, and besides, when something breaks you have to go through the whole debugging process again.
I changed my design process to improve my workflow and adopted Test-Driven Development (TDD). With it you can look for tools for your specific implementation or IDE, but the principles and workflow stay with you: you stop watching how the variables changed and instead focus on what the functions should do, which means faster iteration (with real-time testing tools), easier debugging and far better, safer refactoring.
My favorite tool for it is Cucumber, an agnostic tool (independent of IDE and programming language) which helps you test your code scenarios and at the same time document your features.
Hope it helps. I know it's a very opinionated answer, but it's honest advice for improving one's workflow.
You should try Thonny. It is developed by the Institute of Computer Science at the University of Tartu.
The 4 features which might be of help to you are below (verbatim from the website):
No-hassle variables. Once you're done with hello-worlds, select View → Variables and see how your programs and shell commands affect Python variables.
Simple debugger. Just press Ctrl+F5 instead of F5 and you can run your programs step-by-step, no breakpoints needed. Press F6 for a big step and F7 for a small step. Steps follow program structure, not just code lines.
Step through expression evaluation. If you use small steps, you can even see how Python evaluates your expressions. You can think of the light-blue box as a piece of paper where Python replaces subexpressions with their values, piece by piece.
Faithful representation of function calls. Stepping into a function call opens a new window with a separate local variables table and code pointer. A good understanding of how function calls work is especially important for understanding recursion.

Faster way of testing your prolog program

I am new to Prolog. The task of launching the Prolog interpreter from the terminal, typing consult('some_prolog_program.pl'), and then testing the predicate you just wrote is very time consuming. Is there a way to run a scripted test to speed up development?
For example in C I can write a main where I would use the functions I defined, I can then execute:
make && ./a.out
to test the code, can I do something similar with Prolog?
You can have the interpreter always open and then recompile the file.
You can auto-run a predicate after compiling the file:
:- foo(4,2).
This will run foo(4,2) when the line is encountered in the file.
There are flags that can be used while launching (most) Prolog interpreters that allow you to compile a file and run predicates (check the man page). This way you could make a Bash script. The following will consult file.pl and run foo/0 using SWI-Prolog:
#!/bin/sh
exec swipl -q -f none -g "load_files([file],[silent(true)])" \
-t foo -- $*
This predicate will unify Arguments with a list of the flags you gave at the command line:
current_prolog_flag(argv, Arguments)
But unless you are going to run a lot of tests, I don't think that writing all this extra code will be faster.
Personally I really like the flexibility of testing any predicate at any time with or without tracing (see trace/0) without having to write extra code to call them (unlike in C).
P.S. about reloading the file without leaving the interpreter: you might have some problems if you have used dynamic predicates or global variables; you will have to do some cleanup.
You can invoke a test file from the command-line with prolog +l <file>
Also, you can build a single run_tests predicate that exercises a series of calls and validates the actual results against expected results. Here's an article with a good worked-out example: http://kenegozi.com/blog/2008/07/24/unit-testing-in-prolog
In SWI, you can load things as usual. Then, when you edit your files you simply say make. on the toplevel and it checks all dependencies automatically and only reloads the modified files.
For bigger projects it does make a lot of sense to use makefiles. In particular to do unit testing. See SWI's package plunit.
For simple scripts in SWI-Prolog, using REPL to test the code manually is usually good enough. Changed files can be reloaded via make/0 (?- make. on toplevel). Just keep the Prolog REPL running while editing, then save the edits, run make. in the REPL and hit ↑, ↑, Enter to execute the last query before the make. from history.
The main benefit of REPL is its interactivity:
You may fiddle with the arguments.
Transition to debugging or tracing (both command line and graphical) is easy.
You don't need to perform I/O to print the result. Output is handled by the toplevel, which prints the substitution. You see the whole substitution, not only the part you happen to print (possibly accidentally overlooking other parts).
You may interactively choose how many substitutions you want to see for a goal that succeeds multiple times.
It is obvious if there is a choice point left after the last result returned by a non-deterministic predicate, which is hard to observe otherwise. In that case, false. is printed when backtracking beyond the last result.
If you need to preserve the test calls to repeat them later, create a protocol (transcript or "log" of the interactive session) and edit it to become a script, or even a test suite (see below). The protocol is a plain text file with escape sequences for the terminal, containing a verbatim copy of what you see during the interactive session. View the protocol using cat protocol.txt on Linux (and other *NIXes) or type protocol.txt on Windows.
If interactivity is not needed, perform the test calls from the command line non-interactively. Let's test the CLP(FD) factorial example n_factorial/2, saved in factorial.pl (don't forget to add :- use_module(library(clpfd)). when copying the code):
$ swipl -q -t "between(0, 9, N), n_factorial(N, F), format('~D ', F), fail." factorial.pl
1 1 2 6 24 120 720 5,040 40,320 362,880
On Windows, you may need to specify the full path to swipl.exe, as it's probably not in the PATH.
If the call is always the same, you may save it to a shell script or Makefile (run would be a good name for the target).
In your current workflow for testing functions in C, you create a new program and call the function under test from its entry point (main function). Prolog scripts can have an entry point, too. See library(main). Prolog does not require compilation, so you can just directly call the script (./test.pl) without calling Make first.
For larger projects, you may want to create a less ad-hoc test suite. A unit testing framework like PlUnit is needed. Its use is beyond the scope of this answer; see the documentation.

Existing solutions to test a NSIS script

Does anyone know of an existing solution to help write tests for an NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.
Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information etc., then almost any unit testing tool would probably meet your needs. I'd probably use something like RSpec2, or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that it would be an xcopy deployment if the scripts needed to be run on another machine. I also like the idea of using a BDD-based solution because the use of a domain-specific language that is very close to readable text would mean that others could more easily understand, and if necessary modify, the test specification when necessary.
If, however, you are concerned about the user experience (what progress messages are shown, etc.) then I'm not sure that the tests you would need could be as easily expressed... or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.
Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.
Another solution would be AutoIT.
You can compile your install using Jenkins and the NSIS command line compiler, set up an AutoIT test script and have Jenkins run the test.

What is everything involved from typing in code to executing a program?

I realized, when just asking a question, I don't understand all the components that are part of the coding process.
This seems a silly question, but I can't find a definitive answer on Google, Wiki, nothing.
What exactly are all the parts called, and how do they work and intertwine? I'm talking whatever you type code into, whatever checks that for errors, compiles it, and runs it.
I'd appreciate any links, repeats, etc. I apologize for such a bland, stupid question.
EDIT: Well, I'm trying to start Perl, so anything about Perl would help. Like, how to use Notepad++ and eventually compile Perl.
Write code
Run code in one of two ways[*]
Compiled languages (C, C++, D, Java, C#)
Compile the code into an executable file with the compiler tool.
Run the executable
Interpreted languages (Perl, Python, Ruby, Lua, Haskell, Lisp & more)
Run the code in an interpreter, e.g., perl foo.pl
Debug code.
edit: Since the question was refined to be the Perl development cycle...
You will need an editor and a 'shell', which is used to command the system. In particular, you want a 'command-line interface'. On Windows, you start this by running cmd.exe from the Run dialog (Windows + R is the shortcut).
You see a strange black and white box with a blinking cursor, reminding you of ancient systems redolent of gurus and wizards. You panic and refer to Google, getting a web page. Finding the command to change directory and list files is recommended...
Upon arriving at the directory where you stored your Perl file, you issue the command perl myfile.pl, where myfile.pl is the file you saved. As is common in programming, you find some errors that appear to be incomprehensible, and you refer to Stackoverflow.com once again...
* I have elided, glossed, and moved past many of the details, as this is an introductory question. A full discussion is known as a "senior-level course on compilers".