I have a script which interacts with the user (prints some questions to stderr and reads input from stdin) and then prints some data to stdout. I want to put the output of the script into a variable in Vimscript. It probably should look like this:
let a = system("./script")
The intended behavior is that the script runs, interacts with the user, and afterwards a is assigned the script's stdout output. But instead a is assigned both the stdout and the stderr output, so the user sees no prompts.
Could you help me fix it?
Interactive commands are best avoided from within Vim; especially with GVIM (on Windows), a new console window pops up, and you may not have a fully functional terminal, ...
It is better to query any needed arguments in Vimscript itself (with input(), or pass them on from a custom Vim :command), and just use the external script non-interactively, feeding it everything it needs.
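For example, a minimal sketch of that approach (the prompt text and the script name are placeholders):
:let answer = input('Enter a value: ')
:let a = system('./script ' . shellescape(answer))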
What gets captured by system() (as well as :!) is controlled by the 'shellredir' option. Its usual value, >%s 2>&1, captures stdout as well as stderr. Your script needs to choose one stream (e.g. stdout) for its output and the other for user interaction, and the Vimscript wrapper that invokes it must (temporarily) change the option.
:let save_shellredir = &shellredir
:set shellredir=>
:let a = system('./script') " The script should interact via stderr.
:let &shellredir = save_shellredir
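On the script side, a minimal sketch of the corresponding split (the prompt and output text are just examples):
#!/bin/sh
# Prompts go to stderr, so they reach the terminal instead of the capture.
printf 'Enter a value: ' >&2
read value
# Only the real data goes to stdout, which system() will capture.
echo "result: $value"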
Call the script from within the other one by sourcing it:
. ./script.sh
I think this is what you meant.
I am using Python's Paramiko library to SSH into a remote machine and fetch some output from the command line. I see a lot of junk printed along with the actual output. How do I get rid of it?
chan1.send("ls\n")
output = chan1.recv(1024).decode("utf-8")
print(output)
[u'Last login: Wed Oct 21 18:08:53 2015 from 172.16.200.77\r', u'\x1b[2J\x1b[1;1H[local]cli#BENU>enable', u'[local]cli#BENU#Configure',
I want to eliminate \x1b[2J\x1b[1;1H and the u from the output. They are junk.
It's not junk. These are ANSI escape codes that are normally interpreted by a terminal client to pretty-print the output.
If the server is correctly configured, you get these only when you use an interactive terminal; in other words, when you request a pseudo terminal for the session (which you should not, if you are automating the session).
Paramiko automatically requests the pseudo terminal if you use SSHClient.invoke_shell, as that is supposed to be used for implementing an interactive terminal. See also How do I start a shell without terminal emulation in Python Paramiko?
If you are automating the execution of remote commands, you should use SSHClient.exec_command, which does not allocate a pseudo terminal by default (unless you override that with the get_pty=True argument).
stdin, stdout, stderr = client.exec_command('ls')
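A minimal sketch of the full flow (the host name and credentials are placeholders):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('example.com', username='user', password='secret')

# No pseudo terminal is allocated, so the output contains no escape
# codes, no "Last login" banner, and no command prompt.
stdin, stdout, stderr = client.exec_command('ls')
print(stdout.read().decode('utf-8'))

client.close()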
See also What is the difference between exec_command and send with invoke_shell() on Paramiko?
Or as a workaround, see How can I remove the ANSI escape sequences from a string in python.
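A minimal sketch of such a workaround (the pattern covers only common CSI sequences, like the ones in your output):
import re

ansi_escape = re.compile(r'\x1b\[[0-9;]*[A-Za-z]')
clean_output = ansi_escape.sub('', output)  # output is the string from your code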
Though that's rather a hack and might not be sufficient. You might have other problems with the interactive terminal, not only the escape sequences.
You are probably not particularly interested in the "Last login" message and the command prompt (cli#BENU>) either. You do not get these with exec_command.
If you need to use the "shell" channel due to some specific requirements or limitations of the server, note that it is technically possible to use the "shell" channel without the pseudo terminal. But Paramiko SSHClient.invoke_shell does not allow that. Instead, you can create the "shell" channel manually. See Can I call Channel.invoke_shell() without calling Channel.get_pty() beforehand, when NOT using Channel.exec_command().
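A minimal sketch of that manual approach (assuming client is an already connected SSHClient):
# Open a session channel ourselves and start the shell
# without calling get_pty first.
channel = client.get_transport().open_session()
channel.invoke_shell()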
And finally, the u is not part of the actual string value (note that it's outside the quotes). It's an indication that the string value is in the Unicode encoding. You want that!
This is actually not junk. The u before the string indicates that this is a Unicode string. The \x1b[2J\x1b[1;1H is an escape sequence: \x1b[2J clears the screen and \x1b[1;1H moves the cursor to row 1, column 1, which is why printing it appears to clear the screen.
To see what I mean, try this code:
for string in output:
    print string
I'm working with Torch7 and the Lua programming language. I need a command that redirects the output of my console to a file, instead of printing it to my shell.
For example, in Linux, when you type:
$ ls > dir.txt
The system writes the output of the command "ls" to the file dir.txt, instead of printing it to the default output console.
I need a similar command for Lua. Does anyone know it?
[EDIT] A user suggested to me that this operation is called piping. So the question should be: "How do I do piping in Lua?"
[EDIT2] I would like to use a command like this:
$ torch 'my_program' # printed_output.txt
Have a look here -> http://www.lua.org/pil/21.1.html
io.write seems to be what you are looking for.
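For instance, a minimal sketch using io.output to redirect the default output (the file name out.txt is just an example):
local f = io.open("out.txt", "w")
io.output(f)                   -- make the file the default output
io.write("hello from Lua\n")   -- goes to out.txt, not the console
io.output(io.stdout)           -- restore the console as the default output
f:close()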
Lua has no default function to capture the console output into a file.
If your application logs its output (which is probably what you're trying to do), it will only be possible to do this by modifying the Lua source code.
If your internal system has access to the output of the console, you could do something similar to this (and set it on a timer, so it runs every 25ms or so):
dumpoutput = function()
  local file = io.open([path to file dump here], "w+")
  for i, line in ipairs([console output function]) do
    file:write("\n"..line)
  end
  file:close()
end
Note that the console output function has to store the output of the console in a table.
To clear the console at the end, just do os.execute( "cls" ).
I'm trying to implement a simple terminal GUI using bash's interactive mode. I successfully invoke bash, get its stdout, and print everything to a text view. I forward the user input from the text view to bash's stdin, to be able to run commands. It works great, except I don't get any error messages.
However, when I proceeded to print bash's stderr to my text view, I noticed something strange. In addition to now receiving error messages, bash seems to pass everything from stdin to stderr. Because of this, every character I type is printed twice (once normally because I enter it, and once because I print everything from stderr).
It also seems to print the prompt via stderr (bash-3.2$). Is this the expected behavior? Can this be suppressed?
I also tried to just capture user input (and not let the user type directly into the text view) and rely on bash to print the user input. This almost works, except the order of the output via stdout and stderr is random:
If I type a command like echo test and hit enter, sometimes I get this:
(the second test is the output; I didn't type testtest)
bash-3.2$ echo testtest
bash-3.2$
Sometimes I get:
bash-3.2$ echo test
bash-3.2$ test
The order in which I receive the final \n, the output and the next bash-3.2$ is obviously mixed up.
There is no way to read stdout and stderr in the "correct" order, because there is no notion of order between different pipes. But you can ensure that both are sent to the same pipe (i.e. the same file descriptor) instead of having each one go to a separate pipe. To do that, look at the options of whatever you use to start the bash subprocess; or maybe start a command line like bash -c 'bash 2>&1'.
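For example, if the subprocess happens to be started from Python, a minimal sketch would look like this (your GUI toolkit will have an equivalent option):
import subprocess

# stderr=STDOUT sends both streams down one pipe,
# so their relative order is preserved.
proc = subprocess.Popen(
    ['bash', '-i'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)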
I have a shell script which asks for user input and, depending on the input, opens a db connection using sqlplus and runs some SQL queries like DROP TABLE/CREATE TABLE/SELECT/UPDATE. Is it possible for the SQL part to run as a background job, so that even if I lose VPN connectivity to the network, all the SQL queries get executed?
Also, when the SQL part completes and the user is prompted for another input, can the shell script come to the foreground, and after getting the input, go to the background again?
I have found some questions which explain how to run a script in the background, but I want to run ONLY some parts of the same script in the background if possible (and come to the foreground for user input). Though I could use multiple scripts to handle it (dividing the script into parts which need to be called in the background and calling them from another script), I would rather do it in a single script if possible.
You can break your main script up into functions / smaller scripts to achieve the desired behavior of a mix of background processes and foreground processes.
For example, in your main script:
#!/bin/sh
echo "Starting script..."
# do some more stuff here, maybe ask user for input
./run_background_process_1 &
# ask the user for some more input
./run_background_process_2 &
...
Use the & symbol at the end of script calls to denote that they should be run in the background.
(Updated) If you'd like to keep everything in 1 script, use functions to break up / encapsulate the parts of logic that you would like to run in the background. Call these functions by suffixing the call with &, same as above.
You can try the following example to see that it works:
#!/bin/sh
hello() {
  condition="yes"
  # loop forever, printing a dot every second
  while [ "$condition" = "yes" ]
  do
    echo "."
    sleep 1
  done
}
# Script main starts here
echo "Start"
hello &
echo "Finish"
Remove the & after hello and you'll see that it behaves differently.
There are tools which allow you to keep scripts running despite loss of connection. For example, check out http://www.gnu.org/software/screen/; one of its features is "Programs continue to run when their window is currently not visible and even when the whole screen session is detached from the user's terminal."
After searching on the internet, I found out I can use three methods to run the script in the background:
1) using bg: How do I put an already-running process under nohup? Unfortunately, this didn't work for me in the ksh shell.
2) using coprocesses
3) using nohup
I decided to go with nohup as it was easier to implement. I made the sqlplus part which needed to run in the background into a separate script and called it from the main script using nohup:
nohup script-name.ksh ${parameter1} ${paramter2} &
This worked for me.
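For reference, a minimal sketch of the resulting flow (sql_part.ksh, the log file, and the prompts are placeholders):
#!/bin/ksh
printf "Run the SQL step? (y/n) "
read ans                                        # foreground: get user input
# nohup keeps the SQL part running even if the VPN session drops
nohup ./sql_part.ksh "$ans" > sql.log 2>&1 &
wait $!                                         # block until the background job finishes
printf "Next input: "
read next_input                                 # foreground again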
It seems that a script with a #! prefix can have the interpreter name and ONLY one argument. Thus:
#!/bin/ls -l
works, but
#!/usr/bin/env ls -l
doesn't
Do you agree? Any thoughts?
Francesc
Different Unixes interpret #! differently. Here's a comprehensive-looking writeup: http://www.in-ulm.de/~mascheck/various/shebang/
It seems that the lowest common denominator across platforms is "the interpreter (which must not itself be a script) and no more than one argument".
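The practical consequence: on Linux, everything after the interpreter path is passed to it as one single argument. So with #!/usr/bin/env ls -l, the kernel runs env with the single argument "ls -l", and env fails with something like "env: 'ls -l': No such file or directory", since no command is literally named "ls -l".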
Originally, we only had one shell on Unix. When you asked to run a command, the shell would attempt to invoke one of the exec() system calls on it. If the command was an executable, the exec would succeed and the command would run. If the exec() failed, the shell would not give up; instead it would try to interpret the command file as if it were a shell script.
Then Unix got more shells and the situation became confused. Most folks would write scripts in one shell and type commands in another. And each shell had differing rules for feeding scripts to an interpreter.
This is when the “#! /” trick was invented. The idea was to let the kernel’s exec() system calls succeed with shell scripts. When the kernel tries to exec() a file, it looks at the first 4 bytes, which represent an integer called a magic number. This tells the kernel whether it should try to run the file or not. So “#! /” was added to the magic numbers that the kernel knows, and the kernel was extended to actually be able to run shell scripts by itself. But some people could not type “#! /”; they kept leaving the space out. So the kernel was extended a bit again to allow “#!/” to work as a special 3-byte magic number.