Telnet Automation with Expect: Slow authentication?

I'm sending a command to a Mikrotik router using Telnet.
telnet 192.168.100.100 -l admin
Password: pass1234
[admin@ZYMMA] > /interface pppoe-server remove [find user=aspeed13]
[admin@ZYMMA] > quit
It works fine.
Now I want to automate it using an expect Tcl script:
#!/usr/bin/expect --
spawn telnet 192.168.100.100
expect "Login:"
send "admin\r"
expect "Password:"
send "pass1234\r"
expect "\[admin#ZYMMA\] >"
send "/interface pppoe-server remove \[find user=aspeed13\]\r"
expect "\[admin#ZYMMA\] >"
send "quit\r"
It works, but after authentication (line 6: send "pass1234\r"), while the router CLI is loading, it freezes for ~10 seconds and prints the following characters: ^[[?6c^[[24;3R
Then the script runs fine.
My question is: why does Telnet load fast when accessed manually, but take so long when driven by the expect script? Forum posts about telnet automation say telnet is slow, but since it's fast manually, why does it take so long to load under expect?

What you're seeing is blow-back from terminal negotiation, caused by the fact that you're not running in a real terminal. (Strictly, you are – that's expect's magic – but it's not behaving the way a normal terminal does.)
The easiest fix is to set the terminal to something else before spawning the telnet session, e.g.:
#!/usr/bin/expect --
set env(TERM) dumb
spawn telnet 192.168.100.100
# Rest of your script goes here ...
Alternatively, you could try to respond correctly to the request to enter VT102 mode and the report of the cursor location (which feels like a lot of work) or you could rewrite your code so that it does everything inside interact (which connects the other end with the real terminal that you're running inside). But if setting an environment variable fixes it, why go to all that extra hassle?
(NB: I suggest setting the terminal to dumb here, but the key is that you want the stupidest terminal that works. Dumb terminals are ideal, because they're just about totally stupid, making it easy to pretend to be them…)
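If you'd rather drive this from Python, the same trick carries over to the pexpect library. A minimal sketch, assuming the same prompts and credentials as the question:
import os
import pexpect

# Advertise the stupidest terminal that works before spawning telnet,
# so the router skips its terminal-capability negotiation.
env = dict(os.environ, TERM="dumb")

child = pexpect.spawn("telnet 192.168.100.100", env=env)
child.expect("Login:")
child.sendline("admin")
child.expect("Password:")
child.sendline("pass1234")
child.expect(r"\[admin@ZYMMA\] >")
child.sendline("/interface pppoe-server remove [find user=aspeed13]")
child.expect(r"\[admin@ZYMMA\] >")
child.sendline("quit")
child.expect(pexpect.EOF)  # wait for telnet to exit cleanly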

My answer is possibly too late. This is the telnet console auto-detection at work; I had this problem and found the following solution on the Mikrotik Wiki:
Add +t after the login name. This switch turns the auto-detection off.
Example:
send "admin+t\r"
It works great, and there is no ~10 second wait after login via expect anymore.
Here is a link to the Mikrotik Wiki help page with more switches:
http://wiki.mikrotik.com/wiki/Manual:Console_login_process#FAQ
P.S.: Sorry for my English.

Did you try netcat with telnet emulation enabled?
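(For reference, OpenBSD netcat's -t option answers telnet DO/WILL negotiation requests, so something like nc -t 192.168.100.100 23 is the closest equivalent; whether the router is satisfied with that minimal negotiation is another matter.)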

A little bit late to answer, but if you want to speed up your character input with expect, try generating the script with the autoexpect command, which will save the interaction in a file named script.exp in the directory from which you ran it.
For instance:
cd $HOME
autoexpect telnet 192.168.100.100
# some more telnet commands here
exit
All the above commands will be saved in ~/script.exp
About Tcl, I don't know if this script can be run via tcl.

Related

How do I run multiple configuration commands in Dell EMC OS10 with Paramiko?

I am trying to run a series of commands to configure a VLAN on a Dell EMC OS10 switch using Paramiko. However, I am running into a rather frustrating problem.
I want to run the following
# configure terminal
(config)# interface vlan 3
(conf-if-vl-3)# description VLAN-TEST
(conf-if-vl-3)# end
However, I can't seem to figure out how to achieve this with paramiko.SSHClient().
When I try to use sshclient.exec_command("show vlan") it works great, it runs this command and exits. However, I don't know how to run more than one command with a single exec_command.
If I run sshclient.exec_command("configure") to access the configuration shell, the command completes and I believe the channel is closed, since my next command sshclient.exec_command("interface vlan ...") is not successful since the switch is no longer in configure mode.
If there is a way to establish a persistent channel with exec_command that would be ideal.
Instead I have resorted to a function as follows
chan = sshClient.invoke_shell()
chan.send("configure\n")
chan.send("interface vlan 3\n")
chan.send("description VLAN_TEST\n")
chan.send("end\n")
Oddly, this works when I run it from a Python terminal one command at a time.
However, when I call this function from my Python main, it fails. Perhaps the channel is closed too soon when it goes out of scope from the function call?
Please advise if there is a more reasonable way to do this.
Regarding sending commands to the configure mode started with SSHClient.exec_command, see:
Execute (sub)commands in secondary shell/command on SSH server in Python Paramiko
Though it's quite common that "devices" do not support the "exec" channel at all:
Executing command using Paramiko exec_command on device is not working
Regarding your problem with invoke_shell: it's quite possible that the server needs some time to get ready for the next command.
A quick-and-dirty solution is to sleep briefly between the individual send calls.
A better solution is to wait for the command prompt before sending the next command, roughly like the sketch below.
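A rough sketch of the prompt-waiting approach with Paramiko; the host, credentials, and prompt suffix here are assumptions, so adjust them to what your switch actually prints:
import time
import paramiko

def send_and_wait(chan, command, prompt=b"#", timeout=10):
    # Send one command, then read output until the prompt re-appears.
    # The prompt suffix is an assumption: OS10 prompts such as
    # "(config)#" and "(conf-if-vl-3)#" all end with "#".
    chan.send((command + "\n").encode())
    buf = b""
    deadline = time.time() + timeout
    while not buf.rstrip().endswith(prompt):
        if chan.recv_ready():
            buf += chan.recv(4096)
        elif time.time() > deadline:
            raise TimeoutError("no prompt after %r: %r" % (command, buf))
        else:
            time.sleep(0.1)
    return buf

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.0.0.1", username="admin", password="secret")  # placeholders
chan = client.invoke_shell()
time.sleep(1)              # let the banner and first prompt arrive
if chan.recv_ready():
    chan.recv(65535)       # discard them so they don't match too early
for cmd in ("configure terminal", "interface vlan 3",
            "description VLAN-TEST", "end"):
    send_and_wait(chan, cmd)
client.close()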

Erlang client set ssh key

I'm using the :client API to connect to an external node and run code there remotely. The thing is, I'm using Dokku for deployment, and it would be really nice if I could specify an ssh key at runtime.
Right now my code looks something like this:
def start(host) do
  allow_boot to_char_list(host)
  {:ok, slave} = :slave.start(to_char_list(host), :slave, inet_loader_args)
  load_paths(slave)
  {:ok, slave}
end
where inet_loader_args is ' -rsh ssh -loader inet -hosts #{master_node_ip} -setcookie #{:erlang.get_cookie}'
I've tried setting the -rsh argument to something like "-rsh ssh -i /path/to/id_rsh", but it seems to be ignored entirely. I'm not exactly sure how it's implemented, and the Erlang docs for :client are a little hard for me to understand (I can see it uses :ssh underneath somewhere, and that can take a "user_dir" argument which can contain a key file, but I'm not sure how to set that from :client).
Any ideas?
The -rsh option is intended to point to a different executable:
%% Alternative, if the master was started as
%% 'erl -sname xxx -rsh my_rsh...', then 'my_rsh' will be used instead
%% of 'rsh' (this is useful for systems where the rsh program is named
%% 'remsh').
These days people use ssh instead of rsh. (About 10 years ago the security team on a previous job required ssh even when both machines are on the same isolated network.) Since the command line interface is compatible, just pointing to a new executable generally works once you have the keys set up properly. So it makes sense to use the -rsh option to point to ssh instead.
It also seems logical that the argument could be used to pass other parameters to the ssh command, as you attempted. However, the code assumes the string passed is the name of an executable in your PATH: it uses os:find_executable to look for an executable, and "ssh -i /path/to/id_rsh" probably doesn't exist as one.
However, you can take advantage of this feature to point to any executable including a shell script. For instance, you could write a ssh-wrapper that looks something like:
#!/usr/bin/env ksh
# forward all arguments to ssh, adding the key
exec ssh -i /path/to/id_rsh "$@"
Then use -rsh /path/to/my/ssh-wrapper so that :slave.start uses your wrapper with the proper ssh options specified. I've found the wrapper technique also makes future maintenance easier as the connection logic stays in one place.
Hat tip to this comment from Martin S.

Why does `matlabpool` under ssh need X forwarding?

I just encountered and circumvented a problem in Matlab, but I'm still wondering why this happens, and I also want to leave the information here for future reference.
In Matlab's Parallel Computing Toolbox, the command matlabpool local starts a local pool of Matlab workers which are then used transparently to speed up commands like parfor by distributing processing onto the different CPU cores. I tried to do so on a Linux machine which I connected to through ssh from my home Linux computer. I used ssh without X forwarding because the script I wanted to run only computes and saves the result, but does not produce graphical output.
The problem: matlabpool hung forever, without any indication of the cause. I restarted the remote machine, restarted Matlab, checked for license problems, without result.
The problem was resolved, however, when I closed ssh and logged back in, this time including the -X option for X11 forwarding – even though I then started Matlab with the -nodesktop option.
Does anyone have an idea why matlabpool on Linux appears to depend on access to X11?
Even though matlabpool starts and communicates with background headless workers, you can still create figures and plots and print/export them as images inside the parfor parallel loop. The following is a valid use case:
matlabpool(..)
parfor i=1:4
plot(..)
print(...)
close(..)
end
To me this suggests that background workers will still depend on graphics capabilities to generate the invisible plots in memory (maybe it's using virtual framebuffers and such). Of course this is just speculation on my part :)
EDIT:
Just to be sure here, can you try the following sequence of commands:
[client]$ ssh -v -x user@server # X11 forwarding disabled
[server]$ unset DISPLAY # clear $DISPLAY variable
[server]$ nohup matlab -nodesktop -nodisplay -noFigureWindows -nosplash \
-r "myscript; quit;" 2>&1 < /dev/null &
Where the script contains a simple parallel test like:
myscript.m
parpool('local',2)
spmd
fprintf('hello from lab#%d', labindex);
end
delete(gcp('nocreate'))
If MATLAB still hangs, try starting with the -debug option:
matlab -debug starts MATLAB and displays debugging information that can be useful, especially for X-based problems.

Telnet with Bad Request 400

Have a look at the following post:
How to telnet google using command prompt?
I've tried the same thing, but I keep getting a Bad Request (400)! I'm working on a Windows 8 Pro machine. I just want to try a few things using Telnet, but as long as I keep getting this 400 error I can't really achieve much.
All I'm doing is the following:
o www.google.com 80 (PRESSING ENTER TWICE!!!)
GET / HTTP/1.1 (ENTER)
Host: www.google.com (PRESSING ENTER TWICE!!!)
Any help appreciated!
This problem can be solved by typing the telnet commands exactly, so capitalize where needed and vice versa. Check this source for more detailed information on how to set up Telnet as an instant HTTP client. The source also explains that if you use BACKSPACE to retype a command, the receiving server may interpret it as
<bs>
and, if so, declare it an illegal request. (This is what happened to me!)
Conclusion
It seems that you can communicate the backspace character properly if the host and client are communicating properly. There's an article here that explains more about it on a technical level. I don't know how to get this to work with the Windows Telnet client, and I'm not sure whether it's possible. To get around it, I'd suggest using a program like PuTTY, which is a free (MIT-licensed) Win32 Telnet and SSH client. PuTTY has an option that lets you change which character Backspace generates, i.e. which one is acceptable to your host (if at all).
Please read documentation section 4.4.1 on configuring this option properly (if all your hosts use this convention; otherwise you'll probably need to read this article and somehow configure PuTTY to be accepted by your host, or vice versa).
Also, in the previous example I used Google, which may need other parameters to get working; it may not have been the best choice for getting a 200 status code immediately. Try bing.com instead (working for me at the moment):
o www.bing.com 80 (press ENTER twice!!!)
GET / HTTP/1.1 (press ENTER)
Host: www.bing.com (press ENTER twice!!!)
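If you just want to poke at HTTP by hand without fighting the telnet client's line editing, the same exchange can be done over a raw TCP socket. A minimal Python sketch of the request above (the status you get back, 200 or a redirect, depends on the site):
import socket

# Speak HTTP/1.1 over a plain TCP socket: every byte sent is exactly
# what we wrote, so no stray backspace characters can corrupt the request.
with socket.create_connection(("www.bing.com", 80)) as sock:
    request = ("GET / HTTP/1.1\r\n"
               "Host: www.bing.com\r\n"
               "Connection: close\r\n"   # ask the server to close when done
               "\r\n")
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n")[0].decode())  # e.g. HTTP/1.1 200 OK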

How to write a wrapper script (avoiding a subshell) for apache2 rewrite maps, or shell scripts in general?

I am running a rewrite map with an external rewrite program (prg) in apache2 that may produce an error and die.
When the rewrite map is no longer running, the system obviously doesn't function properly.
So I wanted to write a simple wrapper shell script that executes the map program (which is written in PHP) and restarts it if it dies:
#!/bin/bash
until /usr/bin/php /somepath/mymap.php; do
    echo "map died but i will restart it right away!"
done
If I try that in the shell by hand, it works fine, however it does not work when started by the webserver.
The apache2 documentation describes the prg map type as follows:
...and then communicates with the rewriting engine via its stdin and
stdout file-handles. For each map-function lookup it will receive the
key to lookup as a newline-terminated string on stdin. It then has to
give back the looked-up value as a newline-terminated string on stdout
or the four-character string ``NULL'' if it fails...
The reason seems pretty clear to me. The first script takes stdin but doesn't redirect it to the sub script.
I guess I somehow need to define a descriptor using exec and redirect stdin/stdout of the scripts properly. But how do I do that?
It's a common problem that a script works when executed by hand but does not work when executed indirectly (via cron or from Apache).
Usually the root cause is one of:
your script needs some extra environment variable (PATH, LD_LIBRARY_PATH, etc.)
your script requires a terminal
First thing to do is to grab some debug info, so add to your script:
env > /tmp/$0.env # get environment
and get stderr:
... /usr/bin/php /somepath/mymap.php 2>/tmp/$0.stderr ...
This may lead you to the solution.
If your script requires a terminal and you cannot fix that, you can run your script via GNU screen.
Good luck.
Michał Šrajer has given one very common cause of trouble (environment). You should certainly be sure that the environment is sufficiently well set up, because Apache sets its own environment rigorously and does not pass on any inherited junk values; it only passes what it is configured to pass (SetEnv and PassEnv directives, IIRC).
Another problem is that you think your mapping process will fail; that's worrying. Further, it is symptomatic of yet another problem, which I think is the main one.
The first time the map process is run, it will read the request from the web server, but if the mapping fails, you rerun it - but the web server is still waiting for the output from the original request, and the map process is waiting for the input, so there's an impasse.
Note that a child process automatically inherits the standard input and standard output of its parent unless you do something to change it.
If you think things might fail, you'll need to capture the standard input so that when you rerun the program, you can resupply the input (though why it would work on the second time when it failed on the first is a separate mystery).
Maybe:
if tee /tmp/xx.$$ | /usr/bin/php /somepath/mymap.php
then : OK
else
until /usr/bin/php /somepath/mymap.php < /tmp/xx.$$
do echo "map died but I will restart it right away!"
done
fi
rm -f /tmp/xx.$$
Unresolved issues include:
You should add traps to ensure that the temporary file is removed;
You should probably not use /tmp as the directory;
The messages probably do not get sent to the web server and from thence to the client browser until the overall script terminates, so the echoed messages simply mess up the start of the response;
There is no limit on the number of failures;
There is no logging of the failures;
There is no attempt to fix the problem that caused the failure;
And there are probably others I've not thought of yet.
until is not a valid keyword; you probably have it aliased, and aliases do not work in scripts. (My mistake, it seems it is valid after all.)
Regardless, this is what you want to do:
while true; do
    /usr/bin/php /somepath/mymap.php
done
If this also fails, then yes: either your program expects a terminal or something is missing in your env.
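As an aside, the map protocol quoted in the question is simple to implement directly. Here is a minimal sketch of a prg-style map in Python (a hypothetical mymap.py with a hard-coded table; the important part is flushing each newline-terminated reply, since Apache blocks waiting for it):
#!/usr/bin/env python3
import sys

# Hypothetical lookup table; a real map would consult a file or database.
TABLE = {"old-path": "new-path"}

def main():
    # Apache sends one newline-terminated key per lookup on stdin and
    # blocks until a newline-terminated value (or "NULL") comes back.
    for line in sys.stdin:
        key = line.rstrip("\n")
        sys.stdout.write(TABLE.get(key, "NULL") + "\n")
        sys.stdout.flush()  # unbuffered replies, or Apache hangs

if __name__ == "__main__":
    main()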