Use ssh script return value in Jenkins

We're deploying our application using SSH scripts. For the production stage we need to figure out which out of two clusters is currently active. This can only be achieved reliably by running a command on a remote host and interpreting its output. Unfortunately there's no SSH plugin that does that AFAIK.
The existing plugins only seem to be able to tell whether the SSH script's return value was different from zero.
Currently I only see two undesirable solutions:
use SSH from a scripting language like Python, Groovy, etc. (meaning we would have to provide SSH authentication to it somehow)
let the SSH command write to a file that is then copied to Jenkins and interpreted there (inelegant and cumbersome)

OK, based on what you mentioned in the comment, I think you can try something like the approach given here, copy that file back to Jenkins using FTP, and then read the file contents.
Or you can have the whole process orchestrated in an Ant script by using the SSHExec task and capturing the output in Ant (its outputproperty attribute stores the remote command's output).
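If you drive the deployment from a shell build step instead, you can capture the remote output directly. A rough sketch (the host name, user, and the cluster-reporting command are all invented for illustration):

# Ask the remote host which cluster is active; BatchMode makes ssh fail
# instead of prompting, so a broken key setup fails the build cleanly
ACTIVE=$(ssh -o BatchMode=yes deploy@prod-gateway 'cat /etc/active_cluster') || exit 1
if [ "$ACTIVE" = "cluster-a" ]; then
    TARGET=cluster-b
else
    TARGET=cluster-a
fi
echo "Active cluster: $ACTIVE -- deploying to $TARGET"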

Related

Ansible: how to make Paramiko use ~/.ssh/config?

Ideally, of course, I'd like Ansible to completely take care of this.
If this is not possible (why?!), then, at least, I want to be able to extract ~/.ssh/config contents into some other format and then make Ansible feed this to Paramiko. I am sure I'm not the first one faced with this task, so what's the accepted way of doing this?
I need this in order to use authorized_keys module to turn on passwordless authentication.
Btw, I wish Ansible emitted some warning when falling back to a non-default backend (like Paramiko). I lost a couple of hours yesterday and actually had to download the Ansible sources to figure out why a perfectly working Ansible command suddenly stopped running when I added the -k / --ask-pass option (yes, I am completely new to Ansible).
You can define this configuration in the Ansible configuration INI file or via environment variables -- specifically the ssh_args setting in the [ssh_connection] section, or the equivalent ANSIBLE_SSH_ARGS environment variable.
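For example (the path is illustrative), to make Ansible's OpenSSH transport read a specific config file:

export ANSIBLE_SSH_ARGS='-F /home/me/.ssh/config'
ansible all -m ping

Note this affects the ssh connection plugin, not Paramiko itself, so it only helps once you're off the Paramiko fallback.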

use other command instead of ssh for ansible

I have an Ansible configuration which I know works on my local machines. However, I'm now trying to set it up on my company's machines, which use a wrapper command similar to ssh (let's call it 'myssh').
for example, to access these machines, instead of writing
ssh myuser@123.123.123.123
you write
myssh myuser@123.123.123.123
which ends up calling ssh, among other things.
My question is: is there a way to swap which command Ansible uses for accessing machines?
You can create a Connection Type Plugin to achieve this. Looking at the ssh plugin, it appears it might be as easy as replacing the ssh_cmd in line 333, and also specifying myssh in line 69.
See here for where to place the modified file. In addition, you can specify a custom location and let Ansible know about it via the connection_plugins setting in ansible.cfg.
Finally, again in your ansible.cfg, set the transport setting to your new plugin:
transport = myssh
PS: I have never done anything like that before. This is only info from the docs.
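An untested alternative, in case a full plugin is more than you need: newer Ansible versions expose an ssh_executable setting in the [ssh_connection] section of ansible.cfg, so you may be able to point Ansible straight at the wrapper (the path is hypothetical):

[ssh_connection]
ssh_executable = /usr/local/bin/myssh

This only works if myssh accepts the same flags Ansible passes to OpenSSH; if it doesn't, the connection-plugin route above is the way to go.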

WinSCP: Current SFTP-3 session does not support command you request. Separate shell session may be opened to process the command

I'm using WinSCP to interact with a remote server that supports only SFTP and doesn't allow SSH access.
My interaction involves moving/deleting a subset of files (identified by file names) in a certain directory.
To simplify this, I would typically synchronize [ Remote -> Local ], delete the files locally using the Cygwin command line (so that I can specify a list of file names instead of selecting files in the GUI), and then synchronize [ Local -> Remote ] to push the deletes to the remote side.
But, now, I want to further simplify the process so I can hand this over to an operations person. I went looking and was delighted to find that WinSCP supports 'commands'.
It would be great if I could enter something like this in the 'Command' field at the bottom in the 'Commander' view of WinSCP:
get queue-queue-from-DLQ-ID-69703273-db51-11e1-ba9f-005056010165 \
queue-queue-from-DLQ-ID-3d64697a-db51-11e1-b86e-005056010166 \
queue-queue-from-DLQ-ID-76fdb365-db50-11e1-b78d-005056010164 \
queue-queue-from-DLQ-ID-76ed3836-db50-11e1-ba9f-005056010165
But when I enter this in the 'Command' field, I get the following error:
Current SFTP-3 session does not support command you request. Separate shell session may be opened to process the command. Do you want to open separate shell session?
When I hit ok, I get the following error:
Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended).
The latter error is probably due to the fact that shell access over SSH is not supported.
But my question is, since get is an SFTP command, why am I getting the first error? Doesn't WinSCP itself use that command under the covers to support a GUI 'copy to local' operation?
How can I configure either WinSCP or the Linux box so that I can do what I have shown above?
I guess this answers my question: http://winscp.net/eng/docs/remote_command
Apparently, the 'Command' feature is only supported for SCP.
I wonder why WinSCP doesn't expose a command-line interface for the SFTP operations that are generally supported in an sftp interactive session.
You can use the WinSCP command-line scripting interface to run the get command:
https://winscp.net/eng/docs/scripting
The 'Commands' feature (remote command execution) is supported even for the SFTP protocol. But this feature executes the command on the remote server; you cannot use it to automate WinSCP, and there's no remote command that you can easily use to download a file.
See https://winscp.net/eng/docs/remote_command
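Back to the scripting route: for illustration, here is a minimal WinSCP script file (host, user, host key, and remote directory are placeholders) that you could run with winscp.com /script=fetch_dlq.txt:

open sftp://myuser@example.com/ -hostkey="ssh-rsa 2048 ..."
cd /path/to/queue/dir
get queue-queue-from-DLQ-ID-69703273-db51-11e1-ba9f-005056010165
get queue-queue-from-DLQ-ID-3d64697a-db51-11e1-b86e-005056010166
exit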

Allowing a PHP script to ssh, using sudo

I need to allow a PHP script on my local web server, to SSH to another machine to perform a specified task on some files. My httpd runs as _www with low permissions, so setting up direct passwordless SSH is difficult, not to say ill-advised.
The way I do it now is to have a minimal PHP script that sudo-exec's (as me) a shell script which is outside of the document root. The shell script in turn calls (as me) the PHP code that does the actual SSH work, and prints its output. Here's the code.
read_remote_files.php (The script I call from my browser):
exec('sudo -u me -n /home/me/run_php.sh /path/to/my_prog.php', $results);
print implode("\n", $results); // exec() returns the output as an array of lines
/home/me/run_php.sh (Runs as me, calls whatever it's given):
php "$1" 2>&1
sudoers:
_www ALL = (me) NOPASSWD: /home/me/run_php.sh
This all works, as my_prog.php is called as me and can SSH as me. It seems not too insecure, since run_php.sh can't be called directly from a browser (it's outside the document root). The issue I'm having is that my_prog.php isn't invoked as an HTTP program, so it doesn't have access to the HTTP environment variables (DOCUMENT_ROOT etc.).
Two questions:
Am I making this too complicated?
Is there an easy way for my final script to get the HTTP variables?
Thanks!
Andy
Many systems do stuff like this using a (privileged) cron job that frequently checks for the existence of a file, a database record or some other resource, and then performs actions if there are any.
The huge advantage of this is that there is no direct interaction between the PHP script and the privileged script at all. The PHP script leaves the instructions in a resource, the privileged script fetches it. As long as the instructions can't lead to the system getting compromised or damaged, it's definitely more secure than sudoing.
The disadvantage is that you can't push changes whenever you like; you have to wait until the cron job runs again. But maybe it's an option anyway?
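A rough sketch of the pattern (all paths, hosts, and commands are invented): the PHP script drops a job file into a spool directory, and a cron entry running as the privileged user polls that directory once a minute:

# crontab entry for user 'me'
* * * * * /home/me/process_jobs.sh

with /home/me/process_jobs.sh along these lines:

#!/bin/sh
# Process queued job files left by the PHP script
for job in /var/spool/myapp/jobs/*.job; do
    [ -e "$job" ] || continue        # glob matched nothing, skip
    # validate the file contents before acting on them!
    ssh remotehost "do-task $(cat "$job")"
    rm -f -- "$job"
done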
"I need to allow a PHP script on my local web server, to SSH to another machine to perform a specified task on some files."
I think that you are phrasing this in terms of a solution that you have difficulty in getting to work rather than a requirement. Surely what you should be saying is "I want to invoke a task on machine B from a PHP script running under Apache on Machine A." And then research solutions to this -- to which there are many from a simple 'roll-your-own' RPC tunnelled over HTTP(S) to using an XMLRPC or SOA framework.
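As a toy illustration of the roll-your-own variant (the URL, token, and parameters are all invented), machine A's PHP could simply call a small HTTPS handler that lives on machine B and does the file work locally:

# machine A asks machine B to run the task; B validates the token
curl -s -X POST https://machine-b.example.com/run-task.php \
     -d token=SECRET -d 'files=a.txt b.txt'

The point is that no SSH credentials ever need to exist on the web server; machine B decides what it is willing to do with the request.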
Two caveats:
Do a phpinfo(); on both machines to check what extensions are available and
Also check your php.ini settings to make sure that your service provider hasn't disabled any functions that you expect to use (or use a quick-and-dirty script to echo 'disable_functions = ' . ini_get('disable_functions') . "\n"; ...)
If you browse here and the wider internet you'll find many examples. Here is one that I use for a similar purpose.

How can I remotely log on to a machine, execute a script which sets up an environment, then accept user input?

I've been trying to figure out a way to do this for a few hours now, and am having no luck.
I have a large environment file that I have saved as a ksh script. This script works perfectly if I type . ./setEnv.sh
However, what I'm trying to do is use either ssh or rsh to log on to a remote system, execute this script, and then let me use the system in its modified form. I am able to successfully execute the script, but the connection always closes after execution. I would like to be able to keep this connection open.
Any idea on how I can do this?
At the moment, it does not matter if I use SSH or RSH to accomplish this. RSH is preferable. I am using a variety of Linux and Solaris operating systems, so a catch-all method would be nice.
Thanks,
Matt
Couldn't you do something like this?
ssh user@host ". ./setEnv.sh && your-command"
(Note the leading dot: setEnv.sh has to be sourced, not executed, for its environment changes to apply to your-command.)
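If the goal is to end up in an interactive session with the environment loaded, an untested variant is to source the script and then exec a shell; -t forces a pseudo-terminal so the shell behaves interactively (ksh assumed, since setEnv.sh is a ksh script):

ssh -t user@host '. ./setEnv.sh && exec ksh'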