We've brought up this topic before, but curious if anyone has any new information on this issue.
We use multiple servers that sit behind a "management server", so when we SSH in we have to log in there first and then, from there, log into our destination machine, so there are always at least two SSH connections. We currently use port forwarding on the management server, which takes us through to the server of interest behind the scenes, so it looks as though we're SSH'ing into each one directly.
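For illustration, the per-host forwarding meant here might look something like this (hostnames and ports are placeholders):
ssh -f -N -L 2201:backend1.internal:22 admin@management-server
ssh -p 2201 admin@localhost    # feels like SSH'ing straight into backend1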
The issue here is that it requires specific setup, and in a scalable environment where servers can be added/removed the maintenance is cumbersome. Ideally we'd just be able to ssh into multiple machines using phpseclib and run commands.
Has anyone run into this, or does anyone have advice on a solution at the scripting level? Basically we need to chain SSH: SSH into machine 1, then into machine 2 from machine 1, and run commands on/interact with machine 2.
include('Net/SSH2.php'); // phpseclib 1.x

$ssh = new Net_SSH2('machine1');
$ssh->login('user', 'pass');
$ssh->setTimeout(10);
$ssh->enablePTY();
$ssh->exec('ssh machine2');
echo $ssh->read();
At this point (assuming that you're using RSA authentication and that your private key is in your ~/.ssh/id_rsa file on machine1) the prompt that you get back should be machine2's.
You could connect to a machine3 as well by doing this:
include('Net/SSH2.php'); // phpseclib 1.x

$ssh = new Net_SSH2('machine1');
$ssh->login('user', 'pass');
$ssh->setTimeout(10);
$ssh->enablePTY();
$ssh->exec('ssh machine2');
echo $ssh->read();
// the PTY is now attached to machine2's shell, so hop again with write()
// (a second exec() would fail while the PTY channel is in use)
$ssh->write("ssh machine3\n");
echo $ssh->read();
I am fairly new to SSH and still learning it. Recently I made a tunnel connection to an SSH host and managed to successfully transfer data/files from my machine to the server with the command: scp file.extension user@hostIP:/directory/directory.
While this was successful, I'm struggling a bit to reverse it and send data/files from the server back to the client. How would one go about that? Do I need to make changes to ssh_config, or are CLI commands enough?
You need to change the order:
scp user@hostIP:/directory/directory file.extension
That accomplishes the inverse operation, of course, assuming that the address is correct, the file exists, and you have the necessary privileges.
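For example (hypothetical names, and assuming the earlier upload put the file into that remote directory), the remote source comes first and the local destination second:
scp file.extension user@hostIP:/directory/directory        # local -> server (what you already did)
scp user@hostIP:/directory/directory/file.extension .      # server -> local, into the current directory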
I migrated the VM from libvirt to Google Cloud Platform using CloudEndure. The initial sync is complete and it has been in the Data Replication stage for over a week. Once the VM is launched in test mode and I try to connect with PuTTY over SSH, it throws Connection refused and exits with error code 255.
I tried to log in with PuTTY using my on-premise local machine username and SSH key, since the CloudEndure documentation says I can log in to the replicated server with the same credentials.
The firewall rules in GCP and on the machine allow incoming connections on port 22. The SSH key is also updated properly in the metadata section, yet it reports that the SSH key is not propagated properly.
I thought there was a problem with my local machine's ufw rules, so I turned off the firewall and replicated again, but with no luck. I also tried adding an SSH rule to ufw allowing connections from 0.0.0.0/0; I'm still not able to connect to the VM that was replicated and launched in test mode.
Steps tried:
I tried the interactive console method, where I attempted to log in via the serial port, but the problem is that it asks for an ID and password, whereas I don't have a password and use only SSH keys to log in.
Tried using a static IP for the instance: before replicating the boot disk I added a firewall rule allowing SSH from that static IP, then replicated and tried to log in (assuming the connection was being blocked on that IP).
Followed this article to install Linux Guest OS.
Generated an SSH key using ssh-keygen -t RSA -C "" in Cloud Shell.
I cannot SSH into the Linux environment.
Operating System: Ubuntu 18.04 LTS x64
Any help would be appreciated.
I have a problem setting up an IPython cluster on a Windows server and connecting to this ipcluster over an SSH connection. I tried following the tutorial at https://ipython.org/ipython-doc/dev/parallel/parallel_process.html#ssh, but I have trouble understanding what the options mean exactly and which parameters to use...
Could anyone help a total noob set up an ipcluster? (Let's say the remote machine has IP 192.168.0.1 and the local machine has 192.168.0.2.)
If you scroll roughly to the middle of the page https://ipython.org/ipython-doc/dev/parallel/parallel_process.html#ssh you will find this:
Current limitations of the SSH mode of ipcluster are:
Untested and unsupported on Windows. Would require a working ssh on Windows. Also, we are using shell scripts to setup and execute commands on remote hosts.
That means there is no easy way to build an ipcluster with an SSH connection on Windows (if it works at all).
Do you really need to connect the machines with an SSH connection? I guess it's possible with an SSH client on each Windows machine, but if you are in a trusted local network you can also decide not to use the loopback interface and just expose the ports...
Sure, you can start the controller and engines separately! For further examples about ports (if you have problems with firewalls) see also: How to setup ssh tunnel for ipython cluster (ipcluster)
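As a rough sketch of the separate-start approach (assuming IPython's parallel tools are installed on both machines; the file name below is the default connection file IPython generates):
ipcontroller --ip=192.168.0.1                      # on the machine that should run the controller
ipengine --file=ipcontroller-engine.json           # on each engine machine, after copying over the generated connection file
The client on the other machine then connects using the matching ipcontroller-client.json file.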
I'm working on a project that requires me to run my code on a remote Unix server that is not available to connect to directly (you first have to log in to the "gate" node and then to this server).
What's really bad is that they disabled key authentication, so each time I need to SSH into it I have to type my password twice. It's really annoying, and I wonder what the best way is to transfer my local modifications of the source files to this server, then compile and run them, without having to provide those passwords so many times.
I have no sudo access to any of those servers (neither the "gate" nor the target server). Any ideas on how to make the whole process more efficient?
EDIT: Martin Prikryl provided a great answer below, but it's suitable for Windows and I'm on a Mac :) I guess it might be a good thing to have it documented here also for *NIX systems.
You are looking for SSH tunneling.
WinSCP SFTP client supports one-hop SSH tunneling natively.
See the Tunnel page on WinSCP Advanced Site Settings dialog.
I assume that after you transfer the file, you need to open an SSH terminal to compile it.
You may be able to make use of WinSCP Console window for that step.
Alternatively, if you need/want to use a real SSH terminal client, make use of an existing SSH tunnel, created by WinSCP, and connect with PuTTY (or any other SSH client) over it.
In the Local tunnel port of WinSCP Tunnel page, select a fixed port number (instead of the default Autoselect). In PuTTY enter "localhost" to Host Name and the selected port in Port.
(I'm the author of WinSCP)
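For the *NIX side mentioned in the question's edit, the same local-tunnel idea can be sketched with plain OpenSSH (hostnames and the local port are placeholders):
ssh -f -N -L 2222:target-server:22 user@gate-server      # one-hop tunnel through the gate
scp -P 2222 sources.tar.gz user@localhost:/home/user/    # copy files "directly" to the target
ssh -p 2222 user@localhost                                # open a terminal on the target to compile/run
Combined with ControlMaster/ControlPersist in ~/.ssh/config, each hop's password only needs to be typed once per session.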
This has probably been asked somewhere but I can't find it for the life of me.
I am currently setting up a server machine, and I want to make it so that only computers that are directly SSH'ing into the server and have an authorized key can get in. I've already gotten the keys to work, but I don't know how to go about making sure that people can't multi-hop their way into the server machine. I want to know:
Is it even possible to disable multi-hopping by only changing settings on the server machine?
If it is, how do I go about doing it?
If not, what other options do I have to achieve what I'm trying to do?
I don't believe it's possible by only changing settings on the server.
If your server is called server and another machine on your network is called aux, then you need to disallow the following multi-hop methods (and probably others as well):
ssh -t aux ssh server
ssh -o ProxyCommand='ssh aux /usr/bin/nc %h %p' server
ssh -N -L 2222:server:22 aux & ssh -p 2222 localhost
So you need to ensure that:
- ssh, when run on any other machine on your network, will refuse to connect to server, except when the user is logged in locally (not via ssh)
- alternatively, the sshd setting AllowAgentForwarding is set to no on all other machines on your network (the manpage notes that this "does not improve security unless users are also denied shell access, as they can always install their own forwarders")
- netcat and equivalents are not installed on any other machine on your network
- the sshd setting AllowTcpForwarding is set to no on all other machines on your network (the manpage carries the same note: it "does not improve security unless users are also denied shell access, as they can always install their own forwarders"); see the sketch below
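A minimal sketch of those sshd_config settings on the other machines (assuming OpenSSH; the reload command varies by distribution):
# /etc/ssh/sshd_config on aux and every other machine (not on server itself)
AllowAgentForwarding no
AllowTcpForwarding no
# then reload the SSH daemon, e.g. sudo systemctl reload sshd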
This may be a bit much.
Perhaps you can keep the private keys embedded on hardware tokens that may not leave the building? This is beyond the limits of my experience, though.
You should get a better answer if you ask at ServerFault.com, and hopefully your question will be migrated there soon.