I want to connect via SSH from one EC2 machine to another (to use Ansible). Something very strange is happening.
The command using the DNS name works both on my machine and on the EC2 machine:
ssh -t -i ~/.aws/keypair.pem -o StrictHostKeyChecking=no ubuntu@ec2-54-154-0-12.eu-west-1.compute.amazonaws.com
But this command using the public IP works from my own computer but not from the EC2 machine (it times out):
ssh -t -i ~/.aws/keypair.pem -o StrictHostKeyChecking=no -o User=ubuntu 54.76.190.253
Does anyone understand why?
Have you updated both the Security Groups and ACLs for the destination server to allow traffic (TCP 22 in this case) from the source machine?
If you are trying to connect from within the same VPC in EC2, then you need to try connecting to the internal IP rather than the external one, if all you have allowed is the internal IP range (the default security group allows this, IIRC).
Personally, within EC2 I only use internal ranges within the same Region, as VPC rules are easier to keep secured: most external IP addresses change dynamically as you stop/start instances unless you attach Elastic IPs to them.
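If it helps, a hedged sketch of allowing SSH from the VPC's internal range with the AWS CLI (the group ID and CIDR are placeholders for your own values):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 10.0.0.0/16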
I would like to understand the concept of SSH tunneling in detail, as I am learning a few things around this topic. I have gone through some details in public forums but still have a few questions.
An SFTP service is running on a remote server and I have been given credentials to connect to it. I am using a GUI like WinSCP to connect to the remote server. What's the role of SSH tunneling here?
The remote SFTP server's admin asked me to generate an RSA public key on my machine, and it was added to the remote server. Now I can connect to the server directly from an SSH terminal without a password. What's the role of SSH tunneling here?
Is tunneling implicit, or does it need to be invoked explicitly in certain circumstances?
Please clarify.
SSH tunneling, SSH console sessions and SFTP sessions are functionally unrelated things.
They can be used simultaneously during a single session, but usually that is not the case, so do not try to find any relation or role of tunneling in an ssh/sftp session.
It does not make sense to mix ssh tunneling with multiple ssh/sftp sessions.
Basically you would use a dedicated ssh session for tunneling and separate sessions for console work and transfers.
What the heck is SSH tunneling?
Quite often both parties (you and the server) reside in different networks where arbitrary network connections between those networks are impossible.
For example, the server can see on its network workstation nodes and service nodes which are not visible to the outside network due to NAT.
The same is valid for the user who initiates the connection to the remote server:
you (the ssh client) can see your local resources (workstation nodes and server nodes) but can't see nodes on the network of the remote server.
Here comes ssh tunneling.
An SSH tunnel is NOT a tool to assist ssh-related things like remote console sessions and secure file transfers; it is quite the other way around: the ssh protocol assists you in building a transport to tunnel generic TCP connections, the same way a TCP proxy works. Once such a pipe is built and in action, it does not know what is being transferred through it.
Its concept is similar to a TCP proxy.
A TCP proxy runs on a single node, so it serves both as the acceptor of incoming connections and as the initiator of outgoing connections.
In SSH tunneling this TCP-proxy concept is split into two halves: one of the nodes (participating in the ssh session) performs the role of listener (acceptor of connections) and the second node performs the role of proxy (i.e. initiates the outgoing connections).
When you establish an SSH session to the remote server you can configure two types of tunnels, which stay active while your ssh connection is active.
Most ssh clients use a notation like
-R [IP1:]PORT1:IP2:PORT2
-L [IP1:]PORT1:IP2:PORT2
The most confusing/hard-to-understand part of this ssh tunneling business is the L and R markers/switches.
Those letters L and R can confuse beginners quite a lot, because there are actually 6 (!) parties in this game (each with its own point of view of what is local and what is remote):
you
ssh server
your neighbors who want to expose their ports to anyone who sees the server
your neighbors who want to connect to any service the server sees
anyone who sees the server and wants to connect to any service your neighbor provides (the opposite side/socket of case #3)
any service in the local network of the server that wants to be exposed to your LAN (the opposite side/socket of case #4)
In terms of the ssh client these tunnel types are:
"R" tunnel (the server listens) - YOU expose network services from your LOCAL LAN to the remote LAN (you instruct the sshd server to start listening on ports at the remote side and route all incoming connections back through the tunnel)
"L" tunnel (you listen) - the server exposes resources of its REMOTE LAN to your LAN (your ssh client starts listening on ports on your workstation; your neighbors can access the remote server's network services by connecting to ports on your workstation; the server makes the outgoing connections to its local services on behalf of your ssh client)
So SSH tunneling is about providing access to a service which is typically inaccessible due to network restrictions or limitations.
And here is a simple, counter-intuitive rule to remember while creating tunnels:
to open access to a Remote service you use the -L switch
and
to open access to a Local service you use the -R switch
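For instance, a minimal "L" sketch (host names are placeholders):
ssh me@gateway -L 8080:intranet-web:80
# your local port 8080 now reaches an intranet web server that only the gateway can see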
examples of "R" tunnels:
Jack is your coworker (a backend developer) and he develops server-side code on his workstation with IP address 10.12.13.14. You are the team lead (or sysadmin) who organizes working conditions. You are sitting in the same office as Jack and want to expose his web server to the outside world through the remote server.
So you connect to the ssh server with the following command:
ssh me@server1 -g -R 80:ip-address-of-jack-workstation:80
In such a case anyone on the Internet can access Jack's current version of the website by visiting http://server1/
Suppose there are many IoT Linux devices (like Raspberry Pis) around the world, sitting in home networks and thus not accessible from outside.
They could connect to a home server and expose their own port 22 to that server, so an admin is able to connect to all of those devices.
The RPi devices could connect to the server like this:
RPi device #1
ssh rpi1@server -R 10122:localhost:22
RPi device #2
ssh rpi1@server -R 10222:localhost:22
RPi device #3
ssh rpi1@server -R 10322:localhost:22
and the sysadmin, while on the server, could connect to any of them:
ssh localhost -p 10122 # to connect to the first device
ssh localhost -p 10222 # to connect to the second device
ssh localhost -p 10322 # to connect to the third device
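In practice you would want such reverse tunnels to survive network drops; one common (hedged) approach is autossh, if it is available on the devices:
autossh -M 0 -N -R 10122:localhost:22 rpi1@server
# -M 0 turns off autossh's extra monitoring port; pair it with ssh keepalives (ServerAliveInterval) so dead connections are noticed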
The admin on remote premises has blocked outgoing ssh connections and you want the production server to contact Bitbucket through your connection...
#TODO: add example
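A hedged guess at what such an example might look like (host names, ports and the repo path are purely illustrative):
ssh me@production-server -R 7999:bitbucket.org:22
# sshd on the production server now listens on localhost:7999 and your ssh client makes the outgoing connection to bitbucket.org:22 on its behalf
# on the production server, git could then use something like:
# git clone ssh://git@localhost:7999/team/repo.git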
Typical pitfalls in ssh tunneling:
mapping a remote service to a local privileged port
ssh me@server -L 123:hidden-smtp-server:25 # fails
#the bind fails because of privileged ports
#we try sudo ssh to allow the ssh client to bind to the local privileged port
sudo ssh me@server -L 123:hidden-smtp-server:25 # fails
#this usually results in rejected public keys because ssh looks for the key in /root/.ssh/id_rsa
#so you need to coerce ssh into using your key while running under the root account
sudo ssh me@server -i /home/me/.ssh/id_rsa -L 123:hidden-smtp-server:25
exposing some service from the local network to anyone through the public server:
a typical command would be
ssh me@server -R 8888:my-home-server:80
#quite often no one can connect to server:8888 because sshd binds the forwarded port to localhost.
#To make it work you need to edit the /etc/ssh/sshd_config file and enable GatewayPorts (the line in the file needs to be GatewayPorts yes).
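For reference, the change might look like this on the server (the sshd service name can differ per distro):
# /etc/ssh/sshd_config
GatewayPorts yes
# then reload sshd so it picks up the change, e.g.
sudo systemctl reload sshd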
my tunnel works great on my computer, but only for me; I would like my coworkers to access my tunnel as well
the typical working command you start with would be
ssh me@server -L 1234:hidden-smtp-server:25
#by default ssh binds the listening port to the loopback interface (127.0.0.1), and that is the reason why no one else can use such a tunnel.
#you need to use the -g switch and possibly specify the bind interface explicitly:
ssh me@server -g -L 0.0.0.0:1234:hidden-smtp-server:25
I can tunnel all of my network traffic using sshuttle with the following simple command (where digitalocean is my server's address, and my public key is saved there):
sudo sshuttle --dns -r root@digitalocean -x digitalocean 0/0
I don't know how to make the sshuttle tunnel go through one specific port (like 8800) so that I can access my local nearby server and use a browser (e.g. Firefox) tunnelled through that port, using a manual proxy with the address localhost:8800.
I know that I can use the following command to tunnel my traffic through a specific port (like 8800), but as I don't have the password for this digitalocean server, I'm not able to use ssh to access it.
sudo ssh -N -D 8800 root@digitalocean
If anyone is still looking for something similar, this is in fact very straightforward:
sshuttle --dns -r user#host:8800 0/0 -x host
I'm installing Phabricator on AWS and the usual proxy/web/database setup works for HTTP/S. Now I want to add SSH access to the repositories. How can I configure SSH access to them? My usage is small and I'd rather not create a complex or obscure setup.
If the Elastic IP is associated with the proxy server, the proxy would have to proxy SSH requests to Phabricator. Will a SOCKS proxy work? Is there an easy way (i.e. a package) to have a SOCKS proxy connect to the web server whenever either one is rebooted?
Without the SOCKS proxy, it seems the alternative is to put everything on one server, except the database of course. This means the web server (running Phabricator) will need to sit in the public subnet with an Elastic IP associated with it. That way HTTPS and SSH can connect to the same hostname.
Are there alternatives? Is my only option to set up everything on one server?
This is accomplished with IP and port forwarding. My initial searches for port forwarding were swamped by results about port forwarding in routers and firewalls.
Add the following to /etc/sysctl.d/50-ip_forward.conf. This works on openSUSE Leap and CentOS.
# Enable IP_FORWARD
net.ipv4.ip_forward = 1
Apply the file with sysctl.
sudo sysctl -p /etc/sysctl.d/50-ip_forward.conf
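To double-check that the kernel picked up the setting, you can query it (it should print net.ipv4.ip_forward = 1):
sysctl net.ipv4.ip_forward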
Then forward the SSH port and test. If you lose access to the server, reboot it from the AWS console. I had already changed the admin SSH port to another port before making this change; it's easier for me to add the Port to my ~/.ssh/config than it is for users to specify a different port in their git client.
sudo firewall-cmd --zone external --add-forward-port port=22:proto=tcp:toaddr=1.2.3.4
If everything checks out, make the change permanent.
sudo firewall-cmd --zone external --add-forward-port port=22:proto=tcp:toaddr=1.2.3.4 --permanent
Don't forget to add the necessary ports in the AWS security group. Reboot the server to make sure everything is kept across reboots.
NAT instance
As an added bonus, let's turn the proxy into a NAT instance! The external zone already has masquerade enabled but it is not being used. Assign the eth0 interface to the external zone.
sudo firewall-cmd --zone external --add-interface eth0
In your VPC, add a default route for 0.0.0.0/0 to the private subnet's route table, with the instance above as the target. Now test internet connectivity from an internal server in the private subnet. Performance isn't an issue because this route is only used for server updates, not production traffic.
ping 8.8.8.8
If everything works, make the change permanent.
sudo firewall-cmd --zone external --add-interface eth0 --permanent
I like to reboot after making network/firewall changes to make sure everything is kept across reboots.
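For reference, the VPC default route from the NAT-instance step can also be created with the AWS CLI; a hedged sketch with placeholder IDs (a NAT instance usually also needs its source/destination check disabled):
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --instance-id i-0123456789abcdef0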
Setup:
My computer (Linux/Unix) has an arbitrary IP address
I can connect to a central Linux server which has a static IP
The remote Linux systems are set up so they only respond to the central server's IP address on port 22
I want to port-forward through the central server so I can use MySQL Workbench and make Python scripting connections on port 3306 to the remote systems.
Ideally, I would like the syntax for the ssh command that makes the port forwarding work.
Suppose I want to forward local port 3307 to port 3306 on the remote system. Assume my IP is x.x.x.x, the central server IP is y.y.y.y, and the remote system IP is z.z.z.z.
I think it has something to do with ssh -L, but so far I can only forward to the central server. Maybe I need to connect to the central server, set up forwarding there, and then set up forwarding on my machine? I think the functionality exists to do this with a single command using ssh.
If this is a duplicate, it should not be marked as such, because without knowing which magic keyword to search for, you can't find the duplicate.
Clarification: port 3306 is NOT open on the remote server. Only 22
ssh -L :3307:z.z.z.z:3306 user@y.y.y.y -Nf
Works fine
or
ssh -L 3307:z.z.z.z:3306 user@y.y.y.y -Nf
To only bind to x.x.x.x's localhost
The first example binds to all interfaces
edit...
I've just seen that z.z.z.z only has port 22 open.
On y.y.y.y you will also need to have a local port open.
Run this on y.y.y.y:
ssh -L 3307:localhost:3306 user@z.z.z.z -Nf
then on x.x.x.x:
ssh -L 3307:localhost:3307 user@y.y.y.y -Nf
Run these commands in a screen session for best results.
You can actually condense these two commands into one:
ssh -L 3307:localhost:3307 user@y.y.y.y -f 'ssh -L 3307:localhost:3306 user@z.z.z.z -Nf'
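On reasonably recent OpenSSH (7.3+) the same double hop can also be expressed with the -J/ProxyJump option, which avoids nesting one ssh command inside another:
ssh -J user@y.y.y.y -L 3307:localhost:3306 user@z.z.z.z -Nf
# x.x.x.x:3307 now reaches port 3306 on z.z.z.z, tunnelled via y.y.y.y using port 22 only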
ssh -L <local-port-to-listen>:<remote-host>:<remote-port> <user>@<ssh-server>
The -L switch indicates that a local port forward needs to be created.
The best method is to create the tunnel using PuTTY (an ssh client), so you can start the shell and it will create the ssh tunnel for you. This is a good reference:
https://howto.ccs.neu.edu/howto/windows/ssh-port-tunneling-with-putty/
I need to do some work on a server to which I don't have direct access. I do have access to my company network (via VPN). If I were on that network, I could access the server directly. But, for some reason, when I'm on the VPN, I can't access the server directly.
So, I need to ssh into an intermediary ubuntu box, and then create an ssh tunnel from that box to the server.
Then, I can do my work on my laptop and send it through a local tunnel that points to a foreign tunnel (on my ubuntu box) that goes to the server.
But I don't know how to do a tunnel that creates another tunnel to a third server.
Any ideas?
Thanks,
Scott
What are you trying to achieve? If you just want to get to a shell on the server then ssh into the Ubuntu box and then ssh from there to the server.
If you want to access some other network resource on the server then you want to forward a port from the server (where you can't get to it) to the Ubuntu box (where you can). Take a look at the -L option in ssh.
Edit:
Copying files to the server:
tar c path/* | ssh ubuntuName 'ssh serverName "tar x"'
Copying stuff back:
ssh ubuntuName 'ssh serverName "tar c path/*"' | tar x
Obviously you need to change ubuntuName, serverName and path/* to what you want. To use rsync you need its -e option and the same trick of wrapping one ssh command inside another. After reading your comment, I'd say that the most general answer to your question is that the trick is making ssh execute a command on the target machine. You do this by specifying the command as an argument after the machine name. If you use ssh as the command for ssh to execute, then you get the two-hop behaviour that you are looking for. Then it is just a matter of playing with quotes until everything is escaped correctly.
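For completeness, a hedged sketch of the rsync variant (same placeholder names as the tar example; rsync's -e option supplies the wrapped ssh command):
rsync -av -e 'ssh ubuntuName ssh' serverName:path/ ./
# rsync ends up invoking: ssh ubuntuName ssh serverName rsync --server ... so the data takes the same two-hop path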
It's just a double port forward. Forward the ports from the PC to the Ubuntu box, then on the Ubuntu box forward those destination ports to the final endpoint. It's been a while since I've done command-line ssh (I've been trapped in Windows hell :)), so I can't give you the exact command line you need. Another possibility is to use the SOCKS proxy ability built into SSH.
To connect from your local machine, over a second machine, to a specific port on a third machine you can use ssh with the -N and -L options:
ssh -N second_machine -L 8080:third_machine:8082
This maps port 8082 on the third machine to port 8080 on your local machine (e.g. http://localhost:8080/).
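If you use this hop regularly, the same forward can live in ~/.ssh/config so that a plain ssh -N second_machine sets it up (host names as used above):
Host second_machine
    LocalForward 8080 third_machine:8082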