SSH into EC2 binding to private IP, not public IP - ssh

I've got an EC2 instance which has been set up to access a secure server via a VPN. The secure server will only respond to calls that originate from the EC2 server's public IP address.
I'm successfully using ssh to access the EC2 instance from my laptop...
ssh -i mypem.pem ubuntu@ec2-my-public-ip-address.eu-west-1.compute.amazonaws.com
but my command line sets up as:
ubuntu@my-private-ip-address:~$
So when I try to run a piece of Java code on the EC2 server that makes a call to the secure server via the VPN, it fails because it uses the private IP address as its identifier. The Java code can't be provided here because it is for a secure service, but it has been extensively tested with other examples and on EC2, and we know there isn't a problem there.
I'm trying to see if there's a way to ensure that any code I execute on the EC2 server uses my public IP address rather than the private IP address. Is this possible?

The private IP address is the only IP address that an EC2 instance knows about.
The public IP address is translated to/from the private IP by the EC2 network infrastructure using automatic static NAT, so the instance is never actually aware of it.
Check ifconfig and you will notice that the public IP is nowhere to be seen.
Yet, if you $ curl ipv4.icanhazip.com (or any other "what is my IP?" service), you'll find that your instance's public IP address will always be returned as the address seen by the external service.
Using the private IP internally automatically causes the public IP address to be used when you access the Internet.
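You can see both halves of this from inside the instance. A quick sketch (the interface name may differ, e.g. ens5 on newer instance types, and the metadata calls may require an IMDSv2 token depending on instance settings):
ip -4 addr show eth0       # only the private IP appears here
curl -s ipv4.icanhazip.com # an external service sees the public IP
curl -s http://169.254.169.254/latest/meta-data/local-ipv4   # private IP via instance metadata
curl -s http://169.254.169.254/latest/meta-data/public-ipv4  # public IP via instance metadata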

You can connect SSH or FTP tools to the EC2 instance through an Elastic IP mapped to its private IP.
1) Create an Elastic IP. Navigate to EC2 service > Network & Security > Elastic IPs.
2) Associate this Elastic IP with the EC2 instance's private IP.
3) Update the inbound rules in your EC2 instance's security group: add SSH (port 22) with the source set to "My IP".
Now you can use terminal:
ssh ec2-user@<elastic ip>
Make sure you have added your EC2 key pair to your SSH agent. If not, run the following:
ssh-add -K <.pem file>
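If you prefer the AWS CLI, the allocate-and-associate steps look roughly like this (the instance ID and allocation ID below are placeholders):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0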

Related

how to ssh forward local traffic through two proxy hops and back out to internet

I need to reach an ftp host that has whitelisted some IP addresses. In order to access the host via these whitelisted IPs I need to jump through HOST B, which is not publicly accessible. I need to reach HOST B via HOST A, which is publicly accessible.
I want to use an ftp client locally to access the IP-restricted ftp host. How can I do this with a combination of the ssh config file and ssh commands?
I tinkered but was unable to get anything sensible.
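For reference, a minimal sketch of the two-hop setup using OpenSSH's ProxyJump (all hostnames, users, and the forwarded port are placeholders):
# ~/.ssh/config
Host hosta
    HostName hosta.example.com
    User myuser
Host hostb
    HostName hostb.internal
    User myuser
    ProxyJump hosta
Then forward a local port through both hops to the FTP host, and point the local ftp client at localhost:2121:
ssh -N -L 2121:ftp-restricted.example.com:21 hostb
Note that plain FTP opens separate data connections, so this only works cleanly if the server supports passive mode on a known, also-forwarded port range; SFTP avoids the problem entirely.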

Use sshuttle to route traffic to company's VPN server

I need to access my company's internal network without using their OpenVPN server directly (my ISP blocks it). So I used an instance with a public IP, located where my company is, configured an OpenVPN client on it, and used it to connect to the company's OpenVPN server.
(public IP instance) ===OPENVPN===> (Company)
Now I need to go one step further: work from my local machine through a VPN-over-SSH tunnel using sshuttle, such that the topology becomes:
(local) ===SSHUTTLE===> (public IP instance) ===OPENVPN===> (Company)
Note that public IP instance has two network adapters; eth0 (it has public IP) and tun0 (which belongs to OPENVPN)
I installed sshuttle, and tested the next command:
sshuttle --dns -r <user>@<public IP instance address> 0.0.0.0/0
It says connected, but I still can't access anything. I tested dig and it returned results showing addresses of the company's internal services. However, I still can't ping them. I tested traceroute and it stops at some point after displaying a few hops.
One important point is that I can't ping the tun0 address (on public ip instance) from my local machine.
I suspect that I need to add some routes on the intermediate public IP instance, but I am not sure.
I would appreciate any help
Thanks in advance
Your setup is right, but your assumptions are wrong.
First, check that your VPN is working fine on the jump box; on Linux, just check:
route -n
Wrong assumptions:
1) sshuttle will route your dig commands: sshuttle only routes TCP, and DNS queries are UDP.
2) Using --dns in your sshuttle command is meaningless: you won't gain the VPN's DNS but the jump box's, and that won't work.
You should add the VPN's DNS server to your /etc/resolv.conf, along with the target domain for local discovery (call tech support to get the right DNS server; you can also find it in the VPN log on the jump box), like:
search companydomain.internal
nameserver 10.x.y.z
It's better to split the traffic and only route your company's CIDR over sshuttle; most companies use parts of 10.0.0.0/8, so target that instead of all traffic (0.0.0.0/0).
Important note: it may be that your company blocks egress traffic to the internet over VPN access.
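A minimal sketch of that split-tunnel invocation (the user, host, and CIDR are placeholders; substitute your company's actual range):
sshuttle -r myuser@public-instance.example.com 10.0.0.0/8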

Ssh into server from within ZeroTier network only

I would like:
1) all devices within a ZeroTier network to be able to ssh into each other via ZeroTier IPs.
2) No devices from outside the network to be able to ssh in, either via ZeroTier IPs or via standard public IPs.
The issue is that in spite of having my devices on the same ZT network, I can still ssh into them via their public IPs. How do I prevent this?
IP address for eth0: 104.xxx.xx.xxx (public IP)
-> should not be able to ssh using this IP
IP address for ztxxxxxxxx: 10.xxx.xx.xx (ZeroTier IP)
-> should be able to ssh using this IP
Many thanks.
You need to bind the sshd process to the IP that ZeroTier assigned to your server;
follow the steps at this link: http://www.geekpills.com/operating-system/linux/how-to-limit-ip-binding-in-ssh-server
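The gist of that approach is a ListenAddress directive in sshd_config. A minimal sketch, with 10.144.23.5 standing in for your ZeroTier address:
# /etc/ssh/sshd_config
ListenAddress 10.144.23.5   # bind sshd only to the ZeroTier interface
Then restart sshd (e.g. sudo systemctl restart sshd) and confirm with ss -tlnp that port 22 is no longer listening on 0.0.0.0. One caveat: sshd must start after the ZeroTier interface is up, or the bind will fail at boot.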

AWS EC2 Public IP vs Private IP

I am new to using EC2 and have a question which hopefully is easy to answer. I have a public IP and DNS and a private IP and DNS for my EC2 instance. From my laptop workstation, I can ping the public IP, no problem. When I ssh to the public IP from my laptop (ssh ubuntu@public-ip), this takes me to the private-IP prompt ubuntu@private-ip. I believe the Network Address Translator is getting in the way, translating the public IP to the private IP and ssh-ing me in to the private IP. An ifconfig there shows me the private IP as expected. The problem is that now I cannot ping my laptop's IP from the EC2 instance, as I'd expect to be able to.
P.S. Here is an excerpt from amazon.com that may be relevant here:
Each instance that receives a public IP address is also given an external DNS hostname; for example, ec2-203-0-113-25.compute-1.amazonaws.com. We resolve an external DNS hostname to the public IP address of the instance outside the network of the instance, and to the private IPv4 address of the instance from within the network of the instance. The public IP address is mapped to the primary private IP address through network address translation (NAT). For more information about NAT, see RFC 1631: The IP Network Address Translator (NAT).
What I want is to be able to ssh to the public IP (the prompt should show ubuntu@public-ip instead of ubuntu@private-ip) so I can ping back and forth between my laptop and the EC2 instance.
Any help is greatly appreciated.
best
Rohan
The ping issue is unrelated to what you see.
What you see is always -- without exception -- how EC2 works with public IP addresses. The instance is only aware of its own private IP, and the infrastructure handles an automatic 1:1 NAT between private and public addresses.
I touched on this in Why do we need private subnets in VPC?
If you can't ping the laptop, the problem is most likely on the laptop end.
Try to ping 8.8.8.8 from your EC2 instance. Or ping stackoverflow.com. Ping anything that is known to be pingable.
Alternately, use a remote looking glass, like this one, to ping your laptop. Does it work?
If pinging from the instance to any destination doesn't work, then the only other explanation that comes to mind is that you might have changed the instance's outbound security group settings without understanding the implications of the change, or done something with iptables that wasn't what you intended... but I assume you would have mentioned these.
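If you want to rule those out quickly from the instance (iptables shown for illustration; your laptop-side firewall must also allow inbound ICMP for replies to arrive):
ping -c 4 8.8.8.8              # confirm outbound ICMP works at all
sudo iptables -L OUTPUT -n -v  # look for unexpected DROP/REJECT rules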
To SSH to the private IP you would need to be inside your VPC's private network; that can be done via a VPN.

Setting up a CNAME / Nickname for a remote server

Let's say I have a digital ocean droplet - 68.456.72.184
When ssh-ing into my remote server, I'd rather not have to type out the whole ssh command -
ssh 68.456.72.184
The host's name is Stormtrooper - how do I make it so that client machines can ssh into the server via
ssh Stormtrooper
I imagine this requires some sort of configuration on the local client machine that's connecting? In what order does a client machine search for host names? I imagine there's some local setting where it looks for "Stormtrooper"'s IP address; if not found, it looks in the local network, and then in the "global" network (i.e. public DNS).
I'm not quite sure how that lookup process works, so an explanation there would be great as well.
You can create a local ssh config in ~/.ssh/config with this content:
Host Stormtrooper
    HostName 68.456.72.184
And then you can ssh to that server using ssh Stormtrooper (even tab completion will work for you).
Connecting using an FQDN will work too if you have correctly set up DNS. If you have a domain Stormtrooper.tld pointing to this IP, you can ssh using
ssh Stormtrooper.tld
For local-network resolution, you would need a local DNS server, which would do this translation for you.
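As for the lookup order: ssh first expands aliases from ~/.ssh/config, and only then hands the resulting hostname to the OS resolver, which on Linux follows /etc/nsswitch.conf (typically files, i.e. /etc/hosts, before dns). So a single-machine alternative to local DNS is an /etc/hosts entry; a sketch reusing the droplet address from the question:
# /etc/hosts
68.456.72.184   Stormtrooper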