SSH into server from within ZeroTier network only

I would like:
1) all devices within a ZeroTier network to be able to ssh into each other via ZeroTier IPs.
2) no device from outside the network to be able to ssh into it, whether via ZeroTier IPs or standard public IPs.
The issue is that despite my devices being on the same ZT network, I can still ssh into them via their public IPs. How do I prevent this?
IP address for eth0: 104.xxx.xx.xxx (public IP)
-> should not be able to ssh using this IP
IP address for ztxxxxxxxx: 10.xxx.xx.xx (ZeroTier IP)
-> should be able to ssh using this IP
Many thanks.

You need to bind the sshd process to the IP that ZeroTier assigned to your server.
Follow the steps at this link: http://www.geekpills.com/operating-system/linux/how-to-limit-ip-binding-in-ssh-server
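In practice that comes down to a ListenAddress entry in /etc/ssh/sshd_config (a minimal sketch; the 10.x address below is a placeholder for your actual ZeroTier IP):

# /etc/ssh/sshd_config
ListenAddress 10.xxx.xx.xx
# reload sshd afterwards, e.g.
sudo systemctl restart sshd

Without any ListenAddress line sshd binds to 0.0.0.0, which is why the public IP currently accepts connections. Keep in mind that sshd has to start after the ZeroTier interface is up, or the bind will fail.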

Related

SSH session to computer, located behind Carrier-grade NAT and additional NAT

I have a Server computer (Windows) with a public IP address and a number of Node computers (Linux), which are located behind CGNAT and an additional NAT with private IP addresses.
The Nodes know the Server's IP address and its credentials.
I need to configure the Nodes so that an SSH session from the Server to any Node is possible.
I guess that SSH remote port forwarding should be used on the Nodes, but I'm not sure in which way.
Any suggestions?
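Since the question already points at remote port forwarding, here is a rough sketch of that approach (user names, the server address and port 2222 are placeholders, and it assumes an SSH server is reachable on the Windows Server, e.g. the built-in OpenSSH server):

# on each Node: keep a reverse tunnel open back to the Server
ssh -N -R 2222:localhost:22 serveruser@<server public IP>
# on the Server: reach that Node through the forwarded port
ssh -p 2222 nodeuser@localhost

Each Node needs its own remote port (2222, 2223, ...), and something like autossh or a systemd unit is usually used to keep the tunnels up.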

Use sshuttle to route traffic to company's VPN server

I need to access my company's internal network without using their OpenVPN server directly (my ISP blocks it). So I set up an instance with a public IP in the region where my company is located, configured an OpenVPN client on it, and used it to connect to the company's OpenVPN server.
(public IP instance) ===OPENVPN===> (Company)
Now I need to go one step further: work from my local machine through a VPN-over-SSH tunnel using sshuttle, so that the topology becomes:
(local) ===SSHUTTLE===> (public IP instance) ===OPENVPN===> (Company)
Note that public IP instance has two network adapters; eth0 (it has public IP) and tun0 (which belongs to OPENVPN)
I installed sshuttle and tested the following command:
sshuttle --dns -r <user>@<public IP instance address> 0.0.0.0/0
It says connected, but I still can't access anything. I tested dig and it returned results showing addresses of the company's internal services. However, I still can't ping them. I also tried traceroute, and it stops after displaying a few hops.
One important point is that I can't ping the tun0 address (on the public IP instance) from my local machine.
I suspect that I need to add some routes on the intermediate public IP instance, but I am not sure.
I would appreciate any help
Thanks in advance
Your setup is right, but your assumptions are wrong.
First, check that the VPN is working correctly on the jump box; on Linux, just check the routing table:
route -n
Wrong assumptions:
sshuttle will not route your dig queries: sshuttle only routes TCP, and plain DNS queries are UDP. Ping and traceroute are not carried over the tunnel either, so their failing does not mean the tunnel is broken.
Using --dns with sshuttle doesn't help here, because you get the DNS of the jump box rather than the DNS of the VPN, and that won't resolve internal names.
You should add the VPN's DNS server to your /etc/resolv.conf, together with the target domain for local name discovery, like this (ask tech support for the right DNS server, or find it in the VPN log on the jump box):
search companydomain.internal
nameserver 10.x.y.z
It's better to split the traffic and only send your company's CIDR over sshuttle; most companies use parts of 10.0.0.0/8, so there is no need to tunnel all traffic with 0.0.0.0/0 (see the sketch below).
Important note: your company may block egress traffic to the internet over the VPN, which is another reason not to send all traffic through it.
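A split-tunnel invocation along those lines would look roughly like this (10.0.0.0/8 is an assumption; use whatever ranges route -n on the jump box shows going over tun0):

sshuttle -r <user>@<public IP instance address> 10.0.0.0/8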

VPN's IP of remote machine connected to that VPN

I would like to connect the remote machine to my local VPN and then ssh to that remote machine from the other machines in my local network.
Is this possible? Will the remote machine get a new IP that will be visible in my local network? Do I need to configure anything manually?
I'm using FortiClient for VPN.
Yes, this is absolutely possible. Try the following steps:
1- Deploy the VPN and assign the IP ranges in DHCP (public or private).
2- Make sure the VPN server's firewall is turned off for now.
3- Turn off the client's firewall.
4- Connect to the VPN.
5- If the connection drops, look up the client's IP from the server side and try to SSH to it.
6- Take an SSH session from your server to the client.
7- Ping the server from the other local machines.
8- Then re-enable the server-side firewall and check whether SSH still works; if not, add a rule for the SSH port (see the sketch below).

Unable to ping to a host

I'm able to SSH to a host from my machine, but when I try to ping that host from the same machine, I get 100% packet loss.
So my question is: what could be the possible reasons behind this behaviour (able to SSH but unable to ping from the same machine)?
NOTE: All communication was tried using the IP address of the target host.
Two common reasons:
A firewall on the local host, on the target host, or somewhere on the route between them.
ICMP echo responses are disabled on the target host.
In addition, if a DNS name is used, ping and ssh may pick different addresses from the response; for example, ping may select the IPv6 address and ssh the IPv4 address. Try both tools with IP addresses instead of host names to rule this out.
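On a Linux target you can check the first two causes roughly like this (a sketch, assuming root access on the target host):

# is the kernel ignoring echo requests? (1 means ignored)
sysctl net.ipv4.icmp_echo_ignore_all
# is a firewall rule dropping ICMP?
sudo iptables -L INPUT -n | grep -i icmp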

Unable to SSH between guest VM's which are on different hosts in cluster

I'm having problems SSH'ing between ESXi guests that are on different hosts within the cluster. I have one guest on the routable cluster virtual network that I am using as a bastion server to access guests on a private network; the distributed port group spans all hosts.
I'm using SSH ProxyJump to route through the bastion host to the other guest VMs. When the guests on the private network are on the same cluster host as the bastion, there is no problem. When the guests are on a different host, I get a "connection refused by remote server" error. If I manually migrate the VM to the same host as the bastion, the error goes away.
I found this answer which relates to SSH'ing between ESXi hosts, not guests on hosts, and suggests that SSH Client needs to be allowed on the outgoing firewall of each host. It seems like it could be relevant, but my vSphere knowledge is limited and I don't have sufficient admin rights to make this change myself.
I'd be grateful if anyone could confirm whether my inability to SSH between guests on different hosts is a result of not having SSH Client enabled in the outbound firewall, or whether there is some other reason I can't get an SSH connection.
From the link you posted:
You need to open the required ssh ports in the ESXi firewall.
In the vSphere Client check the host -> Configuration -> Security Profile -> Firewall -> Properties
and enable "SSH Client" if you need outgoing scp connections resp. "SSH server" if you want to enable incoming scp connections.
Instead of opening SSH Client on the outgoing firewall of each host, configure the whole chain this way:
(outgoing server) SSH Client -> outgoing firewall -> incoming firewall -> SSH Server (receiving server)
It turned out to be an underlying network issue: the physical switch was dropping my VLAN-tagged packets because the VLAN ID wasn't configured on it.