I have a Server computer (Windows) with a public IP address, and a number of Node computers (Linux) located behind CGNAT plus an additional NAT, with private IP addresses.
The Nodes know the Server's IP address and credentials.
I need to configure the Nodes so that an SSH session from the Server to any Node is possible.
I guess that SSH remote port forwarding should be used on the Nodes, but I'm not sure in what way.
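Something along these lines is what I have in mind (port numbers and user names are placeholders, and I assume each Node would need its own port):

# on a Node: open a reverse tunnel so the Server can reach the Node's sshd
ssh -N -R 2201:localhost:22 serveruser@<Server public IP>

# on the Server: reach that Node through the forwarded port
ssh -p 2201 nodeuser@localhost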
Any suggestions?
I need to access my company's internal network, but I can't use their OpenVPN server directly (my ISP blocks it). So I set up an instance with a public IP in the region where my company is located, configured an OpenVPN client on it, and used that to connect to the company's OpenVPN server.
(public IP instance) ===OPENVPN===> (Company)
Now I need to go one step further: work from my local machine over a VPN-over-SSH tunnel using sshuttle, so that the topology becomes:
(local) ===SSHUTTLE===> (public IP instance) ===OPENVPN===> (Company)
Note that the public IP instance has two network adapters: eth0 (which has the public IP) and tun0 (which belongs to OpenVPN).
I installed sshuttle and tested the following command:
sshuttle --dns -r <user>@<public IP instance address> 0.0.0.0/0
It reports being connected, but I still can't access anything. I tested dig and it returned results showing the addresses of the company's internal services; however, I still can't ping them. traceroute stops at some point after displaying a few hops.
One important point: I can't ping the tun0 address (on the public IP instance) from my local machine.
I suspect that I need to add some routes on the intermediate public IP instance, but I am not sure.
I would appreciate any help
Thanks in advance
Your setup is right, but your assumptions are wrong.
First, check that the VPN is working on the jump box; on Linux, just check:
route -n
Wrong assumptions:
That sshuttle will route your dig queries: sshuttle only routes TCP, and DNS queries are UDP.
That --dns will give you the VPN's DNS: it won't; you only get the jump box's resolver, and that won't work.
You should add the VPN's DNS server to your /etc/resolv.conf, together with the target domain for name resolution, like this (ask tech support for the right DNS server, or find it in the VPN log on the jump box):
search companydomain.internal
nameserver 10.x.y.z
It's better to split the traffic and send only your company's CIDR over sshuttle rather than all traffic (0.0.0.0/0); most companies use parts of 10.0.0.0/8.
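A minimal sketch, assuming the company's addresses fall inside 10.0.0.0/8 (substitute the real CIDR):

sshuttle -r <user>@<public IP instance address> 10.0.0.0/8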
Important note: your company may block egress traffic to the internet over the VPN.
I would like:
1) All devices within a ZeroTier network to be able to SSH into each other via their ZeroTier IPs.
2) No devices from outside the network to be able to SSH in, via either ZeroTier IPs or standard public IPs.
The issue is that, despite my devices being on the same ZeroTier network, I can still SSH into them via their public IPs. How do I prevent this?
IP address for eth0: 104.xxx.xx.xxx (public IP)
-> should not be able to ssh using this IP
IP address for ztxxxxxxxx: 10.xxx.xx.xx (ZeroTier IP)
-> should be able to ssh using this IP
Many thanks.
You need to bind the sshd process to the IP that ZeroTier assigns to your server;
follow the steps at this link: http://www.geekpills.com/operating-system/linux/how-to-limit-ip-binding-in-ssh-server
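For reference, the relevant sshd option is ListenAddress; a minimal sketch, assuming 10.147.17.5 is the ZeroTier address (substitute your own):

# /etc/ssh/sshd_config
ListenAddress 10.147.17.5

# then restart sshd, e.g.
sudo systemctl restart sshd

Note that sshd must start after the ZeroTier interface is up, or the bind will fail.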
I'm able to SSH to a host from my machine, but when I try to ping the same host, I get 100% packet loss!
So my query is: what are the possible reasons for this behavior (able to SSH but unable to ping from the same machine)?
NOTE: All attempts used the IP address of the target host.
Two common reasons:
A firewall, on the local host, the target host, or somewhere on the route between them.
ICMP echo responses are disabled on the target host (see the checks below).
Also, if a DNS query is involved, ping and ssh may select different IPs from the response; for example, ping may pick an IPv6 address while ssh picks an IPv4 address. Try both tools with IP addresses instead of host names.
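To check the second case on a Linux target, something like this should work (a value of 1 means echo replies are suppressed):

sysctl net.ipv4.icmp_echo_ignore_all

# and look for ICMP DROP/REJECT rules in the firewall, e.g.
sudo iptables -L -n | grep -i icmp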
I'm having problems SSHing between ESXi guests that are on different hosts within the cluster. I have one guest on the routable cluster virtual network that I'm using as a bastion server to access guests on a private network; the distributed port group spans all hosts.
I'm using SSH ProxyJump to route through the bastion host to the other guest VMs. When the guests on the private network are on the same cluster host as the bastion, there is no problem. When the guests are on a different host, I get a "connection refused by remote server" error. If I manually migrate the VM to the same cluster host as the bastion, the error goes away.
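For reference, the kind of command I'm running looks like this (names and addresses are placeholders):

ssh -J user@bastion.example.com user@10.0.1.20

or the equivalent ~/.ssh/config entry:

Host private-vm
    HostName 10.0.1.20
    ProxyJump user@bastion.example.com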
I found an answer that relates to SSHing between ESXi hosts, not guests on hosts, and suggests that SSH Client needs to be allowed in the outgoing firewall of each host. It seems like it could be relevant, but my vSphere knowledge is limited and I don't have sufficient admin rights to make this change myself.
I'd be grateful if anyone could confirm whether my inability to SSH between guests on different hosts results from SSH Client not being enabled in the outbound firewall, or whether there is some other reason I can't get an SSH connection.
From the link you posted:
You need to open the required ssh ports in the ESXi firewall.
In the vSphere Client check the host -> Configuration -> Security Profile -> Firewall -> Properties
and enable "SSH Client" if you need outgoing scp connections resp. "SSH server" if you want to enable incoming scp connections.
Instead of opening SSH Client in the outgoing firewall of each host, configure it this way:

  (outgoing server)                      (receiving server)
  SSH Client -> outgoing firewall   ->   incoming firewall -> SSH Server
It turned out to be an underlying network issue: the physical switch was dropping my VLAN-tagged packets because the VLAN ID wasn't configured on it.
I want to be able to SSH into a VirtualBox guest VM where the guests share a NAT network.
NAT port forwarding (see "Set Up Portforwarding" at https://www.pythian.com/blog/test-lab-using-virtualbox-nat-networking/) is inconvenient compared with the host simply having an IP address on the NAT network.
Port forwarding requires me to keep specifying the port (e.g. scp -P 2222 from-file localhost:), and it messes with SSH host keys, since localhost now has two host identities: my laptop's and the VM's ssh-rsa key.
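For what it's worth, a ~/.ssh/config entry like the following avoids retyping the port and, via HostKeyAlias, keeps the VM's host key separate from my laptop's, but it's still per-VM bookkeeping (the alias and port are illustrative):

Host natvm
    HostName localhost
    Port 2222
    HostKeyAlias natvm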
Rather than port-forward, is there not a way of just adding another IP on my VirtualBox host?
Thanks, Martin.
You can set up a host-only network in addition to the NAT network. A host-only network is a local network that connects the host and individual VMs, so the host and the VMs can communicate with each other through it.
Using the VirtualBox GUI, go to VirtualBox Manager > File > Preferences > Network and set up a host-only network. Enable the DHCP server. You could use these settings:
host adapter address is 192.168.56.1
DHCP server address is 192.168.56.100
Both masks are 255.255.255.0
The server address range is 192.168.56.101-192.168.56.254
This gives you the addresses from ...56.2 through ...56.99 to use as static addresses. You can manually assign them to VM interfaces if you like.
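For example, inside a Linux guest you could assign one of those addresses manually (the interface name enp0s8 is an assumption; check yours with ip link):

sudo ip addr add 192.168.56.10/24 dev enp0s8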
After setting up this network, you should see a virtual interface on your host system with the correct IP address (the one assigned to the adapter).
Now, go to network settings for the VM. Add a new network adapter. Set "attached to" to the "host-only adapter", and the name to the host-only network that you set up earlier.
Start the VM. It should see the host-only adapter in addition to whatever adapters it was using before. If it's a modern operating system, it'll probably query the DHCP server and set up the interface on its own. Alternatively, from inside the VM OS, you could manually assign static addresses to these interfaces.
You can assign a host-only adapter to a VM in addition to its existing NAT adapter. In the past I've had a Windows VM and an Ubuntu Linux VM set up this way. Both VMs and the host had no trouble communicating with each other as well as with the Internet.
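For reference, the same setup can be scripted with VBoxManage; a sketch, assuming the host-only interface comes up as vboxnet0 and the VM is named "myvm":

# create the host-only interface and give it the adapter address
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0

# enable a DHCP server on it with the range described above
VBoxManage dhcpserver add --ifname vboxnet0 --ip 192.168.56.100 --netmask 255.255.255.0 --lowerip 192.168.56.101 --upperip 192.168.56.254 --enable

# attach the VM's second NIC to the host-only network (VM must be powered off)
VBoxManage modifyvm "myvm" --nic2 hostonly --hostonlyadapter2 vboxnet0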