Unable to ping a host - ssh

I'm able to SSH to a host from my machine, but when I try to ping the same host, I get 100% packet loss.
So my question is: what are the possible reasons for this behavior (able to SSH but unable to ping the same machine)?
NOTE: All communication was attempted using the IP address of the target host.

Three common reasons:
A firewall on the local host, on the target host, or somewhere on the route between them.
ICMP echo responses being disabled on the target host.
A DNS mismatch: if a host name is used, ping and ssh may select different IP addresses from the response. For example, ping may select the IPv6 address and ssh the IPv4 address. Try the tools with IP addresses instead of host names, as sketched below.
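To rule out the DNS mismatch and test each protocol explicitly, a quick sketch (example.host is a placeholder; the -4/-6 flags exist in modern iputils ping and in OpenSSH):
getent ahosts example.host    # list every address the resolver returns (Linux)
ping -4 example.host          # force ICMP over IPv4
ping -6 example.host          # force ICMP over IPv6
ssh -4 user@example.host      # force ssh over IPv4 for comparison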

SSH session to computer, located behind Carrier-grade NAT and additional NAT

I have a Server computer (Windows) with a public IP address and a number of Node computers (Linux) located behind CGNAT and an additional NAT, with private IP addresses.
The Nodes know the Server's IP address and its credentials.
I need to configure the Nodes so that an SSH session from the Server to any Node is possible.
I guess SSH remote port forwarding should be used on the Nodes, but I'm not sure in which way.
Any suggestions?
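A minimal sketch of the remote port forwarding the question suggests, assuming each Node can reach the Server's SSH service (user names and ports below are placeholders):
ssh -N -R 2222:localhost:22 serveruser@<server public IP>    # run on a Node: publish the Node's sshd on the Server
ssh -p 2222 nodeuser@localhost                               # run on the Server: SSH back into that Node
Each Node would need its own remote port (2222, 2223, ...), and a keep-alive wrapper such as autossh to restore the tunnels after drops.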

Use sshuttle to route traffic to company's VPN server

I need to access my company's internal network without using their OpenVPN server directly (my ISP blocks it). So I set up an instance with a public IP in the region where my company is located, configured an OpenVPN client on it, and used it to connect to the company's OpenVPN server.
(public IP instance) ===OPENVPN===> (Company)
Now I need to go one step further: work from my local machine using the VPN over an SSH tunnel with sshuttle, so that the topology becomes:
(local) ===SSHUTTLE===> (public IP instance) ===OPENVPN===> (Company)
Note that the public IP instance has two network adapters: eth0 (which has the public IP) and tun0 (which belongs to OpenVPN).
I installed sshuttle and tested the following command:
sshuttle --dns -r <user>@<public IP instance address> 0.0.0.0/0
It says it is connected, but I still can't access anything. I tested dig and it returned results showing the addresses of the company's internal services; however, I still can't ping them. traceroute stops at some point after displaying a few hops.
One important point is that I can't ping the tun0 address (on the public IP instance) from my local machine.
I suspect that I need to add some routes on the intermediate public IP instance, but I'm not sure.
I would appreciate any help
Thanks in advance
Your setup is right, but your assumptions are wrong.
First, check that the VPN is working fine on the jump box. If it's Linux, just check:
route -n
Wrong assumptions:
That sshuttle will route your dig queries: sshuttle only routes TCP, and DNS queries are UDP.
That using --dns in your sshuttle command helps: it is meaningless here, since you get the jump box's DNS rather than the VPN's, and that won't work.
You should add the VPN's DNS server to your /etc/resolv.conf, together with the target domain, for internal name resolution. Ask tech support to provide the right DNS server, or find it in the VPN log on the jump box:
search companydomain.internal
nameserver 10.x.y.z
It's better to split the traffic and route only your company's CIDR over sshuttle rather than all traffic (0.0.0.0/0); most companies use parts of 10.0.0.0/8.
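For example, a split-tunnel invocation might look like this (the CIDR is a placeholder; check the route -n output on the jump box for the real VPN subnets):
sshuttle -r <user>@<public IP instance address> 10.0.0.0/8    # route only the company ranges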
Important note: your company may block egress traffic to the internet over the VPN.

Unable to SSH between guest VMs which are on different hosts in a cluster

I'm having problems SSH'ing between ESXi guests that are on different hosts within the cluster. I have one guest on the routable cluster virtual network that I am using as a bastion server to access guests on a private network; the distributed port group spans all hosts.
I'm using SSH ProxyJump to route through the bastion host to the other guest VMs. When the guests on the private network are on the same cluster host as the bastion, there is no problem. When the guests are on a different host, I get a "connection refused" error from the remote server. If I manually migrate the VM to the same host as the bastion, the error goes away.
I found this answer, which relates to SSH'ing between ESXi hosts (not guests on hosts) and suggests that SSH Client needs to be allowed on the outgoing firewall of each host. It seems like it could be relevant, but my vSphere knowledge is limited and I don't have sufficient admin rights to make this change myself.
I'd be grateful if anyone could confirm whether my inability to SSH between guests on different hosts is a result of not having SSH Client enabled in the outbound firewall, or whether there is some other reason why I can't get an SSH connection.
From the link you posted:
You need to open the required ssh ports in the ESXi firewall.
In the vSphere Client, check the host -> Configuration -> Security Profile -> Firewall -> Properties
and enable "SSH Client" if you need outgoing scp connections, or "SSH Server" if you want to allow incoming scp connections.
Instead of opening SSH Client on the outgoing firewall of every host, configure it this way:
Outgoing server                       Receiving server
SSH Client -> outgoing firewall  -->  incoming firewall -> SSH Server
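If you do get shell access to the hosts, the same rulesets can be toggled from the ESXi command line; a sketch, assuming ESXi 5.x or later:
esxcli network firewall ruleset list                                         # show rulesets and their current state
esxcli network firewall ruleset set --ruleset-id sshClient --enabled true    # on the outgoing host
esxcli network firewall ruleset set --ruleset-id sshServer --enabled true    # on the receiving host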
It turned out to be an underlying network issue: the physical switch was dropping my VLAN-tagged packets because the VLAN ID wasn't configured on it.

Forward server HTTP traffic to be handled on another device via SSH tunnel

I'm developing a webhook that requires direct access from a public domain to an internal machine, and I'm thinking of using an SSH tunnel to forward the data. Or is there an alternative solution?
The hosting server and the development machine are on the same network:
192.168.1.2/24 (hosting server)
The second machine is virtually mapped through a FortiClient firewall, without a static or dynamic IP visible to the hosting server, so only one-way communication can be initiated right now.
In this case, is it possible to set up an SSH tunnel to forward all traffic from 192.168.1.2:80 to be handled on port 8080 of the development machine?
What would the ssh syntax look like?
Thanks.
This can be done by setting up an SSH tunnel to the remote machine:
ssh -L localhost:80:localhost:8080 development-system
Every request to port 80 on the hosting-server is now forwarded to port 8080 on the development-system.
Please note that port 80 on the hosting-server can only be bound when you start the SSH command as root, since ports below 1024 are privileged. Also note that with the command above, port 80 is only accessible from the hosting-server itself, because ssh binds local forwards to the loopback address by default. To make port 80 on the hosting-server accessible from everywhere, bind it to all interfaces:
ssh -L 0.0.0.0:80:localhost:8080 development-system
Be sure that you really want that.
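Given the one-way reachability described in the question (the development machine can initiate connections to the hosting server, but not the reverse), a remote forward started from the development machine may be the workable direction; a sketch, with the user name assumed:
ssh -R 80:localhost:8080 user@192.168.1.2    # run on the development machine
Forwarding the privileged port 80 this way requires logging in to the hosting server as root, and the sshd option GatewayPorts there controls whether the forwarded port is reachable from other machines.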
A good introduction to the topic can be found at:
https://www.ssh.com/ssh/tunneling/example
https://unix.stackexchange.com/questions/115897/whats-ssh-port-forwarding-and-whats-the-difference-between-ssh-local-and-remot

Is it possible to change the incoming, but not outgoing SSH port in OS X Yosemite?

I SSH into my workstation, a Mac running OS X Yosemite, daily. Unfortunately, I noticed a while back that enabling remote login on my machine has put it under fire from many automated dictionary attacks trying to log in using the default port, 22.
To make my machine more secure, I changed the SSH port. To do so, I edited the /etc/services file, and changed the following two lines:
ssh 2123/udp # SSH Remote Login Protocol
ssh 2123/tcp # SSH Remote Login Protocol
That greatly reduced the number of dictionary attacks, but now when I try to SSH from my workstation to other machines, I always need to specify the port (which is usually port 22).
This is easy enough for most simple tasks, just specify the port when SSHing in:
ssh -p22 me@another.computer.com
It becomes a pain for more complicated tasks where specifying the port is not an option, but it can still be done by adding an entry in ~/.ssh/config:
Host github.com
Hostname ssh.github.com
Port 443
Between these two options, I could always connect to any machine I wanted to. However, I'm now writing a script that will connect to machines with different IP addresses (and domain names), and there is no optional argument to specify the port number.
I have also been getting frustrated that outgoing connections no longer default to port 22, but I do not want to change my incoming port back to 22.
Is it possible to change the incoming SSH port, but still have the default outgoing SSH port? That is, can I only allow people to login to my workstation using port 2123, but when I try connecting to other machines, the default port it tries to use is port 22?
I'm running OS X 10.10.2 Yosemite.
Change the ssh port back in /etc/services: that file defines the default port for the ssh protocol, which is what outgoing connections use.
Then change the port that sshd listens on. On OS X this is more complicated than it needs to be; see https://serverfault.com/questions/18761/how-to-change-sshd-port-on-mac-os-x
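A sketch of the intended end state, assuming the goal is incoming SSH on 2123 with outgoing defaulting to 22 (on Yosemite the listening port is set via the launchd job described at the link; the Port line applies to systems where sshd reads sshd_config directly):
ssh 22/udp # SSH Remote Login Protocol    <- /etc/services restored, so outgoing ssh defaults to 22
ssh 22/tcp # SSH Remote Login Protocol
Port 2123                                 # sshd listens on the custom port, e.g. in /etc/ssh/sshd_config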