I have three servers: A, B and C. A is a Windows machine; B and C are Linux. I can ping both B and C from A.
A can telnet to B on port 1521, but telnet from A to C on the same port fails.
Can anyone give me an idea about this issue and how to resolve it? Do I have to add routes on C? It seems I have to make some changes on C. C is able to accept connections from other clients.
If you can ping C from A, then your routes are correct. If you can telnet to C from B, then the service on C is also listening correctly (you can verify with netstat -tan, by the way). If C accepts telnet sessions from B but not from A, then you may have one of these situations:
C filters or drops packets from A (check your firewall, iptables, etc.)
or A suppresses outgoing connections to C on that port (firewall)
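For reference, a couple of diagnostic sketches (host and port names taken from the question; root may be needed):
On C, look for firewall rules that could drop A's packets, and confirm the listener is bound to all interfaces:
iptables -L INPUT -n -v
netstat -tan | grep 1521
On A (a Windows machine), PowerShell can test the port without telnet:
Test-NetConnection C -Port 1521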
Related
I have three hosts: A, B, C. B can connect to C through ssh, via port 221. A cannot connect to C because it's behind a router, but can connect to B through ssh. What I need, is to connect from A to C.
The situation is summarized below:
A -- p22 ---> B OK
B -- p221---> C OK
A -- p???---> C not working
I have tried many variations of ssh tunneling but looks like I don't get how tunneling works. Also, I have no root privileges on any of the hosts, therefore I cannot do port forwarding on port 22. I am therefore not sure this tunneling can be done at all. If it can, however, I would appreciate the exact commands to run on each host so that I can finally ssh from A to C.
While you could set up an explicit tunnel in this situation, it's much more convenient to use the -J option:
ssh -J B -p 221 C
or the ProxyJump option explicitly
ssh -o ProxyJump=B -p 221 C
ssh will first connect to B for you (prompting for a password if necessary), then connect to C from B. From your point of view, you will have connected directly to C.
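If you make this hop often, the same jump can live in your client config (~/.ssh/config on A; the alias c-via-b is illustrative):
Host c-via-b
    HostName C
    Port 221
    ProxyJump B
After that, a plain ssh c-via-b connects through B automatically.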
The idea of ssh -L local_port:another_host:destination_port user@host is to say: (a) start listening locally on local_port, (b) connect to the remote host as usual, and (c) once there, forward everything received locally on to another_host's destination_port.
so, I would try the following (from host A)
ssh -C -N -L 2222:C:221 user@B
then from another terminal
ssh -p 2222 user@localhost
I did not test the above. Happy to dig deeper if required.
Here is the human-readable explanation (hopefully):
starting from host A
ssh, connect as user on host B (no port specified as 22 is the default)
-C compress all content in transit in the tunnel
-N says to not open a tty (interactive) session on host B
-L says "once you're on B, start listening on this host (A) on port 2222 (as you are not root) and forward everything to C, port 221"
If you're using password authentication, it should work. Key-based authentication also works end-to-end here: the second ssh connects from A straight through the tunnel to C's sshd, so C simply needs your public key in its authorized_keys; no extra configuration on B should be required.
I'm dealing with LXC, iptables and route, and at this point I'm not even sure what I'm doing anymore. For the sake of simplicity, every policy in iptables is set to ACCEPT and forwarding is set to 1 in sysctl.conf in each host or container.
My goal here is to be able to pass a ping request through an LXC container, from outside of its host. Let me clarify this:
Let's say I have a client C, who wants to ping a server S, but I have a gateway G in between, and an LXC container L within G.
C (eth0 192.168.0.3/24) <---> (eth0 192.168.0.2/24) G (eth1 192.168.1.3/24) <---> (eth0 192.168.1.4/24) S
then, inside G we would have :
(eth0 192.168.0.2/24) <---> (virbr0 10.0.0.2/24) L (virbr1 10.0.1.3/24) <---> (eth1 192.168.1.3/24)
So basically, I'd like to ping S from C but in such a way that the request must transit through L (and therefore through G), using iptables and route.
Hope you can help me out!
Could you share your reason for doing this? Is this for monitoring? Routing through a NAT is unnecessarily convoluted.
I suggest setting up a bridged network, rather than a NAT-ed one, where:
virbr0 is bridged with eth0
virbr1 is bridged with eth1
This way, your LXC host can sport an IP address of 192.168.0.x and one of 192.168.1.x (i.e. in the same subnets as eth0 and eth1).
Once that is done, create routing entries in both the server and client, using the LXC host as the router. Essentially L replaces G.
Let's assign 192.168.0.10 and 192.168.1.10 to L. The routed network will look like this:
C (192.168.0.3) <--> (192.168.0.10) L (192.168.1.10) <--> S (192.168.1.4)
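As a sketch, assuming root on each box, the routing entries would look like this:
ip route add 192.168.1.0/24 via 192.168.0.10     (on C, to reach S's subnet via L)
ip route add 192.168.0.0/24 via 192.168.1.10     (on S, to reach C's subnet via L)
sysctl -w net.ipv4.ip_forward=1                  (inside L, to keep forwarding on)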
Let me know if this works for you before I post the full answer. It's quite a bit of configuration.
I have 3 servers (a, b, c) and each of them needs an ssh tunnel to port 4000 of the other two instances.
I used to assign ports, e.g. local port 4001 goes to port 4000 on instance B and 4002 goes to 4000 on instance C, but it seems that using loopback IPs from 127.0.0.0/8 would be much less confusing: e.g. add 127.0.0.2 instance-a to /etc/hosts, then use ssh -L instance-a:4000:localhost:4000 instance-a.domain.com. Does this approach have any negative effects? Should it be used?
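For illustration, the described setup would read something like this (hostnames follow the question's naming; on Linux, any address in 127.0.0.0/8 is loopback):
/etc/hosts on instance a:
127.0.0.2   instance-b
127.0.0.3   instance-c
then each tunnel uses the same port 4000 on its own alias:
ssh -N -L instance-b:4000:localhost:4000 instance-b.domain.com
ssh -N -L instance-c:4000:localhost:4000 instance-c.domain.com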
I have three computers, A, B, C. A is the computer I'm working on, C is the remote computer I'd like to access. However, C can only be accessed through B. Only B has an ssh server, and only A has an ssh client.
What command should I use (preferably on A) so that I can connect to C (port 80) through B? For example, B should forward all incoming connections on port 12345 to C:80.
I know this is a common question and I found a ton of commands on google but none seemed to work.
Once it is set up, I'm supposed to just use localhost:5678 on A, which connects to B:1234 and then forwards to C:80.
Thanks.
You need to use remote port forward:
From A run
ssh -R *:1234:C:80 you@B
Then you can access C by connecting to B:1234. This also requires setting GatewayPorts to yes in /etc/ssh/sshd_config on B and then restarting sshd (it tells sshd on B to listen on all IP addresses, not just the loopback, so the forwarded port can be reached from the outside).
Once you log out from B, the tunnel to C is closed as well.
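If you want the tunnel to stay up without keeping an interactive shell open, ssh's -N (run no remote command) and -f (go to background after authentication) flags help; a sketch, using the same port and hosts as above:
ssh -f -N -R *:1234:C:80 you@B
The tunnel then remains until that background ssh process is killed.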
Suppose the network is like:
A(192.68.0.1)--------------------B(192.68.0.2)------------------C(192.68.0.3)
A is my ssh server, C is a target ssh server, and I can telnet from A to B (my account is not root).
B is a server that does not allow ssh logins from others, but B can log in to C via ssh.
Is it possible to connect to C from A through B via ssh?
If you can run programs on B, you can use something like simpleproxy to forward the TCP connection to C.
Then you SSH from A to some port on B (not 22), which will forward your connection to C. Everything will still be encrypted since the SSH session is A<->C.
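A sketch with simpleproxy, assuming its usual -L/-R options and the addresses from the question (the port 2200 is arbitrary):
On B, listen on 2200 and relay to C's sshd:
simpleproxy -L 2200 -R 192.68.0.3:22
On A, ssh "to B", but the SSH session actually terminates on C:
ssh -p 2200 user@192.68.0.2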
OK, so telnet to B.
You can actually ssh to yourself on B. The following command may not work, but try it first:
ssh -L 0.0.0.0:2200:192.68.0.3:22 127.0.0.1
If sshd is not running on B, then ssh to C instead:
ssh -L 0.0.0.0:2200:192.68.0.3:22 192.68.0.3
Then do a
netstat -an | grep 2200
on B (192.68.0.2).
If netstat shows 127.0.0.1 listening on 2200 rather than 0.0.0.0, this trick won't work. But if it does, you can then connect to port 2200 on B and it will hit C:
ssh -p 2200 192.68.0.2
I have you ssh to localhost on B because I can't remember the option that avoids spawning a shell, and I'm too lazy to look it up. If the solution above does not work, you won't be able to redirect ports with ssh without root; you would have to change the config file on B.
You would have to add
GatewayPorts yes
to the sshd config file, /etc/ssh/sshd_config.
http://docstore.mik.ua/orelly/networking_2ndEd/ssh/ch09_02.htm -- this talks all about port forwarding with ssh