I am trying to connect to my VM instance hosted on Google Cloud Platform with a VNC server.
I have the VNC server installed and set up on my instance. I tried to create a secure SSH tunnel like this: gcloud compute ssh --zone us-west1-a tunnel -- -N -p 22 -D localhost:5000
Then I try to connect to my VM with a VNC client, using the VNC server address: externalipaddress:5901
But it doesn't work.
I also tried this command: gcloud compute ssh --zone -- -N -p 5901 -D localhost:5901
but it says: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255]
Any ideas?
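For what it's worth, a VNC client usually expects a plain local port forward (-L) rather than a SOCKS proxy (-D); a minimal sketch, assuming the instance is named tunnel, the zone is us-west1-a, and the VNC server listens on display :1 (port 5901):
gcloud compute ssh tunnel --zone us-west1-a -- -N -L 5901:localhost:5901
With that running, the VNC client would be pointed at localhost:5901 instead of the external IP.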
I'm running a Google Cloud instance and I'm able to successfully connect to it via SSH.
But I'm not able to forward a port to my localhost.
Here's the command I used:
ssh -L 16006:127.0.0.1:8080 username@instance_external_ip
When I run the above command, I get the following error:
The authenticity of the host cannot be determined.
username@instance_external_ip : Permission Denied (public key)
How to solve this problem?
I found the answer to this question. The problem was that the server did not know my SSH keys. So I did the following and it worked.
I deleted all the SSH keys on my local machine and connected to my gcloud instance using the following command. The gcloud command creates the SSH keys automatically and transfers them to the cloud, so there is no need to manually copy-paste the keys.
gcloud compute --project "project_name" ssh --zone "zone_name" "instance_name"
After this I connected to my instance using SSH. If you try to open an SSH tunnel before doing this, the server won't recognize your key and will say permission denied when you run ssh -L .....
Therefore, instead of connecting directly with ssh -L ..., connect with the SSH key file stored in the .ssh directory. Use the following command.
ssh -i ~/.ssh/google_compute_engine -L <local port number>:127.0.0.1:<remote_host_port> username@server_ip
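Alternatively, the forwarding flags can be passed straight through gcloud, which handles the key setup itself; a minimal sketch using the placeholders above (everything after the -- is handed to the underlying ssh command):
gcloud compute --project "project_name" ssh --zone "zone_name" "instance_name" -- -L 16006:127.0.0.1:8080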
I have a Spark installation running under YARN on a remote cluster, with a firewall between me and the head node. I can use a ssh tunnel to access the head node:
> ssh -N -f -L 10000:remotenode:10000 between_machine
and this setup works, for example, to access a HiveServer2 running on remotenode. If Spark was running as a standalone cluster, I would just need to do the same for port 7077 and point the pyspark client at localhost with
> ssh -N -f -L 7077:remotenode:7077 between_machine
> ./pyspark --master spark://localhost:7077
How can I do that with Spark running under the YARN scheduler?
If you are looking for a port to connect to, here is a quote from the docs:
You can access this interface by simply opening http://<driver-node>:4040 in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc).
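So, assuming the driver runs on remotenode behind the same firewall, one minimal sketch is to forward that UI port the same way as the HiveServer2 port above:
ssh -N -f -L 4040:remotenode:4040 between_machine
and then open http://localhost:4040 locally.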
If you are just looking for a more universal way to get to the host via an SSH "tunnel", you could try running ssh as a SOCKS proxy:
ssh user@host -D 20000
Then configure your browser to connect via the SOCKS proxy (host: localhost, port: 20000).
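For a quick check of the proxy without a browser, curl can go through it as well; this assumes the Spark UI is reachable from the SSH host as remotenode:4040:
curl --socks5-hostname localhost:20000 http://remotenode:4040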
I have dispynode running on a remote server. I'm trying to open an SSH tunnel from my computer (the client) and configure the dispy JobCluster to use this tunnel, but it's not working. Am I not configuring this right? Here's how I'm doing it:
(P.S. I don't have deep knowledge of distributed and parallel computing or networking; I'm a civil engineer, so please excuse me if I don't always use the right technical terms.)
SSH tunnel:
plink -v -ssh -L 61:localhost:21 user@myserver.net
This forwards connections to local port 61 to localhost:21 on the server where dispynode is running.
dispynode:
sudo dispynode.py -d --ext_ip_addr localhost -p 21 -i localhost
This will listen on port 21 and transmit using localhost, which leads back through the tunnel to the client.
with this dispy client JobCluster code:
cluster = dispy.JobCluster(runCasterDispyWorker,
                           nodes=[('localhost', 61)],
                           ip_addr='localhost',
                           ext_ip_addr='localhost',
                           port=61,
                           node_port=21,
                           recover_file='recover.rec')
When I launch dispy.py I get the following error in the command prompt from which I opened the SSH tunnel:
Opening connection to localhost:21 for forwarding from 127.0.0.1:64027
Forwarded port closed
At least I guess this means that dispy is trying to access the open SSH tunnel, but I'm not sure what's happening server side. It seems that dispynode receives nothing.
Running a quick traffic capture with tcpdump on the server confirms it. For some unknown reason, the port changes to 64027.
I have also tried to open two SSH tunnels simultaneously:
One for client-to-server communication:
plink -v -ssh -L 61:localhost:21 user@myserver.net
One for server-to-client communication:
plink -v -ssh -R 20:localhost:60 user@myserver.net
but with no luck. I'm not even sure whether it is best to use remote forwarding or local forwarding.
I tried this solution, which the developer of dispy himself suggested, but it didn't work for me:
http://sourceforge.net/p/dispy/discussion/1771151/thread/bcad6eaa/
Is the configuration I used above wrong? Should I use remote or local forwarding? Why does the port change automatically; could it be because my company's firewall is blocking the connection on the ports I'm trying to use? Has anyone managed to run dispy through an SSH tunnel before?
This worked for me. It should work for you:
SSH tunnel (I'm using PuTTY's plink.exe to create the tunnel):
plink -v -ssh -R 51347:localhost:51347 [username on server]@[server's Public IP or DomainName] -pw [USER PASSWORD on server] -N
dispynode (running on the server - linux):
sudo dispynode.py -d --ext_ip_addr [public IP or domain name of server]
JobCluster (dispy client):
import dispy

def Worker():
    # dispy runs this function on the node, so it imports what it needs itself
    import os
    os.system('echo hello')  # prints hello on the server running dispynode
    return 0

cluster = dispy.JobCluster(Worker,
                           nodes=['IP public or domain name of server'],
                           ext_ip_addr='localhost',
                           recover_file='recoverdispy.rec')
job = cluster.submit()
print("waiting for job completion")
job()
print('status: %s\nstdout: %s\nstderr: %s\nexception: %s'
      % (job.status, job.stdout, job.stderr, job.exception))
Try this piece of code. Make sure the required ports are allowed through any firewalls.
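A quick sanity check before submitting jobs is to confirm that the reverse tunnel is actually listening on the server (port 51347 here, matching the plink command above); this assumes the ss utility is available on the Linux server:
ss -tlnp | grep 51347
If nothing is listed, sshd may have AllowTcpForwarding disabled, or a firewall may be dropping the port.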
I'm trying to open an IPython notebook (which is running on a server) on a MacBook from a remote location through an SSH tunnel, but no data is received.
This is the command for the SSH tunnel:
ssh -L 5558:localhost:5558 -N -t -x user@remote-host
and this is the command I used to launch the notebook from the server:
ipython notebook --pylab=inline --port=5558 --ip=* --no-browser --notebook-dir notebooks
Then I tried to open remote-host:5558 in a new tab, but no data was received.
Thanks in advance!
The directive -L AAAA:somehost:BBBB will cause SSH to listen on port AAAA on localhost (the machine the ssh command is run on) and forward any connection to that port, over the SSH session, to port BBBB on the host somehost. So you need to open http://localhost:5558/ in the browser on the machine you run the ssh command on.
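A quick way to verify this from the machine running the ssh command, assuming the notebook from the question is still bound to port 5558, is:
curl -I http://localhost:5558
An HTTP response means both the tunnel and the notebook are up; a connection refused points at the tunnel or the notebook rather than the browser.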
Read this: How do I add a kernel on a remote machine in IPython (Jupyter) Notebook?
Remote jupyter kernel/kernels administration utility (the rk) here: https://github.com/korniichuk/rk
How do I connect to a production machine using an SSH tunnel? It has a few blocked ports that I would like to reach from my development box so I can debug.
Use the following command:
ssh -N -v -L<LOCAL_PORT>:<PRODUCTION_MACHINE>:<PRODUCTION_PORT> <PRODUCTION_MACHINE>
E.g.: ssh -N -v -L2047:my-production-server.com:8000 my-production-server.com
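Once that tunnel is up, the blocked service is reached through the local end of the forward; for the example above (assuming port 8000 serves HTTP):
curl http://localhost:2047/
Anything sent to localhost:2047 is carried over SSH to port 8000 on the production machine.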