I need to open a remote console to VMs running on an ESXi host, and I want to use VNC for that purpose. For this to happen I need to assign a TCP port to each VM, manually or programmatically (by editing the VMX file), using the settings given below.
remotedisplay.vnc.port="5900"
remotedisplay.vnc.enabled="true"
remotedisplay.vnc.password = "yourpassword"
Is there any mechanism (preferably the pysphere API) by which I can tell ESXi to assign a port automatically, for all machines or for a single machine for that matter?
Thanks & Regards,
Ganesh
PS: I'm using Ubuntu 14 and want to connect to the VMs via a browser.
This is what I did to get it working using pysphere
>>> from pysphere import VIServer
>>> s = VIServer()
>>> s.connect('10.11.100.220', 'root', 'password')
>>> vm = s.get_vm_by_name("VMNAME")
>>> settings = {'remotedisplay.vnc.port': '8949', 'remotedisplay.vnc.enabled' : 'true'}
>>> vm.set_extra_config(settings)
>>> s.disconnect()
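To cover the "all machines" part of the question, the same call can be looped over every registered VM before disconnecting. A rough sketch (the starting port 5900 and the sequential assignment are my own assumptions, not anything ESXi does for you):
>>> base_port = 5900
>>> for i, vmx_path in enumerate(s.get_registered_vms()):
...     vm = s.get_vm_by_path(vmx_path)
...     vm.set_extra_config({'remotedisplay.vnc.enabled': 'true',
...                          'remotedisplay.vnc.port': str(base_port + i)})
Note that such extra-config changes typically only take effect after each VM is powered off and on again.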
What you can do is SSH into the ESXi server and append those properties to the VMX file.
echo -e "RemoteDisplay.vnc.enabled = true\nRemoteDisplay.vnc.port = 5900\nRemoteDisplay.vnc.password = \"yourpassword\"" >> /vmfs/volumes/YOURDATASTORE/YOURVMNAME/YOURVMNAME.vmx
or in one command
sshpass -p PASSWORD ssh root@10.54.140.145 "echo -e \"RemoteDisplay.vnc.enabled = true\nRemoteDisplay.vnc.port = 5900\nRemoteDisplay.vnc.password = \"yourpassword\"\" >> /vmfs/volumes/YOURDATASTORE/YOURVMNAME/YOURVMNAME.vmx"
If that's not your vmx path, you can also dynamically get it using
vim-cmd vmsvc/getallvms | grep VMNAME | awk '{print $4}'
or all in one line
sshpass -p PASSWORD ssh root@10.54.140.145 "echo -e \"RemoteDisplay.vnc.enabled = true\nRemoteDisplay.vnc.port = 5900\nRemoteDisplay.vnc.password = \"yourpassword\"\" >> $(vim-cmd vmsvc/getallvms | grep VMNAME | head -1 | awk '{print $4}')"
I have read a lot of posts about this problem but I still cannot solve it on my side.
I have a server i used to connect like this:
$ ssh user@xxx.xx.xx.xxx -p yy
user = not root
xxx.xx.xx.xxx = the IPv4 address of my server
yy = a custom port for SSH
The connection works fine.
I am trying to copy a folder from my local machine (Ubuntu) to the server (Ubuntu 14.04) like this:
$ scp -r -p /home/user/my/folder/ ssh://user@xxx.xx.xx.xxx:yy/home/user/my/folder/on/server/
I get this error:
ssh: Could not resolve hostname ssh: Name or service not known
lost connection
I assume the connection itself works fine, so what could be happening? A problem with the permissions of the folder?
For information, my local machine has both an IPv4 and an IPv6 address. Could that be the cause?
Thank you in advance for any help.
jb
Check the manual page for scp. It describes the usage of scp with all the switches and options:
scp [...] [-P port] [[user@]host1:]file1 ... [[user@]host2:]file2
Your command should be:
$ scp -r -p -P yy /home/user/my/folder/ user@xxx.xx.xx.xxx:/home/user/my/folder/on/server/
Note that the port is given as -P yy, you don't write ssh:// in front of the user, and the host is separated from the remote path by a colon (:).
You don't need "ssh://".
Here scp believes ssh is the name of the server you want to copy to. That's what the message says: "Could not resolve hostname ssh".
Try:
$ scp -r -p -P yy /home/user/my/folder/ user@xxx.xx.xx.xxx:/home/user/my/folder/on/server/
I have dispynode running on a remote server. I'm trying to open an SSH tunnel from my computer (the client) and configure dispy's JobCluster to use this tunnel, but it's not working. Am I not configuring this right? Here's how I'm doing it:
(P.S. I don't have deep knowledge of distributed/parallel computing or networking; I'm a civil engineer, so please excuse me if I don't always use the right technical terms.)
SSH tunnel:
plink -v -ssh -L 61:localhost:21 user@myserver.net
This forwards connections to local port 61 to localhost:21 on the server where dispynode is running.
dispynode:
sudo dispynode.py -d --ext_ip_addr localhost -p 21 -i localhost
This makes dispynode listen on port 21 and advertise localhost as its address, which leads traffic back through the tunnel to the client.
with this dispy client JobCluster code:
cluster = dispy.JobCluster( runCasterDispyWorker,
nodes=[('localhost',61)], \
ip_addr='localhost', \
ext_ip_addr='localhost', \
port = 61, \
node_port = 21, \
recover_file='recover.rec', \
)
When I launch dispy.py I get the following error in the command prompt from which I opened the SSH tunnel:
Opening connection to localhost:21 for forwarding from 127.0.0.1:64027
Forwarded port closed
At least I guess this means that dispy is trying to reach the opened SSH tunnel, but I'm not sure what's happening server side. It seems that dispynode receives nothing.
Running a quick traffic capture with tcpdump on the server confirms it. For some unknown reason, the port changes to 64027.
I have also tried opening two SSH tunnels simultaneously:
One for client-to-server communications
plink -v -ssh -L 61:localhost:21 user@myserver.net
One for server-to-client communications
plink -v -ssh -R 20:localhost:60 user@myserver.net
but with no luck. I'm not even sure whether it is better to use remote forwarding or local forwarding.
I tried this solution that the developer of dispy himself suggested, but it didn't work for me:
http://sourceforge.net/p/dispy/discussion/1771151/thread/bcad6eaa/
Is the configuration I used above wrong? Should I use remote or local forwarding? Why does the port change automatically; could it be because my company's firewall is blocking the connection through the ports I'm trying to use? Has anyone managed to run dispy through an SSH tunnel before?
This worked for me; it should work for you:
SSH tunnel (I'm using PuTTY's plink.exe to create the tunnel):
plink -v -ssh -R 51347:localhost:51347 [username on server]@[server's Public IP or DomainName] -pw [USER PASSWORD on server] -N
dispynode (running on the server - linux):
sudo dispynode.py -d --ext_ip_addr [public IP or domain name of server]
JobCluster (dispy client):
import os
import dispy, logging

def Worker():
    os.system('echo hello')  # prints hello on the server running dispynode
    return 0

cluster = dispy.JobCluster(
    Worker,
    nodes=['IP public or domain name of server'],
    ext_ip_addr='localhost',
    recover_file='recoverdispy.rec',
)
job = cluster.submit()
print "waiting for job completion"
job()
print('status: %s\nstdout: %s\nstderr: %s\nexception: %s' % (job.status, job.stdout, job.stderr, job.exception))
Try this piece of code, and make sure the required ports are allowed through any firewall in between.
I'm currently running a bunch of:
sudo ssh -L PORT:IP:PORT root@IP
where IP is the address of a secured target machine and PORT is the port I'm forwarding.
This is because I use a lot of applications which I cannot access without this forwarding. After doing this, I can access them through localhost:PORT.
The main problem is that I now have four of these ports to forward.
My workaround is to open four shells, search each shell's history backwards for exactly which ports need to be forwarded, and then run one of these commands in each shell (entering the password each time).
If only I could do something like:
sudo ssh -L PORT1+PORT2+PORT3:IP:PORT1+PORT2+PORT3 root@IP
then that would already really help.
Is there a way to make it easier to do this?
The -L option can be specified multiple times within the same command, each time with different ports, i.e. ssh -L localPort0:ip:remotePort0 -L localPort1:ip:remotePort1 ...
Exactly what NaN answered: you specify multiple -L arguments. I do this all the time. Here is an example of multi-port forwarding:
ssh remote-host -L 8822:REMOTE_IP_1:22 -L 9922:REMOTE_IP_2:22
Note: -L 8822:REMOTE_IP_1:22 is the same as -L localhost:8822:REMOTE_IP_1:22; if you don't specify a bind address, localhost is used.
With this, you can now (from another terminal) do:
ssh localhost -p 8822
to connect to REMOTE_IP_1 on port 22
and similarly
ssh localhost -p 9922
to connect to REMOTE_IP_2 on port 22
Of course, there is nothing stopping you from wrapping this in a script or automating it if you have many different hosts/ports to forward, including to specific destinations.
People who are forwarding multiple ports through the same host can set up something like this in their ~/.ssh/config:
Host all-port-forwards
Hostname 10.122.0.3
User username
LocalForward PORT_1 IP:PORT_1
LocalForward PORT_2 IP:PORT_2
LocalForward PORT_3 IP:PORT_3
LocalForward PORT_4 IP:PORT_4
and it becomes a simple ssh all-port-forwards away.
You can use the following bash function (just add it to your ~/.bashrc):
function pfwd {
  for i in "${@:2}"
  do
    echo "Forwarding port $i"
    ssh -N -L "$i:localhost:$i" "$1" &
  done
}
Usage example:
pfwd hostname {6000..6009}
jbchichoko and yuval have given viable solutions, but jbchichoko's answer isn't as flexible as a function, and the tunnels opened by yuval's answer cannot be shut down with Ctrl+C because they run in the background. My solution below addresses both flaws:
Define a function in ~/.bashrc or ~/.zshrc:
# fsshmap multiple ports
function fsshmap() {
echo -n "-L 1$1:127.0.0.1:$1 " > $HOME/sh/sshports.txt
for ((i=($1+1);i<$2;i++))
do
echo -n "-L 1$i:127.0.0.1:$i " >> $HOME/sh/sshports.txt
done
line=$(head -n 1 $HOME/sh/sshports.txt)
cline="ssh "$3" "$line
echo $cline
eval $cline
}
An example of running the function:
fsshmap 6000 6010 hostname
Result of this example:
You can access 127.0.0.1:16000~16009 the same as hostname:6000~6009
In my company, both I and my team members need access to three ports of an unreachable "target" server, so I created a permanent tunnel (that is, a tunnel that can run in the background indefinitely; see the -f and -N parameters) from a reachable server to the target one. On the command line of the reachable server I executed:
ssh root@reachableIP -f -N -L *:8822:targetIP:22 -L *:9006:targetIP:9006 -L *:9100:targetIP:9100
I used the root user, but your own user will work. You will have to enter the password of the chosen user (even if you are already connected to the reachable server as that user).
Now port 8822 of the reachable machine corresponds to port 22 of the target one (for ssh/PuTTY/WinSCP), and ports 9006 and 9100 on the reachable machine correspond to the same ports on the target one (they host two web services in my case).
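One way to verify on the reachable machine that the forwarded ports are actually listening (my own addition; netstat options may vary by distribution):
netstat -tlnp | grep -E '8822|9006|9100'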
Another one-liner that I use, which works on Debian:
ssh user@192.168.1.10 $(for j in $(seq 20000 1 20100 ) ; do echo " -L$j:127.0.0.1:$j " ; done | tr -d "\n")
One of the benefits of logging into a server with port forwarding is that it facilitates the use of Jupyter Notebook. This link provides an excellent description of how to do it. Here I would like to summarize and expand on it for reference.
Situation 1. Login from a local machine named Host-A (e.g. your own laptop) to a remote work machine named Host-B.
ssh user@Host-B -L port_A:localhost:port_B
jupyter notebook --NotebookApp.token='' --no-browser --port=port_B
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-B but see it in Host-A.
Situation 2. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B and from there login to the remote work machine named Host-C. This is usually the case for most analytical servers within universities and can be achieved by chaining two ssh -L commands with -t.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C
jupyter notebook --NotebookApp.token='' --no-browser --port=port_C
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-C but see it in Host-A.
Situation 3. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B, from there login to the remote work machine named Host-C, and finally login to the remote work machine Host-D. This is not usually the case but might happen sometimes. It's an extension of Situation 2, and the same logic can be applied to more machines.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C -t ssh -L port_C:localhost:port_D user@Host-D
jupyter notebook --NotebookApp.token='' --no-browser --port=port_D
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-D but see it in Host-A.
Note that port_A, port_B, port_C, and port_D can be arbitrary port numbers, as long as they avoid the common (well-known) port numbers listed here. In Situation 1, port_A and port_B can be the same to simplify the procedure.
Here is a solution inspired by the one from Yuval Atzmon.
It has a few benefits over the initial solution:
First, it creates a single background process instead of one per port.
It defines an alias that allows you to kill your tunnels.
It binds only to 127.0.0.1, which is a little more secure.
You may use it as:
tnl your.remote.com 1234
tnl your.remote.com {1234,1235}
tnl your.remote.com {1234..1236}
And finally kill them all with tnlkill.
function tnl {
TUNNEL="ssh -N "
echo Port forwarding for ports:
for i in "${@:2}"
do
echo " - $i"
TUNNEL="$TUNNEL -L 127.0.0.1:$i:localhost:$i"
done
TUNNEL="$TUNNEL $1"
$TUNNEL &
PID=$!
alias tnlkill="kill $PID && unalias tnlkill"
}
An alternative approach is to tell ssh to work as a SOCKS proxy using the -D flag.
That way you would be able to connect to any remote network address/port accessible through the ssh server, as long as the client applications are able to go through a SOCKS proxy (or work with something like socksify).
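A minimal sketch of that approach (the port 1080 and the host name are placeholders of my choosing):
ssh -D 1080 -N user@remote-host
Then point the client application (or socksify/proxychains) at the SOCKS proxy 127.0.0.1:1080.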
If you want a simple solution that runs in the background and is easy to kill, use a control socket:
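# $SOCKET is any writable path used for the control socket (e.g. /tmp/tunnel.sock, an arbitrary choice of mine) and $HOST is the remote host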
# start
$ ssh -f -N -M -S $SOCKET -L localhost:9200:localhost:9200 $HOST
# stop
$ ssh -S $SOCKET -O exit $HOST
I've developed loco for help with ssh forwarding. It can be used to share ports 5000 and 7000 on remote locally at the same ports:
pip install loco
loco listen SSHINFO -r 5000 -r 7000
First, it can be done using parallel execution with xargs -P 0.
Create a file (here called port-forward) with the port bindings, e.g.
localhost:8080:localhost:8080
localhost:9090:localhost:8080
then run
xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE> < port-forward
or you can do it as a one-liner:
echo localhost:{8080,9090} | tr ' ' '\n' | sed 's/.*/&:&/' | xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE>
Pros: the ssh port forwardings are independent of each other == no single point of failure.
Cons: each ssh port forwarding is forked separately, which is somewhat inefficient.
Second, it can be done using the curly-brace expansion feature of bash:
echo "ssh -vNTC $(echo localhost:{10,20,30,40,50} | perl -lpe 's/[^ ]+/-L $&:$&/g') <REMOTE>"
# output
ssh -vNTC -L localhost:10:localhost:10 -L localhost:20:localhost:20 -L localhost:30:localhost:30 -L localhost:40:localhost:40 -L localhost:50:localhost:50 <REMOTE>
A real example:
echo "-vNTC $(echo localhost:{8080,9090} | perl -lpe 's/[^ ]+/-L $&:$&/g') gitlab" | xargs ssh
This forwards ports 8080 and 9090 to the gitlab server.
Pros: a single fork == efficient.
Cons: closing this one ssh process closes all the forwardings == single point of failure.
You can use this zsh function (it probably works with bash, too). Put it in ~/.zshrc:
ashL () {
local a=() i
for i in "$#[2,-1]"
do
a+=(-L "${i}:localhost:${i}")
done
autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NT "$1" "${a[@]}"
}
Examples:
ashL db@114.39.161.24 6480 7690 7477
ashL db#114.39.161.24 {6000..6050} # Forwards the whole range. This is simply shell syntax sugar.
I have a web server that I can reach over an SSH connection with ssh root@ip.
But I can't access it as a web server. I tested the simple Python tool http.server, and I also tried installing and starting httpd.
Output of:
# ifconfig eth0 | grep inet | awk '{ print $2 }'
addr:[ip address that I'm using for ssh]
addr:
OS: CentOS
What else must I do?
The problem was the firewall. I solved it using this guide: Configure iptables.
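For reference, a rough sketch of what that boils down to, assuming an iptables-based CentOS and a web server on port 80 (adjust the port to your case):
# open TCP port 80 in the running firewall
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# on newer CentOS releases that use firewalld instead:
# firewall-cmd --permanent --add-port=80/tcp && firewall-cmd --reload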
I have written a small bash script which needs an ssh tunnel to draw data from a remote server, so it prompts the user:
echo "Please open an ssh tunnel using 'ssh -L 6000:localhost:5432 example.com'"
I would like to check whether the user has opened this tunnel, and exit with an error message if no tunnel exists. Is there any way to query the ssh tunnel, i.e. check that local port 6000 is really tunneled to that server?
Netcat is your friend:
nc -z localhost 6000 || echo "no tunnel open"
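In the script from the question this could look something like the following (a sketch; the message simply reuses the prompt above):
nc -z localhost 6000 || { echo "Please open an ssh tunnel using 'ssh -L 6000:localhost:5432 example.com'" >&2; exit 1; }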
This is my test. Hope it is useful.
# $COMMAND is the command used to create the reverse ssh tunnel
COMMAND="ssh -p $SSH_PORT -q -N -R $REMOTE_HOST:$REMOTE_HTTP_PORT:localhost:80 $USER_NAME#$REMOTE_HOST"
# Is the tunnel up? Perform two tests:
# 1. Check for relevant process ($COMMAND)
pgrep -f -x "$COMMAND" > /dev/null 2>&1 || $COMMAND
# 2. Test tunnel by looking at "netstat" output on $REMOTE_HOST
ssh -p $SSH_PORT $USER_NAME@$REMOTE_HOST netstat -an | egrep "tcp.*:$REMOTE_HTTP_PORT.*LISTEN" \
> /dev/null 2>&1
if [ $? -ne 0 ] ; then
pkill -f -x "$COMMAND"
$COMMAND
fi
Autossh is the best option; checking for the process does not work in all cases (e.g. zombie processes, network-related problems).
example:
autossh -M 2323 -c arcfour -f -N -L 8088:localhost:80 host2
This is really more of a serverfault-type question, but you can use netstat.
something like:
# netstat -lpnt | grep 6000 | grep ssh
This will tell you if there's an ssh process listening on the specified port. It will also tell you the PID of the process.
If you really want to double-check that the ssh process was started with the right options, you can then look up the process by PID in something like
# ps aux | grep PID
Use autossh. It's the tool that's meant for monitoring the ssh connection.
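For the tunnel from the original question, a sketch might look like this (the -M 0 and keepalive options are my own choice, not the only way to run autossh):
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 6000:localhost:5432 example.com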
We can check using the ps command:
# ps -aux | grep ssh
This will show all running ssh processes, and we can find the tunnel listed among them.
These are more detailed steps to test or troubleshoot an SSH tunnel. You can use some of them in a script. I'm adding this answer because I had to troubleshoot the link between two applications after they stopped working. Just grepping for the ssh process wasn't enough, as it was still there. And I couldn't use nc -z because that option wasn't available on my incantation of netcat.
Let's start from the beginning. Assume there is a machine, which will be called local, with IP address 10.0.0.1 and another, called remote, at 10.0.3.12. I will prepend these hostnames to the commands below, so it's obvious where they're being executed.
The goal is to create a tunnel that will forward TCP traffic from the loopback address on the remote machine on port 123 to the local machine on port 456. This can be done with the following command, on the local machine:
local:~# ssh -N -R 123:127.0.0.1:456 10.0.3.12
To check that the process is running, we can do:
local:~# ps aux | grep ssh
If you see the command in the output, we can proceed. Otherwise, check that the SSH key is installed on the remote. Note that omitting the username before the remote IP makes ssh use the current username.
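If the key is not installed yet, one common way to do it is (a sketch, assuming a key pair already exists on local):
local:~# ssh-copy-id 10.0.3.12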
Next, we want to check that the tunnel is open on the remote:
remote:~# netstat | grep 10.0.0.1
We should get an output similar to this:
tcp 0 0 10.0.3.12:ssh 10.0.0.1:45988 ESTABLISHED
It would be nice to actually see some data going through from the remote to the local machine. This is where netcat comes in. On CentOS it can be installed with yum install nc.
First, open a listening port on the local machine:
local:~# nc -l 127.0.0.1:456
Then make a connection on the remote:
remote:~# nc 127.0.0.1 123
If you open a second terminal to the local machine, you can see the connection. Something like this:
local:~# netstat | grep 456
tcp 0 0 localhost.localdom:456 localhost.localdo:33826 ESTABLISHED
tcp 0 0 localhost.localdo:33826 localhost.localdom:456 ESTABLISHED
Better still, go ahead and type something on the remote:
remote:~# nc 127.0.0.1 123
Hallo?
anyone there?
You should see this being mirrored on the local terminal:
local:~# nc -l 127.0.0.1:456
Hallo?
anyone there?
The tunnel is working! But what if you have an application, called appname, which is supposed to be listening on port 456 on the local machine? Terminate nc on both sides then run your application. You can check that it's listening on the correct port with this:
local:~# netstat -tulpn | grep LISTEN | grep appname
tcp 0 0 127.0.0.1:456 0.0.0.0:* LISTEN 2964/appname
By the way, running the same command on the remote should show sshd listening on port 127.0.0.1:123.
#!/bin/bash
# Check do we have tunnel to example.com server
lsof -i tcp@localhost:6000 > /dev/null
# If exit code wasn't 0 then tunnel doesn't exist.
if [ $? -eq 1 ]
then
echo ' > You are missing the ssh tunnel. Creating one...'
ssh -L 6000:localhost:5432 example.com
fi
echo ' > DO YOUR STUFF < '
stunnel is a good tool to make semi-permanent connections between hosts.
http://www.stunnel.org/
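For illustration, a minimal client-side stunnel configuration might look like this (the service name, ports, and the assumption that a matching stunnel server runs on example.com:5432 are all mine):
; stunnel.conf on the client side
client = yes
[pgtunnel]
accept = 127.0.0.1:6000
connect = example.com:5432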
If you are running ssh in the background, use this:
sudo lsof -i -n | egrep '\<ssh\>'