I have tried the coturn configuration on my local system using my local IP address, and it worked. Now I'm trying to configure my public IP so I can avoid the ICE framework. Is that possible? I'm doing this using Cygwin; can I configure it on my system with my public IP address?
If configuring via the command line, use: -L private_IP -X public_IP
Or, in the config file, set
external-ip=public_ip/private_ip and use the private IP as the listening IP.
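For example, a minimal turnserver.conf sketch; the addresses below are placeholders for your own private and public IPs:
# 10.0.0.5 = private/listening IP, 203.0.113.7 = public IP (placeholders)
listening-ip=10.0.0.5
external-ip=203.0.113.7/10.0.0.5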
For an AWS system I use this command:
sudo turnserver -v -o -a --user=username:key -f -L private_ip -X public_ip -E private_ip --min-port=minport_number --max-port=max_port_number -r public_ip --no-tls --no-dtls
I have an ssh command as below:
ssh -o ProxyCommand="ssh ubuntu@ip_addr -W %h:%p" ubuntu@ip_addr2 -L port:ip_addr3:port
I want to create a config file for this command, but I don't know which config option corresponds to -L. Here is my config file so far:
Host cassandra-khatkesh
User ubuntu
Hostname ip_addr2
ProxyCommand ssh ubuntu@ip_addr -W %h:%p
Does anyone know how I can add -L to the config file?
-L corresponds to the LocalForward keyword.
Host cassandra-khatkesh
User ubuntu
Hostname ip_addr2
ProxyCommand ssh ubuntu@ip_addr -W %h:%p
LocalForward port ip_addr3:port
Note that the local and remote endpoints are specified separately, not as a single colon-delimited string.
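For example, with a hypothetical port 9042 filled in on both ends (substitute your own port and addresses):
Host cassandra-khatkesh
User ubuntu
Hostname ip_addr2
ProxyCommand ssh ubuntu@ip_addr -W %h:%p
LocalForward 9042 ip_addr3:9042
Then ssh cassandra-khatkesh opens the connection and sets up the forward, so localhost:9042 on your machine reaches ip_addr3:9042.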
I have an Ansible playbook which connects to a virtual machine via a non-standard SSH port (forwarded to localhost) and a different user (vagrant) than the host user.
The ssh port is specified in the ansible inventory:
[vms]
localhost:2222
The username is given on the command line to ansible-playbook:
ansible-playbook -i <inventory from above> <some playbook> -u vagrant
The communication with the VM works correctly; however, %p always expands to 22 and %r to the host username.
Consequently, I cannot flush the SSH connection (for the user's changed group membership to take effect) like this:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop {{inventory_hostname}}
  delegate_to: 127.0.0.1
Am I making a silly mistake somewhere? Alternatively, is there a different way to flush the SSH connection?
The percent escapes are not expanded by Ansible, but by ssh later on.
Sorry, I forgot to add the most important part.
Using
command: ssh -o ControlPath=[...] -O stop {{inventory_hostname}}
will use the default port, because you didn't specify one on the command line. You would also have to specify the port to "flush" the connection this way:
command: ssh -o ControlPath=[...] -O stop -p {{inventory_port}} {{inventory_hostname}}
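As a complete task it might look roughly like this (a sketch; depending on your Ansible version the port variable may be ansible_ssh_port or ansible_port rather than inventory_port):
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop -p {{ ansible_ssh_port | default(22) }} {{ inventory_hostname }}
  delegate_to: 127.0.0.1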
But I don't think it is needed. Ansible should clean up the connections when the playbook ends, and I don't see any other reason to do that.
I have read a lot of posts about this problem, but I still cannot solve it on my side.
I have a server I connect to like this:
$ ssh user@xxx.xx.xx.xxx -p yy
user = not root
xxx.xx.xx.xxx = IPv4 address of my server
yy = custom SSH port
The connection works well.
I am trying to copy a folder from my local machine (Ubuntu) to the server (Ubuntu 14.04) like this:
$ scp -r -p /home/user/my/folder/ ssh://user@xxx.xx.xx.xxx:yy/home/user/my/folder/on/server/
I get this error:
ssh: Could not resolve hostname ssh: Name or service not known
lost connection
I guess the connection itself works. So what could be happening? A problem with the permissions on the folder?
For information, my local machine has both an IPv4 and an IPv6 address. Could that be the cause?
Thank you in advance for any help.
jb
Check the manual page for scp. It describes the usage of scp with all the switches and options:
scp [...] [-P port] [[user#]host1:]file1 ... [[user#]host2:]file2
Your command should be:
$ scp -r -p -P yy /home/user/my/folder/ user@xxx.xx.xx.xxx:/home/user/my/folder/on/server/
Note that the port is given as -P yy, you don't write ssh:// in front of the user, and the host is separated from the remote path with a colon (:).
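The same pattern works in the other direction too, e.g. to pull the folder from the server back to the local machine (paths are placeholders):
$ scp -r -p -P yy user@xxx.xx.xx.xxx:/home/user/my/folder/on/server/ /home/user/my/folder/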
You don't need "ssh://".
Here scp believes ssh is the name of the server you want to copy to. That's what the message says: "Could not resolve hostname ssh".
Try:
$ scp -r -p -P yy /home/user/my/folder/ user@xxx.xx.xx.xxx:/home/user/my/folder/on/server/
I have some Docker containers that contain several OSes, and I would like to make these containers reachable via SSH directly from the Internet. I can only use one public IP address. Right now docker0 is in bridge mode with its default IP. How can I configure Docker so that each container is accessible separately from everywhere?
You do this by mapping each container's SSH port to a different port on the public IP address.
Like:
$ docker run -d -p 22000:22 --name sshcontainer1 some_image
$ docker run -d -p 22001:22 --name sshcontainer2 some_image
$ docker run -d -p 22002:22 --name sshcontainer3 some_image
...
Then you communicate this port [to your customer]. Done.
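For example, to reach the first container from outside (assuming the image actually runs sshd and accepts the user you log in as; public_ip stands for your server's public address):
$ ssh -p 22000 user@public_ip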
The Docker documentation has an example of setting up an SSH server:
https://docs.docker.com/examples/running_ssh_service/
I've set up a VM on Fedora 17 with KVM and have configured a bridged network for it. Both the host and the VM use manual IP configuration; the host's IP is 192.168.0.2, the VM's is 192.168.0.10.
From the VM I can connect to the host without any problems, but from the host I can't SSH to the VM, even though I can still ping the VM from the host. Trying to ssh just gives me "no route to host".
Oh, and I have iptables disabled, so I don't think the firewall is the problem.
Also ensure that the kernel is configured for IP forwarding:
$ sudo sysctl -a | grep net.ipv4.ip_forward
net.ipv4.ip_forward = 1
It should have a value of 1, not 0. If needed, enable with these commands:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
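To apply the setting immediately without editing the file (the change does not survive a reboot), you can also run:
sudo sysctl -w net.ipv4.ip_forward=1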
There are two ways:
* Use an SSH tunnel to create a channel to the host from the guest. From the guest, run the following command:
ssh -L 2000:localhost_ip:2000 username@hostip
Explore the ssh man page for the details.
* Harder to set up, but the proper approach is to attach the guest to the real network when running it; follow
http://www.cse.iitd.ernet.in/~prathmesh/random.html#Connecting_qemu_guest_to_real_network
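As a rough illustration of the bridged approach with plain QEMU (assuming a host bridge named br0 already exists and qemu-bridge-helper is allowed to use it; with libvirt the same thing is expressed in the domain XML instead):
qemu-system-x86_64 -m 2048 -hda guest.img \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0
The guest then gets an address on the same LAN as the host, so you can ssh to it directly.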