use ftp to transfer a file between Mininet hosts - sdn

I want to start an ftp server on one Mininet host and access that server from another host. Here is what I've tried:
I installed vsftpd on the Mininet VM; the server can be reached fine on the VM itself, but I cannot figure out how to run it on a specific host, say a host with IP = 10.0.0.10.
I tried what this thread suggested; the second answer seemed promising but sadly it did not work. After running the commands I get the following error on the destination host:
[connection refused]
To sum up: I would like to send a file between two Mininet hosts using ftp, but I fail to start an ftp server on any specific host.
I am using Ubuntu 20.04 as my Mininet VM.

Open a shell on the specific host with:
xterm <hostname>
then install your ftp server on that host and start it.
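As a rough sketch, assuming the default Mininet topology with hosts h1 and h2 and vsftpd already installed on the VM (host names and addresses are only illustrative):

# from the Mininet CLI, open a terminal on each host
mininet> xterm h1 h2

# in h1's xterm: start vsftpd directly so it binds inside h1's network namespace
/usr/sbin/vsftpd /etc/vsftpd.conf &

# in h2's xterm: connect to h1's address (10.0.0.1 in the default topology;
# substitute 10.0.0.10 or whatever your host uses) and fetch the file
ftp 10.0.0.1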

Related

Can't connect to port 22, Connection timed out

I just recently got into whatever you might call this stuff. I was just trying to send a java file over to the computer I ssh to. But when I went to do it, I just got told:
sh: connect to host port 22: Connection timed out
lost connection
If possible I would like it explained very simply because of how new I am to this kind of stuff.
SSH to remote host (VM Ubuntu) from VS Code terminal
Install VS Code with Remote Development extension pack.
Install Virtual machine (Virtual box) and Ubuntu running on it.
Check Ubuntu > Network Settings > IPv4 address (10.0.2.15 is the default for the VM).
Go to your VirtualBox Settings > Network and double-check the NAT adapter.
In VirtualBox Settings > Network > Advanced, open Port Forwarding.
Add the rule as shown below, click OK, and from here on use 127.0.1.1 for ssh.
(screenshot of the VirtualBox port-forwarding rule)
View the status and disable firewall settings in Ubuntu VM (ufw command).
In VS Code, open View > Command Palette > Add New SSH Host.
Add ssh username@127.0.1.1 and press Enter.
Or go to a terminal window (e.g. PowerShell) and type ssh username@127.0.1.1; it will ask whether you want to add the host to the known-hosts list permanently (yes/no) and then for your Ubuntu password to confirm.
Now connect to the host using username@127.0.1.1, select the OS (Ubuntu), then type the Ubuntu password.
That's it, you are logged in to your virtual machine and can access its files from your local machine.
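For example, once the forwarding rule is in place, the connection from the Windows side looks roughly like this (username is a placeholder for your Ubuntu account):

# from PowerShell on the Windows host; 127.0.1.1 is the address used in the steps above
ssh username@127.0.1.1
# answer "yes" at the host-key prompt, then enter your Ubuntu password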

Docker for Windows with existing hyper-v virtual machine

I have the following setup:
A Windows 10 Pro Laptop ("Win10Laptop") that has a Windows 10 Pro VM ("Win10VM") running on Hyper-V. I have created an nginx container by running the following command on the host machine:
docker run -d -p 80:80 --name webserver nginx
While the container is running I can access http://localhost from Win10Laptop and this works fine. My question is: what do I need to configure to access nginx from Win10VM? Win10VM has only one network adapter, which is configured to use the "External" vSwitch connected to my Wifi interface.
Let me know if you need any more details. I've tried all sorts and can't figure it out!
Thanks,
Michael
From Win10VM, localhost refers to the VM itself, so you need to connect to the IP the laptop has on the Wifi network that the External switch is bridged to. Run ipconfig on Win10Laptop to see which address that is, then open http://<laptop-ip> from the VM.
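As a minimal sketch (the 192.168.1.20 address is only illustrative; use whatever ipconfig reports for the laptop's Wifi adapter):

# on Win10Laptop: note the Wifi adapter's IPv4 address
ipconfig

# on Win10VM: the published port 80 is reachable via the laptop's address
curl http://192.168.1.20
# or simply browse to http://192.168.1.20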

Docker to run X applications while connected through SSH

I have used these instructions for Running Gui Apps with Docker to create images that allow me to launch GUI based applications.
It all works flawlessly when running Docker on the same machine, but it stops working when running it on a remote host.
Locally, I can run
docker run --rm -ti -e DISPLAY -e <X tmp> <image_name> xclock
And I can get xclock running on my host machine.
When connecting remotely to a host with XForwarding, I am able to run X applications that show up on my local X Server, as anyone would expect.
However, if on the remote host I try to run the above docker command, it fails to connect to the DISPLAY (usually localhost:10.0).
I think the problem is that the XForwarding is setup on the localhost interface of the remote host.
So the docker host has no way to connect to DISPLAY=localhost:10.0 because that localhost means the remote host, unreachable from docker itself.
Can anyone suggest an elegant way to solve this?
Regards
Alessandro
EDIT1:
One possible way I guess is to use socat to forward the remote /tmp/.X11-unix to the local machine. This way I would not need to use port forwarding.
It also looks like openssh 6.7 will natively support unix socket forwarding.
When running X applications through SSH (ssh -X), you are not using the /tmp/.X11-unix socket to communicate with the X server. You are rather using a tunnel through SSH reached via "localhost:10.0".
In order to get this to work, you need to make sure the SSH server supports X connections to the external address by setting
X11UseLocalhost no
in /etc/ssh/sshd_config.
Then $DISPLAY inside the container should be set to the IP address of the Docker host computer on the docker interface - typically 172.17.0.1. So $DISPLAY will then be 172.17.0.1:10
You need to add the X authentication token inside the docker container with "xauth add" (see here)
If there is any firewall on the Docker host computer, you will have to open up the TCP ports related to this tunnel. Typically you will have to run something like
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
if you use ufw.
Then it should work. I hope it helps. See also my other answer here https://stackoverflow.com/a/48235281/5744809 for more details.
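Putting the pieces together, a rough sketch of the whole sequence looks like this (it assumes the forwarded display is :10 as above, that the docker0 address is 172.17.0.1, and that the image contains xauth and xclock):

# on the remote Docker host, after setting "X11UseLocalhost no" in
# /etc/ssh/sshd_config, restarting sshd and reconnecting with "ssh -X":

# grab the magic cookie for the forwarded display
COOKIE=$(xauth list | grep ':10 ' | awk '{print $3}')

# run the container with DISPLAY pointing at the docker0 address
docker run --rm -ti -e DISPLAY=172.17.0.1:10 <image_name> \
  sh -c "xauth add 172.17.0.1:10 . $COOKIE && xclock"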

could not resolve hostname with scp

I am accessing an Ubuntu server over ssh with PuTTY on my Windows machine and trying to download a single file to my local Windows machine.
My Windows username is Mark and my hostname per cmd is Marks. I am trying the following command on the remote server:
scp backup.sql mark@marks:desktop
and I get "could not resolve hostname". I have tried to put in what I think my IP address is and the connection times out.
The syntax is this, relative to where you're issuing the command:
scp user@host_from:location/file user@host_to:location/file
And of course if you're local you can omit the user@host prefixes:
scp local_file me@host_to:~/local_file
The direction is always from > to relative to where you issue the command.
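As a concrete illustration of the from > to rule (the host names, user names and paths below are only placeholders):

# push a local file up to a remote machine (run on the machine that has the file)
scp backup.sql user@remote-server:/home/user/

# pull a remote file down to the local machine (run on the receiving machine;
# recent Windows 10 ships an OpenSSH scp client usable from PowerShell)
scp user@remote-server:/home/user/backup.sql C:\Users\Mark\Desktop\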
binarysubstrate is right about the syntax. The problem is, if the OP puts the name (or address) of his windows client in the 'to' part of the scp command, it probably won't work for a number of reasons:
his windows machine may not have a resolvable FQDN,
his windows machine may be behind a NAT firewall that is not setup to port-forward SSH requests,
he probably does not have an SSH daemon running on his windows machine.
To simply copy a file from the remote server down to a windows client, I would recommend WinSCP.
From the server, can you ping your machine name? Try replacing the machine name with the IP address, or add your machine name to the hosts configuration file on the server.
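If you go the hosts-file route, the entry on the server would look something like this (the address of the Windows machine is illustrative, and scp to it still requires an SSH server running on the Windows side, as noted above):

# on the ubuntu server
echo "192.168.1.50  marks" | sudo tee -a /etc/hosts
ping marks   # the name should now resolve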

MPICH2 on multiple machines (HYDU_sock_connect error)

I am trying to execute an MPI program on 2 different PCs. However, when I ran this command on pc1:
mpirun -hosts user@host -n 4 bin/Demo_01.exe
I'm getting this error:
[proxy:0:0@pc2] HYDU_sock_connect (./utils/sock/sock.c:203): unable to connect from "pc2" to "pc1" (Connection refused)
[proxy:0:0@pc2] main (./pm/pmiserv/pmip.c:209): unable to connect to server ubuntu at port 57395 (check for firewalls!)
Although I configured SSH connections without a password and disabled the firewalls on both machines, the error is still there. My operating system is Ubuntu 12.04 and the MPI implementation is MPICH2.
Can anyone help?
The error is caused by the client not connecting back to the server because it does not know the server's IP, i.e.
..main (./pm/pmiserv/pmip.c:209): unable to connect to server ubuntu at...etc
The fix is to add each hostname and its IP address to /etc/hosts, e.g.:
172.17.0.2 master
172.17.0.3 node1
172.17.0.4 node2
This should allow bi-directional communication between the master and the node clients.
I had the same error, but the accepted answer did not help me.
For me in the hosts file I had:
localhost:8
CPUX:2
I should have had:
CPUZ:8
CPUX:2
i.e. the name of the node instead of localhost. Maybe this will help someone.
Fixed. After I followed these steps, the error disappeared:
Create administrator user accounts in both machines with the same username and password.
Define hostnames by editing the file: /etc/hosts
Make a clean install of ssh in both machines.
Configure ssh for connecting without a password. To do this follow these links:
http://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id/ and http://dustymabe.com/2012/08/18/exchanging-ssh-keys-using-ssh-copy-id/
Locate the executable MPI program into the same paths in both machines.
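For step 4, the key exchange typically boils down to something like this (the user and host names are just the ones from the question):

# on pc1, logged in as the shared user
ssh-keygen -t rsa          # accept the defaults
ssh-copy-id user@pc2       # copies the public key over to pc2
ssh user@pc2               # should now log in without a password prompt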
montekristo_07's answer is mostly correct but not minimal; steps #2 and #3 are not strictly necessary.
You do not need to edit all your hosts' /etc/hosts files, and, if your LAN uses DHCP and you have any local DNS service running, you should not edit all your hosts' /etc/hosts files.
Ensure that:
only externally-resolvable hostnames are referenced in your mpiexec command line (i.e. not "localhost"), and
the /etc/hosts file on the master (the machine on which you run mpiexec) does not have a line associating the public name of the master with the loopback address (127.0.0.1)
A simple test is to use literal IP addresses in your mpiexec command line. If this fixes your problem, then it's a hostname resolution problem...somewhere.
What is essential to remember is that what is passed on your mpiexec command line, in particular host names, is going to be sent to and resolved on remote hosts.
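For instance, a quick way to check both points (the addresses and paths below are illustrative):

# on pc2: verify that the master's public name resolves to a non-loopback address
getent hosts pc1

# on pc1: run with literal IPs to take name resolution out of the picture
mpiexec -hosts 192.168.1.10,192.168.1.11 -n 4 ./bin/Demo_01.exe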