I am trying to download a configuration file (which will automate OS installation) from a TFTP server to a libvirt virtual machine guest. I can download the file from the host without a problem, but it cannot be downloaded in the guest OS. From the guest OS I can ping the server, but the curl -O tftp://serverip/file command hangs. I can see that the server is reached, but I believe the TFTP-related traffic is somehow not being forwarded completely.
I found an old post, tftp-for-libvirt-hosts-behind-nat, and changed the NIC model to e1000, but the behaviour has not changed.
Another post, iptables rules to forward tftp via NAT, is completely foreign to me.
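From what I can tell, that post boils down to loading the TFTP connection-tracking helpers on the virtualization host so the NAT can follow the data transfer back in; a sketch of what I think it suggests (not tested on my setup, so this is just my reading of it):
# on the libvirt host: load the TFTP conntrack/NAT helper modules
sudo modprobe nf_conntrack_tftp nf_nat_tftp
# on newer kernels the helper may also need to be enabled explicitly
sudo sysctl -w net.netfilter.nf_conntrack_helper=1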
Any help much appreciated.
I have a Windows PC.
I have installed Ubuntu Server in VMware and switched to bridged networking.
Now I have installed Webmin and started it:
sudo service webmin start
with ssl=1
I have also done this:
sudo iptables -A INPUT -p tcp -d 0/0 -s 0/0 --dport 10000 -j ACCEPT
I can access Webmin from my computer and on my LAN,
also via a browser on any device on my wifi: https://192.168.187.129:10000/
But I cannot access this from outside my LAN.
I can connect with SSH on my LAN only.
I have also done sudo ufw allow 10000
There is no answer on this question:
https://superuser.com/questions/1122496/cant-acces-webmin-outside-the-virtual-machine-running-it-virtualbox-ubuntu-s
Enable port forwarding on your router. 192.168.x.x addresses are reserved for internal networks and cannot be routed across the Internet. Your router will have its own external IP address, and you will need to enable port forwarding so that when you hit externalIP:10000 it gets forwarded to 192.168.187.129:10000.
Of course, this will mean that Webmin is exposed to anyone on the Internet who wants to try to log in, so make sure you set strong passwords. You may also want to consider locking it down so that only a subset of external IPs can connect.
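A rough sketch of that lock-down with iptables, assuming you replace the earlier blanket ACCEPT rule and substitute a real trusted address for the placeholder:
sudo iptables -D INPUT -p tcp -d 0/0 -s 0/0 --dport 10000 -j ACCEPT   # remove the open rule
sudo iptables -A INPUT -p tcp -s 203.0.113.5 --dport 10000 -j ACCEPT  # trusted address only
sudo iptables -A INPUT -p tcp --dport 10000 -j DROP                   # everyone else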
I'm trying to use the private key from the OpenPGP card in my Debian laptop on an RPi. I followed the various hints found on Google, in particular:
set extra-socket in ~/.gnupg/gpg-agent.conf
removed it again after finding that this extra socket is already created in /run/user/<uid>/gnupg
forwarded this socket using ~/.ssh/config:
Host homegear
HostName homegear
RemoteForward ~/.gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra
changed the order of the two sockets in the RemoteForward line, since I'm always confused about which one should come first
added the following to /etc/ssh/sshd_config on the RPi:
StreamLocalBindUnlink yes
reloaded gpg-agent on the laptop
opened a new SSH connection to the RPi
But I always get
Warning: remote port forwarding failed for listen path ~/.gnupg/S.gpg-agent
when connecting to the RPi.
OpenSSH on both the laptop and the RPi is 7.4 (Debian Stretch); gpg is 2.1.18.
Forwarding the ssh-agent connection so it can be used for the SSH private key (for connecting to GitLab from the RPi) works perfectly; forwarding the gpg private key (for signing commits) doesn't. I'm a bit helpless at the moment. Is there anything obviously wrong? Or is there still a problem with forwarding Unix domain sockets, so that I need to use the socat workaround?
Thank you!
I've run into exactly the same issue, except between two Macs running 10.14.2 and GPG 2.2.11, and the only way I was able to get it to work was to specify the absolute path to the sockets on both ends. :( Using a relative path for either the remote or the local socket failed in various ways, which makes it a bit of a pain if you're connecting as different usernames on various machines.
I was able to work around that by specifying a number of different Match exec blocks in my ~/.ssh/config:
# Source machine is a personal Mac, connecting to another personal Mac on my local network
Match exec "hostname | grep -F .core" Host *.core
RemoteForward /Users/virtualwolf/.gnupg/S.gpg-agent /Users/virtualwolf/.gnupg/S.gpg-agent.extra
# Source machine is a personal Mac, connecting to my Linux box
Match exec "hostname | grep -F .core" Host <name of the host block for my Linux box>
RemoteForward /home/virtualwolf/.gnupg/S.gpg-agent /Users/virtualwolf/.gnupg/S.gpg-agent.extra
# Source machine is my work Mac, connecting to my Linux box
Match exec "hostname | grep -F <work machine name>" Host <name of the host block for my Linux box>
RemoteForward /home/virtualwolf/.gnupg/S.gpg-agent /Users/<work username>/.gnupg/S.gpg-agent.extra
(SSH bits are taken from this answer).
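One thing that may help when working out which absolute paths to put in those RemoteForward lines: on GnuPG 2.1+ you can ask gpgconf where the sockets actually live on each machine, for example:
gpgconf --list-dirs agent-socket        # path to forward *to* on the remote machine
gpgconf --list-dirs agent-extra-socket  # restricted socket to forward *from* locally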
I have a GCE instance running Debian with 1 vCPU & 1.7 GB of RAM. I then followed the tutorial below and installed Webmin on it.
https://www.howtoforge.com/tutorial/how-to-install-webmin-on-ubuntu-15-04/
The installation went successfully. Then I created a firewall exception using UFW and allowed port 10000:
sudo ufw allow 10000/tcp
But I was not able to access Webmin through the browser at
https://my-gce-instance-ip-address:10000
Then I created a firewall exception using the Google Cloud Console. I tried the URL again; it didn't work.
Then I thought this might be because Webmin is in HTTPS mode, so I opened /etc/webmin/miniserv.conf and changed it to ssl=0. After that I restarted Webmin:
/etc/init.d/webmin restart
Then I tried the URL with HTTP; I still can't access it.
I tried the command below and checked the output. According to it, Webmin is running correctly and listening on port 10000:
netstat -tulpn | grep :10000
I can't figure out what I am doing wrong. I have now spent several days on this without any solution in sight. I hope someone can kindly help me.
Try this ... it's working for me:
iptables -I INPUT 1 -p tcp --dport 10000 -j ACCEPT
service iptables save
/etc/init.d/iptables restart
Open both links in a browser:
https://your-IP:10000
and
http://your-IP:10000
You need to allow port 10000 in iptables:
sudo iptables -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
This works for me.
I'm using Ubuntu 16.04.
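Note that on Ubuntu a rule added this way is lost on reboot unless you save it; one common way (an assumption, depending on your setup) is the iptables-persistent package:
sudo apt-get install iptables-persistent
sudo netfilter-persistent save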
You don't need to do any firewall configuration in the instance itself. All firewall configuration is done in the Google Cloud console.
The steps I typically follow, as you seem to have figured out in your comment, are:
Create the firewall rule, in it opening the particular port you need (10000 in the case of Webmin) for ingress TCP traffic, accepting connections from some IP range (e.g. 0.0.0.0/0), and specifying target tags to be later assigned to instances to which that rule shall apply.
Add one of those tags to the "network tags" section of some particular instance.
This alone should work, opening the port for your instance in the firewall.
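As a sketch, the same thing with the gcloud CLI would look roughly like this (the rule name, tag, instance name and zone are placeholders):
gcloud compute firewall-rules create allow-webmin --allow tcp:10000 --source-ranges 0.0.0.0/0 --target-tags webmin
gcloud compute instances add-tags my-instance --tags webmin --zone us-central1-a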
I was about to create another question here on SO when yours was suggested as a possible duplicate. I had followed the steps above on my Webmin machine, and yet the machine refused to connect on port 10000. As I kept writing the question, I figured out my particular problem: in the firewall rule, in the source IP range filter, I had set the single meta-address 0.0.0.0 instead of the range 0.0.0.0/0. So, to anyone who has followed the steps above and still can't connect to their Webmin installation, do check that your source range filter is set correctly.
I have installed NetBSD 4.0.1 x68k on XM6i (http://www.ceres.dti.ne.jp/tsutsui/netbsd/x68k/NetBSD-x68k-on-XM6i.html) as a virtual machine emulating a 68030 platform. I have gotten everything to work except networking.
According to the documentation, you need to install a TAP-Win32 network adapter from OpenVPN installer, which I have. I have set the ipv4 settings of this adapter to IP address: 192.168.2.1 and Netmask: 255.255.255.0
In NetBSD, I have created a /etc/ifconfig.ne0 file to configure the ne0 network interface, which I assume represents the TAP-Win32 adapter. This file sets IP address to 192.168.2.17 and Netmask to 255.255.255.0
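For reference, that file just contains ifconfig arguments; mine is essentially the single line:
inet 192.168.2.17 netmask 255.255.255.0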
When I use "ping 192.168.2.1" on NetBSD I am unable to ping the host; the error message is "host is down".
Does anyone know what's going wrong? If anyone could give me any advice I would be most grateful.
Update: The above problem has been solved... but not quite.
After tinkering with settings on the host, I can now ping the guest, but only if I run tcpdump -i ne0 on the guest; after that I can also ping the host from the guest. I have tried restarting and trying without tcpdump, but the changes didn't seem to stick, so I have to run tcpdump in order to set up the host-only connection.
Is there any way I can do this without tcpdump and make the fix stick?
Edit: Here is the link to the new question with a more detailed explanation of the problem: Host Only connection NetBSD to Windows
It turns out that to get networking running on an emulated 68030 machine on the latest version of XM6i, you need to run tcpdump on boot. There is no way around it.
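If you want that to happen automatically, one option (just a sketch, assuming a standard /etc/rc.local on the guest) is to start tcpdump in the background at boot, since the effect appears to come from tcpdump putting ne0 into promiscuous mode:
# appended to /etc/rc.local on the NetBSD guest
/usr/sbin/tcpdump -i ne0 -n > /dev/null 2>&1 &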
I am trying to execute an MPI program across 2 different PCs. However, when I ran this command on pc1:
mpirun -hosts user#host -n 4 bin/Demo_01.exe
I'm getting this error:
[proxy:0:0@pc2] HYDU_sock_connect (./utils/sock/sock.c:203): unable to connect from "pc2" to "pc1" (Connection refused)
[proxy:0:0@pc2] main (./pm/pmiserv/pmip.c:209): unable to connect to server ubuntu at port 57395 (check for firewalls!)
Although I configured passwordless SSH connections and disabled the firewall on each machine, the error is still there. My operating system is Ubuntu 12.04 and the MPI implementation is MPICH2.
Can anyone help?
The error is caused by the client not being able to connect back to the server, because it doesn't know the server's IP, i.e.
..main (./pm/pmiserv/pmip.c:209): unable to connect to server ubuntu at...etc
The fix is to add each hostname and its IP to /etc/hosts, e.g.:
172.17.0.2 master
172.17.0.3 node1
172.17.0.4 node2
This should allow bi-directional communication between the master and the node clients.
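You can verify the entries work in both directions with something like (hostnames taken from the example above):
ping -c 1 node1   # from the master
ping -c 1 master  # from each node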
I had the same error, but the accepted answer did not help me.
For me in the hosts file I had:
localhost:8
CPUX:2
I should have had:
CPUZ:8
CPUX:2
I.e. the name of the node instead of localhost. Maybe this will help someone.
Fixed. After I followed these steps, the error disappeared:
Create administrator user accounts in both machines with the same username and password.
Define hostnames by editing the file: /etc/hosts
Make a clean install of ssh in both machines.
Configure ssh for connecting without a password (see the sketch after this list). To do this, follow these links:
http://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id/ and http://dustymabe.com/2012/08/18/exchanging-ssh-keys-using-ssh-copy-id/
Place the executable MPI program at the same path on both machines.
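For step 4, the gist of those links is roughly the following (usernames and hostnames are placeholders):
ssh-keygen -t rsa          # accept the defaults
ssh-copy-id user@pc2       # copy the public key to the other machine
ssh user@pc2 'echo ok'     # should log in without a password prompt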
montekristo_07's answer is mostly correct but not minimal; steps #2 and #3 are not strictly necessary.
You do not need to edit every host's /etc/hosts file, and if your LAN uses DHCP and you have any local DNS service running, you should not.
Ensure that:
only externally-resolvable hostnames are referenced in your mpiexec command line (i.e. not "localhost"), and
the /etc/hosts file on the master (the machine on which you run mpiexec) does not have a line associating the public name of the master with the loopback address (127.0.0.1)
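As a concrete example of the second point (this is a guess about your distribution's defaults): Debian/Ubuntu installs often put a line like the following in /etc/hosts, and if the name on it is the one you pass to mpiexec, the remote proxies will be told to connect back to the loopback address:
127.0.1.1   pc1    # problematic if pc1 is the name passed to mpiexec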
A simple test is to use literal IP addresses in your mpiexec command line. If this fixes your problem, then it's a hostname resolution problem...somewhere.
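For example (the addresses are made up; substitute your machines' real LAN IPs):
mpiexec -n 4 -hosts 192.168.1.11,192.168.1.12 ./bin/Demo_01.exe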
What is essential to remember is that whatever is passed on your mpiexec command line, in particular the host names, is going to be sent to and resolved on the remote hosts.