I'm not very experienced with manual network configuration under systemd on Arch Linux.
Could you please help me configure netctl so that I end up with
[root@test etc]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/void
inet 127.0.0.1/32 scope host venet0
inet 192.168.0.3/32 brd 192.168.0.3 scope global venet0:0
for a correct configuration of my OpenVZ container?
Today I have
[root@02-Lab ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: venet0: <BROADCAST,POINTOPOINT,NOARP> mtu 1500 qdisc noop state DOWN
link/void
Thanks a lot in advance for your help!
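A minimal netctl static profile aiming at that end state might look like the sketch below. The Connection=ethernet choice is an assumption on my part, and in many setups OpenVZ configures venet0 from the host side (the 127.0.0.1/32 entry and the venet0:0 label usually come from there), so treat this only as a starting point:

```sh
# /etc/netctl/venet0 — sketch only, not a verified OpenVZ configuration
Description='Static profile for venet0'
Interface=venet0
Connection=ethernet
IP=static
Address=('192.168.0.3/32')
```

If it works for you, enable it at boot with netctl enable venet0.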
I have two interfaces on my host:
16: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 group default qlen 1
link/ether b0:7b:25:04:57:0c
inet 172.20.179.101/16 brd 172.20.255.255 scope global dynamic
valid_lft forever preferred_lft forever
inet 192.168.0.100/24 brd 192.168.0.255 scope global dynamic
valid_lft forever preferred_lft forever
inet 192.168.168.10/24 brd 192.168.168.255 scope global dynamic
valid_lft forever preferred_lft forever
inet 192.168.169.10/24 brd 192.168.169.255 scope global dynamic
valid_lft forever preferred_lft forever
15: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 group default qlen 1
link/ether 00:13:3b:5a:a6:98
inet 192.168.0.61/24 brd 192.168.0.255 scope global dynamic
valid_lft 7196sec preferred_lft 7196sec
inet6 fe80::ac45:31f6:ce44:ca9d/64 scope link dynamic
valid_lft forever preferred_lft forever
Both interfaces are plugged into the same switch. The host runs Windows 11.
A Raspberry Pi running Raspbian 10 has its eth0 also attached to the same switch with
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether b8:27:eb:0e:c6:9a brd ff:ff:ff:ff:ff:ff
inet 172.20.179.102/16 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.0.211/24 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.168.192/24 brd 192.168.168.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::ba27:ebff:fe0e:c69a/64 scope link
valid_lft forever preferred_lft forever
If I listen for UDP packets with nc -u -l 11235 on the host and send packets from the Raspberry Pi via echo "test" > /dev/udp/<address>/<port>, I only see the packets on the host when they are sent to 192.168.0.61 (on eth1), not when they are sent to 172.20.179.101 (on eth0), even though Wireshark captures them on both. (The other addresses on eth0 don't work either.)
If I set a 172.20.x.x address on eth1 I can receive the packets.
What's going on here?
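For reference, the nc/echo pair above can be reproduced with a short, self-contained Python sketch. It uses the loopback address, so it only demonstrates the send/listen mechanics, not the multi-interface asymmetry in question:

```python
import socket

# Listener: rough equivalent of `nc -u -l 11235` (binds to all interfaces)
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("", 11235))
listener.settimeout(2)

# Sender: rough equivalent of `echo "test" > /dev/udp/<address>/11235`
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"test\n", ("127.0.0.1", 11235))

# On loopback the datagram always arrives; across two NICs this is where
# the asymmetry described above would show up.
data, addr = listener.recvfrom(1024)
print(data.decode(), end="")

sender.close()
listener.close()
```

Pointing the sendto() destination at each of the host's addresses in turn would let you test all four cases from one script.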
I have a load balancer machine that currently serves requests to the configured backend servers in round-robin fashion.
Now I want to configure a failover load balancer that acts as a backup whenever the primary goes down. As a first step I created a floating IP address for my primary load balancer, but I see that I cannot access my web service using the load balancer's floating IP address.
This site can't be reached
144.126.254.191 refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Why am I unable to access the web service through the floating IP address when it is accessible through the load balancer's own IP address?
I was using the DigitalOcean platform to create my droplets.
I then assigned a floating IP to the droplet from this page:
https://cloud.digitalocean.com/networking/floating_ips?i=0eb956
Now I need to get the private IP of my droplet using the command ip a
root@ubuntu-s-1vcpu-1gb-blr1-01:~# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:a0:A:B:C:D brd ff:ff:ff:ff:ff:ff
inet PUBLICIP/20 brd E.F.G.H scope global eth0
valid_lft forever preferred_lft forever
inet *PRIVATEIP(X.X.X.X)*/16 brd X.X.I.J scope global eth0
valid_lft forever preferred_lft forever
inet6 2400:6180:ZZ:ZZ::ZZ:ZZZZ/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::50a0:9fff:fe54:add2/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 9a:4b:a5:ZZ:ZZ:ZZ brd ff:ff:ff:ff:ff:ff
inet K.L.M.N/20 brd O.P.Q.R scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::984b:SSSS:TTTT:UUUU/64 scope link
valid_lft forever preferred_lft forever
I got the floating IP, say FLOATINGIPADDRESS.
The floating IP works via the anchor IP present on the eth0 interface. We can bind to that private IP, since any traffic sent to the floating IP is delivered to it, i.e. the inet *X.X.X.X*/16 address above.
Now I need HAProxy to bind to this private IP in my HAProxy cfg file.
sudo nano /etc/haproxy/haproxy.cfg
#HAProxy for web servers
frontend web-frontend
bind PRIVATEIP(X.X.X.X):80
bind LOADBALANCERIP:80
mode http
default_backend web-backend
backend web-backend
http-request set-header X-Forwarded-Proto https if { ssl_fc } # For Proto
http-request add-header X-Real-Ip %[src] # Custom header with src IP
option forwardfor # X-forwarded-for
balance roundrobin
server web-server1 IP1:80 check
server web-server2 IP2:80 check
server web-server3 IP3:80 check
server web-server4 IP4:80 check
listen stats
bind PRIVATEIP(X.X.X.X):8080
bind LOADBALANCERIP:8080
mode http
option forwardfor
option httpclose
stats enable
stats show-legends
stats refresh 5s
stats uri /stats
stats realm Haproxy\ Statistics
stats auth root:password #Login User and Password for the monitoring
stats admin if TRUE
default_backend web-backend
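One thing worth checking while preparing the failover pair (an assumption on my side, not something visible in the config above): if HAProxy on the backup machine must bind the shared address before that address has actually moved to it, Linux refuses the bind unless non-local binds are allowed. A minimal sysctl sketch:

```ini
# /etc/sysctl.conf — allow binding to an IP not currently assigned to this host
net.ipv4.ip_nonlocal_bind = 1
```

Apply it with sudo sysctl -p, then restart HAProxy.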
I am new to Proxmox. I have created a VM from the Ubuntu 18.04 template on top of Proxmox. The problem is that it does not connect to the internet, even though the Proxmox server has an active LAN connection.
$ ping 8.8.8.8
connect: Network is unreachable
-----------------------------------------------
$ cat /etc/network/interfaces
# ifupdown has been replaced by netplan(5) on this system. See
# /etc/netplan for current configuration.
# To re-enable ifupdown on this system, you can run:
# sudo apt install ifupdown
-----------------------------------------------------------
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether f2:15:10:e9:c3:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::f015:10ff:fee9:c383/64 scope link
valid_lft forever preferred_lft forever
---------------------------------------------------------
$ cat /etc/resolv.conf
# --- BEGIN PVE ---
search infra.vitwit.com
nameserver 8.8.8.8
# --- END PVE ---
Your Ubuntu virtual machine's network interface eth0 has no IP address set, so of course you cannot connect to the internet.
Make sure your DHCP server can hand out addresses to the VM, or configure a static address.
Also show the output of these commands: ip link, ip a, ip route
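Since the comment in /etc/network/interfaces says networking is managed by netplan, the missing address on eth0 can be configured there. A minimal sketch assuming DHCP is available on the bridge (the file name is my choice):

```yaml
# /etc/netplan/01-eth0.yaml (hypothetical file name) — request an address via DHCP
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
```

Then run sudo netplan apply and recheck ip a for an inet line on eth0.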
I am running the Phoenix virtual machine from exploit.education inside QEMU on Kali Linux. It comes with the newest version of OpenSSH pre-installed; however, I get an error whenever I try to connect to the machine with SSH.
I used the command ip a s in my Kali machine. It displayed the following results:
$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:9a:60:f4 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.12/24 brd 192.168.10.255 scope global dynamic noprefixroute eth0
valid_lft 85248sec preferred_lft 85248sec
inet6 fe80::a00:27ff:fe9a:60f4/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: vmnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether 00:50:56:c0:00:01 brd ff:ff:ff:ff:ff:ff
inet 172.16.19.1/24 brd 172.16.19.255 scope global vmnet1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fec0:1/64 scope link
valid_lft forever preferred_lft forever
4: vmnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether 00:50:56:c0:00:08 brd ff:ff:ff:ff:ff:ff
inet 192.168.43.1/24 brd 192.168.43.255 scope global vmnet8
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fec0:8/64 scope link
valid_lft forever preferred_lft forever
I ran the following Nmap commands to determine the IP:
$ nmap 172.16.19.0/24
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-25 13:19 EDT
Nmap scan report for 172.16.19.1 (172.16.19.1)
Host is up (0.00022s latency).
Not shown: 999 closed ports
PORT STATE SERVICE
902/tcp open iss-realsecure
$ nmap 192.168.43.1/24
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-25 13:19 EDT
Nmap scan report for 192.168.43.1 (192.168.43.1)
Host is up (0.00033s latency).
Not shown: 999 closed ports
PORT STATE SERVICE
902/tcp open iss-realsecure
The Nmap results indicate that 172.16.19.1 and 192.168.43.1 are up and running, but oddly enough they don't show port 22; I still tried to connect to them with SSH.
$ ssh user@172.16.19.1
ssh: connect to host 172.16.19.1 port 22: Connection refused
$ ssh user@192.168.43.1
ssh: connect to host 192.168.43.1 port 22: Connection refused
I also checked whether the virtual machine was listening on port 22, and it seems like it is:
$ netstat -latun | grep :::22
tcp6 0 0 :::22 :::* LISTEN -
Is there something I'm doing wrong? What can I do to fix this problem?
The VM was running on localhost, and since guest port 22 is forwarded to host port 2222, you have to use the command: ssh user@localhost -p 2222.
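To avoid retyping the port on every connection, an entry in ~/.ssh/config can help (the alias phoenix and the user name are placeholders, not taken from the VM):

```text
# ~/.ssh/config — "phoenix" is a made-up alias; set User to the VM's account
Host phoenix
    HostName localhost
    Port 2222
    User user
```

After that, ssh phoenix connects through the forwarded port.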
So the issue is: when I ssh directly by IP with the ssh command, I can log in with my key without providing any password. However, when I use vagrant ssh with a hostname instead of the IP, it asks for a password.
Needs a password:
(venv) dans-test-mbp:public_network dantest$ vagrant ssh host1
vagrant@127.0.0.1's password:
Last login: Wed Jun 12 16:33:35 2019 from 10.100.174.129
[vagrant@localhost ~]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:26:10:60 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
valid_lft 84345sec preferred_lft 84345sec
inet6 fe80::5054:ff:fe26:1060/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:9d:49:76 brd ff:ff:ff:ff:ff:ff
inet 10.100.172.113/22 brd 10.100.175.255 scope global noprefixroute dynamic eth1
valid_lft 343545sec preferred_lft 343545sec
inet6 fe80::a00:27ff:fe9d:4976/64 scope link
valid_lft forever preferred_lft forever
Doesn't need a password:
(venv) dans-test-mbp:public_network dantest$ ssh vagrant@10.100.172.113
Last login: Wed Jun 12 16:26:10 2019 from 10.0.2.2
[vagrant@localhost ~]$ exit
logout
I use this Vagrantfile to create the VMs:
(venv) dans-test-mbp:public_network dantest$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
host_ips = [
"host1",
"host2",
"host3",
"host4"
]
host_ips.each do |host_name|
config.vm.define host_name do |host|
host.vm.box = "centos/7"
config.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)"
host.vm.synced_folder ".", "/home/vagrant/sync", disabled: true
host.vm.provider "virtualbox" do |vb|
# Customize the amount of memory on the VM:
vb.memory = "4048"
# Allow vm to send data via VPN
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end
end
end
end
So I found the issue with my approach: I was overwriting Vagrant's public key with my own key, so Vagrant could not ssh into the box but I could. The remaining question is: how do I get Vagrant's public key back?
You can always pull down vagrant's default public key from here: https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub
Vagrant will usually replace this key with a "secure" key on boot-up.