Why does keepalived VRRP not add the virtual IP to OpenVPN tun0? - load-balancing

I am running two instances of keepalived: one on a local HAProxy node and one on a remote HAProxy node. The local node works just fine, but keepalived does not add the virtual IP 10.8.0.2 to the remote node.
vrrp_instance RH_EXT {
    state BACKUP
    interface tun0
    virtual_router_id 12
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass keepAlived123
    }
    virtual_ipaddress {
        10.8.0.2
    }
}
I am also seeing this error:
Dec 31 19:29:36 pi4-phl Keepalived_vrrp[28178]: (RH_EXT) ip address 10.8.0.1/32 dev tun0, no longer exist
Dec 31 19:29:38 pi4-phl Keepalived_vrrp[28178]: (RH_EXT) ip address associated with VRID 12 not present in MASTER advert : 10.8.0.2
Maybe that is why, when all the local nodes are down, it doesn't load balance from the remote node.
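The second log line means the advertisement arriving from the MASTER for VRID 12 does not carry 10.8.0.2, so the first thing to verify is that both peers define exactly the same virtual_ipaddress set. Below is a sketch of what a matching MASTER-side stanza could look like; the unicast lines are an assumption added because VRRP multicast often fails across point-to-point tun interfaces, and the peer address 10.8.0.6 is a hypothetical placeholder for the remote node's tun0 address:

vrrp_instance RH_EXT {
    state MASTER
    interface tun0
    virtual_router_id 12        # must match the BACKUP
    priority 150                # higher than the BACKUP's 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass keepAlived123 # must match the BACKUP
    }
    unicast_src_ip 10.8.0.1     # this node's tun0 address (seen in the log)
    unicast_peer {
        10.8.0.6                # hypothetical: the remote node's tun0 address
    }
    virtual_ipaddress {
        10.8.0.2                # must be identical on both peers
    }
}

If unicast peering is used, the BACKUP needs the mirror-image unicast_src_ip/unicast_peer lines as well.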

Related

Could not create server TCP listening socket *:6379: bind: Address already in use (Redis on CentOS, remote access)

I've set up Redis on a CentOS 8 Stream virtual machine with an IPv4 address. I've installed and configured it, but I cannot access it remotely. I've set:
bind 0.0.0.0
I used to have it set to this...
bind 127.0.0.1 0.0.0.0
However, this meant that restarting Redis would fail.
Now, whenever I check with the systemctl command, Redis is running, but when I run redis-server inside the box I get:
Could not create server TCP listening socket *:6379: bind: Address already in use
And I cannot access it remotely with:
redis-cli -h XXX.XXX.XXX.XXX -a mypass
What am I missing?
I just keep getting:
Could not connect to Redis at XXX.XXX.XXX.XXX:6379: Connection refused
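A likely reading of the symptoms: systemd is already running Redis on 6379, so a second redis-server launched by hand fails with "Address already in use", and the remote "Connection refused" points at the firewall or protected mode rather than the bind directive. A minimal diagnostic sketch, assuming the unit is named redis, the config lives at /etc/redis.conf, and firewalld is the active firewall:

# Show which process already holds port 6379 (most likely the systemd-managed instance)
sudo ss -tlnp | grep 6379

# Confirm the systemd-managed instance is the one listening
systemctl status redis

# Remote access needs protected-mode off or a password set
sudo grep -E '^(bind|protected-mode|requirepass)' /etc/redis.conf

# Open the port in firewalld (the CentOS default firewall)
sudo firewall-cmd --add-port=6379/tcp --permanent && sudo firewall-cmd --reload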

Connect via remote SSH to a PC... while the PC is connected to a VPN

I have:
- A PC with Ubuntu 18
- SSH installed and configured for remote access
- The SSH port opened on my router
- A dynamic IP, so I configured Dynamic DNS (www.noip.com)
I have remote access to my PC from another external computer, using the no-ip domain and the SSH port. No problem.
Now:
- I connect my PC via a tunnel VPN (OpenVPN) to a VPN server (VPNBook).
- I refresh my no-ip domain with the new public VPN IP.
- But now I can't connect via SSH (no-ip domain and SSH port) to my PC...
Why? What am I missing?
Finally I found:
https://unix.stackexchange.com/questions/237460/ssh-into-a-server-which-is-connected-to-a-vpn-service
https://askubuntu.com/questions/893775/cant-ssh-to-server-with-vpn-connection
https://www.linode.com/community/questions/7381/openvpn-client-connected-to-a-server-while-listening-to-ssh
On my PC:
Connect the VPN
Execute:
ip rule add from 192.168.0.101 table 128
ip route add table 128 to 192.168.0.0/24 dev enp2s0f0
ip route add table 128 default via 192.168.0.1
Where:
192.168.0.101 -> the internal IP of my PC
192.168.0.0/24 -> the subnet, calculated with "subnetcalc"
enp2s0f0 -> the name of my network interface
192.168.0.1 -> my default gateway
Now I have remote access via SSH.
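These rules send replies that originate from 192.168.0.101 out through the LAN gateway (table 128) instead of the VPN's default route. A quick check that they took effect, using the same addresses as above:

ip rule show            # should list: from 192.168.0.101 lookup 128
ip route show table 128 # should show the LAN route and the default via 192.168.0.1

Note the rules are not persistent; they have to be re-added after a reboot or VPN restart.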

Can't Ping Instances or SSH into Instances from Host

I have installed DevStack successfully on Ubuntu 16.04 in a VirtualBox VM.
enp0s3 :10.6.208.111
lo: 127.0.0.1
virbr0: 192.168.122.1
I have both the default public and private networks in my network topology, with a router.
I am not able to ping between my host and the instance I created (IP: 192.168.101.3, floating IP: 172.24.4.15). Host IP: 10.6.208.111.
Current settings:
Public Network (public 172.24.4.0/27)
Private Network (private 192.168.101.0/24)
I also created a floating IP with the IP address 172.24.4.15
I created a router with 2 interfaces: one connects to the public network and one to the private. I created the VMs in the private network.
How can I SSH to the instance created and ping it from my host IP?
Thank you.
I did this for a DevStack Ocata installation with a single NIC; maybe this will help.
1 Install OpenStack with the DevStack Ocata version
2 Delete the existing/default networks
2.1 Delete all the router interfaces and then the router
2.2 Delete all the networks
3 Create the public and private networks
4 Add the router and its interfaces to the public and private networks
5 Add TCP and ICMP rules to the security group (Network Topology > Security Group > Manage Rules > Add); a CLI sketch follows the router section below
6 Configure the bridge at the command prompt
Note:
Make sure details like the bridge, interface, and network IPs are changed as per your needs.
Public network : 10.0.15.0
private network: 192.168.11.0
external bridge: br-ex
interface : enp0s8
Ubuntu ip : 10.0.15.20
3.1 Create the private network
project:admin
project >
network topology >
create a network >
network name : private-net
enable admin state: yes
shared :no
create subnet:yes
next >
subnet name : private-net-subnet
Network Address Source : enter network address manually
Network Address : 192.168.11.0/24
IP Version : IPv4
Gateway IP : 192.168.11.1
Disable Gateway : No
next >
Enable DHCP : yes
Allocation Pools : 192.168.11.120,192.168.11.140
DNS Name Servers : 8.8.8.8
Host Routes :
3.2 Create the public network
admin >
networks >
Create Network >
Name : public-net
Project : demo
Provider Network Type : Flat
Physical Network : public
Enable Admin State : yes
Shared : yes
External Network : yes
public-net >
Create subnet >
Subnet Name : public-net-subnet
Network Address Source : enter network address manually
Network Address : 10.0.15.0/24
IP Version : IPv4
Gateway IP : 10.0.15.1
Disable Gateway : No
next >
Enable DHCP : yes
Allocation Pools : 10.0.15.120,10.0.15.140
DNS Name Servers : 10.0.9.10
Host Routes :
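For reference, the same two networks can also be created from the command line; a sketch using the openstack client with the values above (run the public-net commands as admin):

openstack network create private-net
openstack subnet create --network private-net --subnet-range 192.168.11.0/24 \
    --gateway 192.168.11.1 --allocation-pool start=192.168.11.120,end=192.168.11.140 \
    --dns-nameserver 8.8.8.8 private-net-subnet

openstack network create --external --share --provider-network-type flat \
    --provider-physical-network public public-net
openstack subnet create --network public-net --subnet-range 10.0.15.0/24 \
    --gateway 10.0.15.1 --allocation-pool start=10.0.15.120,end=10.0.15.140 \
    --dns-nameserver 10.0.9.10 public-net-subnet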
4 Add the router and its interfaces to the public and private networks
admin >
networks >
Routers >
Create Router >
Router Name : router 1
Enable Admin State : yes
External Network : public-net
create Router >
project >
networks >
Routers >
router 1 >
interfaces >
add interface >
subnet : private-net
ip address :
submit >
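Step 5 from the overview, the TCP and ICMP security group rules, can also be added from the command line; a sketch assuming the rules go into the project's default group:

openstack security group rule create --protocol icmp default                # allow ping
openstack security group rule create --protocol tcp --dst-port 22 default   # allow SSH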
6 Configure the bridge at the command prompt (on the Ubuntu OpenStack host)
sudo ifconfig br-ex promisc up
sudo ovs-vsctl add-port br-ex enp0s8 && sudo ifconfig br-ex 10.0.15.20 netmask 255.255.255.0
sudo systemctl restart networking.service
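To confirm the bridge wiring took effect (same names as above):

sudo ovs-vsctl show   # enp0s8 should appear as a port on br-ex
ping -c 3 10.0.15.1   # the host should now reach the public network's gateway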

Aerospike Community Edition: what should I do to `aerospike.conf` to set up a cluster?

I'm trying to set up a three-node Aerospike cluster on Ubuntu 14.04. Apart from the IP address/name, each machine is identical. I installed Aerospike and the management console, per the documentation, on each machine.
I then edited the network/service and network/heartbeat sections in /etc/aerospike/aerospike.conf:
network {
    service {
        address any
        port 3000
        access-address 10.0.1.11 # 10.0.1.12 and 10.0.1.13 on the other two nodes
    }
    heartbeat {
        mode mesh
        port 3002
        mesh-seed-address-port 10.0.1.11 3002
        mesh-seed-address-port 10.0.1.12 3002
        mesh-seed-address-port 10.0.1.13 3002
        interval 150
        timeout 10
    }
    [...]
}
When I sudo service aerospike start on each of the nodes, the service runs but it's not clustered. If I try to add another node in the management console, it informs me: "Node 10.0.1.12:3000 cannot be monitored here as it belongs to a different cluster."
Can you see what I'm doing wrong? What changes should I make to aerospike.conf, on each of the nodes, in order to set up an Aerospike cluster instead of three isolated instances?
Your configuration appears correct.
Check if you are able to open a TCP connection over ports 3001 and 3002 from each host to the rest.
nc -z -w5 <host> 3001; echo $?
nc -z -w5 <host> 3002; echo $?
If not, I would first suspect the firewall configuration.
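If the checks fail, opening the fabric and heartbeat ports might look like this (a sketch assuming plain iptables; adapt for ufw or firewalld):

sudo iptables -A INPUT -p tcp --dport 3001:3002 -j ACCEPT   # Aerospike fabric (3001) and mesh heartbeat (3002)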
Update 1:
The netcat commands returned 0, so let's try to get more info.
Run and provide the output of the following on each node:
asinfo -v service
asinfo -v services
asadm -e info
Update 2:
After inspecting the output in the gists, asadm -e "info net" showed that all nodes had the same Node ID.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Node Fqdn Ip Client Current HB HB
. Id . . Conns Time Self Foreign
h *BB9000000000094 hadoop01.woolford.io:3000 10.0.1.11:3000 15 174464730 37129 0
Number of rows: 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Node Fqdn Ip Client Current HB HB
. Id . . Conns Time Self Foreign
h *BB9000000000094 hadoop03.woolford.io:3000 10.0.1.13:3000 5 174464730 37218 0
Number of rows: 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Node Fqdn Ip Client Current HB HB
. Id . . Conns Time Self Foreign
h *BB9000000000094 hadoop02.woolford.io:3000 10.0.1.12:3000 5 174464731 37203 0
Number of rows: 1
The Node ID is constructed from the fabric port (port 3001 in hex) followed by the MAC address in reverse byte order. Another red flag was that "HB Self" was non-zero; it is expected to be zero in a mesh configuration (in a multicast configuration it will be non-zero, since the nodes receive their own heartbeat messages).
Because all of the Node IDs are the same, this indicates that all of the MAC addresses are the same (though it is possible to change the Node IDs using rack-aware mode). Heartbeats that appear to have originated from the local node (determined by the heartbeat carrying the same Node ID) are ignored.
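As a worked example with a hypothetical MAC: the fabric port 3001 is 0xBB9, and a MAC of 00:1a:2b:3c:4d:5e reversed byte-wise is 5e:4d:3c:2b:1a:00, giving Node ID BB95E4D3C2B1A00. By the same logic, the BB9000000000094 seen on all three nodes would decode back to a MAC of 94:00:00:00:00:00 everywhere, which is why identical IDs point at MAC-address retrieval going wrong.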
Update 3:
The MAC addresses are all unique, which contradicts the previous conclusion. A reply provided the interface name in use, em1, which is not a name Aerospike looks for: Aerospike only checks interfaces named eth#, bond#, or wlan#. I assume that, since em1 is not one of the expected three, this caused the issue with the MAC addresses; if so, the following warning should appear in the logs:
Tried eth,bond,wlan and list of all available interfaces on device.Failed to retrieve physical address with errno %d %s
For such scenarios the network-interface-name parameter may be used to instruct Aerospike which interface to use for node id generation. This parameter also determines which interface's IP address should be advertised to the client applications.
network {
    service {
        address any
        port 3000
        access-address 10.0.1.11 # 10.0.1.12 and 10.0.1.13 on the other two nodes
        network-interface-name em1 # needed for Node ID generation
    }
}
Update 4:
With the 3.6.0 release, these device names will be automatically discovered. See AER-4026 in the release notes.

SSH from one local network to another through an intermediary with a public IP

There is one computer (A) in one local network and another (B) in a different one. Neither has a public IP address. Both LAN gateways are out of my control. But I have a VPS with a public IP address, and both A and B are able to connect to this VPS. How can I establish an SSH tunnel from A to B using the intermediary VPS?
Connect from B to the VPS, forwarding a remote port to the local side (see the -R ssh option):
B# ssh -R 2222:localhost:22 vpsuser@vpshost
This connects to the VPS host, making port 2222 on the server forward to port 22 (ssh) on host B.
The only thing left is to connect from A to the VPS server, and from there to B via port 2222:
A# ssh vpsuser@vpshost
VPS# ssh -p2222 buser@localhost
B#
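The two hops can also be collapsed into a single command from A using OpenSSH's ProxyJump option (-J, available since OpenSSH 7.3); from the jump host's perspective, localhost:2222 is the reverse tunnel back to B:

A# ssh -J vpsuser@vpshost -p 2222 buser@localhost

To keep the reverse tunnel up across network drops, running the -R command on B under autossh is a common choice.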