OpenStack - How to add multiple Floating IPs to the same port - cPanel

How can I add two Floating (public) IPs to the same port (interface)? Is that possible?
Example (port eth0):
eth0 - 192.168.1.10 - Public IP 01
eth0:1 - 192.168.1.11 - Public IP 02
Note: currently I can do this only if I add another port (eth1), but that doesn't solve my problem, because my mail server (cPanel) accepts a second IP only if it is associated with the main interface (eth0).

Yes, it's possible to add any number of IPs.
You have to assign the additional fixed IPs to the existing primary interface (port).
First, get the ID of the port that holds the primary IP.
In my example, the instance's primary IP is 20.20.2.10.
Source the credentials of the project that the instance belongs to:
source exampleproject
To get the ID of the port holding the primary IP 20.20.2.10:
openstack port list | grep 20.20.2.10
| 1fb8b47d-6eea-511f-8bab-xxxxxxxxxxxxxdd | | fa:xx:3e:f7:D5:43 | ip_address='20.20.2.10', subnet_id='44bd275f-desa3-4173-b470-9928ccsdfsddf4' | ACTIVE |
Now add the IPs as you wish. Here I am adding 20.20.2.11:
openstack port set --fixed-ip subnet=44bd275f-desa3-4173-b470-9928ccsdfsddf4,ip-address=20.20.2.11 1fb8b47d-6eea-511f-8bab-xxxxxxxxxxxxxdd
Check on the OpenStack dashboard that the fixed IPs have been added.
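To cover the floating (public) IP part of the question: once the port carries multiple fixed IPs, a separate floating IP can be associated with each fixed address on that same port. A minimal sketch, assuming the floating IPs 203.0.113.10 and 203.0.113.11 have already been allocated (these public addresses are placeholders, not from the original setup):
# Bind one floating IP to each fixed address on the same port
openstack floating ip set --port 1fb8b47d-6eea-511f-8bab-xxxxxxxxxxxxxdd --fixed-ip-address 20.20.2.10 203.0.113.10
openstack floating ip set --port 1fb8b47d-6eea-511f-8bab-xxxxxxxxxxxxxdd --fixed-ip-address 20.20.2.11 203.0.113.11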
Now log in to the VM and add an eth0:1 (or eth0:0) alias IP in /etc/sysconfig/network-scripts/ifcfg-eth0:1:
DEVICE=eth0:1
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=20.20.2.11
GATEWAY=20.20.2.1
NETMASK=255.255.255.0
DNS1=1.1.1.1
DNS2=8.8.8.8
Save it and bring up just the alias interface:
ifup eth0:1
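To confirm the alias address is active, list the addresses on the interface (standard iproute2, nothing cPanel-specific assumed here):
# Both 20.20.2.10 and the new 20.20.2.11 alias should be listed
ip addr show eth0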
To add multiple IPs at once, a script like the one below can be used:
#!/bin/bash
# Add fixed IPs 20.20.2.11 through 20.20.2.21 to the same port
for octet in {11..21}; do
    openstack port set --fixed-ip subnet=44bd275f-desa3-4173-b470-9928ccsdfsddf4,ip-address=20.20.2.$octet 1fb8b47d-6eea-511f-8bab-xxxxxxxxxxxxxdd
    echo "added 20.20.2.$octet"
done

Related

VSomeip: 1 client and 2 hosts

I have the following configuration: a Linux machine with 2 Ethernet interfaces, and 2 external devices.
The IP of one external device is 192.168.10.1 and my eth0 IP is 192.168.10.2 (communication works); the second device's IP is 192.168.20.1 and my eth1 is 192.168.20.2 (this works too). The external devices are vsomeip services, and I try to connect to both of them (only 1 works). Is there any configuration that would let me have all the services working in one app? So far I have tried to add both my IP addresses in client.json:
"unicast" : "192.168.10.2",
"unicast" : "192.168.20.2",

Is there a way to use a previously specified ssh-config entry in specifying `Hostname`?

Edit: I have seen cases where this works and cases where it does not, and I'm not sure I follow when/why it does.
Suppose I have a complicated enough entry where I specify multiple parameters to get to thathost:
Host thathost
ControlMaster auto
ServerAliveInterval 8
ConnectTimeout 8
Hostname 192.168.5.1
User mememe
ProxyJump thejumpbox
I want to re-use this definition in creating additional entries that provide different functionality by adding or overriding some configs. Specifically, specifying an alternate port (no, I don't want it on the command line).
Ideally I'd be able to do something like
Host theirhost
Hostname thathost
User themthem
or
Host my-remote-serialport
Hostname thathost
RequestTTY yes
RemoteCommand some-script-that-launches-kermit-on-a-specific-dev-tty
or
Host my-remote-serialport
Hostname thathost
Port 3004
I'm strictly looking to specify one host in terms of another existing one, I'm not looking to modify my Host entries to match some pattern "tricks".
Obviously I can utilize ProxyCommand ssh -q nc thathost... or ProxyJump thathost + Hostname localhost followed by all the other overrides (well, for a port override I would pass that to nc), but that's both ugly and wasteful (an extra session) - please don't answer with that.
For me this has been the missing feature of ssh-config, but maybe I did not look hard enough.
ssh config can't be used in exactly the way you asked (reusing a hostname definition), but the mechanisms it does provide can solve many more problems.
A host rule can span multiple hosts.
Host thathost theirhost my-remote-serialport
ControlMaster auto
ServerAliveInterval 8
ConnectTimeout 8
Hostname 192.168.5.1
User mememe
ProxyJump thejumpbox
But obviously this alone doesn't solve your problem of modifying some properties.
The trick is that ssh config uses a first-wins strategy for each property.
In your case you just have to add the modifications in front of the main config:
Host theirhost
User themthem
Host my-remote-serialport
Port 3004
Host thathost theirhost my-remote-serialport
ControlMaster auto
ServerAliveInterval 8
ConnectTimeout 8
Hostname 192.168.5.1
User mememe
ProxyJump thejumpbox
theirhost is defined in two places: User is taken from the first definition, and all other properties, including Hostname, come from the second. (Note that repeating Hostname thathost in the first block, as in the question, would win over Hostname 192.168.5.1 and make ssh try to resolve the literal name thathost, so it is omitted here.)
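You can verify which value wins for each property with ssh -G, which prints the fully resolved configuration for a given host alias:
# Show the effective values after first-wins merging
ssh -G theirhost | grep -E '^(hostname|user|port|proxyjump) '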
The Host part also accepts wildcards; an example for a jumpbox with multiple reverse ssh endpoints:
Host my_1
Port 2001
Host my_2
Port 2002
Host my_3
Port 2003
Host my_*
User pi
Hostname localhost
ProxyJump thejumpbox
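Again, ssh -G is a quick way to confirm how a given alias resolves:
# my_2 takes Port 2002 from its own block and the rest from my_*
ssh -G my_2 | grep -E '^(hostname|user|port|proxyjump) '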
There's no way to reference a different block; however, you can include a specific configuration file that has a global configuration. For example, create a file meant for use by thathost, theirhost, and my-remote-serialport. Let's just call it foo-config.
ControlMaster auto
ServerAliveInterval 8
ConnectTimeout 8
Hostname 192.168.5.1
User mememe
ProxyJump thejumpbox
Then you can use the Include directive to read this in as needed (in a per-user configuration, a relative path like foo-config is resolved relative to ~/.ssh):
Host theirhost
Include foo-config
User themthem
Host my-remote-serialport
Include foo-config
RequestTTY yes
RemoteCommand some-script-that-launches-kermit-on-a-specific-dev-tty
Host my-remote-serialport
Include foo-config
Port 3004
However, I suspect this approach is rarely necessary, and the approach given by jeb will probably be sufficient in most cases.

cannot ssh to Google VM Engine

On the first day, when I created the instance, I was able to SSH in with no problem, but since yesterday I just couldn't connect to my instance. When I checked the console I got something like this:
Nov 5 15:30:49 my-app kernel: [79738.555434] [UFW BLOCK] IN=ens4 OUT= MAC=42:01:0a:94:00:02:42:01:0a:94:00:01:08:00 SRC=71.15.27.115 DST=10.121.0.7 LEN=60 TOS=0x00 PREC=0x00 TTL=50 ID=38049 PROTO=TCP SPT=37344 DPT=22 WINDOW=60720 RES=0x00 SYN URGP=0
I figured it's a firewall issue, but my firewall rules seem okay (assuming I did not change anything since I first created the instance). I wonder what else could be the problem? Here's my firewall config:
Name                    Targets       Source IP ranges  Protocols / ports                 Action  Priority  Network
default-allow-http      http-server   0.0.0.0/0         tcp:80                            Allow   1000      default
default-allow-https     https-server  0.0.0.0/0         tcp:443                           Allow   1000      default
default-beego-http      http-server   0.0.0.0/0         tcp:8080                          Allow   1000      default
default-jenkins-app     http-server   0.0.0.0/0         tcp:8989                          Allow   1000      default
default-allow-icmp      Apply to all  0.0.0.0/0         icmp                              Allow   65534     default
default-allow-internal  Apply to all  10.128.0.0/9      tcp:0-65535, udp:0-65535, 1 more  Allow   65534     default
default-allow-rdp       Apply to all  0.0.0.0/0         tcp:3389                          Allow   65534     default
default-allow-ssh       Apply to all  0.0.0.0/0         tcp:22                            Allow   65534     default
Looking at the output you’ve provided following your attempt to SSH into your instance, it looks like you’re being blocked by UFW (Uncomplicated Firewall), which is installed/enabled on the actual instance, rather than by the project-wide GCP firewall rules you have set (these look okay).
In order to SSH into your VM you will need to open port 22 in UFW on the instance. There are a couple of possible methods that will allow you to do this.
Firstly, see Google Compute Engine - alternative log in to VM instance if ssh port is disabled, specifically the answer by Adrián, which explains how to open port 22 using a startup script. This method requires you to reboot your instance before the firewall rule is applied.
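As a rough sketch of that approach (this exact script is an illustration, not quoted from the linked answer; it assumes a guest OS with UFW installed), you could attach a startup script that opens the port and then reset the instance so the script runs at boot:
# Attach a startup script that opens port 22 in UFW at boot
gcloud compute instances add-metadata [INSTANCE_NAME] \
--metadata startup-script='#! /bin/bash
/usr/sbin/ufw allow 22/tcp'
# Reboot so the startup script is executed
gcloud compute instances reset [INSTANCE_NAME]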
Another method which doesn’t require a reboot of the machine makes use of the Serial Console. However, in order to use this method a password for the VM is required. This method is therefore only possible if you previously set a password on the VM (before losing access).
To connect via the Serial Console the following metadata must be added, either to the instance you are trying to connect to, or to the entire project:
serial-port-enable=1
You can apply the metadata to a specific instance like so:
gcloud compute instances add-metadata [INSTANCE_NAME] \
--metadata=serial-port-enable=1
Or alternatively, to the entire project by running:
gcloud compute project-info add-metadata --metadata=serial-port-enable=1
After setting this metadata you can attempt to connect to the instance via the Serial Console by running the following command from the Cloud Shell:
gcloud compute connect-to-serial-port [INSTANCE_NAME]
When you have accessed the instance you will be able to manage the UFW rules. To open port 22 you can run:
sudo /usr/sbin/ufw allow 22/tcp
Once port 22 is open in UFW, you should then be able to SSH into your instance from Cloud Shell or from the Console.
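You can double-check the resulting rule set from the same serial-console session before disconnecting:
# Confirm 22/tcp now shows as ALLOW
sudo /usr/sbin/ufw status verbose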
There is some additional info about connecting to instances via the Serial Console here:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console

How to select the private key used by a remote machine to create the known_hosts entry given to my local server

Passwordless ssh has two components: user authentication and server authentication. My user authentication is fine: I have created a public-private key pair and placed my public key in the authorized_keys file. My question is about the public key my local machine obtains from the remote machine, which is used to authenticate the remote machine I'm connecting to:
How do I select the private key used by the remote server that goes into my local server's known_hosts?
I am constantly creating and deleting remote VMs at a cloud provider, on demand, to save money. Unfortunately, each new VM that replaces a deleted one generates a new private-public key pair, which no longer matches the entry in known_hosts.
I do not want to have to manually type ssh-keygen -R <host> for each host. I thought the easiest way would be to have a hardcoded private key on the remote server already.
Please note this is related to previous public-private key questions like ssh remote host identification has changed, but is not a duplicate! I know that you can manually fix the issue with ssh-keygen -R <host>. I am looking for a more automatic approach.
Diagram:
--------------------                          -----------------------
|    My machine    |                          |   Remote Machine    |
| - - - - - - - -  |                          | - - - - - - - - - - |
| Host Public Key  |<---host-authentication---| ** Host Private Key |
| (known_hosts)    |                          |                     |
| - - - - - - - -  |                          | - - - - - - - - - - |
| User Private Key |----user-authentication-->| User Public Key     |
|                  |                          | (authorized_keys)   |
--------------------                          -----------------------
** : How do I change this part?
META: this is really SysOps not programming and probably belongs on serverfault or superuser, or maybe unix.SX.
You don't say which SSH server(s) your VMs are using, although I expect cloud providers probably use OpenSSH to avoid possible license issues. If this guess is correct, sshd normally has its configuration and key files in /etc/ssh, although this can be changed. See the man page on any system or on the web under FILES, duplicated here per SO convention:
/etc/ssh/ssh_host_dsa_key
/etc/ssh/ssh_host_ecdsa_key
/etc/ssh/ssh_host_ed25519_key
/etc/ssh/ssh_host_rsa_key
These files contain the private parts of the host keys. These files should only be owned by root, readable only by root, and not accessible to others. Note that sshd does not start if these files are group/world-accessible.
/etc/ssh/ssh_host_dsa_key.pub
/etc/ssh/ssh_host_ecdsa_key.pub
/etc/ssh/ssh_host_ed25519_key.pub
/etc/ssh/ssh_host_rsa_key.pub
These files contain the public parts of the host keys. These files should be world-readable but writable only by root. Their contents should match the respective private parts. These files are not really used for anything; they are provided for the convenience of the user so their contents can be copied to known hosts files. These files are created using ssh-keygen(1).
A host can have up to four keypairs for protocol v2: RSA, DSA, ECDSA, and ED25519. Very old versions only support RSA and DSA. Oldish versions don't support ED25519. Newish versions don't use DSA by default but do support it. The code still supports protocol v1 with ssh_host_key[.pub] (no algorithm in name) but v1 is way obsolete and broken and you shouldn't use it.
The man page doesn't say so that I can see, but in practice for some years now OpenSSH server usually autogenerates host keys if they are missing, either in code or in the package install/startup scripts, so just deleting keytypes you don't want may not work. Your client may be able to control which of the supported pubkey algorithms the host uses; for fairly recent versions of OpenSSH see the man page for ssh_config under HostKeyAlgorithms. Otherwise either upload all supported key types, or upload the one(s) you want and tweak the sshd_config file to not use the others or create invalid but unwritable and undeletable files for the others so sshd can't use them.
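As an illustrative sketch of the "hardcoded host key" idea (the file names follow the list above, but the provisioning flow is an assumption about your setup): generate one host keypair up front, inject it into each new VM at creation time (for example via cloud-init/user-data, so even the first connection presents the known key), and restart sshd:
# One-time, on your machine: generate a host keypair to reuse across VMs
ssh-keygen -t ed25519 -N '' -f ./shared_ssh_host_ed25519_key
# In the VM provisioning step, as root: install it as the host key and restart sshd
install -m 600 -o root -g root shared_ssh_host_ed25519_key /etc/ssh/ssh_host_ed25519_key
install -m 644 -o root -g root shared_ssh_host_ed25519_key.pub /etc/ssh/ssh_host_ed25519_key.pub
systemctl restart sshd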
ASIDE: I would reverse your diagram. Host auth always occurs first and in practice is always pubkey (the standard has other options but they aren't used); client auth usually occurs second, but can be skipped, and can be either pubkey or password.
I ended up hacking my scripts with the following (which is insecure):
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null <hostname>
-o StrictHostKeyChecking=no : accepts connections to new hosts without prompting
-o UserKnownHostsFile=/dev/null : saves new host keys to /dev/null, so every time you ssh it is treated as a new host
This was originally suggested in https://askubuntu.com/questions/87449/how-to-disable-strict-host-key-checking-in-ssh to solve a similar problem.
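A slightly less blunt variant (still trust-on-first-use, and therefore still spoofable on the very first connection) is to record each new VM's host key once with ssh-keyscan instead of disabling checking entirely:
# Fetch the new host's key and append it (hashed) to known_hosts
ssh-keyscan -H <hostname> >> ~/.ssh/known_hosts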

Aerospike Community Edition: what should I do to `aerospike.conf` to setup a cluster?

I'm trying to setup a three-node Aerospike cluster on Ubuntu 14.04. Apart from the IP address/name, each machine is identical. I installed Aerospike and the management console, per the documentation, on each machine.
I then edited the network/service and network/heartbeat sections in /etc/aerospike/aerospike.conf:
network {
    service {
        address any
        port 3000
        access-address 10.0.1.11 # 10.0.1.12 and 10.0.1.13 on the other two nodes
    }
    heartbeat {
        mode mesh
        port 3002
        mesh-seed-address-port 10.0.1.11 3002
        mesh-seed-address-port 10.0.1.12 3002
        mesh-seed-address-port 10.0.1.13 3002
        interval 150
        timeout 10
    }
    [...]
}
When I sudo service aerospike start on each of the nodes, the service runs but it's not clustered. If I try to add another node in the management console, it informs me: "Node 10.0.1.12:3000 cannot be monitored here as it belongs to a different cluster."
Can you see what I'm doing wrong? What changes should I make to aerospike.conf, on each of the nodes, in order to setup an Aerospike cluster instead of three isolated instances?
Your configuration appears correct.
Check if you are able to open a TCP connection over ports 3001 and 3002 from each host to the rest.
nc -z -w5 <host> 3001; echo $?
nc -z -w5 <host> 3002; echo $?
If not, I would first suspect the firewall configuration.
Update 1:
The netcat commands returned 0 so let's try to get more info.
Run and provide the output of the following on each node:
asinfo -v service
asinfo -v services
asadm -e info
Update 2:
After inspecting the output in the gists, the asadm -e "info net" output indicated that all nodes had the same Node ID.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node   Node Id            Fqdn                        Ip               Client Conns   Current Time   HB Self   HB Foreign
h      *BB9000000000094   hadoop01.woolford.io:3000   10.0.1.11:3000   15             174464730      37129     0
Number of rows: 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node   Node Id            Fqdn                        Ip               Client Conns   Current Time   HB Self   HB Foreign
h      *BB9000000000094   hadoop03.woolford.io:3000   10.0.1.13:3000   5              174464730      37218     0
Number of rows: 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node   Node Id            Fqdn                        Ip               Client Conns   Current Time   HB Self   HB Foreign
h      *BB9000000000094   hadoop02.woolford.io:3000   10.0.1.12:3000   5              174464731      37203     0
Number of rows: 1
The Node ID is constructed from the fabric port (port 3001 in hex) followed by the MAC address in reverse byte order: in BB9000000000094, BB9 is 0x0BB9 = 3001 and 000000000094 is the MAC address with its bytes reversed. Another red flag was that "HB Self" was non-zero, whereas it is expected to be zero in a mesh configuration (in a multicast configuration it will be non-zero, since the nodes receive their own heartbeat messages).
Because all of the Node IDs are the same, this would indicate that all of the MAC addresses are the same (though it is possible to change the node IDs using rack-aware mode). Heartbeats that appear to have originated from the local node (determined by the heartbeat carrying the same node ID) are ignored.
Update 3:
The MAC addresses are all unique, which contradicts the previous conclusion. A reply provided the interface name being used, em1, which is not an interface name Aerospike looks for: Aerospike looks for interfaces named eth#, bond#, or wlan#. I assume that, since the name wasn't one of the expected three, this caused the issue with the MAC addresses; if so, I would expect the following warning to exist in the logs:
Tried eth,bond,wlan and list of all available interfaces on device.Failed to retrieve physical address with errno %d %s
For such scenarios the network-interface-name parameter may be used to instruct Aerospike which interface to use for node id generation. This parameter also determines which interface's IP address should be advertised to the client applications.
network {
    service {
        address any
        port 3000
        access-address 10.0.1.11 # 10.0.1.12 and 10.0.1.13 on the other two nodes
        network-interface-name em1 # Needed for Node ID
    }
    [...]
}
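After adding network-interface-name on each node and restarting Aerospike, the Node IDs should become unique; this can be re-checked with the same tooling used above:
# Restart each node, then confirm the Node Ids now differ
sudo service aerospike restart
asadm -e "info net"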
Update 4:
With the 3.6.0 release, these device names are automatically discovered. See AER-4026 in the release notes.