Error: A cluster exists but does not match the provided --hosts

Guest is Linux Mint on VirtualBox 6.1.
On the Linux guest:
ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::718f:8339:b102:8b55 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:ef:a4:d0 txqueuelen 1000 (Ethernet)
RX packets 264856 bytes 281867129 (281.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 115652 bytes 25798069 (25.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.56.102 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::8c70:e3a1:3e3a:34b1 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:af:71:99 txqueuelen 1000 (Ethernet)
RX packets 1036 bytes 164039 (164.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6550 bytes 6898595 (6.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 3460398 bytes 1310463468 (1.3 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3460398 bytes 1310463468 (1.3 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
But when I try to install Vertica DB:
echo "NETWORKING=yes" >> /etc/sysconfig/network
export SHORT_HOSTNAME=10.0.2.15
expect install_image/vertica.expect
I get this error:
Mapping hostnames in --hosts (-s) to addresses...
Error: A cluster exists but does not match the provided --hosts
192.168.56.102 in --hosts but not in cluster.
10.0.2.15 in cluster but not in --hosts.
Hint: omit --hosts for existing clusters. To change a cluster use --add-hosts or --remove-hosts.
Installation FAILED with errors.
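For what it's worth, the installer's own hint points at two ways out of this state. The script paths and flags below are assumptions (the contents of install_image/vertica.expect aren't shown, and install_vertica/update_vertica options vary by Vertica version), so treat this only as a sketch:
# Option 1: keep the existing single-node cluster on 10.0.2.15 and simply
# omit --hosts, as the hint suggests (adjust whatever the expect script passes).
sudo /opt/vertica/sbin/install_vertica --rpm /path/to/vertica.rpm
# Option 2: move the cluster to the host-only interface, assuming your version
# ships update_vertica with --add-hosts / --remove-hosts as the hint implies.
sudo /opt/vertica/sbin/update_vertica --rpm /path/to/vertica.rpm --add-hosts 192.168.56.102
sudo /opt/vertica/sbin/update_vertica --remove-hosts 10.0.2.15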

Related

SSH Connection refused in virtual machine

I am running the Phoenix virtual machine from exploit.education inside QEMU on Kali Linux. It comes with the newest version of OpenSSH pre-installed; however, I get an error whenever I try to connect to the machine with SSH.
I used the command ip a s in my Kali machine. It displayed the following results:
$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:9a:60:f4 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.12/24 brd 192.168.10.255 scope global dynamic noprefixroute eth0
valid_lft 85248sec preferred_lft 85248sec
inet6 fe80::a00:27ff:fe9a:60f4/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: vmnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether 00:50:56:c0:00:01 brd ff:ff:ff:ff:ff:ff
inet 172.16.19.1/24 brd 172.16.19.255 scope global vmnet1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fec0:1/64 scope link
valid_lft forever preferred_lft forever
4: vmnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether 00:50:56:c0:00:08 brd ff:ff:ff:ff:ff:ff
inet 192.168.43.1/24 brd 192.168.43.255 scope global vmnet8
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fec0:8/64 scope link
valid_lft forever preferred_lft forever
I ran the following nmap commands to determine the IP:
$ nmap 172.16.19/24
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-25 13:19 EDT
Nmap scan report for 172.16.19.1 (172.16.19.1)
Host is up (0.00022s latency).
Not shown: 999 closed ports
PORT STATE SERVICE
902/tcp open iss-realsecure
$ nmap 192.168.43.1/24
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-25 13:19 EDT
Nmap scan report for 192.168.43.1 (192.168.43.1)
Host is up (0.00033s latency).
Not shown: 999 closed ports
PORT STATE SERVICE
902/tcp open iss-realsecure
The nmap results indicate that 172.16.19.1 and 192.168.43.1 are up and running but, oddly enough, don't show port 22; I still tried to connect to them with SSH.
$ ssh user@172.16.19.1
ssh: connect to host 172.16.19.1 port 22: Connection refused
$ ssh user@192.168.43.1
ssh: connect to host 192.168.43.1 port 22: Connection refused
I also checked whether the virtual machine was listening on port 22, and it seems like it is:
$ netstat -latun | grep :::22
tcp6 0 0 :::22 :::* LISTEN -
Is there something I'm doing wrong? What can I do to fix this problem?
It was running on localhost: the guest's port 22 is forwarded to port 2222 on localhost, so you have to use the command ssh user@localhost -p 2222.
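For context, that kind of forwarding usually comes from QEMU's user-mode networking. A rough sketch of launch options that would produce it (the image name and netdev id are made up here; check the boot script that ships with the Phoenix image for the real options):
$ qemu-system-x86_64 -hda phoenix.img \
    -netdev user,id=net0,hostfwd=tcp::2222-:22 \
    -device e1000,netdev=net0
# then, from the Kali host:
$ ssh user@localhost -p 2222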

Problem with testpmd on DPDK and OVS in Ubuntu 18.04

I have an X520-SR2 10G network card. I want to use it to create two virtual interfaces with Open vSwitch compiled with DPDK (installed from the Ubuntu 18.04 repository) and test these virtual interfaces with testpmd. I did the following:
Create the bridge
$ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
Bind the DPDK ports
$ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:01:00.0 ofport_request=1
$ ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:01:00.1 ofport_request=2
Create the dpdkvhostuser ports
$ ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser ofport_request=3
$ ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser ofport_request=4
Define flows
# clear all existing flows
$ ovs-ofctl del-flows br0
Add new flows
$ ovs-ofctl add-flow br0 in_port=3,dl_type=0x800,idle_timeout=0,action=output:4
$ ovs-ofctl add-flow br0 in_port=4,dl_type=0x800,idle_timeout=0,action=output:3
Dump flows
$ ovs-ofctl dump-flows br0
cookie=0x0, duration=851.504s, table=0, n_packets=0, n_bytes=0, ip,in_port=dpdkvhostuser0 actions=output:dpdkvhostuser1
cookie=0x0, duration=851.500s, table=0, n_packets=0, n_bytes=0, ip,in_port=dpdkvhostuser1 actions=output:dpdkvhostuser0
Now I run testpmd:
$ testpmd -c 0x3 -n 4 --socket-mem 512,512 --proc-type auto --file-prefix testpmd --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/dpdkvhostuser0 --vdev=virtio_user1,path=/var/run/openvswitch/dpdkvhostuser1 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan
EAL: Detected 32 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=155456, size=2176, socket=1
Configuring Port 0 (socket 0)
Port 0: DA:17:DC:5E:B0:6F
Configuring Port 1 (socket 0)
Port 1: 3A:74:CF:43:1C:85
Checking link statuses...
Done
testpmd> start tx_first
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=64
nb forwarding cores=1 - nb forwarding ports=2
port 0:
CRC stripping enabled
RX queues=1 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0xf00
port 1:
CRC stripping enabled
RX queues=1 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0xf00
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 64 TX-dropped: 0 TX-total: 64
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 64 TX-dropped: 0 TX-total: 64
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 128 TX-dropped: 0 TX-total: 128
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
testpmd>
Software versions:
OS: Ubuntu 18.04
Linux Kernel: 4.15
OVS: 2.9
DPDK: 17.11.3
What should I do now? Where is the problem?
I finally caught the problem: it was the size of the socket memory allocation. I changed the --socket-mem value to 1024,1024 (1024 MB for each NUMA node) and generated packets with pktgen (also using --socket-mem 1024,1024). Everything works fine now.
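For anyone hitting the same wall, the change amounts to re-running testpmd with the larger --socket-mem; if OVS itself also needs more hugepage-backed memory, OVS 2.9 exposes an other_config:dpdk-socket-mem key. The ovs-vsctl line and the openvswitch-switch service name are assumptions for Ubuntu's packaging, not part of the original answer:
$ ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
$ systemctl restart openvswitch-switch
$ testpmd -c 0x3 -n 4 --socket-mem 1024,1024 --proc-type auto --file-prefix testpmd --no-pci \
    --vdev=virtio_user0,path=/var/run/openvswitch/dpdkvhostuser0 \
    --vdev=virtio_user1,path=/var/run/openvswitch/dpdkvhostuser1 \
    -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan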

Making UDP broadcast work with wifi router

I'd like to test out UDP broadcast on a very simple network: an old wifi router (WRT54GS) that's not connected to the internet at all, an android tablet, and my macbook:
[Tablet] <\/\/\/\/\/> [Wifi Router] <\/\/\/\/\/> [Macbook]
where the wavy lines indicate wireless connections.
The Macbook has IP address 192.168.1.101, the tablet has IP address 192.168.1.102. The router is 192.168.1.1.
To avoid too much low-level detail, I wanted to use netcat to do my testing. I decided to use port 11011 because it was easy to type.
As a first step, I thought I'd try just making this work from the macbook back to itself. In two terminal windows, I ran these programs
Window 1: % nc -ul 11011
which I started up first, and then:
Window 2: % echo 'foo' | nc -v -u 255.255.255.255 11011
Nothing showed up in Window 1. The result in Window 2 was this:
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif (null)
src 192.168.1.2 port 61985
dst 255.255.255.255 port 11011
rank info not available
I'm fairly certain I'm missing something obvious here. Can someone familiar with nc spot my obvious error?
This is a multi-part answer, gleaned from other SO and SuperUser answers and a bit of guesswork.
Mac-to-mac communication via UDP broadcast over wifi
The first thing is that the mac version of netcat (nc) as of Oct 2018 doesn't support broadcast, so you have to switch to "socat", which is far more general and powerful in what it can send. As for the listening side, what worked for me, eventually, was
Terminal 1: % nc -l -u 11011
What about the sending side? Well, it turns out I needed more information. For instance, trying this with the localhost doesn't work at all, because that particular "interface" (gosh, I hate the overloading of words in CS; as a mathematician, I'd hope that CS people might have learned from our experience what a bad idea this is...) doesn't support broadcast. And how did I learn that? Via ifconfig, a tool that shows how your network is configured. In my case, the output was this:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 98:01:a7:8a:6b:35
inet 192.168.1.101 netmask 0xffffff00 broadcast 192.168.1.255
media: autoselect
status: active
en1: flags=963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX> mtu 1500
options=60<TSO4,TSO6>
ether 4a:00:05:f3:ac:30
media: autoselect <full-duplex>
status: inactive
en2: flags=963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX> mtu 1500
options=60<TSO4,TSO6>
ether 4a:00:05:f3:ac:31
media: autoselect <full-duplex>
status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=63<RXCSUM,TXCSUM,TSO4,TSO6>
ether 4a:00:05:f3:ac:30
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x2
member: en1 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 5 priority 0 path cost 0
member: en2 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 6 priority 0 path cost 0
media: <unknown type>
status: inactive
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
ether 0a:01:a7:8a:6b:35
media: autoselect
status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
ether 7e:00:76:6d:5c:09
inet6 fe80::7c00:76ff:fe6d:5c09%awdl0 prefixlen 64 scopeid 0x9
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
inet6 fe80::773a:6d9e:1d47:7502%utun0 prefixlen 64 scopeid 0xa
nd6 options=201<PERFORMNUD,DAD>
most of which means nothing to me. But look at "en0", the ethernet connection to the wireless network (192.168.1.x). The data there really tells you something. The flags tell you that it supports broadcast and multicast. Two lines later, the word broadcast appears again, followed by 192.168.1.255, which suggested to me that this might be the right address to which to send broadcast packets.
With that in mind, I tried this:
Terminal 2: % echo -n "TEST" | socat - udp-datagram:192.168.1.255:11011,broadcast
with the result that in Terminal 1, the word TEST appeared!
When I retyped the same command in Terminal 2, nothing more appeared in Terminal 1; it seems that the "listen" is listening for a single message, for reasons I do not understand. But hey, at least it's getting me somewhere!
Mac to tablet communication
First, on the tablet, I tried to mimic the listening side of the mac version above. The Termux version of nc didn't support the -u flag, so I had to do something else. I decided to use socat. As a first step, I got it working mac-to-mac (via the wifi router, of course). It turns out that to listen for UDP packets, you have to use udp-listen rather than udp-datagram, but otherwise it was pretty simple. In the end, it looked like this:
Terminal 1: % socat udp-listen:11011 -
meaning "listen for stuff on port 11011 and copy to standard output", and
Terminal 2: % echo -n "TEST" | socat - udp-datagram:192.168.1.255:11011,broadcast
Together, this got data from Terminal 2 to Terminal 1.
Then I tried it on the tablet. As I mentioned, nc on the tablet was feeble, and socat was missing entirely, so I installed it.
Once I'd done that, on the Tablet I typed
Tablet: % socat udp-listen:11011 -
and on the mac, in Terminal 2, I once again typed
Terminal 2: echo -n "TEST" | socat - udp-datagram:192.168.1.255:11011,broadcast
and sure enough, the word TEST appeared on the tablet!
Even better, by reading the docs I found I could use
socat udp-recv:11011 -
which not only listens, but continues to listen, and hence will report multiple UDP packets, one after another. (udp-listen, by contrast, seems to wait for one packet and then try to communicate back with the sender of that packet, which isn't what I wanted at all.)
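Putting it together, the final working pair of commands (nothing new, just the recipe from above collected in one place):
# receiver (tablet in Termux, or the mac): keep listening for UDP on port 11011
% socat udp-recv:11011 -
# sender (the mac): broadcast to en0's broadcast address
% echo -n "TEST" | socat - udp-datagram:192.168.1.255:11011,broadcast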

Archlinux netctl configuration as openvz container

I'm sorry, but I'm not very good at manual network configuration with systemd and Arch Linux.
Can you please tell me how to configure netctl so that I end up with
[root@test etc]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/void
inet 127.0.0.1/32 scope host venet0
inet 192.168.0.3/32 brd 192.168.0.3 scope global venet0:0
for a correct configuration of my OpenVZ container?
Today I have:
[root@02-Lab ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: venet0: <BROADCAST,POINTOPOINT,NOARP> mtu 1500 qdisc noop state DOWN
link/void
Thank you a lot in advance for your help!
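For what it's worth, here is a minimal netctl profile sketch for a static venet0 address; the /32 address comes from the desired ip addr output above, while the profile name, Connection=ethernet and the default route are assumptions. venet is not an ordinary Ethernet device (OpenVZ often configures it from the host via vzctl), so treat this only as a starting point:
# /etc/netctl/venet0   (hypothetical profile name)
Description='OpenVZ venet0 (static)'
Interface=venet0
Connection=ethernet
IP=static
Address=('192.168.0.3/32')
Routes=('default')   # assumption: netctl adds 'dev venet0'; otherwise use 'default dev venet0'
# then enable and start it
netctl enable venet0
netctl start venet0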

ssh-keyscan fails for IPv6 addresses

I can't get ssh-keyscan to work for IPv6 addresses. Can someone help me?
$ ssh-keyscan -6v -t rsa FE80:0000:021B:21FF:FEDA:62AD
getaddrinfo FE80:0000:021B:21FF:FEDA:62AD: Name or service not known
$ ssh-keyscan -6v -t rsa [FE80:0000:021B:21FF:FEDA:62AD]
getaddrinfo [FE80:0000:021B:21FF:FEDA:62AD]: Name or service not known
but this works:
$ ping6 -I bond0 fe80::21b:21ff:feda:62ad
PING fe80::21b:21ff:feda:62ad(fe80::21b:21ff:feda:62ad) from fe80::21b:21ff:feda:64a9 bond0: 56 data bytes
64 bytes from fe80::21b:21ff:feda:62ad: icmp_seq=1 ttl=64 time=0.571 ms
64 bytes from fe80::21b:21ff:feda:62ad: icmp_seq=2 ttl=64 time=0.165 ms
64 bytes from fe80::21b:21ff:feda:62ad: icmp_seq=3 ttl=64 time=0.145 ms
^C
--- fe80::21b:21ff:feda:62ad ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2206ms
rtt min/avg/max/mdev = 0.145/0.293/0.571/0.197 ms
You specified a link-local IPv6 address but forgot the scope; add the scope ID to it.
You are also missing some groups in the address as you originally gave it.
Correct both of these problems:
ssh-keyscan -6v -t rsa FE80::021B:21FF:FEDA:62AD%bond0
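As a usage note, ssh-keyscan is typically run to pre-populate known_hosts, so the full round trip might look like this (the user name is a placeholder; bond0 is the interface from the ping6 example above):
$ ssh-keyscan -t rsa fe80::21b:21ff:feda:62ad%bond0 >> ~/.ssh/known_hosts
$ ssh user@fe80::21b:21ff:feda:62ad%bond0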