I have 2 servers with 3 instances of Redis 3 on each of them. I have a cluster-nodes directory that holds all the data for each instance. Here it is:
cluster-nodes/
|-- 7777
|   |-- db01
|   |   `-- nodes-7777.conf
|   `-- redis.conf
|-- 7778
|   |-- db02
|   |   `-- nodes-7778.conf
|   `-- redis.conf
`-- 7779
    |-- db03
    |   `-- nodes-7779.conf
    `-- redis.conf
Here is my config file redis.conf under the 7777 directory
pidfile /var/run/redis/redis-7777.pid
port 7777
dir /opt/redis/cluster-nodes/7777/db01/
cluster-enabled yes
cluster-config-file nodes-7777.conf
cluster-node-timeout 15000
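Each instance is started with its own config file, roughly like this (a sketch, assuming the config files sit directly under /opt/redis/cluster-nodes/<port>/ as shown in the tree above, repeated on both servers):
# one redis-server process per port/config file
redis-server /opt/redis/cluster-nodes/7777/redis.conf
redis-server /opt/redis/cluster-nodes/7778/redis.conf
redis-server /opt/redis/cluster-nodes/7779/redis.conf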
When I try to create the cluster I get:
./redis-trib.rb create --replicas 1 127.0.0.1:7777 127.0.0.1:7778 127.0.0.1:7779 192.168.56.41:7777 192.168.56.41:7778 192.168.56.41:7779
>>> Creating cluster
Connecting to node 127.0.0.1:7777: OK
Connecting to node 127.0.0.1:7778: OK
Connecting to node 127.0.0.1:7779: OK
Connecting to node 192.168.56.41:7777: OK
Connecting to node 192.168.56.41:7778: OK
Connecting to node 192.168.56.41:7779: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7777
192.168.56.41:7777
127.0.0.1:7778
Adding replica 192.168.56.41:7778 to 127.0.0.1:7777
Adding replica 127.0.0.1:7779 to 192.168.56.41:7777
Adding replica 192.168.56.41:7779 to 127.0.0.1:7778
M: 209d68fae9c64855d34972f660232eb96370a669 127.0.0.1:7777
slots:0-5460 (5461 slots) master
M: 62e2b167a287b94b5154f7b9b0f226345baa81b7 127.0.0.1:7778
slots:10923-16383 (5461 slots) master
S: 36ed59deceb01788db76abc0c2f22925a27295fc 127.0.0.1:7779
replicates 2760b5fcc99c6563a7cf8deea159efb012309238
M: 2760b5fcc99c6563a7cf8deea159efb012309238 192.168.56.41:7777
slots:5461-10922 (5462 slots) master
S: 16bf95ba9cb743c2a3caecaab5c2fd5121d80557 192.168.56.41:7778
replicates 209d68fae9c64855d34972f660232eb96370a669
S: 30e7a5b4a94b5ff3a09f4809d6fd62edb2279b0e 192.168.56.41:7779
replicates 62e2b167a287b94b5154f7b9b0f226345baa81b7
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....................................................................................................................................................................................................................................................................^C./redis-trib.rb:534:in `sleep': Interrupt
from ./redis-trib.rb:534:in `wait_cluster_join'
from ./redis-trib.rb:1007:in `create_cluster_cmd'
from ./redis-trib.rb:1373:in `<main>'
Here is the output of CLUSTER NODES on the first server:
62e2b167a287b94b5154f7b9b0f226345baa81b7 127.0.0.1:7778 master - 0 1435144555558 2 connected 10923-16383
36ed59deceb01788db76abc0c2f22925a27295fc 127.0.0.1:7779 master - 0 1435144554554 3 connected
209d68fae9c64855d34972f660232eb96370a669 127.0.0.1:7777 myself,master - 0 0 1 connected 0-5460
And this is from the second server:
16bf95ba9cb743c2a3caecaab5c2fd5121d80557 127.0.0.1:7778 master - 0 1435144648065 5 connected
30e7a5b4a94b5ff3a09f4809d6fd62edb2279b0e 127.0.0.1:7779 master - 0 1435144647057 6 connected
2760b5fcc99c6563a7cf8deea159efb012309238 127.0.0.1:7777 myself,master - 0 0 4 connected 5461-10922
It seems that all of them started as masters? Is there something wrong in my configs?
Thank you.
P.S. When I try the same configs and start all the instances on one server, everything works fine.
The problem in my case was that I was creating the cluster using the localhost address:
./redis-trib.rb create --replicas 1 127.0.0.1:7777 127.0.0.1:7778 127.0.0.1:7779 192.168.56.41:7777 192.168.56.41:7778 192.168.56.41:7779
To fix it, 127.0.0.1 should be replaced with the IP address of the local node, i.e.
./redis-trib.rb create --replicas 1 192.168.56.40:7777 192.168.56.40:7778 192.168.56.40:7779 192.168.56.41:7777 192.168.56.41:7778 192.168.56.41:7779
Please also check ports 17777, 17778 and 17779; the cluster needs those ports (client port + 10000) for node-to-node communication.
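If the servers run firewalld, opening both the client ports and the cluster bus ports might look roughly like this (a sketch; adjust the port range and the tool to whatever firewall you actually use):
# open the client ports and the cluster bus ports (client port + 10000) on each server
firewall-cmd --permanent --add-port=7777-7779/tcp
firewall-cmd --permanent --add-port=17777-17779/tcp
firewall-cmd --reload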
This is the benchmark result:
C:\Users\LG520\Desktop> redisbench -cluster=true -a 192.168.1.61:6380,192.168.1.61:6381,192.168.1.61:6382 -c 10 -n 100 -d 1000
2020/12/22 14:43:50 Go...
2020/12/22 14:43:50 # BENCHMARK CLUSTER (192.168.1.61:6380,192.168.1.61:6381,192.168.1.61:6382, db:0)
2020/12/22 14:43:50 * Clients Number: 10, Testing Times: 100, Data Size(B): 1000
2020/12/22 14:43:50 * Total Times: 1000, Total Size(B): 1000000
2020/12/22 14:46:13 # BENCHMARK DONE
2020/12/22 14:46:13 * TIMES: 1000, DUR(s): 143.547, TPS(Hz): 6
I built a Redis cluster, but the redisbench result is too low.
This is the cluster info:
[root@SZFT-LINUX chen]# ./redis-6.0.6/src/redis-cli -c -p 6380
127.0.0.1:6380> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:2616
cluster_stats_messages_pong_sent:3260
cluster_stats_messages_sent:5876
cluster_stats_messages_ping_received:3255
cluster_stats_messages_pong_received:2616
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:5876
127.0.0.1:6380>
127.0.0.1:6380> cluster nodes
c12b3dbe5dbfe23a8bf0c180cbcdd6aaec98c4aa 192.168.1.61:6382@16382 master - 0 1608621050071 3 connected 10923-16383
3adf356189ddc44547b662b4f5f05f85f2cf016b 192.168.1.61:6385@16385 slave 8af6ca7a04368dd2cd7f40b76f3ac43fc0741812 0 1608621048057 2 connected
4a92459e43eff69aa6a0f603e13310b1a679b98d 192.168.1.61:6380@16380 myself,master - 0 1608621049000 1 connected 0-5460
72c20f23d93d87f75d78df4fa19e7cfa7a6f392e 192.168.1.61:6383@16383 slave c12b3dbe5dbfe23a8bf0c180cbcdd6aaec98c4aa 0 1608621048000 3 connected
fd16d8cd8226d3e6ee8854f642f82159c97eaa48 192.168.1.61:6384@16384 slave 4a92459e43eff69aa6a0f603e13310b1a679b98d 0 1608621047049 1 connected
8af6ca7a04368dd2cd7f40b76f3ac43fc0741812 192.168.1.61:6381@16381 master - 0 1608621049060 2 connected 5461-10922
127.0.0.1:6380>
Redis version: 6.0.6
I first built it in Docker (I thought the low TPS was due to Docker); now I have built it on CentOS 7 and got the same result.
This is one of the redis.conf files (6 in total):
port 6383
#dbfilename dump.rdb
#save 300 10
save ""
appendonly yes
appendfilename appendonly.aof
# appendfsync always
appendfsync everysec
# appendfsync no
dir /home/chen/redis-hd/node6383/data
maxmemory 2G
logfile /home/chen/redis-hd/node6383/data/redis.log
protected-mode no
maxmemory-policy allkeys-lru
# bind 127.0.0.1
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
cluster-slave-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage yes
cluster-announce-ip 192.168.1.61
no-appendfsync-on-rewrite yes
I tested a single Redis node and the TPS was 2000.
Why is the Redis cluster's TPS lower than a single node's?
Any help would be greatly appreciated!
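For reference, the redis-benchmark tool bundled with Redis 6 also supports cluster mode, so the same comparison can be reproduced with it; a rough sketch, using the addresses above:
# single node (pick one of the masters)
redis-benchmark -h 192.168.1.61 -p 6380 -c 10 -n 100000 -d 1000 -t set,get
# whole cluster (--cluster is available in the Redis 6 redis-benchmark)
redis-benchmark -h 192.168.1.61 -p 6380 --cluster -c 10 -n 100000 -d 1000 -t set,get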
I have two VMs in Hyper-V, both on the same virtual switch (internal) and on the same subnet. I am trying to set up one of them as a DHCP and TFTP server for PXE boot. With a Gen 1 machine it works fine with pxelinux; a Gen 2 machine with UEFI unfortunately does not work.
DHCP & TFTP Server
IP 192.168.1.2
VLAN identification is disabled
DHCP - ISC DHCP Server running in a Docker container with the "host" network type, with the following configuration:
set vendorclass = option vendor-class-identifier;
option pxe-system-type code 93 = unsigned integer 16;
set pxetype = option pxe-system-type;
authoritative;
default-lease-time 7200;
max-lease-time 7200;
option tftp-server-name "192.168.1.2";
option bootfile-name "efi/core.efi";
subnet 192.168.1.0 netmask 255.255.255.0 {
interface "eth0:0";
option routers 192.168.1.1;
option subnet-mask 255.255.255.0;
range 192.168.1.100 192.168.1.150;
option broadcast-address 192.168.1.255;
option domain-name-servers 8.8.8.8, 8.8.4.4;
option domain-name "ad.lholota.net";
option domain-search "ad.lholota.net";
if substring(vendorclass, 0, 9)="PXEClient" {
if pxetype=00:06 or pxetype=00:07 {
filename "efi/core.efi";
} else {
filename "pxelinux/pxelinux.0";
}
}
next-server 192.168.1.2;
}
TFTP - tftp-hpa running in a Docker container on a "host" type network. I can download the EFI files manually through a standard TFTP client.
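For example, a quick manual check with an interactive TFTP client session looks like this (a sketch):
tftp 192.168.1.2
tftp> get efi/core.efi
tftp> quit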
Booting machine
HyperV Gen2
No virtual HDD or DVD
Firmware tab has only one item in the boot sequence - network
Secure boot is disabled
VLAN identification is disabled
Network adapter pointing into the same internal switch as the first VM
Enable virtual machine queue - checked
Enable IPsec task offloading - checked, maximum number: 512
MAC Address dynamic
Enable DHCP guard - NOT checked
Enable router advertisement guard - NOT checked
Protected network - NOT checked
Mirroring mode - None
Enable device naming - NOT checked
The trouble is that the machine doesn't even get to the TFTP server because it doesn't finish the DHCP Discover-Offer-Request-Ack flow. It gets stuck on the offer, as shown in the dhcpdump below; the booting machine never sends the request message. Funnily enough, a BIOS-based Gen 1 Hyper-V machine boots without any issue, so the DHCP flow works there.
Can you please give me a hint of what might be wrong?
TIME: 2018-07-11 19:49:37.641
IP: 0.0.0.0 (0:15:5d:0:50:d0) > 255.255.255.255 (ff:ff:ff:ff:ff:ff)
OP: 1 (BOOTPREQUEST)
HTYPE: 1 (Ethernet)
HLEN: 6
HOPS: 0
XID: 8bf1c250
SECS: 0
FLAGS: 7f80
CIADDR: 0.0.0.0
YIADDR: 0.0.0.0
SIADDR: 0.0.0.0
GIADDR: 0.0.0.0
CHADDR: 00:15:5d:00:50:d0:00:00:00:00:00:00:00:00:00:00
SNAME: .
FNAME: .
OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER)
OPTION: 57 ( 2) Maximum DHCP message size 1472
OPTION: 55 ( 35) Parameter Request List 1 (Subnet mask)
2 (Time offset)
3 (Routers)
4 (Time server)
5 (Name server)
6 (DNS server)
12 (Host name)
13 (Boot file size)
15 (Domainname)
17 (Root path)
18 (Extensions path)
22 (Maximum datagram reassembly size)
23 (Default IP TTL)
28 (Broadcast address)
40 (NIS domain)
41 (NIS servers)
42 (NTP servers)
43 (Vendor specific info)
50 (Request IP address)
51 (IP address leasetime)
54 (Server identifier)
58 (T1)
59 (T2)
60 (Vendor class identifier)
66 (TFTP server name)
67 (Bootfile name)
97 (UUID/GUID)
128 (???)
129 (???)
130 (???)
131 (???)
132 (???)
133 (???)
134 (???)
135 (???)
OPTION: 97 ( 17) UUID/GUID 008c0c7ab81331a0 ...z..1.
4297445b2e41610e B.D[.Aa.
a8 .
OPTION: 94 ( 3) Client NDI 010300 ...
OPTION: 93 ( 2) Client System 0007 ..
OPTION: 60 ( 32) Vendor class identifier PXEClient:Arch:00007:UNDI:003000
---------------------------------------------------------------------------
TIME: 2018-07-11 19:49:37.641
IP: 0.0.0.0 (0:15:5d:0:50:12) > 255.255.255.255 (ff:ff:ff:ff:ff:ff)
OP: 2 (BOOTPREPLY)
HTYPE: 1 (Ethernet)
HLEN: 6
HOPS: 0
XID: 8bf1c250
SECS: 0
FLAGS: 7f80
CIADDR: 0.0.0.0
YIADDR: 192.168.1.105
SIADDR: 192.168.1.2
GIADDR: 0.0.0.0
CHADDR: 00:15:5d:00:50:d0:00:00:00:00:00:00:00:00:00:00
SNAME: .
FNAME: efi/core.efi.
OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER)
OPTION: 51 ( 4) IP address leasetime 7200 (2h)
OPTION: 1 ( 4) Subnet mask 255.255.255.0
OPTION: 3 ( 4) Routers 192.168.1.1
OPTION: 6 ( 8) DNS server 8.8.8.8,8.8.4.4
OPTION: 15 ( 14) Domainname ad.lholota.net
OPTION: 28 ( 4) Broadcast address 192.168.1.255
I have had what I believe is the same issue when booting Hyper-V virtual machines on Windows 10 2004 (19041.685): Gen 1 works, Gen 2 times out without ever asking for the boot file.
I strongly suspect this is an issue with the Gen 2 UEFI PXE implementation, because as soon as I have at least two entries to choose from in the PXE boot menu, it requests the files and downloads them as expected.
I run dnsmasq for TFTP and DHCP, and my config file below works if and only if at least one of the last two rows is uncommented (pxe-service=x86-64_EFI and pxe-service=7 are equivalent).
config context: https://linuxconfig.org/how-to-configure-a-raspberry-pi-as-a-pxe-boot-server
# /etc/dnsmasq.d/03-tftpboot.conf
enable-tftp
tftp-lowercase
tftp-root=/mnt/data/netboot
pxe-prompt="Choose:"
pxe-service=x86PC,"PXELINUX (BIOS)",bios/pxelinux.0
pxe-service=x86PC,"WinPE (BIOS)",boot/pxeboot.n12
pxe-service=x86-64_EFI,"PXELINUX (EFI)",efi64/syslinux.efi
pxe-service=x86-64_EFI,"winpe (EFI)",boot/wdsmgfw.efi
#pxe-service=7,"PXELINUX (EFI-7)",efi64/syslinux.efi
I think I am experiencing the same problem when using the Digital Rebar provisioner. It works great on Gen 1 but not on Gen 2. I have followed the same configuration as well.
Looking at the digital rebar code it seems like it should work but does not: https://github.com/digitalrebar/provision/blob/8269e1c7ff12a82854c19eccd114d064e2278211/midlayer/pxe.go#L252
I think this could be related:
https://wiki.fogproject.org/wiki/index.php/BIOS_and_UEFI_Co-Existence
https://serverfault.com/questions/739138/hyper-v-2016-gen2-vm-pxe-dhcp-timeout-wireshark-dhcp-discover-offer
The Redis cluster is created with internal IP addresses.
M: 24ff344338e4abb4f7f2e888ee9d57843fc46e62 10.0.9.19:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 9aa005c2d914db12a394af0cb8a0d8e218730099 10.0.9.15:6379
slots: (0 slots) slave
replicates f67bbf56d98c2bff9eba343356e2b52bd5e59b12
S: aedd33304e59cbe7091fb36befdb230f3956f03e 10.0.9.16:6379
slots: (0 slots) slave
replicates 9cd64e70f9fd7fffb44b79186e09e3872ea3ebb4
M: f67bbf56d98c2bff9eba343356e2b52bd5e59b12 10.0.9.20:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 2af640e7072b255786b47337e7ca171e0506f5f9 10.0.9.14:6379
slots: (0 slots) slave
replicates 24ff344338e4abb4f7f2e888ee9d57843fc46e62
M: 9cd64e70f9fd7fffb44b79186e09e3872ea3ebb4 10.0.9.18:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
There is only one public IP for external connections, 54.174.xxx.xxx.
After connecting to the cluster, there is a problem with redirection.
52.71.xxx.xxx:6379> lrange mylist 0 -1
-> Redirected to slot [5282] located at 10.0.9.18:6379
Could not connect to Redis at 10.0.9.18:6379: Connection timed out
Could not connect to Redis at 10.0.9.18:6379: Connection timed out
(254.54s)
not connected>
It looks like Redis is asking the client to connect to another node for the data. Is it possible to let the currently connected node fetch the data on the client's behalf? Or is there another solution?
At least I know that Cassandra will fetch data from another node and return it instead of redirecting.
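For context on the behaviour shown above: a Redis Cluster node does not proxy the request itself; without redis-cli's -c option it simply answers with a MOVED error that points at the (here unreachable) internal address, and it is up to the client to reconnect there. A sketch of the same request without -c:
redis-cli -h 54.174.xxx.xxx -p 6379 lrange mylist 0 -1
(error) MOVED 5282 10.0.9.18:6379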
I am trying to run Spring XD in distributed mode on 2 Ubuntu VMs. My goal is to deploy a module on one of the VMs running Spring XD and make it visible to the container on the other VM (hostname: container1). On the main VM (hostname: xd-admin) I am running redis-sentinel with this configuration in servers.yml:
spring:
  redis:
    port: 6379
    host: 127.0.0.1
    sentinel:
      master: 127.0.0.1:26379
      nodes: 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
In container1 I have the following in servers.yml
spring:
  redis:
    port: 6379
    host: 127.0.0.1
    sentinel:
      master: xd-admin:26379
      nodes: 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
zk:
  namespace: xd
  client:
    connect: xd-admin:2181
    sessionTimeout: 60000
    connectionTimeout: 30000
    initialRetryWait: 1000
    retryMaxAttempts: 3
When I run xd-container on the xd-admin host I get
Caused by: redis.clients.jedis.exceptions.JedisException: Can connect to sentinel, but 127.0.0.1:26379 seems to be not monitored...
at redis.clients.jedis.JedisSentinelPool.initSentinels(JedisSentinelPool.java:150) ~[jedis-2.6.2.jar:na]
at redis.clients.jedis.JedisSentinelPool.<init>(JedisSentinelPool.java:69) ~[jedis-2.6.2.jar:na]
at redis.clients.jedis.JedisSentinelPool.<init>(JedisSentinelPool.java:47) ~[jedis-2.6.2.jar:na]
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.createRedisSentinelPool(JedisConnectionFactory.java:215) ~[spring-data-redis-1.5.0.RELEASE.jar:1.5.0.RELEASE]
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.createPool(JedisConnectionFactory.java:202) ~[spring-data-redis-1.5.0.RELEASE.jar:1.5.0.RELEASE]
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.afterPropertiesSet(JedisConnectionFactory.java:195) ~[spring-data-redis-1.5.0.RELEASE.jar:1.5.0.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1633) ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1570) ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
... 30 common frames omitted
When I run xd-container on the container1 host I get
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: All sentinels down, cannot determine where is 192.168.33.10:26379 master is running...
at redis.clients.jedis.JedisSentinelPool.initSentinels(JedisSentinelPool.java:153)
at redis.clients.jedis.JedisSentinelPool.<init>(JedisSentinelPool.java:69)
at redis.clients.jedis.JedisSentinelPool.<init>(JedisSentinelPool.java:47)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.createRedisSentinelPool(JedisConnectionFactory.java:215)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.createPool(JedisConnectionFactory.java:202)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.afterPropertiesSet(JedisConnectionFactory.java:195)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1633)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1570)
... 31 more
I have ZooKeeper and RabbitMQ running on xd-admin. I have Redis running on container1. I know xd-admin is accessible from container1 because I have apache2 installed on xd-admin and I receive a response when I run curl xd-admin from container1. How do I configure Redis and/or my servers.yml properly so the containers can communicate?
In my servers.yml files I commented out redis.sentinel and its children, and the exception went away.
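In other words, the Redis section ends up looking roughly like this (a sketch; the host value is an assumption and should point at wherever the Redis server actually runs):
spring:
  redis:
    port: 6379
    host: container1   # assumption: Redis runs on container1
#    sentinel:
#      master: xd-admin:26379
#      nodes: 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381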
I want to set up a Redis cluster with 6 nodes (node1, node2, node3, node4, node5, node6), with 3 masters and 3 slaves. Each node has this configuration file:
redis.conf
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 10000
appendonly yes
I get an error when creating the cluster. Create command:
redis-trib.rb create --replicas 1 node1:6379 node2:6379 node3:6379 node4:6379 node5:6379 node6:6379
Error:
>>> Creating cluster
Connecting to node node1:6379: OK
Connecting to node node2:6379: OK
Connecting to node node3:6379: OK
Connecting to node node4:6379: OK
Connecting to node node5:6379: OK
Connecting to node node6:6379: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
node6:6379
node5:6379
node4:6379
Adding replica node3:6379 to node6:6379
Adding replica node2:6379 to node5:6379
Adding replica node1:6379 to node4:6379
S: 1f13819038ba983bb8355f54cb8cec19d2b29e01 node1:6379
replicates 534745088c8b403b81d7e48a22d2e317fb420a38
S: 711461862393664b46d73db6561631f40de29561 node2:6379
replicates f503fe6fd52c73e446267795111ae6ea95495829
S: 204fa4e23b08e2c6ad80b0aca271fc380bc6885d node3:6379
replicates fe6a8e88afdb2796c09fcc873b37ba90c2ba6d79
M: 534745088c8b403b81d7e48a22d2e317fb420a38 node4:6379
slots:10923-16383 (5461 slots) master
M: f503fe6fd52c73e446267795111ae6ea95495829 node5:6379
slots:5461-10922 (5462 slots) master
M: fe6a8e88afdb2796c09fcc873b37ba90c2ba6d79 node6:6379
slots:0-5460,6918 (5462 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
/var/lib/gems/1.8/gems/redis-3.2.2/lib/redis/client.rb:114:in `call': ERR Slot 16011 is already busy (Redis::CommandError)
from /var/lib/gems/1.8/gems/redis-3.2.2/lib/redis.rb:2646:in `method_missing'
from /var/lib/gems/1.8/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'
from /usr/lib/ruby/1.8/monitor.rb:242:in `mon_synchronize'
from /var/lib/gems/1.8/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'
from /var/lib/gems/1.8/gems/redis-3.2.2/lib/redis.rb:2645:in `method_missing'
from /home/hadoop/projects/ramin/redis-3.0.5/src/redis-trib.rb:205:in `flush_node_config'
from /home/hadoop/projects/ramin/redis-3.0.5/src/redis-trib.rb:667:in `flush_nodes_config'
from /home/hadoop/projects/ramin/redis-3.0.5/src/redis-trib.rb:666:in `each'
from /home/hadoop/projects/ramin/redis-3.0.5/src/redis-trib.rb:666:in `flush_nodes_config'
from /home/hadoop/projects/ramin/redis-3.0.5/src/redis-trib.rb:1007:in `create_cluster_cmd'
from /home/hadoop/projects/ramin/redis-3.0.5/src/redis-trib.rb:1388:in `send'
from /home/hadoop/projects/ramin/redis-3.0.5/src/redis-trib.rb:1388
I also tried the following, but got the same error message:
using IP addresses instead of hostnames
removing nodes.conf on each node
As @thepirat000 said, I ran FLUSHALL and then CLUSTER RESET SOFT on all nodes; I also changed the hostnames to IP addresses.
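For reference, that reset can be applied to every node with something like the following sketch (addresses are the ones from the create command; use the IPs instead if you have switched away from hostnames):
# on each of the 6 nodes: drop any data and forget stale cluster state
for h in node1 node2 node3 node4 node5 node6; do
  redis-cli -h "$h" -p 6379 flushall
  redis-cli -h "$h" -p 6379 cluster reset soft
done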