I am building a Node.js app that connects to Redis. I have this working with my local instance of Redis. Now I am using ioredis to connect from my Node.js app to my Redis cluster in Kubernetes on AWS. Here is what I have:
const Redis = require("ioredis");

this.redis = new Redis({
  port: 6379,  // Redis port
  host: rhost, // Redis host
  password: password
});

this.redis.on('connect', () => {
  console.log('connected to redis')
})
It seems like I successfully connect to my cluster, as the message "connected to redis" prints in the log. However, every time I try to use my redis object I get a MOVED error:
UnhandledPromiseRejectionWarning: ReplyError: MOVED 5011 <ip address>:6379
at parseError (/node_modules/ioredis/node_modules/redis-parser/lib/parser.js:179:12)
at parseType (/node_modules/ioredis/node_modules/redis-parser/lib/parser.js:302:14)
The connection works on my local machine, but in AWS it does not. I tried swapping in the Redis.Cluster object instead of Redis, but after I deploy the app, the application hangs and the connect event never fires; the close and reconnecting events seem to loop infinitely.
From my understanding, this is a problem with redirecting between nodes in the cluster. Perhaps it's a problem with the master/slave configuration. Is the error a configuration issue in AWS? Do I need to use the Redis.Cluster object instead of the plain Redis instance? What is the best way to fix the MOVED error?
The MOVED error is caused by using the plain Redis client against the configuration endpoint of ElastiCache with Redis Cluster Mode enabled. It can be solved by using new Redis.Cluster([{ host: <cfg-endpoint> }]), which is aware of the multiple shards.
Otherwise, disable Redis Cluster Mode and use the primary (master) DNS name of your ElastiCache cluster. Even without Redis Cluster Mode there is still a failover strategy (AWS will replace the primary node when it fails), and you can deploy nodes into multiple Availability Zones.
Also, when you have encryption enabled you will need to connect to AWS ElastiCache with TLS enabled (the tls: {} option in ioredis).
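For example, a minimal sketch of a cluster-mode connection with TLS; the endpoint and password variable are placeholders, and the dnsLookup workaround is the one the ioredis docs suggest for ElastiCache with TLS:

const Redis = require("ioredis");

const cluster = new Redis.Cluster(
  [{ host: "clustercfg.my-cluster.abc123.use1.cache.amazonaws.com", port: 6379 }],
  {
    // Pass node addresses through unchanged so they keep matching
    // the TLS certificates (see the ioredis notes on ElastiCache with TLS).
    dnsLookup: (address, callback) => callback(null, address),
    redisOptions: {
      password: process.env.REDIS_PASSWORD,
      tls: {} // required when in-transit encryption is enabled
    }
  }
);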
https://aws.amazon.com/blogs/database/work-with-cluster-mode-on-amazon-elasticache-for-redis/
You can read more about the MOVED error here:
https://redis.io/topics/cluster-spec#moved-redirection
"MOVED Redirection
A Redis client is free to send queries to every node in the cluster, including slave nodes. The node will analyze the query, and if it is acceptable (that is, only a single key is mentioned in the query, or the multiple keys mentioned are all to the same hash slot) it will lookup what node is responsible for the hash slot where the key or keys belong.
If the hash slot is served by the node, the query is simply processed, otherwise the node will check its internal hash slot to node map, and will reply to the client with a MOVED error, like in the following example:
GET x
-MOVED 3999 127.0.0.1:6381
The error includes the hash slot of the key (3999) and the ip:port of the instance that can serve the query. The client needs to reissue the query to the specified node's IP address and port. Note that even if the client waits a long time before reissuing the query, and in the meantime the cluster configuration changed, the destination node will reply again with a MOVED error if the hash slot 3999 is now served by another node. The same happens if the contacted node had no updated information."
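To see the redirection concretely, here is roughly what it looks like from redis-cli (the slot number, port, and value are illustrative). Without -c the client surfaces the MOVED error; with -c it follows the redirect itself:

$ redis-cli -p 6379 GET x
(error) MOVED 3999 127.0.0.1:6381
$ redis-cli -c -p 6379 GET x
-> Redirected to slot [3999] located at 127.0.0.1:6381
"some-value"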
ioredis supports Redis Cluster, so you should create the cluster client like so:
new Redis.Cluster([{
  host: process.env.REDIS_ENDPOINT, // configuration endpoint address from ElastiCache
  port: process.env.REDIS_PORT
}]);
ioredis will take care of MOVED redirection errors.
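As a quick sanity check, a usage sketch under the same assumptions (the key names are arbitrary): keys that hash to different slots now work without surfacing MOVED errors.

const Redis = require("ioredis");

const cluster = new Redis.Cluster([
  { host: process.env.REDIS_ENDPOINT, port: process.env.REDIS_PORT }
]);

(async () => {
  // "user:1" and "order:42" will usually land in different hash slots;
  // ioredis routes each command to the owning node and follows MOVED replies.
  await cluster.set("user:1", "alice");
  await cluster.set("order:42", "pending");
  console.log(await cluster.get("user:1")); // "alice"
})();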
const Redis = require("ioredis");

const connection = new Redis.Cluster(
  [{ host: redisHost, port: redisPort }],
  {
    scaleReads: "all",
    redisOptions: {
      username: redisUserName,
      password: redisPassword,
      enableAutoPipelining: true,
    },
  }
);
The above configuration solves my issue, and I'm using ioredis instead of 'redis'.
If this doesn't help:
const Redis = require('ioredis');
const Queue = require('bull'); // assuming the Bull queue library, which these options match

const ioCluster = new Redis.Cluster([redisConfig.redis]);

const slackQueue = new Queue('slack notifications', {
  prefix: '{slack}',
  createClient: () => ioCluster
});

const emailQueue = new Queue('email notifications', {
  prefix: '{email}',
  createClient: () => ioCluster
});
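The {slack} and {email} prefixes are Redis hash tags: every key belonging to one queue then hashes to the same slot, which matters because the queue library issues multi-key commands that a cluster rejects across slots (CROSSSLOT errors). A minimal sketch to confirm this, using the CLUSTER KEYSLOT command (the key names are illustrative):

(async () => {
  // Keys sharing a hash tag map to the same slot, so multi-key
  // operations for one queue never cross shard boundaries.
  const slotA = await ioCluster.cluster('KEYSLOT', '{slack}:wait');
  const slotB = await ioCluster.cluster('KEYSLOT', '{slack}:active');
  console.log(slotA === slotB); // true
})();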
I would go without ioredis, or try to downgrade the Redis engine to 4.x.
Related
We want to set up Redis 6.2 clustering behind an LB. There are only master nodes and no Redis Sentinel is being used. Each cluster-enabled Redis instance runs on a different host with the same configuration (e.g. all of them are configured with port 6379). Is this possible with some port configuration on the LB, such that a unique port on the LB maps to a unique_ip:6379?
Our idea is to use a cluster-aware Redis client like Lettuce's RedisClusterClient, which would issue CLUSTER NODES/SLOTS commands and react to MOVED/ASK redirection. It would also take care of splitting a pipeline into separate connections based on the slot for each command.
It seems like this is not possible to achieve if the same port is used on all Redis hosts. Using https://docs.redis.com/latest/rs/networking/cluster-lba-setup/ as a guide, the best we could manage was to configure each Redis with a unique port, set cluster-announce-ip to the virtual IP (which points to the LB), and then manually make sure that the same port is used on the LB as on the Redis host. With this, the CLUSTER SLOTS and MOVED responses from the Redis hosts could be correctly acted upon by the client, but it complicates our setup whenever a Redis host has to be added or removed.
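For reference, the announce directives involved look like this in each node's redis.conf (the addresses and ports are illustrative); every node advertises the LB-facing address instead of its own:

# illustrative values for one node
port 7001
cluster-enabled yes
# virtual IP that points at the LB
cluster-announce-ip 10.0.0.100
# must match the LB listener mapped to this node
cluster-announce-port 7001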
You can use Route 53 if you're on AWS to achieve this.
Create an A-record setup like this:
Add all hosts (IP addresses) in Route 53 and set the TTL to a small value like 30 seconds. Route 53 will return one of these Redis IP addresses; using this endpoint, Redis clients like Lettuce or Jedis will discover all the Redis nodes.
You can use any other DNS system as well; the record type should be A.
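In zone-file terms, the setup amounts to several A records under one name with a short TTL (the name and addresses here are made up):

redis.internal.example.com.  30  IN  A  10.0.1.11
redis.internal.example.com.  30  IN  A  10.0.1.12
redis.internal.example.com.  30  IN  A  10.0.1.13

The client resolves the name, connects to whichever address comes back, and then discovers the full topology via CLUSTER NODES/SLOTS.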
So I recently installed the stable/redis-ha chart (https://github.com/helm/charts/tree/master/stable/redis-ha) on my Google Cloud based Kubernetes cluster. It was installed as a "headless service" without a ClusterIP. There are 3 pods that make up this cluster, one of which is elected master.
The chart installed with no issues, and the cluster can be accessed via redis-cli from my local PC (after port-forwarding with kubectl).
The output from the install provided me with a DNS name for the cluster. Because the service is headless, I am using the following DNS name:
port_name.port_protocol.svc.namespace.svc.cluster.local (as specified by the documentation)
When attempting to connect I get the following error:
"redis.exceptions.ConnectionError: Error -2 connecting to
port_name.port_protocol.svc.namespace.svc.cluster.local :6379. Name does not
resolve."
This is not working.
Not sure what to do here. Any help would be greatly appreciated.
The DNS name appears to be incorrect. It should be in the following format:
<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis service name is redis and the namespace is default; then it would be:
redis.default.svc.cluster.local:6379
You can also use the pod DNS name, like below:
<redis-pod-name>.<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis pod name is redis-0, the service name is redis, and the namespace is default; then it would be:
redis-0.redis.default.svc.cluster.local:6379
This assumes the service port is the same as the container port, namely 6379.
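To verify that the name resolves from inside the cluster, you can run a throwaway pod (the service name and namespace here follow the example above):

kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup redis.default.svc.cluster.local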
Not sure if this is still relevant. Just enhance the chart to support NodePort, similar to other charts (e.g. rabbitmq-ha), so that you can use any node IP and the configured node port if you want to access Redis from outside the cluster.
I have 2 rabbitmq nodes.
Their node names are: rabbit#testhost1 and rabbit#testhost2
I'd like them to auto-cluster.
On testhost1
# cat /etc/rabbitmq/rabbitmq.conf
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit#testhost1
cluster_formation.classic_config.nodes.2 = rabbit#testhost2
On testhost2
# cat /etc/rabbitmq/rabbitmq.conf
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit#testhost1
cluster_formation.classic_config.nodes.2 = rabbit#testhost2
I start rabbit#testhost1 first and then rabbit#testhost2.
The second node didn't join the cluster of the first node.
Meanwhile, node rabbit#testhost1 can join rabbit#testhost2 with the rabbitmqctl command rabbitmqctl join_cluster rabbit#testhost2, so the network between them should not be the problem.
Could you give me some idea why they can't form a cluster? Is the configuration not correct?
I have enabled the debug log, and the info related to rabbit_peer_discovery_classic_config is very sparse:
2019-01-28 16:56:47.913 [info] <0.250.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
The RabbitMQ version is 3.7.8.
Did you start the nodes without cluster config before you attempted clustering?
I had started the individual peers with the default configuration once before I added the cluster formation settings to the config file. A node started without clustering config seems to form a cluster of its own, and on further starts it will only contact its last known cluster (itself).
From https://www.rabbitmq.com/cluster-formation.html
How Peer Discovery Works
When a node starts and detects it doesn't have a previously initialised database, it will check if there's a peer discovery mechanism configured. If that's the case, it will then perform the discovery and attempt to contact each discovered peer in order. Finally, it will attempt to join the cluster of the first reachable peer.
You should be able to reset the node with rabbitmqctl reset (Warning: This removes all data from the management database, such as configured users and vhosts, and deletes all persistent messages along with clustering information.) and then use clustering config.
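On the node that fails to join, the sequence would be (destructive, per the warning above); after the reset the node starts with an empty database, so peer discovery runs again on the next start:

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app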
I have a GoogleCloud Kubernetes cluster consisting of multiple nodes and a GoogleCloud Redis Memorystore. Distributed over these nodes are replicas of a pod containing a container that needs to connect to the Redis Memorystore. I have noticed that one of the nodes is not able to connect to Redis, i.e. any container in a pod on that node cannot connect to Redis.
The Redis Memorystore has the following properties:
IP address: 10.0.6.12
Instance IP address range: 10.0.6.8/29 (10.0.6.8 - 10.0.6.15)
The node from which no connection to Redis can be made has the following properties:
Internal IP: 10.132.0.5
PodCIDR: 10.0.6.0/24 (10.0.6.0 - 10.0.6.255)
I assume this problem is caused by the overlap in IP ranges of the Memorystore and this node. Is this assumption correct?
If this is the problem I would like to change the IP range of the node.
I have tried to do this by editing spec.podCIDR in the node config:
$ kubectl edit node <node-name>
However this did not work and resulted in the error message:
# * spec.podCIDR: Forbidden: node updates may not change podCIDR except from "" to valid
# * []: Forbidden: node updates may only change labels, taints, or capacity (or configSource, if the DynamicKubeletConfig feature gate is enabled)
Is there another way to change the IP range of an existing Kubernetes node? If so, how?
Sometimes I need to temporarily increase the number of pods in a cluster. When I do this I want to prevent Kubernetes from creating a new node with the IP range 10.0.6.0/24.
Is it possible to tell the Kubernetes cluster to not create new nodes with the IP range 10.0.6.0/24? If so, how?
Thanks in advance!
Not for a node. The podCidr gets defined when you install your network overlay, in the initial steps of setting up a new cluster.
Yes for the cluster, but it's not that easy. You have to change the podCidr for the network overlay across your whole cluster. It's a tricky process that can be done, but if you are doing that you might as well deploy a new cluster. Keep in mind that some network overlays want a very specific podCidr; Calico, for example, defaults to 192.168.0.0/16.
You could:
Create a new cluster with a new CIDR and move your workloads gradually (see the sketch after this list).
Change the IP address CIDR where your Google Cloud Redis Memorystore lives.
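If you go the new-cluster route on GKE, a minimal sketch of the relevant flag (the cluster name, zone, and range are placeholders; pick a pod range that does not overlap the Memorystore range 10.0.6.8/29):

gcloud container clusters create my-new-cluster \
    --zone europe-west1-b \
    --cluster-ipv4-cidr 10.8.0.0/14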
Hope it helps!
We are implementing scale-out for our SignalR app and trying to avoid a single point of failure in our cluster. Thus, more than one Redis message bus server is required.
The problem with implementing Redis Sentinel is that upon failover the client needs to connect to a new endpoint (address), which would require the SignalR application to be restarted (the Redis endpoint is defined in Application_Start()).
Not an option.
I'm trying to understand if and how BookSleeve will help, and would like someone to explain this.
The issue is that we can only have one single endpoint defined for message bus. A hardware solution is not currently an option.
Would the SignalR application connect to a Booksleeve wrapper, which maintains the list of master/slaves?
Another option is using Azure Service Bus. However, the instructions for Wiring Up the Windows Azure Service Bus Provider indicate there are still problems with this:
Note, this web site is an ASP.NET site that runs in an Azure web role. As of 1.0alpha2 there are some bugs in AzureWebSites due to which ServiceBus Scale out scenarios do not work well. We are working on resolving this for the future.
I don't know the specifics of how SignalR does the connect, but: BookSleeve already offers some concessions towards failover nodes. In particular, the ConnectionUtils.Connect method takes a string (ideal for web.config configuration values etc.), which can include multiple redis nodes, and BookSleeve will then try to locate the most appropriate node to connect to. If the nodes mentioned in the string are regular redis nodes, it will attempt to connect to a master, otherwise falling back to a slave (optionally promoting the slave in the process). If the nodes mentioned are sentinel nodes, it will ask sentinel to nominate a server to connect to.
What BookSleeve doesn't offer at the moment is a redundant connection wrapper that will automatically reconnect. That is on the road-map, but isn't difficult to do in the calling code. I plan to add more support for this at the same time as implementing redis-cluster support.
But: all that is from a BookSleeve perspective - I can't comment on SignalR specifically.
BookSleeve 1.3.41.0 supports Redis Sentinel. The deployment configuration we use: 1 master Redis, 1 slave Redis. Each box runs a sentinel (one for the master, one for the slave). Clients connect to a sentinel first; the sentinel then redirects them to the active master.
This is how it is implemented in client code:
public class OwinStartup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new WebClientRedisScaleoutConfiguration();
        GlobalHost.DependencyResolver.UseRedis(config);
        app.MapSignalR();
    }
}

public class WebClientRedisScaleoutConfiguration : RedisScaleoutConfiguration
{
    public WebClientRedisScaleoutConfiguration()
        : base(() => getRedisConnection(), WebUIConfiguration.Default.Redis.EventKey)
    { }

    private static BookSleeve.RedisConnection _recentConnection;

    private static BookSleeve.RedisConnection getRedisConnection()
    {
        var log = new TraceTextWriter();
        // Connect via the sentinels (default sentinel port 26379);
        // serviceName selects the monitored group.
        var connection = BookSleeve.ConnectionUtils.Connect("sentinel1:26379,sentinel2:26379,serviceName=WebClient", log);
        if (connection != null)
        {
            _recentConnection = connection;
            return connection;
        }
        if (_recentConnection != null)
        {
            return _recentConnection;
        }
        // Cannot return null or throw an exception; that would break the reconnection cycle.
        return new BookSleeve.RedisConnection(string.Empty);
    }
}
How to configure Redis.
Common steps
Download Redis for Windows (http://redis.io/download)
Unzip to c:\redis
Master (only the very first Redis box; only one such config)
Create the Redis service: execute this command within the Redis directory: redis-server --service-install redis.conf --service-name redis
Start the Redis service
Ensure Redis is listening on port 6379
Slave (other boxes)
Update redis.conf: add the line slaveof masterAddr 6379, where masterAddr is the address where the master-mode Redis is running and 6379 is the default Redis port.
Create the Redis service: execute this command within the Redis directory: redis-server --service-install redis.conf --service-name redis
Start the Redis service
Ensure Redis is listening on port 6379
Sentinel (common for master and slave)
Create a file redis-sentinel.conf with this content:
port 26379
logfile "redis-sentinel1.log"
sentinel monitor WebClient masterAddr 6379 1
where masterAddr is the address where the master-mode Redis is running, 6379 is the default Redis port, and 1 is the quorum (the number of hosts that must agree before the server is considered down). WebClient is the group name; you specify it in the client code: ConnectionUtils.Connect("...,serviceName=WebClient...")
Create the Redis sentinel service: execute this command within the Redis directory: redis-server --service-install redis-sentinel.conf --service-name redis-sentinel --sentinel
Start redis-sentinel service
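Once the sentinel services are running, you can verify that they are monitoring the master (WebClient is the group name from the config above; the reply is the current master's address and port):

redis-cli -p 26379 SENTINEL get-master-addr-by-name WebClient
1) "masterAddr"
2) "6379"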