What is the difference between known-replica and known-slave configuration parameters?
I have updated Redis from version 3.x to 6.x, and now I have both in the configuration.
As described here, Redis decided to deprecate use of the word "slave" and replace it with alternative terms whenever it was possible to do so without harming backwards compatibility.
So these terms are synonyms, as you can see in the source code.
When a configuration file contains multiple occurrences of the same directive, as yours now does, the last one processed is what gets used.
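For illustration, here is what a sentinel.conf fragment with both spellings might look like after the upgrade (master name, IP, and port are placeholders):
# old spelling, kept for backwards compatibility
sentinel known-slave mymaster 10.0.0.2 6379
# new spelling, parsed identically
sentinel known-replica mymaster 10.0.0.2 6379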
Related
I have a Nimbus + Storm cluster using Zookeeper, and I wish to move my cluster and point it at a new Zookeeper. Do you know if this is possible? Can I keep all the information from the old Zookeeper and save it in the new one? Is it possible to do this without downtime?
I have looked on the internet for this procedure but have not found much.
Would it be as simple as changing the storm.yml file on both the master and worker nodes? Do I need a restart afterwards?
# storm.zookeeper.servers:
# - "server1"
# - "server2"
If you just change storm.yml, you'd be pointing Storm at a new empty Zookeeper cluster, and it will be like you just installed Storm from scratch. More likely, you want to grow your Zookeeper cluster to include your new machines, then update storm.yml to point at the new machines, then shrink the cluster to exclude the machines you want to move away from. That way, your Zookeeper quorum is preserved even though you've moved to other physical machines.
This is easier to do on Zookeeper 3.5 with dynamic reconfiguration (http://zookeeper.apache.org/doc/r3.5.5/zookeeperReconfig.html). I'm unsure whether Storm will run on Zookeeper 3.5, but you may consider investigating whether you can upgrade to 3.5 before growing/shrinking the cluster.
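On 3.5, a rough sketch of growing and then shrinking the ensemble from zkCli.sh (server IDs, hostnames, and ports here are placeholders):
# add the new members to the ensemble
reconfig -add server.4=zk-new1:2888:3888;2181
reconfig -add server.5=zk-new2:2888:3888;2181
# once they are synced and serving, drop the old members by ID
reconfig -remove 1,2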
Otherwise you will have to do a rolling restart to add the new Zookeeper nodes, then do another one to remove the old machines once the cluster has stabilized.
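In that static-configuration case, you update zoo.cfg on every node before each rolling restart, roughly like this (hostnames are placeholders):
server.1=zk-old1:2888:3888
server.2=zk-old2:2888:3888
server.3=zk-old3:2888:3888
# new members added for the first rolling restart; the old entries are
# removed in the second one, after the cluster has stabilized
server.4=zk-new1:2888:3888
server.5=zk-new2:2888:3888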
Let me suggest a hack here. This is a script provided by Microsoft for migration on HDInsight clusters, but you can adapt it to your needs.
The script can be downloaded from https://github.com/hdinsight/hdinsight-storm-examples/tree/master/tools/zkdatatool-1.0 and you can read more about it here: https://blogs.msdn.microsoft.com/azuredatalake/2017/02/24/restarting-storm-eventhub/
I have used it in the past when I had to migrate some stuff between PaaS clusters, and I can confirm it works OK!
I have three Redis nodes being watched by three sentinels. I've searched around and the documentation seems to be unclear as to how best to upgrade a configuration of this type. I'm currently on version 3.0.6 and I want to upgrade to the latest, 5.0.5. I have a few questions on the procedure around this.
Is it OK to upgrade across two major versions? I did this in our staging environment and it seemed to be fine. We use pretty basic Redis functionality and there are no breaking changes between the versions.
Does order matter? Should I upgrade, say, all the sentinels first and then the Redis nodes, or should the sentinel plane come last, after verifying the Redis plane? Should I do one sentinel/Redis node at a time?
Any advice or experience on this would be appreciated.
I am surprised by the lack of response to this, but I understand that the subject kind of straddles Stack Overflow and other Stack Exchange sites. I'm also surprised at the lack of documentation I was able to find on the subject.
I did some extensive testing in a staging environment and then proceeded to production, and the procedure I followed seemed to work for the most part:
Upgrading from 3.0.6 to 5.0.5 in our case worked without a hitch. As I said in the original post, we use the basics in Redis, and not much has changed from the client perspective.
I went forward upgrading in this order:
First the two sentinel peers, then the sentinel currently in the leader state.
Each of the redis nodes listed as slaves (now known as replicas).
After each node is upgraded, it will do a full resync, copying the dump.rdb from the master.
A sync from a 3.x master to a 5.x replica works, but once a 5.x node is the master, a 3.x node cannot sync from it, so once you've failed over to an upgraded node, you can't go back to the earlier version.
Finally, use the sentinels to fail over to an upgraded node as master, then upgrade the former master; the commands are sketched below.
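For reference, the failover and the follow-up check can be driven with redis-cli (the master name mymaster and ports 26379/6379 are the conventional defaults; adjust for your setup):
# ask any sentinel to promote one of the upgraded replicas
redis-cli -p 26379 SENTINEL failover mymaster
# then verify the roles on each node after the election
redis-cli -p 6379 INFO replication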
Hopefully someone might find this useful going forward.
I have a basic question about Redis connection parameters from the CacheManager.NET perspective. In the case where we have a Redis setup with one master and two slaves, plus a quorum of sentinel processes, should we provide the IP:PORT combinations pointing to the sentinel processes, or to the actual Redis server processes?
As suggested in https://seanmcgary.com/posts/how-to-build-a-fault-tolerant-redis-cluster-with-sentinel, it is advisable to ask the sentinel process about the actual master before making the connection. That is in line with Jedis, which provides JedisSentinelPool to do the initial lookup.
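For reference, that lookup is a single command against any sentinel (port 26379 and the master name mymaster are the conventional defaults):
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster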
Essentially, what we want is for reads to be load-balanced (via CacheManager.NET) and for writes to go to the current master node of the cluster.
CacheManager relies on StackExchange.Redis for the Redis implementation. Therefore, whatever this client library supports, CacheManager does, too.
Unfortunately, sentinel support is not implemented; there have been issues on GitHub about that for years.
That being said, I did some testing with a multi master/slave + sentinel setup. I added all the non-sentinel nodes as endpoints to the multiplexer configuration, and it kind of works because the Redis client knows how to handle multiple master/slave instances.
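As an illustration, that endpoint list is just the plain StackExchange.Redis configuration string, something like the following (hostnames are placeholders; abortConnect=false keeps the multiplexer retrying while a failover is in progress):
redis-master:6379,redis-slave1:6379,redis-slave2:6379,abortConnect=false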
In the process of switching to another master, the client might throw exceptions saying that it cannot write to a read-only slave and such. CacheManager might retry those calls, and after a short amount of time, when the leader election is done, the call should go through.
But this is not 100% stable and I would not put that in production, as "official" support is still missing...
As an alternative to running with sentinels, you could run Redis in Cluster mode, which should just work, or behind a proxy that deals with all the master/slave handling.
Twemproxy is one alternative.
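For illustration, a minimal nutcracker.yml for Twemproxy sharding keys across a Redis pool might look like this (pool name, listen address, and server list are placeholders):
redis_pool:
  listen: 127.0.0.1:22121
  redis: true
  hash: fnv1a_64
  distribution: ketama
  servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1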
I still have to add support for Twemproxy to CacheManager, as many features are simply not available, like Lua scripting, getting a list of servers, or flush commands... This will come in 1.0.2.
Hope that helps.
I've currently got ActiveMQ 5.12.1 running and want to upgrade to ActiveMQ 5.14.1. I can't seem to find any documentation on upgrading from one version to another. Is it as simple as copying the files over? I don't want to lose any of my queues or subscribers.
The important data is in the persistence-store, i.e. typically KahaDB.
You can simply install the new ActiveMQ version and copy the store across (or simply point the new broker at it if it's on a shared disk or whatnot), and all messages should remain.
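A minimal sketch of that copy, assuming default install locations and the default KahaDB directory (all paths are placeholders for your layout):
# stop the old broker so the store is in a consistent state
/opt/activemq-5.12.1/bin/activemq stop
# copy the persistence store into the new installation
cp -a /opt/activemq-5.12.1/data/kahadb /opt/activemq-5.14.1/data/
# start the new broker; it replays the journal and recovers the messages
/opt/activemq-5.14.1/bin/activemq start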
Although the configuration is probably just fine to copy, I would make it a habit to look through the release notes of new features and improvements and see if there is something you can use. This is even more important if you use advanced AMQ features such as network of brokers or newer features where there are more updates, such as AMQP, MQTT and WebSockets.
If there is some major thing that needs attention from one version to another, there will be a note in the release-notes.
I just started evaluating Redis. I am using Redis 2.8.19, which is the latest stable release. Redis 2.9 is still unstable and Redis 3.0 is only available as a developer preview (not recommended for production). I was trying to set up a Redis cluster, and when I changed my redis.conf and appended
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
and started my Redis server by
src/redis-server ./redis.conf
it gave me an error as follows
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 2
>>> 'cluster-enabled yes'
Bad directive or wrong number of arguments
I googled the error and got to know that my version (2.8.19) does not support clustering. I was still unable to find any such specification in the Redis docs. My question is simple: does Redis 2.8.19 support Redis Cluster configuration, or do I have to upgrade to Redis 2.9 or Redis 3.0? I am evaluating Redis because I need to deploy it in production. Please guide.
Redis Cluster support is only for versions >= 3.0.0. Redis 3.0.0 will be released as a stable version in a matter of days; it's a good idea to use it if you want to use Cluster. The cluster support is considered to be stable, but for it to be considered mature we want to see adoption. By the way, there is already at least one very large site using it in production. Currently the most sane thing to do if you need Redis Cluster is to test it for your use case, and if it looks great, use it.
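If you want to test it, a minimal local cluster can be created with the redis-trib.rb script that ships with Redis 3.0 (six instances on placeholder ports 7000-7005, each already started with cluster-enabled yes):
# one replica per master, three shards in total
src/redis-trib.rb create --replicas 1 \
  127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
  127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005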
Redis Cluster is supported only in Redis 3.0+ (which is now stable). I have written a simple API called "Simple Redis Cluster Client" which can be used with Redis versions below 3.0 to run in a cluster-like mode (not precisely a cluster; it just distributes keys among Redis nodes based on the key's hashcode). You can have a look at https://github.com/prash-mi/simple-redis-cluster-client
Cluster support for Redis is only from v3 - v2.8.19 doesn't do clustering.