Roll back Gcloud Redis upgrade

I would like to upgrade the Redis Memorystore instance in our gcloud project because 5.x (at least on GitHub) appears to have reached its end of life. It's being used for simple key-value pairs, so I don't expect any surprises during the upgrade to 6.x. However, management is nervous and wants a way to roll back the upgrade if there are issues. Is there a way to do this? The documentation appears to say that rollback is not possible. I plan to do the usual backup and then upgrade. The instance is just the Basic Tier.

To upgrade the Redis Memorystore instance, follow the best practices from the public documentation:
We recommend exporting your instance data before running a version upgrade operation.
Note that upgrading an instance is irreversible. You cannot downgrade the Redis version of a Memorystore for Redis instance.
For Standard Tier instances, to increase the speed and reliability of your version upgrade operation, upgrade your instance during periods of low instance traffic. To learn how to monitor instance traffic, see Monitoring Redis instances.
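To make that concrete, here is a minimal sketch of the export-then-upgrade flow with the gcloud CLI, plus the only "rollback" path available: restoring the pre-upgrade snapshot into a fresh 5.x instance. INSTANCE_ID, BUCKET, and REGION are placeholders, and the flag values follow the gcloud reference at the time of writing:

```
# Export a snapshot to Cloud Storage before touching the instance.
gcloud redis instances export gs://BUCKET/pre-upgrade.rdb INSTANCE_ID \
    --region=REGION

# Run the in-place version upgrade (irreversible once it completes).
gcloud redis instances upgrade INSTANCE_ID \
    --redis-version=redis_6_x \
    --region=REGION

# "Rollback" = create a fresh 5.x instance and import the snapshot.
gcloud redis instances create INSTANCE_ID_ROLLBACK \
    --redis-version=redis_5_0 \
    --region=REGION
gcloud redis instances import gs://BUCKET/pre-upgrade.rdb INSTANCE_ID_ROLLBACK \
    --region=REGION
```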
The documentation also recommends enabling RDB snapshots:
Memorystore for Redis is primarily used as an in-memory cache. When using Memorystore as a cache, your application can either tolerate loss of cache data or can very easily repopulate the cache from a persistent store.
However, there are some use cases where downtime for a Memorystore instance, or a complete loss of instance data, can cause long application downtimes. We recommend using the Standard Tier as the primary mechanism for high availability. Additionally, enabling RDB snapshots on Standard Tier instances provides extra protection from failures that can cause cache flushes. The Standard Tier provides a highly available instance with multiple replicas, and enables fast recovery using automatic failover if the primary fails.
In some scenarios you may also want to ensure data can be recovered from snapshot backups in the case of catastrophic failure of Standard Tier instances. In these scenarios, automated backups and the ability to restore data from RDB snapshots can provide additional protection from data loss. With RDB snapshots enabled, if needed, a recovery is made from the latest RDB snapshot.
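Assuming a Standard Tier instance, enabling RDB snapshots is a single update; a sketch with placeholder names (valid snapshot periods include 1h, 6h, 12h, and 24h per the gcloud reference):

```
# Enable RDB persistence with a 6-hour snapshot interval.
# INSTANCE_ID and REGION are placeholders.
gcloud redis instances update INSTANCE_ID \
    --persistence-mode=rdb \
    --rdb-snapshot-period=6h \
    --region=REGION
```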
For more information, you can refer to the documentation related to version upgrade behavior.

Related

How can I setup Redis Cluster mode or master slave mode in PCF?

This is regarding a use case where we are trying to use Redis in PCF (Pivotal Cloud Foundry). In our use case, we will refresh the Redis cache once or twice daily with the required data, and then the API will query Redis and provide the response.
One particular concern for us is that we want API queries to be served from Redis only, which means Redis must be available at all times. But whenever we are refreshing the Redis DB, Redis would not be able to serve the APIs since it is refreshing the keys. To avoid that, we wanted to set up Redis in cluster mode or master-slave mode, so that while one instance is being written to, the other can be read from.
How can we set up a Redis cluster or master-slave mode in PCF and fulfil our requirement?
Please provide any other suggestions as well that you may have.
At the time I write this, the Redis for Pivotal Platform product does not support clustering. See Availability, in the docs here -> https://docs.pivotal.io/redis/2-3/erc.html#offerings.
All Redis for Pivotal Platform services are single VMs without clustering capabilities. This means that planned maintenance jobs (e.g., upgrades) can result in 2–10 minutes of downtime, depending on the nature of the upgrade. Unplanned downtime (e.g., VM failure) also affects the Redis service.
Redis for Pivotal Platform has been used successfully in enterprise-ready apps that can tolerate downtime. Pre-existing data is not lost during downtime with the default persistence configuration. Successful apps include those where the downtime is passively handled or where the app handles failover logic.
If you require clustered Redis, you'd need to look at a different offering. Redis Labs has some offerings that integrate with PCF, you could use a Cloud Provider's Redis offering, or you could host your own.
If the solution you use isn't integrated into PCF, you can create a user-provided service with cf cups and provide the Redis credentials to your application that way. It will function just like a Redis service instance created through the marketplace.
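For example, a rough sketch of that flow (service name, credentials, and app name are made up):

```
# Create a user-provided service holding the external Redis credentials.
cf cups my-external-redis \
    -p '{"host":"redis.example.com","port":"6379","password":"s3cret"}'

# Bind it to the app and restage so the credentials show up in VCAP_SERVICES.
cf bind-service my-app my-external-redis
cf restage my-app
```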

Redis on Azure VM vs Azure Redis Cache

We have tested both Redis installed on an Azure VM and Azure Redis Cache; both behave the same, and I can't see a difference in performance. Has anyone used both in a large-scale application? If so, can you share the performance and durability of both?
I have analysed the following:
Monitoring
In-zone replication
Multi-zone replication
Auto fail-over
Data persistence
Backup
Pricing
SSL Authentication & Encryption
On all of the above, Azure Redis Cache has the upper hand.
Still, I want to make sure which one is best.
Does using a VM have any bottlenecks?
I would go for Azure Redis Cache, mainly because it's fully managed. At the end of the day you do have nodes under the hood, but why should you have to care about maintaining a VM? Hotfixes? Patches? Security updates? And so on.
I would ask the question the other way around. Why should you use VMs at all?
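If you do go managed, provisioning is a single CLI call; a sketch with made-up names (SKU and size values per the az reference):

```
# Create a Standard C1 Azure Cache for Redis instance.
# Resource group, name, and location are placeholders.
az redis create \
    --resource-group my-rg \
    --name my-redis-cache \
    --location westeurope \
    --sku Standard \
    --vm-size c1
```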

Redis as a queue - Configuration review

I need to set up a Redis DB (2.8), which I intend to use as a queue, which means it must be fully persistent (no message can be missed).
I'm pretty new to Redis, and I would like to get a review of my configuration:
I want to use both the AOF and RDB persistence models, with always selected as the appendfsync policy. According to the documentation, always is not recommended, but I must select this option since I use Redis as a queue and can't endure any messages being missed.
I would like to create a master-slave-slave setup using Sentinel with automatic failover.
The Redis service will be started automatically after server boot.
Any comments and suggestions would be great. The administration point of view is the most important to me (persistence, backup, restore, high availability, etc.).
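For reference, a minimal sketch of the persistence and Sentinel side of that plan; the values are illustrative, not tuned recommendations:

```
# redis.conf -- persistence for queue-like durability
save 900 1          # RDB: snapshot if >=1 key changed in 900s
save 300 10
save 60 10000
appendonly yes      # AOF on
appendfsync always  # fsync every write, so no acknowledged write is lost

# sentinel.conf -- one master, quorum of 2 (IP/port are placeholders)
sentinel monitor myqueue 192.168.1.10 6379 2
sentinel down-after-milliseconds myqueue 5000
sentinel failover-timeout myqueue 60000
```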

Fast Multitenant Caching - Local Caching in Addition to Distributed Caching. Or are they the same thing?

I am working on converting an existing single tenant application to a multitenant one. Distributed caching is new to me. We have an existing primitive local cache system using the .NET cache that generates cloned objects from existing cached ones. I have been looking at utilizing Redis.
Can Redis cache and invalidate locally, in addition to over the wire, thus replacing all the benefits of the primitive local cache? Or would it be an ideal approach to have a tiered setup that falls back to the Redis distributed cache when the local one doesn't have the objects we need? I believe the latter would require expiration notifications to be sent to the local caches when data is updated; otherwise servers may hold stale, inconsistent data.
It seems like a set of local caches with expiration notifications would also qualify as a distributed cache, so I am a bit confused about how Redis might be configured, and whether it would be distributed across the servers serving the requests or live in its own cluster.
When I say local, I mean not having to go over the wire for data.
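As a sketch of that tiered idea, here is a local dict backed by Redis with pub/sub invalidation; Python and redis-py are used purely for illustration (the same pattern maps to StackExchange.Redis in .NET), and the channel name is made up:

```python
import redis

r = redis.Redis(host="localhost", port=6379)
local_cache = {}  # tier 1: in-process, no network hop

INVALIDATION_CHANNEL = "cache-invalidation"  # hypothetical channel name

def get(key):
    if key in local_cache:          # tier 1 hit: stays local
        return local_cache[key]
    value = r.get(key)              # tier 2: over the wire
    if value is not None:
        local_cache[key] = value    # populate the local copy
    return value

def put(key, value):
    r.set(key, value)
    local_cache[key] = value
    # Tell every server to evict its (possibly stale) local copy.
    # The publisher receives its own message too and simply
    # re-fetches from Redis on the next get.
    r.publish(INVALIDATION_CHANNEL, key)

def handle_invalidations(pubsub):
    # Each server runs this loop on a background thread.
    for message in pubsub.listen():
        if message["type"] == "message":
            local_cache.pop(message["data"].decode(), None)

pubsub = r.pubsub()
pubsub.subscribe(INVALIDATION_CHANNEL)
```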

Is the RavenDB subscription storage a central point of failure for NServiceBus?

I am evaluating using NServiceBus as a SOA mechanism in our product. I'm looking into using the publish/subscribe pattern and my understanding is that the subscription service will store all subscriptions.
Does that mean that if my RavenDB server goes down then my publishers lose the ability to send to subscribers? Or is there a way for the publishers to cache the subscribers it has and if RavenDB were to go down then it would deliver to its known subscribers?
You can run the RavenDB server as a replicated node, to avoid this being a single point of failure.
The general pattern is for an endpoint to have a master node that acts as worker and distributor, and then the master node uses a Raven installation on that same server to store its subscriptions and saga storage.
So, it is a point of failure for that one endpoint, but other endpoints in the distributed system will use the Raven installs on their own servers. Thus, the system is kept distributed and the entire system does not have a single point of failure. RavenDB enables this because it is fairly easy to install it on any server.
Contrast this to SQL Server, which is frequently centralized, scaled up to the max, and even clustered in order to provide high availability. (Read: expensive!)
You can also run RavenDB in a Windows failover cluster where the nodes use a shared SAN for the RavenDB data files. If the active node dies, another takes over. Since the data is stored on the SAN, you shouldn't notice anything except the time it takes to start the RavenDB Windows service on the new node. Check out http://ravendb.net/docs/server/administration/fmc_configuration
This is also the recommended setup for High Availability when running with Distributors. http://docs.particular.net/nservicebus/scalability-and-ha/distributor/