Background:
We have a single cluster containing 2 app server nodes and 3 replicated DB nodes. Our application is a .NET app, deployed on Linux app servers.
We will move to a multi-cluster architecture in separate continents in the near future. Those clusters will replicate our existing cluster.
Question 1:
Can I use ZooKeeper as a means to achieve consistency? Example: I would like to avoid an operation in NY and a similar operation in the EU occurring simultaneously and ending up inconsistent.
Everything I read about ZooKeeper points to a single-cluster solution, and I would like to avoid implementing my own distributed locks.
Question 2:
Do you have a suggestion different than implementing Zookeeper?
Many thx
The reason I did not get any responses is, more than likely, that what I am asking for is impossible at worst, or would bring the clusters to their knees due to the synchronization demands at best.
I now believe that I will be better off allowing inconsistencies and dealing with them during the eventual consistency sync step.
Related
Are there any benefits to making multiple ConnectionMultiplexer instances when using StackExchange.Redis? We are making heavy read/write calls to Azure Redis Cache and wondering how much load a single ConnectionMultiplexer can handle.
Currently we have a pool of ConnectionMultiplexers in an array and pick one randomly to handle concurrent calls. If a single ConnectionMultiplexer can do the job, then this implementation is unnecessary.
Only sometimes. My experience is that apps which run at the right scale, on the right kind of hardware, with the right kind of load to need multiple connection multiplexers (for a single redis cache) are few and far between.
More specifically, most apps get by just fine with just one ConnectionMultiplexer. But I have seen a couple of cases where you might be better off using num_cpus / 4. These are generally apps with not too many client machines (e.g. < 1000), where each client machine is fairly powerful (e.g. 8+ cores).
One other scenario where you might possibly see some benefit is with transient connection breaks due to packet loss, but you might want to fix your network in that case.
I would like to know what is the best practice for using Redis in cloud (Google Memorystore in my case, Standard Tier) for multiple microservices/applications. From what I have researched so far following options are available:
Use single cluster and database, scaled horizontally for all the microservices. This seems most cost-effective as I will use the exact amount of nodes I will need for the whole system. The data isolation is impacted here, but I can reduce the impact e.g. by prefixing the keys with the microservice name.
Use separate clusters and databases for each microservice. In this case the isolation is better, and scaling a cluster affects a single microservice only, but this doesn't seem cost-effective, as many nodes may be underloaded (e.g. microservice M1 utilizes 50% of a node's capacity and microservice M2 utilizes 40%, so in option 1 both microservices would be served by a single node).
In theory I could use multiple databases to isolate data in a single cluster, but as far as I have read this is not supported in Redis Cluster mode (and using multiple databases on a single node causes performance issues).
I am leaning towards option 1, but perhaps I am missing something?
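For concreteness, the key prefixing mentioned in option 1 could look something like this (a minimal Python sketch; the service names and helper functions are made up for illustration):

```python
# Sketch of option 1: one shared Redis, with per-microservice key prefixes
# providing logical isolation. Service names here are hypothetical.

def make_key(service: str, key: str) -> str:
    """Namespace a key under the microservice that owns it."""
    return f"{service}:{key}"

def owner_of(namespaced_key: str) -> str:
    """Recover the owning microservice from a namespaced key."""
    return namespaced_key.split(":", 1)[0]

assert make_key("orders", "cart:42") == "orders:cart:42"
assert owner_of("billing:invoice:7") == "billing"
```

A prefix also makes it possible to find one service's keys later with SCAN and a MATCH pattern like `orders:*`, which helps if you ever need to migrate a service out to its own instance.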
Not sure about best practices, I will tell you my experience.
In general I would go with Option #2.
Each microservice gets its own Redis instance or cluster.
Redis clusters follow their own microservice's life cycle, e.g. they might get respawned when you redeploy or restart a service.
You might pay a bit more, but you gain resiliency and reduce maintenance hassle.
It's my understanding that best practice for redis involves many keys with small values.
However, we have dozens of keys that we'd like to store a few MB each. When traffic is low, this works out most of the time, but in high-traffic situations we find that timeout errors start to stack up. This causes issues for all of our tiny requests to Redis, which were previously reliable.
The large values optimize a key part of our site's functionality and are a real performance boost when things are going well.
Is there a good way to isolate these large values so that they don't interfere with the network I/O of our best practice-sized values?
Note, we don't need to dynamically discover whether a value is > 100 KB or in the MBs. We have a specific method that we could point at a separate Redis server/instance/database/node/shard/partition (I'm not a hardware guy).
Just install/configure as many instances as needed (2 in this case), each independently managing a logical subset of keys (e.g. big and small), with routing done by the application. Simple and effective: divide and conquer.
The correct solution would be to have 2 separate Redis clusters, one for big keys and another for small keys. These 2 clusters could run on the same set of physical or virtual machines, aka multitenancy. (You would want to do that to fully utilize the underlying cores on your machines, as the Redis server is single-threaded.) This way you can scale the two clusters separately, and your problem of small requests timing out because they queue behind the bigger ones will be alleviated.
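A rough sketch of the application-side routing both answers describe, in Python. The 100 KB threshold and the function/variable names are assumptions for illustration, not from the question; in practice the two client arguments would be connections to the two Redis deployments:

```python
# Route each write to the Redis deployment sized for its payload:
# one deployment tuned for big values, one for the many small ones.
BIG_VALUE_THRESHOLD = 100 * 1024  # bytes; hypothetical cutoff

def pick_client(value: bytes, small_client, big_client):
    """Return the client whose deployment should handle this value."""
    return big_client if len(value) >= BIG_VALUE_THRESHOLD else small_client

# Strings stand in for the two Redis connections here.
assert pick_client(b"x" * 200_000, "small", "big") == "big"
assert pick_client(b"x" * 10, "small", "big") == "small"
```

Since the asker already knows which method produces the large values, the routing can be even simpler: that method holds a reference to the big-value client, and everything else uses the small-value client.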
In a recent interview, I was asked to design a distributed message queue. I modeled it as a multi-partitioned system where each partition has a replica set with one primary and one or more secondaries for high availability. Writes from the producer are processed by the primary and replicated synchronously, which means a message is not committed unless a quorum of the replica set has applied it.

The interviewer then identified the potential availability problem when the primary of a replica set dies (a producer writing to that partition won't be able to write until a new primary is elected) and asked me about the solution where the producer writes the same message to multiple servers (favoring availability over consistency). He then asked what the difference would be if the client wrote to 2 servers vs. 3 servers, a question I failed to answer. In general, I thought it was an even-vs-odd question and guessed it had something to do with quorums (i.e. majority), but failed to see how it would impact a consumer reading data.

Needless to say, this question cost me the job and continues to puzzle me to this day. I would appreciate any solutions, insights, and/or suggestions.
Ok, this is what I understood from your question about the new system:
You won't have a primary replica anymore, so you don't need to elect one, and instead will work simply on a quorum-based system to get higher availability? If that is correct, then maybe this will give you some closure :) Otherwise, feel free to correct me.
Assuming you read and write from/to multiple random nodes and those nodes don't replicate the data on their own, the solution lies in the principle of quorums. In simple cases that means you always need to write and read to/from at least n/2 + 1 nodes. So if you wrote to 3 nodes you could have up to 5 servers, while if you wrote to 2 nodes you could only have up to 3 servers.
The slightly more complicated quorum is based on the rules:
R + W > N
W > N / 2
(R - read quorum, W - write quorum, N - number of nodes)
This would give you some more variations for
from how many servers you need to read
how many servers you can have in general
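The quorum rules above can be checked mechanically. A small Python sketch (the function name is mine) that enumerates the (R, W) pairs satisfying R + W > N and W > N/2 for a cluster of N nodes:

```python
# Enumerate read/write quorum pairs that satisfy the two rules:
#   R + W > N   (read and write sets must overlap)
#   W > N / 2   (two concurrent writes must overlap)
def valid_quorums(n: int):
    return [(r, w)
            for w in range(1, n + 1)
            for r in range(1, n + 1)
            if r + w > n and 2 * w > n]

assert (2, 2) in valid_quorums(3)                # W=2 works for N=3 ...
assert all(w != 2 for _, w in valid_quorums(4))  # ... but not for N=4
assert (3, 3) in valid_quorums(5)                # W=3 works up to N=5
```

This also frames the 2-vs-3 question: a write quorum of W = 2 only satisfies W > N/2 for N ≤ 3, while W = 3 works for clusters up to N = 5, so writing to one more server lets the system tolerate more total nodes while still guaranteeing overlapping reads.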
From my understanding of the question, that is what I would have used to formulate an answer, and I don't think the difference between 2 and 3 has anything to do with even or odd numbers. Do you think this is the answer your interviewer was looking for, or did I miss something?
Update:
To clarify, as per the thoughts in the comments: which value would be accepted?
In the quorum as I've described it, you would accept the latest value. This can be determined with a simple logical clock. The quorums guarantee that you will retrieve at least one item with the latest information. And in case of a network partition or failure, when you can't read a quorum, you will know that it's impossible to guarantee retrieving the latest value.
On the other hand, you suggested reading all items and accepting the most common one. I'm not sure this alone guarantees always having the latest item.
I am looking at porting a Java application to .NET, the application currently uses EhCache quite heavily and insists that it wants to support strong consistency (http://ehcache.org/documentation/get-started/consistency-options).
I would like to use Redis in place of EhCache, but does Redis support strong consistency, or just eventual consistency?
I've seen talk of a Redis Cluster but I guess this is a little way off release yet.
Or am I looking at this wrong? If a Redis instance sat on a different server altogether and served two frontend servers, how big could it get before we'd need to look at a master/slave style setup?
A single instance of Redis is consistent. There are options for consistency across many instances. antirez (the Redis developer) recently wrote a blog post, "Redis data model and eventual consistency", and recommended Twemproxy for sharding Redis, which would give you consistency over many instances.
I don't know EhCache, so can't comment on whether Redis is a suitable replacement. One potential problem (porting to .NET) with Twemproxy is it seems to only run on Linux.
How big can a single Redis instance get? Depends on how much RAM you have.
How quickly will it get this big? Depends on how your data looks.
That said, in my experience Redis stores data quite efficiently. One app I have holds info for 200k users, 20k articles, all the relationships between objects, weekly leaderboards, stats, etc. (330k keys in total) in 400 MB of RAM.
Redis is easy to use and fun to work with. Try it out and see if it meets your needs. If you do decide to use it and might one day want to shard, shard your data from the beginning.
Redis is not strongly consistent out of the box. You will probably need to apply third-party solutions to make it consistent. Here is a quote from the docs:
Write safety
Redis Cluster uses asynchronous replication between nodes, and last failover wins implicit merge function. This means that the last elected master dataset eventually replaces all the other replicas. There is always a window of time when it is possible to lose writes during partitions. However these windows are very different in the case of a client that is connected to the majority of masters, and a client that is connected to the minority of masters.
Usually you need synchronous replication to achieve strong consistency in a distributed partitioned system.