PHP.INI & session.save_path & ElastiCache - amazon-elasticache

Below is from http://php.net/manual/en/memcache.ini.php#ini.memcache.hash-strategy
session.save_path string: Defines a comma-separated list of server URLs to
use for session storage, for example "tcp://host1:11211,
tcp://host2:11211".
Question:
AWS ElastiCache gives you node endpoints and a configuration endpoint (which I believe is a DNS CNAME to the ElastiCache Cluster).
If I put the configuration endpoint value into session.save_path, will this mean sessions use the Cluster rather than a specific node, and therefore always use an active node?
I understand if a node is rebooted/removed the data held will be lost and therefore sessions on that node will be lost.
thank you!

No, it does not work that way. You need to use the AWS ElastiCache Cluster Client for PHP, which supports Auto Discovery.
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.html
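With that client installed, the php.ini configuration might look like the following sketch (the cluster configuration endpoint shown is a placeholder; the exact directive values depend on your installed extension version):

```ini
; Sketch: store PHP sessions in ElastiCache via the Cluster Client
; (a php-memcached build with Auto Discovery support)
session.save_handler = memcached
; Point at the configuration endpoint only; the client
; discovers the individual cache nodes itself.
session.save_path = "mycluster.abc123.cfg.use1.cache.amazonaws.com:11211"
```

Note that with the plain memcache extension (as quoted above) you would instead have to list each node URL explicitly, which is exactly what Auto Discovery avoids.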

Related

How to determine whether connected Redis is a single node or cluster mode using StackExchange.Redis?

I only have an Azure Redis Cache hostname but no other information. Is there any way to tell whether this Redis hostname fronts a cluster of nodes or just a single node? I am using C# with StackExchange.Redis.
It is not feasible to tell from the Azure Redis Cache hostname alone.
You need at least the hostname, the resource group name, and a bearer token.
You can create an HttpClient in C# and call the Azure REST API to query the value of shardCount. From that value you can determine whether your Azure Redis Cache has clustering enabled.
If shardCount is 1, it is a single node. If it is greater than 1, the cluster size feature has been enabled in the portal.
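As a language-neutral illustration of the check described above, here is a small Python sketch that inspects the JSON body an ARM GET on the cache resource would return. The sample response bodies are hypothetical; note that non-clustered caches may omit shardCount entirely, which this treats as a single node:

```python
import json

def is_clustered(arm_response_body: str) -> bool:
    """Decide from an ARM GET response whether clustering is enabled.

    Assumes `arm_response_body` is the JSON returned by a GET on the
    Microsoft.Cache/redis resource. Caches without clustering may
    omit shardCount, so default it to 1.
    """
    props = json.loads(arm_response_body).get("properties", {})
    return props.get("shardCount", 1) > 1

# Hypothetical sample responses
print(is_clustered('{"properties": {"shardCount": 3}}'))  # True
print(is_clustered('{"properties": {"shardCount": 1}}'))  # False
```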

How to make AWS Elasticache Redis split read requests across all read replicas?

I have a Redis Elasticache non-clustered instance with one primary node and two read replicas. Using the Stack Exchange Redis client I provide the reader endpoint and make a get request. Based on the documentation I would expect:
A reader endpoint will split incoming connections to the endpoint
between all read replicas in a Redis cluster.
However, 100% of the requests go to one of the read replicas. First question, why? Second question, how do I get Redis to distribute the load across all of the read replicas without having to manage the read instances at an application level?
You should use the "Reader Endpoint" connection string (the one containing "-ro").
This splits connections between your replicas when you open more than one connection to the Redis cache server; a single connection sticks to one replica. In practice you will also only see load shift to the other replicas once the first replica is under significant CPU load.
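If the reader endpoint's connection-level distribution is not enough, the fallback (which the question hoped to avoid) is a small round-robin pool at the application level. A minimal sketch, with placeholder replica endpoints:

```python
import itertools

class ReplicaPool:
    """Round-robin over read-replica (host, port) endpoints.

    Each read picks the next replica in turn, so load is spread
    evenly regardless of how connections were established.
    """
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self):
        return next(self._cycle)

# Placeholder ElastiCache replica endpoints
pool = ReplicaPool([
    ("readslave1.use1.cache.amazonaws.com", 6379),
    ("readslave2.use1.cache.amazonaws.com", 6379),
])
print(pool.next_endpoint()[0])  # readslave1.use1.cache.amazonaws.com
print(pool.next_endpoint()[0])  # readslave2.use1.cache.amazonaws.com
```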

How to configure Redis DB to be Read Only?

I have a Redis database (no cluster, no replica).
How can I configure it to be read-only, so clients cannot modify it?
I do not want to set up a replication or cluster.
Thanks in advance
There is no such configuration for Redis master. Only replicas can be configured as read-only. If you have control over the Redis client library used by your clients, you can change it to expose only read methods to the clients.
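That said, if you are on Redis 6 or later, ACLs can approximate a read-only server without replication: give clients a user restricted to read commands. A redis.conf sketch (user name and password are placeholders):

```
# Sketch: disable the default user and expose a read-only one (Redis 6+)
user default off
user reader on >s3cret ~* +@read
```

Clients authenticating as `reader` can run commands in the read category (GET, SCAN, etc.) on all keys, but any write command is rejected.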

Redis cluster via HAProxy

I have a Redis Cluster that clients are connecting to via HAProxy with a virtual IP. The Redis cluster has three nodes (each node sharing its server with a running Sentinel instance).
My question is: when a client gets a "MOVED" error/message from a cluster node after sending a request, does it bypass HAProxy on the second attempt, since the MOVED message gave it an IP:port? If not, how does HAProxy know to send the second request to the correct node?
I just need to understand how this works under the hood.
If you want to use HAProxy in front of Redis Cluster nodes, you will need to either:
Set up one HAProxy per master/slave pair, and wire up something to update HAProxy when a failover happens. You would probably also have to intercept the topology-related commands and rewrite their responses to report the virtual IPs rather than the IPs the nodes themselves have.
Customize HAProxy to teach it how to be the cluster-aware Redis client so the actual client doesn't know about cluster at all. This means teaching it the Redis protocol, storing the cluster's topology information, and selecting the node to query based on the key(s) being accessed by the consumer code.
With Redis Cluster the client must be able to access every node in the cluster. Of the two options above, Option 2 is the "easier" one, but at this point I wouldn't recommend either.
Conceivably you could use the VIP as a "first place to get the topology info" address, but I suspect serious issues would develop, because that original IP would not be one of the ones properly reported as a node handling data. You could instead use round-robin DNS to avoid that problem, or supply the built-in "here is a list of cluster IPs (or names)" in the initial connection configuration.
Your simplest, and least likely to be problematic, route is to go "full native" and simply give full and direct access to every node in the cluster to your clients and not use HAProxy at all.
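To see why a plain TCP proxy is not enough, consider what a cluster-aware client must do when a node answers with a redirection. A minimal Python sketch of parsing the MOVED error (slot number and address are illustrative):

```python
def parse_moved(error: str):
    """Parse a Redis Cluster redirection like '-MOVED 3999 10.0.0.2:6381'.

    Returns (slot, host, port), or None if it is not a MOVED error.
    A cluster-aware client re-sends the command directly to that
    node's address, which is exactly the step a content-unaware TCP
    proxy such as HAProxy cannot follow on the client's behalf.
    """
    parts = error.lstrip("-").split()
    if len(parts) != 3 or parts[0] != "MOVED":
        return None
    slot = int(parts[1])
    host, _, port = parts[2].rpartition(":")
    return slot, host, int(port)

print(parse_moved("-MOVED 3999 10.0.0.2:6381"))  # (3999, '10.0.0.2', 6381)
```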

Failing over with single Replication Group on ElastiCache Redis

I'm testing out ElastiCache backed by Redis with the following specs:
Using Redis 2.8, with Multi-AZ
Single replication group
1 master node in us-east-1b, 1 slave node in us-east-1c, 1 slave node in us-east-1d
The part of the application writing is directly using the endpoint for the master node (primary-node.use1.cache.amazonaws.com)
The part of the application doing only reads is pointing to a custom endpoint (readonly.redis.mydomain.com) configured in HAProxy, which then points to the two other read slave end points. (readslave1.use1.cache.amazonaws.com and readslave2.use1.cache.amazonaws.com)
Now let's say the primary node (master) fails in us-east-1b.
From what I understand, if the master instance fails, I won't have to change the url for the end point for writing to Redis (primary-node.use1.cache.amazonaws.com), although from there, I still have the following questions:
Do I have to change the endpoint names for the read only slaves?
How long until the missing slave is added into the pool?
If there's anything else I'm missing, I'd appreciate the advice/information.
Thanks!
If you are using ElastiCache, you should make use of the "Primary Endpoint" provided by AWS.
That endpoint is actually backed by Route 53: if the primary (master) Redis node goes down, then since you enabled Multi-AZ it will automatically fail over to one of the read replicas (slaves).
In that case, you don't need to modify the endpoint of your redis.
I'm not sure why you have this design; it seems you only want to write to the master but always read from the slaves.
For the HAProxy part, you should include TCP checks for all three Redis nodes, using their read endpoints.
In HAProxy you can check whether a node reports itself as a SLAVE; if so, HAProxy should direct the read traffic to it.
Note that at the application layer, if your Redis driver doesn't support auto-reconnect, your script will fail to connect to the new master node.
In addition to auto-reconnect, since AWS uses Route 53 DNS to do the failover, some libraries will not do the DNS lookup again, which means they keep pointing at the old IP, i.e. the old master.
Using HAProxy can solve this problem.
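The role check described above can be expressed with HAProxy's tcp-check directives. A sketch of the read backend (hostnames are placeholders; an analogous backend with `role:master` would front writes):

```
# haproxy.cfg sketch: send reads only to nodes reporting role:slave
backend redis_readers
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:slave
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server replica1 readslave1.use1.cache.amazonaws.com:6379 check inter 1s
    server replica2 readslave2.use1.cache.amazonaws.com:6379 check inter 1s
```

Because HAProxy re-resolves and health-checks the nodes itself, clients behind it are insulated from the stale-DNS problem after a failover.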