I have a web farm in Amazon and one of my sites needs some caching.
I am considering using ElastiCache Redis.
Can anyone shed some light on how I would connect to and interact with this cache?
I have read about several client SDKs like StackExchange.Redis, ServiceStack.Redis, etc.
.NET is my preferred platform.
Can these client SDKs be used to interact with Redis on ElastiCache?
Does anyone know about documentation and/or code examples using ElastiCache Redis (with the StackExchange.Redis SDK)?
I'm guessing I will have to authenticate using a key/secret pair; is this supported in any of these client SDKs?
Thanks in advance!
Lars
You connect to ElastiCache the same way you connect to any other Redis instance. Once you create a new ElastiCache instance, you'll be given the hostname to connect to. No need for a secret/key pair. All access to the Redis instance there is configured through security groups, just like with other AWS instances in EC2, RDS, etc.
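For example, with StackExchange.Redis the connection might look roughly like this (the endpoint string below is a placeholder for the hostname ElastiCache gives you):

```csharp
// Rough sketch of connecting to an ElastiCache Redis node with StackExchange.Redis.
// The hostname is a placeholder; use the endpoint shown for your cluster in the AWS console.
// No key/secret pair is involved; access is controlled by security groups.
using System;
using StackExchange.Redis;

class ElastiCacheDemo
{
    static void Main()
    {
        ConnectionMultiplexer muxer = ConnectionMultiplexer.Connect(
            "my-cache.xxxxxx.0001.use1.cache.amazonaws.com:6379");

        IDatabase db = muxer.GetDatabase();
        db.StringSet("greeting", "hello from .NET");
        Console.WriteLine((string)db.StringGet("greeting"));
    }
}
```

As a rule you'd reuse a single ConnectionMultiplexer across requests rather than creating one per call.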
With that said, there are two important caveats:
You will only be able to connect to ElastiCache from within the region and/or VPC in which it's launched, even if you open up the security group to outside IPs (for me, this is one of the biggest reasons not to use ElastiCache).
You cannot set a password on your Redis instance. Anyone on a box that is given access to the instance in security groups (keeping in mind the limitation from caveat 1) will be able to access your Redis instance with full rights to add/delete/modify whatever keys they like. This is the other big reason not to use ElastiCache, though it certainly still has use cases where these drawbacks are less important.
This is regarding a use case where we are trying to use Redis in PCF (Pivotal Cloud Foundry). In our use case, we will refresh the Redis cache once or twice daily with the required data, and then the API will query Redis and provide the response.
One thing of particular concern for us is that we want API queries to be served from Redis only, which means Redis has to be available at all times. But whenever we are refreshing the Redis DB, Redis cannot serve the APIs since it is refreshing the keys. To avoid that, we wanted to set up Redis in cluster mode or master-slave mode, so that while one instance is being written to, the other can be read from.
How can we set up a Redis cluster or master-slave mode in PCF and fulfil our requirement?
Please provide any other suggestions you may have as well.
At the time I write this, the Redis for Pivotal Platform product does not support clustering. See Availability, in the docs here -> https://docs.pivotal.io/redis/2-3/erc.html#offerings.
All Redis for Pivotal Platform services are single VMs without clustering capabilities. This means that planned maintenance jobs (e.g., upgrades) can result in 2–10 minutes of downtime, depending on the nature of the upgrade. Unplanned downtime (e.g., VM failure) also affects the Redis service.
Redis for Pivotal Platform has been used successfully in enterprise-ready apps that can tolerate downtime. Pre-existing data is not lost during downtime with the default persistence configuration. Successful apps include those where the downtime is passively handled or where the app handles failover logic.
If you require clustered Redis, you'd need to look at a different offering. Redis Labs has some offerings that integrate with PCF, you could use a Cloud Provider's Redis offering, or you could host your own.
If the solution you use isn't integrated into PCF, you can create a user-provided service with cf cups and provide the Redis credentials to your application that way. It will function just like a Redis service instance created through the marketplace.
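For example, a rough sketch of wiring that up from the CLI (the service name, host, and password below are placeholders):

```sh
# Register the external Redis as a user-provided service, then bind it to the app.
cf create-user-provided-service my-redis -p '{"host":"redis.example.internal","port":"6379","password":"example-password"}'
cf bind-service my-app my-redis
cf restage my-app
```

The credentials then show up in the app's VCAP_SERVICES environment variable, the same way a marketplace Redis instance's credentials would.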
To resolve a few issues we are running into with Docker and running multiple instances of some services, we need to be able to share values between running instances of the same Docker image. The original solution I found was to create a storage account in Azure (where we are running the Kubernetes instance that houses the containers) and a Key Vault in Azure, accessing both via the well-defined APIs that Microsoft has provided for Data Protection (detailed here).
Our architect instead wants to use Kubernetes Persistent Volumes, but he has not provided information on how to accomplish this (he just wants to save money on the Azure subscription by not having an additional storage account or key storage). I'm very new to Kubernetes and have no real idea how to accomplish this, and my searches so far have not turned up much of use.
Is there an extension method that should be used for Persistent Volumes? Would this just act like a shared file location and be accessible with the PersistKeysToFileSystem API for Data Protection? Any resources that you could point me to would be greatly appreciated.
A PersistentVolume with Kubernetes in Azure will not give you the same exact functionality as Key Vault in Azure.
PersistentVolume:
Store locally on a mounted volume on a server
Volume can be encrypted
The volume moves with the pod.
If the pod starts on a different server, the volume moves with it.
Accessing the volume from other pods is not that easy.
You can control performance by assigning guaranteed IOPS to the volume (from the cloud provider)
Key Vault:
Store keys in a centralized location managed by Azure
Data is encrypted at rest and in transit.
You rely on a remote API rather than a local file system.
There might be a performance hit by going to an external service
I assume this is not a major problem within Azure.
Kubernetes pods can access the service from anywhere as long as they have network connectivity to the service.
Less maintenance time, since it's already maintained by Azure.
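On the PersistKeysToFileSystem question: a PersistentVolume mounted at the same path in every replica can act as that shared file location, assuming the volume supports ReadWriteMany (e.g., Azure Files) so multiple pods can mount it. A minimal sketch, where the mount path and application name are placeholders:

```csharp
// Sketch: persisting ASP.NET Core Data Protection keys to a directory backed by a
// Kubernetes PersistentVolume. "/mnt/dp-keys" is a hypothetical mount path; the same
// volume must be mounted there in every replica so all instances share one key ring.
using System.IO;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDataProtection()
            .PersistKeysToFileSystem(new DirectoryInfo("/mnt/dp-keys"))
            .SetApplicationName("my-shared-app"); // use the same name on every instance
    }
}
```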
We recently started using Prometheus in our production environment. Before, we only had 30-40 nodes for each service and those servers didn't change very often, so we just wrote them in prometheus.yml, but now the file has become too long to keep in one piece and it changes much more frequently than before. So my question is: should I use file_sd_config to move those server lists out of the yml file and change those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in our data center, and as far as I can see, if I switch to Consul to solve this problem, I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have good advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to auto-discover the instances.
If you keep running with Consul, the answer is yes, the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
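For reference, a rough sketch of what either approach might look like in prometheus.yml (job names, file paths, and the Consul address are placeholders):

```yaml
scrape_configs:
  # Option 1: keep target lists in separate files that can be edited or generated
  # independently of prometheus.yml; Prometheus re-reads them when they change.
  - job_name: 'node-file-sd'
    file_sd_configs:
      - files:
          - '/etc/prometheus/targets/*.json'   # e.g. [{"targets": ["10.0.0.1:9100"], "labels": {"env": "prod"}}]
        refresh_interval: 1m

  # Option 2: discover targets from Consul (services must be registered with the agents).
  - job_name: 'node-consul-sd'
    consul_sd_configs:
      - server: 'consul.example.internal:8500'
        services: []    # empty list = all registered services
    relabel_configs:
      - source_labels: [__meta_consul_service]
        target_label: job
```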
Objective
I want to access the Redis database in Kubernetes from a function inside IBM Cloud Functions, using JavaScript.
Question
How do I get the right URI when Redis is running on a pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes. This is the link to the sample in the Kubernetes documentation.
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, by default no password is configured.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for the help ... I know this is maybe too simple a question, but right now I can't see the forest for the trees ;-)
As far as I understand, by default no password is configured.
Yes, you are right: there is no default password in that Redis image.
If you follow the instructions you mentioned, you will use kubectl port forwarding, which forwards the port of your in-cluster Redis to your local machine via kubectl port-forward redis-master 6379:6379.
So in that case, Redis will be available at redis://localhost:6379 on your PC.
If you want to make it available directly from outside the cluster, you need to create a Service of type NodePort, a Service of type LoadBalancer (if you are in a cloud), or a Service exposed through an Ingress.
Inside the cluster, you can create a Service with a ClusterIP (which is actually just a plain Service, because a Service always has a ClusterIP) for your Redis pod, and it will be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is a good piece of official documentation about connecting applications with services.
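For example, a minimal sketch of such a Service for the Redis master (the selector labels are an assumption; adjust them to match the labels on your Redis pod):

```yaml
# Plain (ClusterIP) Service exposing the Redis master inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis      # assumed labels; must match your Redis pod
    role: master
  ports:
    - port: 6379
      targetPort: 6379
```

Inside the cluster this would be reachable at redis://redis-master:6379 (no user or password with the default image); a function running outside the cluster, such as one in IBM Cloud Functions, would still need a NodePort or LoadBalancer Service as described above.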
I am currently setting up the infrastructure for an app in AWS. The app is written in Django and uses Redis for some transactions. High availability is key for this application, and I am having a hard time getting my head around how to configure Redis for high availability.
Application level changes are not an option.
Ideally I would like a Redis setup to which I can write and read, and which I can replicate and scale when required.
The current setup is a Redis failover scenario with HAProxy --> Redis Master --> Replica Slave.
Could someone help me understand the various options, and how to scale Redis for high availability?
Use an AWS ElastiCache Redis cluster with Multi-AZ enabled. It provides automatic failover, and it gives you an endpoint for accessing the master node.
If the master goes down, AWS routes your endpoint to another node. Everything happens automatically; you don't have to do anything.
Just make sure that if you are doing DNS-to-IP caching in your application, it's set to 60 seconds or so instead of the default.
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoFailover.html
Thanks,
KS