What is the Redis URI when Redis is used in Kubernetes?

Objective
I want to access a Redis database running in Kubernetes from a JavaScript function in IBM Cloud Functions.
Question
How do I get the right URI when Redis is running in a Pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes: this is the link to the sample in the Kubernetes docs.
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, no password is configured by default.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for the help ... I know this is maybe too simple a question, but right now I cannot see the forest for the trees ;-)

As far as I understand, no password is configured by default.
Yes, you are right: there is no default password in that Redis image.
If you follow the instructions you mentioned, you will use port forwarding, which forwards the port of your in-cluster Redis to your local machine when you call kubectl port-forward redis-master 6379:6379.
So in that case, Redis will be available at redis://localhost:6379 on your PC.
If you want to make it available directly from outside the cluster, you need to create a Service of type NodePort, a Service of type LoadBalancer (if you are in a cloud), or a Service exposed through an Ingress.
Inside the cluster, you can create a Service with a Cluster IP for your Redis pod (which is actually just a regular Service, because a Service always has a Cluster IP), and Redis will then be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is the good official documentation about connecting applications with Services.
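For illustration, here is a minimal sketch of such a Service. It assumes the redis-master pod from the linked sample carries the labels app: redis and role: master; check your pod's actual labels, since the selector must match them.

apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  type: ClusterIP          # the default Service type, shown for clarity
  selector:
    app: redis             # assumed labels; must match your Redis pod
    role: master
  ports:
  - port: 6379             # port the Service exposes
    targetPort: 6379       # port the Redis container listens on

With such a Service in place, pods in the cluster can also reach Redis through the Service's DNS name, e.g. redis://redis-master:6379, which is usually more convenient than the Cluster IP itself.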

Related

Google Cloud Kubernetes cluster newbie question

I am a newbie with GKE. I created a GKE cluster with a very simple setup: it only has one GPU node, and everything else was left at the defaults. After the cluster was up, I was able to list the nodes and SSH into them. But I have two questions here.
I tried to install the NVIDIA driver using the command:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
It output the following:
kubectl apply --filename https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
daemonset.apps/nvidia-driver-installer configured
But 'nvidia-smi' cannot be found at all. Should I do something else to make it work?
On the worker node, there was no .kube directory and no 'config' file. I had to copy them from the master node to the worker node to make things work. And the config file on the master node updates automatically, so I have to copy it again and again. Did I miss some steps in the creation of the cluster, and how do I resolve this problem?
I would appreciate it if someone could shed some light on this. It has been driving me crazy after working on it for several days.
Tons of thanks.
Alex.
For the DaemonSet to work, you need to have a label on your worker node named cloud.google.com/gke-accelerator (see this line). The DaemonSet checks for this label on a node before scheduling any pods that install the driver. I'm guessing the default node pool you created did not have this label on it. You can find more details on this in the GKE docs here.
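For reference, the node selection in that DaemonSet looks roughly like the following sketch (paraphrased, not a verbatim excerpt; see the linked YAML for the exact form):

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/gke-accelerator   # pods only land on nodes with this label
                operator: Exists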
The worker nodes are, by design, just that: worker nodes. They do not need privileged access to the Kubernetes API, so they don't need any kubeconfig files. The communication between worker nodes and the API is strictly controlled through the kubelet binary running on each node. Therefore, you will never find kubeconfig files on a worker node, and you should never put them there either: if a node gets compromised, the keys in that file can be used to damage the API server. Instead, make it a habit either to use the master nodes for kubectl commands or, better yet, to keep the kubeconfig on your local machine, keep it safe, and issue commands to your cluster remotely.
After all, all you need is access to an API endpoint for your Kubernetes API server, and it shouldn't matter where you access it from, as long as the endpoint is reachable. So there is no need whatsoever to have a kubeconfig on the worker nodes :)

Prometheus target management

We recently started using Prometheus in our production environment. Before, we only had 30-40 nodes per service, and those servers did not change very often, so we just wrote them in prometheus.yml. But now the list has become too long to keep in one file, and it changes much more frequently than before. So my question is: should I use file_sd_config to move those server lists out of the yml file and change those config files separately, or should I use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in our data center, and as far as I can see, if I switch to Consul to solve this problem, I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster at all. If your servers are in a cloud provider like AWS, GCP, Azure, or OpenStack, Prometheus is able to autodiscover the instances.
If you keep running with Consul, the answer is yes: the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
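To illustrate both options, here is a minimal sketch of the relevant parts of a prometheus.yml (job names, file paths, and the Consul address are placeholders, not values from the question):

scrape_configs:
# Option 1: file-based discovery. Prometheus watches the listed files
# and picks up target changes without a restart.
- job_name: 'my-service'
  file_sd_configs:
  - files:
    - /etc/prometheus/targets/my-service-*.yml

# Option 2: Consul discovery. Targets come from the Consul catalog.
- job_name: 'consul-services'
  consul_sd_configs:
  - server: 'consul.example.internal:8500'
    services: []        # empty list = discover all services

A target file for the file-based option could look like this:

- targets: ['10.0.1.5:9100', '10.0.1.6:9100']
  labels:
    env: 'production'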

Is it a good way to run Kafka on Kubernetes?

For a large online application, we want to use k8s to run it. The scale is maybe 500,000 daily active users.
The application inside k8s needs a messaging (Pub/Sub) feature, and there are these options:
Kafka
RabbitMQ
Redis
Kafka
It needs ZooKeeper, and it is considered good practice to run it directly on the OS because it depends on disk I/O. So if we install it into the k8s cluster, how? Will the performance be worse?
And if we keep Kafka outside of the k8s cluster and connect to it from the application inside the cluster, how is that performance? They are in different layers; won't it be slow?
RabbitMQ
It's slower than Kafka, but for an application with 500,000 daily active users, is it good enough? If so, maybe it's a good choice.
Redis
It's another option, maybe the simplest one. But from what I read on the internet, it sometimes loses messages. If true, that's terrible.
So, the most important thing is: is using Kafka (along with ZooKeeper) on k8s good or not in this use case?
Yes, running Kafka on Kubernetes is great. Check out this example: https://github.com/Yolean/kubernetes-kafka. It includes ZooKeeper and Kafka as StatefulSets.
PS. Running any of the services in your question on Kubernetes will be pleasant. You can Google the name of the service and "kubernetes" and find example manifests. Many examples here: https://github.com/kubernetes/charts.
For Kafka, you can find some suggestions here. Kubernetes 1.7+ supports local persistent volumes, which may be good for a Kafka deployment.
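As a rough sketch of the local-volume idea (names, sizes, and paths here are placeholders; early 1.7 releases exposed this as an alpha feature, so check the docs for your version's exact syntax):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-broker-0-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/kafka-0      # a disk physically attached to the node
  nodeAffinity:                   # pins the volume to the node that owns the disk
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node-1

A broker's PersistentVolumeClaim then binds to such a volume, so the pod is always scheduled back onto the node that holds its data.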
You can also take a look at the following project:
https://github.com/EnMasseProject/barnabas
It's about running Kafka on Kubernetes and OpenShift as well. It provides deployment via StatefulSets with persistent volumes or just in memory (for development or testing purposes). It provides deployment of Kafka Connect and Prometheus metrics as well.
Another simple configuration of Kafka/ZooKeeper on Kubernetes, on DigitalOcean, with external access:
https://github.com/StanislavKo/k8s_digitalocean_kafka
You can connect to Kafka from outside of AWS/DO/GCE using the regular binary protocol. The connection is PLAINTEXT or SASL_PLAINTEXT (user/password).
The Kafka cluster is a StatefulSet, so you can scale the cluster easily.

How to deploy and use Redis in cloud foundry?

I am sort of new to Cloud Foundry. I have some queries:
Can I use Redis as a service in Cloud Foundry? If yes, how? Do we need a service broker for that as well?
What does a manifest file for deploying Redis on Cloud Foundry on OpenStack Neutron look like?
Can I run Redis as an HA service in CF?
I have been through these links as well:
https://github.com/pivotal-cf/cf-redis-release
https://github.com/cloudfoundry-community/redis-boshrelease
and deployed Redis with a dedicated node and broker, but I am not sure how it will work with an app.
Yes, you can use Redis as a service in CF, and yes, you'll need to make sure that there is a service broker -- in fact, having a service broker is the definition of something being a CF Service (if you can write a service broker for it, you can use it as a service). Here's an overview of the CF Service Broker API. Once you have your Redis cluster and service broker set up, you'll need to do the following:
Register your service broker with cf create-service-broker redis-broker <username> <password> <url to service broker>.
Create a service instance: cf create-service redis <redis-plan-name> myRedis
Bind your app to the service instance: cf bind-service myApp myRedis
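As an alternative to the cf bind-service step, the binding can also be declared in the app's manifest.yml. A minimal sketch, reusing the myApp and myRedis names from the steps above:

applications:
- name: myApp
  services:
  - myRedis        # the service instance created with cf create-service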
Building a manifest file depends on which Redis release you use. The cloudfoundry-community/redis-boshrelease has a template for generating an OpenStack manifest. Unfortunately, that release doesn't have a service broker, so you can't use that Redis as a service in CF. The pivotal-cf/cf-redis-release, on the other hand, does have a service broker. Maybe you can use the OpenStack-specific properties from cloudfoundry-community/redis-boshrelease to make an OpenStack manifest for pivotal-cf/cf-redis-release?
I don't know too much about HA Redis; you'll have to get some help from Redis experts. But I do know that there is a piece of software called Sentinel that is meant to provide HA for Redis. You should take a look at it and see if you can extend the release to include Sentinel.
Hope that helps!

ElastiCache with Redis - client SDKs

I have a web farm in Amazon, and one of my sites needs some caching.
I am considering the use of ElastiCache Redis.
Can anyone shed some light on how I would connect to and interact with this cache?
I have read about several client SDKs like StackExchange.Redis, ServiceStack, etc.
.NET is my preferred platform.
Can these client SDKs be used to interact with Redis on ElastiCache?
Does anyone know of documentation and/or code examples using ElastiCache Redis (with the StackExchange.Redis SDK)?
I'm guessing I will have to authenticate using a key/secret pair; is this supported in any of these client SDKs?
Thanks in advance!
Lars
You connect to ElastiCache the same way you connect to any other Redis instance. Once you create a new ElastiCache instance, you'll be given the hostname to connect to. There is no need for a secret/key pair: all access to the Redis instance is configured through security groups, just like with other AWS instances in EC2, RDS, etc.
With that said, there are two important caveats:
You will only be able to connect to ElastiCache from within the region and/or VPC in which it's launched, even if you open up the security group to outside IPs (for me, this is one of the biggest reasons not to use ElastiCache).
You cannot set a password on your Redis instance. Anyone on a box that is granted access to the instance in the security groups (keeping in mind the limitation from caveat 1) will be able to access your Redis instance with full rights to add/delete/modify whatever keys they like. This is the other big reason not to use ElastiCache, though it certainly still has use cases where these drawbacks are less important.