I have deployed a single/standalone Redis container using the bitnami/redis Helm chart.
I created a new user in Redis by connecting with redis-cli and running the command "ACL SETUSER username on allkeys +@all +SADD >password", and the created user is shown by ACL LIST.
Now if I delete the Redis pod, a new pod comes up and it does not have the user I created above.
Why is this happening?
How do I create permanent users in Redis?
Assuming your deployment runs the regular Redis implementation, there are two options for ACL persistence:
in the main redis configuration file, or
in an external ACL configuration file
This is determined by whether you have the aclfile configuration option set. If you're using the first option, CONFIG REWRITE will update the server configuration file; if you're using the second, ACL SAVE is what you need. If you don't issue either of these, your ACLs will be lost when the server next restarts.
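For example, a minimal sketch of the first option (no aclfile set), run at the redis-cli prompt; the user name, key pattern and password are placeholders:

ACL SETUSER appuser on ~app:* +@read +@write >s3cret
CONFIG REWRITE

CONFIG REWRITE writes the user definition back into redis.conf as a user directive, so it is reloaded on the next start, provided redis.conf itself lives on storage that survives the restart.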
I have a Redis instance installed as a Linux service. I have created a few users with some RBAC policies. Everything works as expected for a few days, but then suddenly all my newly created users get deleted, and the application connecting to Redis starts throwing exceptions.
Also, no other person can access this server except me.
Can someone help me persist the newly created users in Redis permanently?
Everything works as expected for a few days, but then suddenly all my newly created users get deleted
After ruling out other trivial reasons (connecting to the wrong instance, for example), I believe your Redis service was simply restarted for some reason, perhaps after a server restart.
Once an ACL rule is created/modified/updated, the configuration needs to be persisted to a file for it to survive a Redis restart; to do that, run either:
CONFIG REWRITE, if you are specifying your ACL users / rules inside your main configuration file (the default option);
ACL SAVE, if you are using an external ACL file.
To learn more about how Redis deals with ACLs, be sure to check out the official documentation.
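If you go with the external ACL file instead, a rough sketch (the paths, user name and password below are placeholders) looks like this:

touch /etc/redis/users.acl
echo "aclfile /etc/redis/users.acl" >> /etc/redis/redis.conf
# restart Redis so the aclfile directive takes effect, then:
redis-cli ACL SETUSER reporter on '~reports:*' +@read '>reportpass'
redis-cli ACL SAVE

As far as I know, Redis refuses to start if the configured aclfile does not exist (hence the touch), and user definitions cannot live in both redis.conf and the ACL file at the same time.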
I created a Redis cluster and am trying to use ACLs.
I want to allow certain users to access only keys with a specific prefix.
But when I use ACL LOAD or ACL SAVE, it only applies to the current node.
Can I update users on every node via redis-cli?
The Redis cluster does not propagate configuration between its nodes automatically (as you've noted). This applies both to regular (redis.conf) and user (ACL) configuration directives.
You need to copy the ACL files to all nodes, or issue the same ACL SETUSER commands on each node.
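A rough sketch of the second approach, with placeholder node addresses, user name and password:

for node in 10.0.0.1 10.0.0.2 10.0.0.3; do
  redis-cli -h "$node" -p 6379 ACL SETUSER appuser on '~app:*' +@read +@write '>s3cret'
  redis-cli -h "$node" -p 6379 CONFIG REWRITE
done

Use ACL SAVE instead of CONFIG REWRITE inside the loop if the nodes are configured with an external aclfile, and add authentication options to redis-cli as required by your setup.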
Recently I have tried to deploy redis-cluster on a Kubernetes cluster using a Helm chart. I am following the link below:
https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
For the Helm deployment I used values-production.yaml. The default deployment went through successfully and I was able to create a Redis cluster with three masters and three slaves.
I am checking on two things currently:
How to enable container logs: as per the official docs, they should be written to "/opt/bitnami/redis/logs", but I haven't seen any logs there.
From the official docs I understood that the log file name should be set in redis.conf, but currently it is "" (an empty string). I am not sure how and where to pass the log file name so that it ends up in redis.conf.
I have also tried to enable TLS. I generated the certificates as described in the official redis.io TLS docs. After that I created the secret mentioned in the Bitnami TLS section and put the certificates into it.
Then I passed the secret name and all the certificate file names in values-production.yaml and deployed the Helm chart, but it gave me a permission denied error for libfile.sh at line number 37...
When I checked the pod status, out of 6 pods three were Running (2/2) and three were in CrashLoopBackOff (1/2).
After logging in to a running pod I was able to verify that the certificates were placed at "/opt/bitnami/redis/certs/", and the certificate changes were also reflected in redis.conf...
Please let me know how to make configuration changes in redis.conf using the Bitnami Redis Helm chart, and how to resolve the two issues above.
My understanding is that for any redis.conf-related change I have to pass values in the values-production.yaml file... Please confirm. Thank you.
Bitnami developer here.
My first recommendation is to open an issue at https://github.com/bitnami/charts/issues if you are struggling with the Redis Cluster chart.
Regarding the logs, as it's mentioned at https://github.com/bitnami/bitnami-docker-redis-cluster#logging:
The Bitnami Redis-Cluster Docker image sends the container logs to stdout
Therefore, you can simply access the logs by running (substitute "POD_NAME" with the actual name of any of your Redis pods):
kubectl logs POD_NAME
Finally, with respect to the TLS configuration, I guess you're following this guide, right?
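If it helps, here is a rough sketch of enabling TLS through chart parameters; the release name, secret name and file names are placeholders, and the exact tls.* parameter names should be verified against the README of the chart version you are deploying:

helm upgrade my-release bitnami/redis-cluster \
  -f values-production.yaml \
  --set tls.enabled=true \
  --set tls.certificatesSecret=redis-tls-secret \
  --set tls.certFilename=redis.crt \
  --set tls.certKeyFilename=redis.key \
  --set tls.certCAFilename=ca.crt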
I have created new app on OpenShift using this image: https://hub.docker.com/r/luiscoms/openshift-rabbitmq/
It runs successfully and I can use it. I have added a persistent volume to it.
However, every time a pod is restarted, I lose all my data. This is because RabbitMQ uses the hostname to create the database directory.
For example:
node : rabbit#openshift-rabbitmq-11-9b6p7
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : BsUC9W6z5M26164xPxUTkA==
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbit#openshift-rabbitmq-11-9b6p7
How can I set RabbitMq to always use same database dir?
You should be able to set the environment variable RABBITMQ_MNESIA_DIR to override the default configuration. This can be done via the OpenShift console by adding an entry to the environment in the deployment config, or via the oc tool, for example:
oc set env dc/my-rabbit RABBITMQ_MNESIA_DIR=/myDir
You would then need to mount the persistent volume inside the pod at the required path. Since you have said it is already created, you just need to update it, for example:
oc volume dc/my-rabbit --add --overwrite --name=my-pv-name --mount-path=/myDir
You will need to make sure you have correct read/write access on the provided mount path.
EDIT: Some additional workarounds based on issues in comments
The issues caused by the dynamic hostname could be solved in a number of ways:
1. (Preferred IMO) Move the deployment to a StatefulSet. A StatefulSet provides stability in the naming, and hence the network identifier, of the Pod, which must be fronted by a headless service. This feature is out of beta as of Kubernetes 1.9 and has been a tech preview in OpenShift since version 3.5.
2. Set the hostname for the Pod if StatefulSets are not an option. This can be done by adding an environment variable, e.g. oc set env dc/example HOSTNAME=example, to make the hostname static, and setting RABBITMQ_NODENAME to do likewise (see the sketch after this list).
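A minimal sketch of the second option, assuming the deployment config is called example (the dc name and hostname are placeholders):

oc set env dc/example HOSTNAME=example RABBITMQ_NODENAME=rabbit@example

With both variables pinned, the Mnesia directory name stays the same across restarts, so the data on the persistent volume is picked up again.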
I was able to get it to work by setting the HOSTNAME environment variable. OSE normally sets that value to the pod name, so it changes every time the pod restarts. By setting it explicitly, the pod's hostname doesn't change when the pod restarts.
Combined with a persistent volume, the queues, messages, users and, I assume, whatever other configuration there is are persisted through pod restarts.
This was done on an OSE 3.2 server. I just added an environment variable to the deployment config. You can do it through the UI or with the oc CLI:
oc set env dc/my-rabbit HOSTNAME=some-static-name
This will probably be an issue if you run multiple pods for the service, but in that case you would need to set up proper RabbitMQ clustering, which is a whole different beast.
The easiest and most production-safe way to run RabbitMQ on Kubernetes, including OpenShift, is the RabbitMQ Cluster Operator.
See this video on how to deploy RabbitMQ on OpenShift.
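For reference, the operator's quick start installs it with a single command (check the RabbitMQ documentation for the current manifest URL):

kubectl apply -f https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml

After that, a RabbitmqCluster custom resource describes the cluster, and the operator takes care of the StatefulSet, persistence and configuration.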
I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I got some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went to read the authentication page of the documentation, and I decided I want to add authentication via a Static Password File. To do so, I have to append the option --basic-auth-file=SOMEFILE to the API server.
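For reference, the static password file is a CSV file with at least three columns, password, user name and user ID, plus an optional fourth column of comma-separated group names in double quotes. A tiny sketch with made-up credentials:

s3cret,admin,1
hunter2,alice,2,"dev,ops"

The API server is then started with --basic-auth-file pointing at that file.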
When I do ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
Where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after the change to the file, kubectl stopped working and the API was shut down).
After a reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the kubeconfig from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
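Roughly, the steps look like this (the master host name and paths are placeholders):

scp root@master:/etc/kubernetes/admin.conf ~/.kube/config
kubectl proxy

and then open http://127.0.0.1:8001/ui in a local browser while the proxy is running.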
I just found this for a similar use case: the API server was crashing after adding an option with a file path.
I was able to solve it, and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore the usual container rules apply: the kube-apiserver process can only see files that are mounted into its pod.
So if you add an option that points to a file path, make sure you make that file available to the pod with a hostPath volume.