I renamed the default my-kubernetes-account Kubernetes provider credential in my GCP Spinnaker deployment, and now both the old and the new account show up in my Spinnaker UI.
I tried clearing my browser's local storage and even removing the account under Application -> Config -> Application Attributes -> Accounts, but neither helped.
Is there any way to re-index the providers, or to remove the stale account some other way?
Update:
Would it help to remove all the related keys in redis, like:
redis-cli KEYS "*my-kubernetes-account:*" | xargs redis-cli DEL
Or is it a totally bad idea? :)
Yes, flushing the Redis DB is the correct way to remove the duplicate entries.
Any cached infrastructure details that you remove from Redis will be automatically recreated by the caching agents.
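A minimal sketch of both variants, assuming the default single Spinnaker Redis running locally on port 6379: FLUSHDB wipes the whole cache database (the caching agents rebuild it), while the SCAN-based variant deletes only the renamed account's keys and avoids blocking the server the way KEYS can.

# wipe the whole cache database; the caching agents repopulate it
redis-cli -h 127.0.0.1 -p 6379 FLUSHDB

# or delete only the stale account's keys, using SCAN instead of KEYS
redis-cli --scan --pattern "*my-kubernetes-account:*" | xargs redis-cli DEL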
Thanks,
-Matt
Related
I have a Redis instance which is installed as a Linux service. I have created a few users with some RBAC policies. Everything works fine as expected for a few days, but then suddenly all my newly created users get deleted, and my application that connects to Redis starts throwing exceptions.
Also, no other person can access this server except me.
Can someone help me persist the newly created users in Redis permanently?
Everything works fine as expected for a few days, but then suddenly all my newly created users get deleted
After ruling out other trivial reasons (connecting to the wrong instance, for example), I believe your Redis service was simply restarted for some reason, perhaps after a server restart.
Once an ACL rule is created/modified/updated, the configuration needs to be persisted to a file to make it survive a Redis restart; to do that, run either:
CONFIG REWRITE, if you are specifying your ACL users / rules inside your main configuration file (the default option);
ACL SAVE, if you are using an external ACL file.
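A minimal sketch of the two commands, run through redis-cli as a user with permission to execute them (host, port and authentication flags are omitted here):

# ACL users/rules kept in the main redis.conf (the default): rewrite it in place
redis-cli CONFIG REWRITE

# ACL users/rules kept in an external ACL file (aclfile set in redis.conf): save them there
redis-cli ACL SAVE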
To learn more about how Redis deals with ACLs, be sure to check out the official documentation.
I have deployed a single/standalone Redis container using the bitnami/redis Helm chart.
I created a new user in Redis by connecting with redis-cli and running the command "ACL SETUSER username on allkeys +@all +SADD >password", and the created user is shown by ACL LIST.
Now, if I delete the Redis pod, a new pod comes up and it does not have the user I created above.
Why is this happening?
How do I create permanent users in Redis?
Assuming your deployment runs the regular Redis implementation, there are two options for ACL persistence:
in the main redis configuration file, or
in an external ACL configuration file
Which one applies is determined by whether you have the aclfile configuration option set. If you're using the first approach, CONFIG REWRITE will update the server configuration file; if you're using the second, ACL SAVE is what you need. If you don't issue either of these, your ACLs will be lost when the server next restarts.
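If you're not sure which mode your deployment is in, you can check and then persist accordingly; a minimal sketch from redis-cli (authentication flags omitted, and note that inside a pod the change only survives restarts if the file lives on a persistent volume):

# an empty value means no external ACL file is configured
redis-cli CONFIG GET aclfile

# then persist the users created with ACL SETUSER
redis-cli CONFIG REWRITE   # if aclfile is empty (ACLs live in redis.conf)
redis-cli ACL SAVE         # if aclfile points to an external ACL file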
I am in the process of adding Redis as a distributed cache to my application. When I run automated integration tests, I would like to have each test start with a clean instance - so, create the DB if it does not exist, or clear it if it does.
When I do this for my Oracle instance, I just drop the configured user and recreate it, resulting in a clean slate. What would the Redis equivalent be? The only way I have found to create DBs is to use the Web UI.
I believe you can do this, but (for obvious reasons) I have no intention to try it!
redis-cli flushall
See the documentation for FLUSHALL.
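As a side note on the "create the DB if it does not exist" part: Redis has no named databases to create; a standalone server ships with 16 numbered logical databases (0-15) by default, so a clean slate is simply a flush of whichever database the tests use. A sketch, assuming the tests run against database 1 on a local instance:

# clear only the logical database used by the tests (database 1 here)
redis-cli -h 127.0.0.1 -p 6379 -n 1 FLUSHDB

# or clear every database on the instance
redis-cli -h 127.0.0.1 -p 6379 FLUSHALL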
I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm, following this guide. I ran into some issues, but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so, I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went and read the authentication page of the documentation, and I decided I want to add authentication via a Static Password File. To do so, I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I run ps -aux | grep kube-apiserver, this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So, where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after the change to the file, kubectl wasn't working anymore and the API server was down).
After a reboot the whole system was working again.
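For reference, the static password file passed via --basic-auth-file was (while that flag existed) a CSV with one line per user in the form password,user,uid, optionally followed by a quoted, comma-separated list of group names. The values below are just an illustration, not something from this setup:

mysecretpassword,admin,1000,"system:masters"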
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the keys from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the traffic of the dashboard to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui
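Roughly what that looks like; the master's address and the local kubeconfig path are placeholders for your own values:

# copy the cluster-admin kubeconfig from the master to the local machine
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config

# proxy the API server (and the dashboard UI) to localhost:8001
kubectl proxy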
I just found this for a similar use case: the API server was crashing after I added an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
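For example, if the file lives at /etc/kubernetes/basic_auth.csv on the host, the static pod definition needs something along these lines (sketched in the YAML form newer kubeadm versions use for kube-apiserver; the file path and volume name are placeholders):

spec:
  containers:
  - name: kube-apiserver
    # ...existing command/flags, plus --basic-auth-file=/etc/kubernetes/basic_auth.csv
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/basic_auth.csv
      readOnly: true
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/basic_auth.csv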
Is there any way to secure Redis keys so that even a person who knows the redis-server password cannot access or see the keys?
I need this security because I am storing sessions in Redis keys.
For each user there is a unique key which is stored in Redis as the key.
If a user knows the keys, then he can access any account.
Any suggestions?
Thanks.
You can create a hash, e.g. MD5, of each session id and use the hash as the Redis key.
# set: hash the session id and use the digest as the Redis key
import hashlib, redis
r = redis.Redis()
session_id_md5 = hashlib.md5(session_id.encode()).hexdigest()
r.set(session_id_md5, value)
When you want to get session info from Redis, re-create the hash from the session id and search Redis with it:
# get: re-create the hash from the session id and look it up
session_id_md5 = hashlib.md5(session_id.encode()).hexdigest()
value = r.get(session_id_md5)
Redis isn't designed to be used the way you are using it, so this isn't possible; you should move your authentication up a level, into your application.
Redis doesn't provide any ACLs, so restricting access to the KEYS command for certain clients isn't possible without some additional middleware. But if you want to just disable the KEYS command, add the following line to your Redis config:
rename-command KEYS ""
You should probably also disable the MONITOR and CONFIG commands:
rename-command CONFIG ""
rename-command MONITOR ""
Thanks to all of you for your suggestions.
I'll conclude with the following: I want to provide security against developers, not end users. We have a team of a few people who have access to the Redis server to start and stop the service and change other configuration, and I want to restrict those users from accessing keys.
As Redis doesn't have ACLs, I think we can't do such things. Still, anybody who has any other idea is most welcome :)