Why do ACL SAVE and ACL LOAD only update the current node in a Redis cluster? - redis

I created a Redis cluster and am trying to use ACLs.
I want certain users to only be able to access keys with a specific prefix.
But when I use ACL LOAD or ACL SAVE, the change is only applied on the current node.
Can I update the users on every node through redis-cli?

The Redis cluster does not propagate configuration between its nodes automatically (as you've noted). This applies both to regular (redis.conf) and user (ACL) configuration directives.
You need to copy the ACL files to all nodes, or issue the same ACL SETUSER commands on each node.
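For example, here is a minimal sketch of pushing the same user definition to every node with redis-cli; the node addresses, user name, password, and key prefix are placeholders:
# Hypothetical node list -- replace with your cluster's actual node addresses.
for node in 10.0.0.1:6379 10.0.0.2:6379 10.0.0.3:6379; do
  host=${node%:*}; port=${node#*:}
  # Create/update the user on this node, limited to keys with the "app:" prefix.
  redis-cli -h "$host" -p "$port" ACL SETUSER appuser on '>secret' '~app:*' +@read +@write
  # Persist it to this node's ACL file (only works if aclfile is configured there).
  redis-cli -h "$host" -p "$port" ACL SAVE
done
Each node applies and saves the change independently; nothing is propagated through the cluster bus.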

Related

Where is the user's information stored when they are created via ACL SETUSER in Redis?

When I create a new user in Redis using the ACL SETUSER command, like the following:
acl setuser ankit on >generalpassword +@all -@dangerous ~*
Where is this information about the new user stored?
I checked the redis.conf file.
Is it stored in another file? If yes, which file is that?
The ACL database is stored in memory (RAM) and gets lost if you restart Redis. To persist it to disk, you need to invoke the ACL SAVE command:
When Redis is configured to use an ACL file (with the aclfile
configuration option), this command will save the currently defined
ACLs from the server memory to the ACL file.
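For example, a minimal sketch, assuming aclfile points at /etc/redis/users.acl (the path is just an example, not a required location):
# redis.conf -- tell Redis to keep users in an external ACL file.
aclfile /etc/redis/users.acl

# At runtime, define the user and persist it to that file:
redis-cli ACL SETUSER ankit on '>generalpassword' '~*' +@all -@dangerous
redis-cli ACL SAVE
Without an aclfile (the default), users can instead be written into redis.conf itself with CONFIG REWRITE.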

Users created in Redis are deleted if the Redis pod is deleted/restarted

I have deployed a single/standalone Redis container using the bitnami/redis Helm chart.
I created a new user in Redis by connecting with redis-cli and running ACL SETUSER username on allkeys +@all +SADD >password, and the created user is shown by ACL LIST.
Now if I delete the Redis pod, a new pod comes up and it does not have the user I created above.
Why is this happening?
How do I create permanent users in Redis?
Assuming your deployment runs the regular Redis implementation, there are two options for ACL persistence:
in the main redis configuration file, or
in an external ACL configuration file
This is determined by whether you have an aclfile configuration option set. If you're using the first version, then CONFIG REWRITE should update the server configuration; if you're using the second version, ACL SAVE is what you need. If you don't issue either of these, your ACLs will be lost when the server next restarts.
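A minimal sketch of the first option (no aclfile set, persisting the user into the main config file), assuming the server was started with a writable redis.conf; the user name and password are placeholders:
redis-cli ACL SETUSER username on allkeys +@all +SADD '>password'
redis-cli CONFIG REWRITE   # writes a matching "user username ..." line into redis.conf
Note that in a pod, the rewritten redis.conf (or the external ACL file) has to live on a persistent volume; if it only exists in the container filesystem, the change disappears together with the pod anyway.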

Change original creator of EKS cluster

Is it possible to change the original creator of an EKS cluster to another role? I still have access to the cluster, with both the original creator role and the new one I want to transfer the cluster to.
The new role is now encoded in the aws-auth ConfigMap, but we locked ourselves out by deleting the ConfigMap (in a Terraform update). We were able to restore it using the creator role, but we'd rather not use that one anymore.
Is it possible to update the creator user, or do I need to create a new cluster with the proper role, and then transfer everything over?
From the Amazon Docs:
You don't need to add cluster_creator to the aws-auth ConfigMap to get admin access to the Amazon EKS cluster. By default, the cluster_creator has admin access to the Amazon EKS cluster that it created.
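If the goal is simply to keep administering the cluster with the new role, a minimal sketch of the aws-auth mapping would look like this (the account ID and role name are placeholders):
# aws-auth ConfigMap in the kube-system namespace
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/new-admin-role
      username: new-admin-role
      groups:
        - system:masters
The original creator never appears in this ConfigMap; its admin access is implicit, which is why the quoted documentation says it doesn't need to be added.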

Save user in Redis

I'm using Redis 6.2.5, and I'm facing some issues to save users.
It looks like it only works if I put the user in the redis.conf file. If I just create it with the ACL SETUSER username command and then restart the service, the user information is lost, even if I run the SAVE or BGSAVE commands. Does anybody know a way to save the user permanently without editing the redis.conf file, or to add it in memory but also to redis.conf, so that when Redis is restarted, the user will still be there?
You can use the CONFIG REWRITE command to rewrite the config file, so that your settings will be saved there. The next time you start Redis with this config file, you'll get those user settings.
Alternatively, you can use an external ACL file to set ACL rules. If you want to change the settings, you can manually edit the ACL file and call ACL LOAD to reload the new configuration.
Check the docs for details.
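A minimal sketch of the external-ACL-file approach (the path, user, and rules are placeholders):
# redis.conf
aclfile /etc/redis/users.acl

# /etc/redis/users.acl -- same rule syntax as ACL SETUSER, one user per line
user reporting on >reportpass ~reports:* +@read

# Reload the file into the running server after editing it:
redis-cli ACL LOAD
With an aclfile configured, ACL SAVE writes runtime changes back to the same file.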

gcloud compute ssh with local key & project restrictions

We have a user that is allowed to SSH into a VM on the Google Cloud Platform.
His key is added to the VM and he can SSH using
gcloud compute ssh name-of-vm
However, connecting in this way always has gcloud try to update the project-wide metadata:
Updating project ssh metadata...failed
It fails because he only has rights for accessing & administering this VM.
However, it's very annoying that every time he connects this way he has to wait for GCP to try to update the metadata, which is not allowed, and then to check the SSH keys on the machine.
Is there a flag in the command to skip checking/updating project wide ssh keys?
Yes, we can 'block project wide ssh keys' on the instance, but that would mean that other project admins cannot log in anymore.
I've also tried to minimise this user's access.
But, ideally, what rights should he have if he is allowed to SSH to the machine, start & stop the instance and store data into a bucket?
What you can do is enable OS Login for all the users you need, including admins; enabling OS Login on instances disables metadata-based SSH key configuration on those instances.
The role to start, stop, and connect via SSH to an instance would be roles/compute.instanceAdmin (take into account that this role is currently in beta); you can check here a list of the Compute Engine roles available so you can choose the one that better suits your needs.
To store data into a bucket, I think the most suitable role is roles/storage.objectCreator that allows users to create objects but not to delete or overwrite objects.
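A minimal sketch of those steps with gcloud; the project, instance, zone, and user e-mail are placeholders:
# Enable OS Login on the instance (disables metadata-based SSH keys there).
gcloud compute instances add-metadata my-vm --zone=us-central1-a \
    --metadata enable-oslogin=TRUE

# Grant the user the instance-admin and object-creator roles on the project.
gcloud projects add-iam-policy-binding my-project \
    --member=user:dev@example.com --role=roles/compute.instanceAdmin
gcloud projects add-iam-policy-binding my-project \
    --member=user:dev@example.com --role=roles/storage.objectCreator
If OS Login is enabled, the user typically also needs roles/compute.osLogin (or roles/compute.osAdminLogin) to actually log in.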
I found this solution very useful.
Create a file called config under ~/.ssh
Add the following to it. Change nickname to anything you prefer, $IP_OF_INSTANCE to the public IP of the instance, and $USER to your machine username.
Host nickname
HostName $IP_OF_INSTANCE
Port 22
User $USER
CheckHostIP no
StrictHostKeyChecking no
IdentityFile ~/.ssh/google_compute_engine
Now, you can simply SSH using:
ssh nickname
Note that the path on Linux and Mac is ~/.ssh while the path on Windows is something like C:\Users\<user>\.ssh
Re: #1: There's no flag on the command to change this behavior at a per-command level instead of a per-instance level ('block-project-ssh-keys', as you mentioned), but you could file a feature request at https://issuetracker.google.com/savedsearches/559662.