How to secure Redis Keys?

Is there any way to secure Redis keys, so that even a person who knows the redis-server password cannot access or see the keys?
I need this security because I am storing sessions in Redis keys.
For each user there is a unique key, which is stored in Redis as the key name.
If a user knows the keys, he can access any account.
Any suggestions?
Thanks.

You can create a hash, e.g. MD5, of each session ID and use that hash as the Redis key.
// set
session_id_md5 = md5(session_id);
redis.set(session_id_md5, value);
When you want to get session info from Redis, re-create the hash from the session ID and query Redis with it:
// get
session_id_md5 = md5(session_id);
redis.get(session_id_md5);
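For illustration, the same pattern as a minimal command-line sketch, assuming GNU md5sum is available; the session: prefix, the example session ID, and the stored value are placeholders, not part of the original answer:
# hash the session id and use the digest as the Redis key name
session_id="3f2b9c"                                                  # hypothetical session id
key="session:$(printf '%s' "$session_id" | md5sum | cut -d' ' -f1)"
redis-cli SET "$key" '{"user_id":42}'                                # store the serialized session
redis-cli GET "$key"                                                 # re-derive the same key to read it back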

Redis isn't designed to be used the way you are using it, so this isn't possible; you should move your authentication up a level, into your application.

Redis doesn't provide any ACLs, so restricting access to the KEYS command for certain clients isn't possible without some additional middleware. But if you just want to disable the KEYS command, add the following line to your Redis config:
rename-command KEYS ""
You should probably also disable the MONITOR and CONFIG commands:
rename-command CONFIG ""
rename-command MONITOR ""
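After a restart with these settings the renamed commands are simply gone, so a quick sanity check could look like this (the exact error text differs between Redis versions):
redis-cli KEYS '*'               # expected to fail with an unknown-command error
redis-cli CONFIG GET maxmemory   # likewise, once CONFIG is renamed away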

Thanks to all of you for your suggestions.
I conclude with the following.
I want to provide security among developers, not end users. We have a team of a few people who have access to the Redis server to start and stop the service and do other configuration, and I want to restrict those users from accessing keys.
As Redis doesn't have ACLs, I think we can't do such things. Still, anybody who has another idea is most welcome :)

Related

Generate SSH key for Public Key Authentication

A customer is changing SFTP to a different location and wrote to me:
Changing SFTP-server to a more modern service at AWS.
The idea would be to secure the new username with an SSH key pair,
as we’re trying to get rid of all the password usage in the new service.
Could you deliver us a public key for this?
I have no understanding of this.
What do I actually need to do here? Is it the ssh-keygen command from the guide below? Do I need to share separate keys for QA and Production?
https://phoenixnap.com/kb/ssh-with-key
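For reference, the flow in that guide boils down to something like the sketch below; the key type, file name, and comment are placeholders, and whether QA and Production get separate key pairs is an assumption to confirm with the customer:
ssh-keygen -t ed25519 -C "qa-sftp-access" -f ~/.ssh/aws_sftp_qa
# this creates a private key (~/.ssh/aws_sftp_qa) that never leaves your machine
# and a public key (~/.ssh/aws_sftp_qa.pub), which is the file you deliver to the customer;
# repeat with a different -f path (e.g. ~/.ssh/aws_sftp_prod) for Production if they want separate keys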

How is GitLab/GitHub authentication separated from an ordinary SSH-session?

I read the question How does the GitHub authentication work? and https://unix.stackexchange.com/questions/315615/is-ssh-public-key-associated-with-a-user, which is exactly what I am wondering, but I am still missing a better answer.
When I test my SSH key pair I connect as the user git@gitlab.com. My stored public key has a base64 fingerprint. When the SSH client (me) wants to connect to the server (my GitLab/GitHub account), it sends its ID (fingerprint), and the server checks its .ssh/authorized_keys, looping through the fingerprints until it finds the matching public key to encrypt the challenge with.
On GitHub/GitLab there are many thousands of users, and they all use the same username ("git") to initiate a (SaaS) session. So how is this separated on the server? I don't get root access on GitLab/GitHub, of course; I only get access to my account through the generic user session git@gitlab.com. But how is this implemented?
When I use SSH in other situations I have a specific username, which I use to connect: [my-username]@router.com.
E.g. if I were to set up my own GitLab on a local NAS/server, how could I create an account (User@local-gitlab.com) whose access rights are limited to the fingerprints of the different users' SSH key pairs?
User: ID:001
User: ID:002
User: ID:003
Somehow I need to limit the access for ID:001 when he/she initiates an SSH session with my server on the account "User".
I can't speak for GitLab, but for GitHub, there is a dedicated service that terminates these connections, contacts the authentication service with the key in question, and then receives the response about whether the user is allowed to access that repo, and if so, contacts the servers storing the data.
GitHub has more than 65 million users, many users have multiple SSH keys, and there are also deploy keys for servers, so using the command directive with an OpenSSH authorized_keys file would be extremely slow, since it would involve parsing and reading probably gigabytes of data each time a connection was made.
If you need this yourself for a small set of users, the command directive in authorized_keys is a viable approach. If you need something more scalable, you can create a custom server with something like libssh and perform authentication yourself, either in that process, or in a separate process.
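For a small, self-hosted setup, that authorized_keys approach could look roughly like this; the wrapper path, user IDs, and truncated keys are all hypothetical:
# one line per user key in ~git/.ssh/authorized_keys; each key is pinned to a forced command
# that receives that user's ID as an argument
command="/usr/local/bin/repo-gate user-001",no-agent-forwarding,no-port-forwarding,no-pty ssh-ed25519 AAAA... user-001
command="/usr/local/bin/repo-gate user-002",no-agent-forwarding,no-port-forwarding,no-pty ssh-ed25519 AAAA... user-002
# the wrapper reads the requested git operation from $SSH_ORIGINAL_COMMAND, checks that the given
# user ID may access that repository, and only then hands off to git-shell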
I found this question+answer: https://security.stackexchange.com/questions/34216/how-to-secure-ssh-such-that-multiple-users-can-log-in-to-one-account, which highlights that you can put restrictions on authorized_keys. I don't know if that provides a precise answer to my question, but it looks like it.
command="/usr/local/bin/restricted-app",from="192.0.2.0/24",no-agent-forwarding,no-port-forwarding,no-x11-forwarding ssh-rsa AAAA… git#gitlab.com
I guess there are several thousand of those lines in .ssh/authorized_keys on GitLab's/GitHub's servers, where every single line grants access to only that GitLab/GitHub account.
Please comment if you don't agree.

Flush caches after renaming Kubernetes provider account

I renamed the default my-kubernetes-account Kubernetes provider credential in a GCP Spinnaker deployment, and now I have ended up with both names in my Spinnaker UI.
I tried clearing my browser's local storage and even removing the account under Application -> Config -> Application Attributes -> Accounts, but neither helped.
Is there any way to re-index the providers or remove the stale one in some way?
Update:
Would it help to remove all the related keys in Redis, like:
redis-cli KEYS "*my-kubernetes-account:*" | xargs redis-cli DEL
Or is it a totally bad idea? :)
Yes, flushing the Redis DB is the correct way to remove the duplicate entries.
Any of the cached infrastructure details that you remove from Redis will be automatically recreated by the caching agents.
Thanks,
-Matt
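As a side note, the same cleanup can be done without the blocking KEYS command by using redis-cli's SCAN-based iteration (the pattern is copied from the question; double-check what it matches before deleting anything):
redis-cli --scan --pattern '*my-kubernetes-account:*' | xargs redis-cli DEL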

How can I find the last access time of a redis key?

In Redis, two of the eviction policies, allkeys-lru and volatile-lru, evict keys based on access time. So this information must exist somewhere. Is it possible for me to query the access time of a key? Or, better yet, page through a sorted list of keys based on access time?
Look at OBJECT IDLETIME; it gives the time for which the object has been idle.
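For example, a quick check from the command line (session:abc123 is a placeholder key; the result is the approximate idle time in seconds):
redis-cli OBJECT IDLETIME session:abc123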
As guided by @Itamar Haber, the way to disable some commands is by using redis.conf:
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command DEBUG ""
As you are using Redis as a service on Heroku, you have to have admin rights to do this.
Hope this helps!

Using 2 public/private key pairs at the "same" time

So I have two public/private key pairs (each named id_rsa and id_rsa.pub; one pair is currently sitting in a "key_backup" folder I made), one for GitHub and one for passwordless SSH'ing into a cluster. I looked around Google and could only find guides on how to use two public keys at the same time... does the same hold for private keys?
How can I maintain authentication w/ GitHub while also being able to maintain passwordless login with my cluster?
Thanks!
-kstruct
You can use multiple private keys at the same time by making sure that your SSH key agent knows about both keys: ssh-add id_rsa1 id_rsa2 on macOS or Linux, or add both to Pageant on Windows.
The other option would be to create separate Host entries in ~/.ssh/config that point each of your two keys at its intended use.
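A minimal sketch of that ~/.ssh/config approach, assuming the two keys have been renamed so they can be told apart (host names and file names are placeholders):
# use the key registered with GitHub only for github.com
Host github.com
    IdentityFile ~/.ssh/id_rsa_github
    IdentitiesOnly yes
# use the cluster key for the cluster's login node
Host cluster.example.edu
    IdentityFile ~/.ssh/id_rsa_cluster
    IdentitiesOnly yes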