How to stop anonymous access to Redis databases

I run the Redis image with docker-compose.
I passed in a redis.conf (and Redis says "configuration loaded").
In redis.conf I added a user:
user pytest ><password> ~pytest/* on #set #get
And yet I can communicate with Redis anonymously, even with this line uncommented:
requirepass <password>
The Redis docs on Security and ACL do not explain how to restrict access for everyone. I am probably missing something fundamental.
My docker-compose.yaml:
version: '3'
services:
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 6000s
      timeout: 30s
      retries: 50
    restart: always
    volumes:
      - redis-db:/data
      - redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
volumes:
  redis-db:
  redis.conf:

And yet I can communicate with Redis anonymously even with the requirepass line uncommented
Because there is a default user, and you didn't disable it. If you want to totally disable anonymous access, add the following to your redis.conf:
user default off
Secondly, the configuration for user 'pytest' is incorrect. If you want to allow user 'pytest' to run only the SET and GET commands on the given key pattern, configure it as follows:
user pytest ><password> ~pytest/* on +set +get
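To sanity-check the ACL after restarting the container, a quick redis-cli session along these lines should work (a sketch; the password and key name are placeholders, and the exact error messages vary between Redis versions):

$ redis-cli
127.0.0.1:6379> GET pytest/foo
(error) NOAUTH Authentication required.
127.0.0.1:6379> AUTH pytest <password>
OK
127.0.0.1:6379> SET pytest/foo bar
OK
127.0.0.1:6379> DEL pytest/foo
(error) NOPERM this user has no permissions to run the 'del' command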

You also need to ensure that docker-compose is actually using your config file.
Assuming you have redis.conf in the same directory as your docker-compose.yml, the volumes section in the service declaration would be:
- ./redis.conf:/usr/local/etc/redis/redis.conf
Also remove the named volume declaration at the bottom:
redis.conf:
(As written, the compose file mounts an empty named volume called redis.conf instead of your local file.)
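Putting both changes together, the relevant parts of the compose file would look roughly like this (a sketch based on the file above):

services:
  redis:
    image: redis:latest
    volumes:
      - redis-db:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
volumes:
  redis-db: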
Users will still be able to connect to Redis, but without AUTH they cannot perform any action once you enable
requirepass <password>
The right way to restrict GET and SET operations to the keys pytest/* would be:
user pytest ><password> ~pytest/* on +set +get
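Combining the advice above, a minimal ACL section in redis.conf would look like this (password placeholder as in the question):

user default off
user pytest ><password> ~pytest/* on +set +get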

Related

Ansible Inventory Specifying the Same Host with Different Users and Keys for Initial SSH User Setup and Disabling Root Access

I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that connects with the root user, used to create a new user, set up ssh settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory the repetition would take up a ton of unnecessary space:
---
# ./hosts.yaml
---
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is there a cleaner way to do this?
Is this an anti-pattern in any way? It is not idempotent: it would be nice if running the same playbook twice always gave the same output, either "success" or "no change".
I am using DigitalOcean and they have a functionality to have this done via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the users & ssh settings and disabling root access:
---
# ./initial-host-setup.yaml
---
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
---
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash
    - name: add authorized keys for the infraops user
      authorized_key: 'user=infraops key="{{ item }}"'
      with_file:
        - '{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'
    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s
    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd
    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd
    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
Everything after this would use the masters inventory.
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses; I didn't research the others), which is cross-provider enough for me to just stick with a bash init-script solution.
I would still be interested and curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without a cloud-init script you can't have this: either the server starts with root (or a similar user) able to perform these actions, or it starts without such a user and you can't perform them at all.
Further, I have seen Ansible playbooks and bash scripts that try to achieve the desired "idempotence" (completing with no errors even if root is already disabled) by testing root ssh access and then falling back to another user, but "I can't ssh with root" is a poor test for "the root user is disabled", because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh.
EDIT 2: placing this here, since I can't use newlines in my response to a comment.
β.εηοιτ.βε responded to my assertion:
"but "I can't ssh with root" is a poor test for "is the root user disabled" because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh"
with:
"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
- if success, we know the user/ssh setup tasks have not completed, so run those tasks
- if failure, attempt ssh with infraops
  - if success, run everything except the user creation again to ensure the ssh config is as desired
  - if failure... something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible script.
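For what it's worth, one way to express that probe in Ansible might be something like the following (a rough, untested sketch; the modules and parameters are standard, but the overall flow is only illustrative):

- hosts: masters
  gather_facts: false
  vars:
    ansible_user: root
  tasks:
    - name: probe whether root can still ssh in
      wait_for_connection:
        timeout: 10
      ignore_errors: true
      register: root_probe
    - name: fall back to infraops when root login is already disabled
      set_fact:
        ansible_user: infraops
      when: root_probe is failed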
You can overwrite host variables for a given play by using vars.
- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:
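Taken further, this lets the bootstrap play and the follow-up plays live in one playbook: the first play pins the connection variables to root, and later plays fall back to the inventory's infraops values (a sketch; task bodies elided):

- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:
    # create the infraops user, harden ssh, disable root login

- hosts: masters
  become: 'yes'
  tasks:
    # subsequent tasks connect as infraops, from the inventory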
Alternatively, you could define only the masters group and alter ansible_user and ansible_ssh_private_key_file at run time, using the command-line flags --user and --private-key.
So, with a hosts.yaml containing:
all:
  children:
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
and a play running on - hosts: masters, the first run would, for example, be:
ansible-playbook initial-host-setup.yaml \
  --user root \
  --private-key ~/.ssh/id_rsa_root
while subsequent runs would simply be:
ansible-playbook subsequent-host-setup.yaml
since all the required values are already in the inventory.

Redis Docker image using ACL

I am trying to test the new Redis 6 ACL configuration.
I want to run a test with the simplest configuration possible to get acquainted with the configuration.
My Redis will run as a Docker container. Please consider that I am a complete Redis newbie.
My Dockerfile:
FROM redis:6.2.1
COPY redis.conf /usr/local/etc/redis/redis.conf
COPY users.acl /etc/redis/users.acl
EXPOSE 6379
My redis.conf file:
aclfile /etc/redis/users.acl
My users.acl file:
user test on >password ~* &* +@all
I am able to run the container without errors, but it seems that it is not loading the ACL configuration: when I run redis-cli inside the container and execute ACL LIST, I get as output:
1) "user default on nopass ~* &* +@all"
which is clearly not as intended.
I fear I am missing something in the Dockerfile, but I cannot find documentation suited to my needs.
Does someone have hints?
Thanks in advance.
As the redis.conf example file clearly states:
# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
The Dockerfile is missing one last line:
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
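With that line added, rebuilding and checking the ACL should show the test user (a sketch; the image and container names are made up, and the exact ACL LIST output format varies across Redis versions):

$ docker build -t redis-acl .
$ docker run -d --name redis-acl -p 6379:6379 redis-acl
$ docker exec -it redis-acl redis-cli ACL LIST
1) "user default on nopass ~* &* +@all"
2) "user test on #<sha256 of the password> ~* &* +@all"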

How to set password for redis-server

I have a 3-instance high-availability Redis deployment. On each server I have Redis and Sentinel installed. I am trying to set a password so that it is requested when connecting with redis-cli.
I am modifying the value of the requirepass parameter in the redis.conf file:
requirepass password123
Also, inside the Redis terminal, I am setting the password with the following commands:
config set requirepass password123
auth password123
When I connect with the following command:
redis-cli --tls --cert /<path>/redis.crt --key /<path>/redis.key --cacert /<path>/ca.crt -a password123
it works fine. My problem is that when I restart the Redis service, the password settings are not kept and I get the following message:
Warning: AUTH failed
I do not know what configuration I need so that the change is maintained after restarting the Redis service.
The version of Redis I have installed is "Redis server v=6.0.6".
Check your ACL configuration: your requirepass setting will be ignored when ACLs are in use. The following information is from the redis.conf example file:
IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
layer on top of the new ACL system. The option effect will be just setting
the password for the default user. Clients will still authenticate using
AUTH as usually, or more explicitly with AUTH default
if they follow the new protocol: both will work.
The requirepass is not compatible with the aclfile option and the ACL LOAD
command; these will cause requirepass to be ignored.
config rewrite
Running this command after setting requirepass from redis-cli will persist the setting and solve the issue of nopass after a restart.
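A complete session would look roughly like this (password taken from the question; note that CONFIG REWRITE requires Redis to have been started with a config file it can write back to):

$ redis-cli
127.0.0.1:6379> CONFIG SET requirepass password123
OK
127.0.0.1:6379> AUTH password123
OK
127.0.0.1:6379> CONFIG REWRITE
OK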

How to have an idempotent Ansible playbook if we change the SSH port?

My playbook needs to change the ssh port and update the firewall rules (unfortunately, I cannot get a new server directly with the desired custom port).
Managing the change during execution is easy.
However, I do not know how to make the playbook idempotent:
The first run must be initiated on the default port (22).
The next runs must be initiated on the custom port.
A solution could be made to work, but with performance issues.
Is there any other possibility with Ansible 2.0+?
You could approach this a couple of ways really.
The simplest way might be to split the SSH port configuration into a separate playbook/role that specifies the SSH port as 22, while your inventory normally defines the SSH port as your custom one:
ssh_port.yml
- hosts: all
  vars:
    ansible_ssh_port: 22
  tasks:
    - name: change the default ssh port
      lineinfile ...
      notify: restart ssh
  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted
You would then only run this playbook on the creation of the machine and then only re-run your main playbook again and again, sidestepping the idempotency of this step.
Alternatively, as Mikko Ohtamaa pointed out in the comments, you could have Ansible modify your inventory file when you change the port. This means you can run the whole thing end to end idempotently: the next run will connect on the non-default SSH port and then simply (if pointlessly) check that the SSH port is still set to the desired one. You can get at the inventory file by using the "magic variable" inventory_file. A rough example might look like this:
- name: change the default ssh port
  lineinfile ...
  notify: restart ssh

- name: change ssh port used by ansible
  set_fact:
    ansible_ssh_port: "{{ custom_ssh_port }}"

- name: change ssh port in inventory
  lineinfile:
    dest: "{{ inventory_file }}"
    insertafter: '^\[all:vars\]'
    line: 'ansible_ssh_port="{{ custom_ssh_port }}"'
Just make sure you have an inline [all:vars] group variables block in the inventory file; all future runs of any playbook against this inventory will then connect to all of the hosts in it on your custom SSH port.
If you use source control, you will also need a local_action task to push the change back to your remote.
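That last step might look something like this (an illustrative sketch only; the git workflow and commit message are assumptions):

- name: push the inventory change back to the remote
  local_action:
    module: shell
    cmd: |
      git add "{{ inventory_file }}"
      git commit -m "update ansible_ssh_port"
      git push
  run_once: true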

Redis in docker-compose: any way to specify a redis.conf file?

My Redis container is defined as a standard image in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379"
I guess it's using standard settings, like binding Redis to localhost.
I need to bind it to 0.0.0.0. Is there any way to add a local redis.conf file to change the binding and have docker-compose use it?
Thanks for any tricks...
Yes. Just mount your redis.conf over the default with a volume:
redis:
  image: redis
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379"
Alternatively, create a new image based on the redis image with your conf file copied in. Full instructions are at: https://registry.hub.docker.com/_/redis/
However, the redis image does bind to 0.0.0.0 by default. To access it from the host, you need to use the port that Docker has mapped to the host, which you can find with docker ps or the docker port command; you can then access it at localhost:32678, where 32678 is the mapped port. Alternatively, you can specify a fixed port mapping in the docker-compose.yml.
As you seem to be new to Docker, this might all make a bit more sense if you start by using raw Docker commands rather than starting with Compose.
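For example, finding the mapped port and pinging Redis from the host might look like this (the container name and mapped port are illustrative):

$ docker port myproject_redis_1 6379
0.0.0.0:32768
$ redis-cli -p 32768 ping
PONG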
Old question, but if someone still wants to do that, it is possible with volumes and command:
command: redis-server /usr/local/etc/redis/redis.conf
volumes:
  - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
Unfortunately with Docker, things become a little tricky when it comes to the Redis configuration file, and the answer voted best (I'm sure by people who didn't actually test it) does NOT work.
But what DOES work, fast and without hassle, is this:
command: redis-server --bind redis-container-name --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
You can pass all the configuration options you want in the command section of the compose file, each prefixed with "--" and followed by its value.
Never forget to set a password, and if possible close port 6379.
Thank me later.
PS: If you noticed, the command doesn't use the typical 127.0.0.1 but the Redis container name instead. This is because Docker assigns IP addresses internally via its embedded DNS server; in other words, this bind address becomes dynamic, adding an extra layer of security.
If your Redis container is called "redis" and you execute docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis (to verify the running container's internal IP address), then as far as Docker is concerned, the command given in the compose file will be translated internally to something like: redis-server --bind 172.19.0.5 --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
Based on David's answer, but a more "Docker Compose" way is:
redis:
  image: redis:alpine
  command: redis-server --include /usr/local/etc/redis/redis.conf
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
That way, you include the .conf file from the docker-compose.yml file and don't need a custom image.
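To confirm the file was actually picked up, query a directive you set in it, for example (assuming redis.conf sets maxmemory 256mb; service name as above):

$ docker-compose up -d redis
$ docker-compose exec redis redis-cli config get maxmemory
1) "maxmemory"
2) "268435456"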
- mount your config at /usr/local/etc/redis/redis.conf
- add a command to execute redis-server with your config
redis:
  image: redis:7.0.4-alpine
  restart: unless-stopped
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  command: redis-server /usr/local/etc/redis/redis.conf
  ########################################
  # or, if the mount does not work, pass the options directly:
  ########################################
  # command: >
  #   redis-server --bind 127.0.0.1
  #   --appendonly no
  #   --save ""
  #   --protected-mode yes
It is an old question, but I have a solution that seems elegant and means I don't have to execute commands every time ;).
1. Create your Dockerfile:
#/bin/redis/Dockerfile
FROM redis
CMD ["redis-server", "--include", "/usr/local/etc/redis/redis.conf"]
What we are doing is telling the server to include that file in the Redis configuration. The settings you type there will override Redis's defaults.
2. Create your docker-compose service:
redisall:
  build:
    context: ./bin/redis
  container_name: 'redisAll'
  restart: unless-stopped
  ports:
    - "6379:6379"
  volumes:
    - ./config/redis:/usr/local/etc/redis
3. Create your configuration file; it must have the name referenced in the Dockerfile:
# config/redis/redis.conf
requirepass some-long-password
appendonly yes
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
# ...plus any other directives you want to set; whatever you put here
# overrides the container's default settings.
I had the same problem using Redis in a Docker environment: Redis could not save data to disk in dump.rdb.
The problem was that Redis could not read the configuration in redis.conf. I solved it by passing the required settings with command in docker-compose, as below:
redis19:
  image: redis:5.0
  restart: always
  container_name: redis19
  hostname: redis19
  command: redis-server --requirepass some-secret --stop-writes-on-bgsave-error no --save 900 1 --save 300 10 --save 60 10000
  volumes:
    - $PWD/redis/redis_data:/data
    - $PWD/redis/redis.conf:/usr/local/etc/redis/redis.conf
    - /etc/localtime:/etc/localtime:ro
and it works fine.
I think it will be helpful to share working code from my local setup:
redis:
  container_name: redis
  hostname: redis
  image: redis
  command: >
    --include /usr/local/etc/redis/redis.conf
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"