Cassandra internode security on Kubernetes - SSL

I have a Cassandra cluster over Kubernetes deployed as stateful sets. Is it possible to implement security in Cassandra?

Yes. In your Docker entrypoint, use sed to swap the default auth classes in cassandra.yaml for whichever ones you wish to use:
sed -i "s/authenticator: AllowAllAuthenticator/authenticator: ${AUTHENTICATOR_CLASS}/" ${CASSANDRA_CONF}/cassandra.yaml
sed -i "s/authorizer: AllowAllAuthorizer/authorizer: ${AUTHORIZER_CLASS}/" ${CASSANDRA_CONF}/cassandra.yaml
Setting the authenticator to PasswordAuthenticator will create the default cassandra/cassandra superuser role, which you can use to create your new roles.
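For example, once a node is up with PasswordAuthenticator enabled, you could log in as the default superuser and create your own roles (the role name and passwords below are placeholders):
cqlsh -u cassandra -p cassandra
CREATE ROLE admin WITH PASSWORD = 'SuperSecret' AND LOGIN = true AND SUPERUSER = true;
ALTER ROLE cassandra WITH PASSWORD = 'SomethingLongAndRandom';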
What about Cassandra client-to-node encryption?
That's a little trickier. Basically, you need to inject the certificate or keystore and the keystore's password into your image (from a password/secret store or another secure location). Then, your entrypoint should append the following standard SSL settings to the end of cassandra.yaml:
if [ "${SSL_ENABLED}" == "true" ]; then
cat << EOF >> ${CASSANDRA_CONF}/cassandra.yaml
client_encryption_options:
enabled: ${SSL_ENABLED}
optional: ${SSL_OPTIONAL}
keystore: ${CASSANDRA_CONF}/server-keystore.jks
keystore_password: ${KEYSTORE_PASSWORD}
EOF
fi
If you also require two-way SSL (which needs require_client_auth: true set) or node-to-node encryption, you'll also need to inject or build a Java truststore in the same manner.
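As a minimal sketch, assuming the CA certificate has been injected alongside the keystore and that TRUSTSTORE_PASSWORD comes from your secret store (the file name and the variable are placeholders), the entrypoint could build the truststore with keytool:
keytool -importcert -noprompt -alias cassandra-ca \
    -file ${CASSANDRA_CONF}/ca.crt \
    -keystore ${CASSANDRA_CONF}/server-truststore.jks \
    -storepass "${TRUSTSTORE_PASSWORD}"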

Related

Custom path for Hashicorp Vault Kubernetes Auth Method does not work using CLI

When I enable the kubernetes auth method at the default path (-path=kubernetes) it works. However, when it is enabled at a custom path, the Vault init and sidecar containers don't start.
The kubernetes auth method is enabled at auth/prod:
vault auth enable -path=prod/ kubernetes
vault write auth/prod/config \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
vault write auth/prod/role/internal-app \
bound_service_account_names=internal-app \
bound_service_account_namespaces=default \
policies=internal-app \
ttl=24h
What could be wrong with these auth configurations?
Not sure how you have deployed Vault, but if your injector is enabled
injector:
  enabled: true
Vault will inject the sidecar and init containers. You should check the logs of whichever sidecar or init container is failing.
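For example, assuming the injector's default container names, the logs can be pulled with kubectl (the pod name is a placeholder):
kubectl logs <pod-name> -c vault-agent-init
kubectl logs <pod-name> -c vault-agent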
If you are using the Kubernetes method to authenticate, check out the annotation example below and adapt it:
annotations:
  vault.hashicorp.com/agent-image: registry.gitlab.com/XXXXXXXXXXX/vault-image/vault:1.4.1
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-inject-secret-secrets: kv/secret-path-location
  vault.hashicorp.com/auth-path: auth/<K8s-cluster-auth-name>
  vault.hashicorp.com/role: app
You can also keep multiple auth paths so that different K8s clusters authenticate against a single Vault instance.
If Vault is injecting the sidecar, check its logs.
https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar

How to set password for redis-server

I have a 3-instance high-availability Redis deployment. On each server I have Redis and Sentinel installed. I am trying to set a password so that it is requested when connecting with redis-cli.
I am modifying the value of the requirepass parameter in the redis.conf file:
requirepass password123
Also, inside the redis-cli prompt, I am setting the password with the following commands:
config set requirepass password123
auth password123
When I connect with the following command
redis-cli --tls --cert /<path>/redis.crt --key /<path>/redis.key --cacert /<path>/ca.crt -a password123
It works fine; my problem is that when I restart the Redis service, the password settings are not kept and I get the following message:
Warning: AUTH failed
I do not know what configuration I need to do so that the change is maintained after restarting the redis service.
The version of redis that I have installed is "Redis server v=6.0.6"
Check your ACL configuration. Your requirepass setting will be ignored when ACLs are in use. I got the following information from the example redis.conf file:
IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
layer on top of the new ACL system. The option effect will be just setting
the password for the default user. Clients will still authenticate using
AUTH <password> as usual, or more explicitly with AUTH default <password>
if they follow the new protocol: both will work.
The requirepass is not compatible with the aclfile option and the ACL LOAD
command; these will cause requirepass to be ignored.
config rewrite
Running this command after setting requirepass from redis-cli will persist the configuration to redis.conf and solve your issue of the password being lost after restart.
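A minimal sketch of the whole sequence, reusing the connection options from the question (the password is a placeholder):
redis-cli --tls --cert /<path>/redis.crt --key /<path>/redis.key --cacert /<path>/ca.crt
127.0.0.1:6379> CONFIG SET requirepass "password123"
OK
127.0.0.1:6379> CONFIG REWRITE
OK
After this, redis.conf contains the requirepass line and the password survives a service restart.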

How to remove certificate from Traefik acme storage when saved to consul KV

I have Traefik running with a Consul KV store. How do I remove a record from the acme certificate storage in Consul, or force a renewal for just one domain/frontend?
Problem:
Somehow one of the frontend domains has been saved with the wrong certificate. It's referencing a certificate from a different domain (which is also a frontend in Traefik).
I was able to inspect the ACME JSON by getting the Consul value for the traefik/acme/account/object key, then decoding and unzipping it; this is the record from the Certs array:
{
  "Domains": {
    "Main": "my.domain1.com",
    "SANs": null
  },
  "Certificate": {
    "Domain": "my.domain2.com",
    "CertURL": "https://acme-v02.api.letsencrypt.org/acme/cert/idfordomain2",
    "CertStableURL": "https://acme-v02.api.letsencrypt.org/acme/cert/idfordomain2",
    "PrivateKey": "...",
    "Certificate": "..."
  }
}
As you can see, somehow the cert for my.domain2.com has been saved against the record for my.domain1.com, which results in an invalid-certificate warning in the browser. I want to clear out the whole record so Traefik will fetch a fresh cert. I'm using Consul and the value is stored gzipped in binary, so I can't just edit the JSON.
Here is how I solved this issue:
Your traefik network should be marked as attachable: true
Run on host:
docker run -it --rm --name consul-client --network traefik_traefik consul sh
Then, inside the created container, run:
export CONSUL_HTTP_ADDR=consul:8500
# get value from consul and store it to acme.json
consul kv get traefik/acme/account/object | gzip -dc > acme.json
# remove invalid domain and store it to acme-fixed.json
cat acme.json | jq -r 'del (.DomainsCertificate.Certs[] | select(.Domains.Main=="'yourdomain.com'"))' > acme-fixed.json
# gzip it
cat acme-fixed.json | gzip -c > acme-fixed.json.gz
# upload fixed and gzipped json back to consul
consul kv put traefik/acme/account/object @acme-fixed.json.gz
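To verify the bad record is gone (assuming the same key layout as above), you can re-read the value and list the Main domain of each remaining cert:
consul kv get traefik/acme/account/object | gzip -dc | jq -r '.DomainsCertificate.Certs[].Domains.Main'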
The simplest way is to use the consul CLI utility. The same binary is used to run the server, and ideally you should use the same version as your servers. Make sure you export the environment variables: CONSUL_HTTP_ADDR points to the Consul server (default is http://127.0.0.1:8500), and CONSUL_HTTP_TOKEN is the ACL token, if ACLs are enabled on your server, as they should be in production environments.
Then you just run the following command:
consul kv put traefik/acme/account/object @traefik.json
where traefik.json is the JSON file containing the updated values you wish to store in the Consul KV store.
Or you can use HTTP API: Consul Create/Update Key
curl -X PUT --data @traefik.json http://<your-server-url>:<port>/v1/kv/traefik/acme/account/object
If your server is ACL-enabled, you need to add the following header to the curl request, with the <your-acl-token> that was issued to you: -H "X-Consul-Token: <your-acl-token>"
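Putting both together, the full request would look like this (the URL, port, and token are placeholders, as above):
curl -X PUT --data @traefik.json -H "X-Consul-Token: <your-acl-token>" http://<your-server-url>:<port>/v1/kv/traefik/acme/account/object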

Setting up a Docker registry with Letsencrypt certificate

I'm setting up a domain registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the docker using their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I've started the container and pointed it to the certificates I obtained using Let's Encrypt (https://letsencrypt.org/).
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}' and a green lock (successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry container:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates, and gotten errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems not to support SNI: https://github.com/docker/docker/issues/9969
Update: Docker should now support SNI.
This means that when connecting to your server during the TLS handshake, the Docker client does not specify the domain name, so your server presents its default certificate.
The solution could be to change your server's default certificate to the one valid for the Docker registry domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support.", then it will work.
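You can also check from the command line which certificate a non-SNI client receives: with openssl s_client, omitting the -servername flag disables SNI (the hostname below is just the example from the question):
# with SNI
openssl s_client -connect docker.mydomain.com:5000 -servername docker.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject
# without SNI (what a non-SNI Docker client would see)
openssl s_client -connect docker.mydomain.com:5000 </dev/null 2>/dev/null | openssl x509 -noout -subject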

Docker: What is the simplest way to secure a private registry?

Our Docker images ship closed-source code, so we need to store them somewhere safe, using our own private Docker registry.
We are looking for the simplest way to deploy a private Docker registry with a simple authentication layer.
I found:
this manual approach: http://www.activestate.com/blog/2014/01/deploying-your-own-private-docker-registry
and the shipyard/docker-private-registry Docker image, based on stackbrew/registry and adding basic auth via Nginx: https://github.com/shipyard/docker-private-registry
I'm inclined to use shipyard/docker-private-registry, but is there another, better way?
I'm still learning how to run and use Docker, consider this an idea:
# Run the registry on the server, allow only localhost connection
docker run -p 127.0.0.1:5000:5000 registry
# On the client, setup ssh tunneling
ssh -N -L 5000:localhost:5000 user@server
The registry is then accessible at localhost:5000; authentication is done through SSH, which you probably already know and use.
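With the tunnel up, the client can then tag and push images as if the registry were local (the image name is a placeholder):
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage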
Sources:
https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
https://docs.docker.com/userguide/dockerlinks/
You can also use an Nginx front-end with Basic Auth and an SSL certificate.
Regarding the SSL certificate, I spent a couple of hours trying to get a working self-signed certificate, but Docker wasn't able to work with the registry. To solve this I used a free signed certificate, which works perfectly. (I used StartSSL, but there are others.)
Also be careful when generating the certificate: if you want the registry running at registry.damienroch.com, you must issue the certificate for that exact sub-domain, otherwise it's not going to work.
You can perform all of this setup using Docker and my nginx-proxy image (see the README on GitHub: https://github.com/zedtux/nginx-proxy).
This means that if you have installed nginx using your distribution's package manager, you will replace it with a containerised nginx.
Place your certificate (.crt and .key files) on your server in a folder (I'm using /etc/docker/nginx/ssl/ and the certificate names are private-registry.crt and private-registry.key)
Generate a .htpasswd file and upload it to your server (I'm using /etc/docker/nginx/htpasswd/ and the filename is accounts.htpasswd; see the note after the login example below)
Create a folder where the images will be stored (I'm using /etc/docker/registry/)
Run my nginx-proxy image using docker run
Run the Docker registry with some environment variables that nginx-proxy will use to configure itself.
Here is an example of the commands to run for the previous steps:
sudo docker run -d --name nginx -p 80:80 -p 443:443 -v /etc/docker/nginx/ssl/:/etc/nginx/ssl/ -v /var/run/docker.sock:/tmp/docker.sock -v /etc/docker/nginx/htpasswd/:/etc/nginx/htpasswd/ zedtux/nginx-proxy:latest
sudo docker run -d --name registry -e VIRTUAL_HOST=registry.damienroch.com -e MAX_UPLOAD_SIZE=0 -e SSL_FILENAME=private-registry -e HTPASSWD_FILENAME=accounts -e DOCKER_REGISTRY=true -v /etc/docker/registry/data/:/tmp/registry registry
The first line starts nginx and the second one the registry. It's important to do it in this order.
When both are up and running you should be able to login with:
docker login https://registry.damienroch.com
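For the .htpasswd file from step 2, one common way to generate it is with the htpasswd tool from the apache2-utils package (the username is a placeholder; you will be prompted for the password):
htpasswd -c /etc/docker/nginx/htpasswd/accounts.htpasswd myuser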
I have created an almost-ready-to-use, but certainly ready-to-function, setup for running a docker-registry: https://github.com/kwk/docker-registry-setup.
Maybe it helps.
Everything (registry, auth server, and LDAP server) runs in containers, which makes parts replaceable as soon as you're ready. The setup is fully configured to make it easy to get started. There are even demo certificates for HTTPS, but they should be replaced at some point.
If you don't want LDAP authentication but simple static authentication, you can disable it in auth/config/config.yml and put in your own combination of usernames and hashed passwords.