Redis users getting deleted automatically

I have a Redis instance which is installed as a Linux service. I have created a few users with some RBAC policies. Everything works as expected for a few days, but then suddenly all my newly created users get deleted, and my application connecting to Redis starts throwing exceptions.
Also, nobody other than me can access this server.
Can someone help me understand how to persist the newly created users in Redis permanently?

Everything works fine as expected for few days, but suddenly all my newly created users get deleted
After ruling out other trivial reasons (connecting to the wrong instance, for example), I believe your Redis service was simply restarted for some reason, perhaps after a server restart.
Once an ACL rule is created or modified, the configuration needs to be persisted to a file to make it survive a Redis restart; to do that, run either:
CONFIG REWRITE, if you are specifying your ACL users / rules inside your main configuration file (the default option);
ACL SAVE, if you are using an external ACL file.
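For example, a minimal sketch in redis-cli (the user name, password, and key pattern below are made up for illustration):
ACL SETUSER appuser on >s3cretpass ~app:* +@read +@write
CONFIG REWRITE
or, if redis.conf points at an external ACL file via the aclfile directive:
ACL SETUSER appuser on >s3cretpass ~app:* +@read +@write
ACL SAVE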
To learn more about how Redis deals with ACLs, be sure to check out the official documentation.

Apache running as root vs. new user

From the Apache documentation, I read that Apache needs to initially run as root to then switch to the user defined by the User directive to serve requests.
However, I also read, still from the Apache documentation, that the recommended strategy is to create a new user and a new group specific for running the server.
This is a bit confusing to me. If Apache needs to run as root, why do I need a new user? Does it refer to the webmaster running the server? Otherwise, the two statements look a bit contradictory to me.
Let me quote from your own question:
I read that Apache needs to initially run as root to then switch to the user defined by the User directive to serve requests.
Correct.
So that implies there needs to be a different (not root) user account to switch to ...
However, I also read, still from the Apache documentation, that the recommended strategy is to create a new user and a new group specific for running the server.
Again, correct.
The recommendation is to create a user and group specifically for Apache rather than using some existing user / group.
No contradiction so far.
If Apache needs to run as root, why do I need a new user?
Apache needs to start as root.
Then it needs to switch to a different user.
Why?
It needs to start as root because some of the initial setup can only be performed while the process has elevated privileges; for example, binding to the privileged ports 80 and 443.
It needs to change to a different account because it is unsafe to continue running with elevated privileges. Why? Because if hackers find an exploit in an Apache process running as root, then they have achieved a root compromise. (That's a hack of the worst kind ...)
In short, there is no contradiction.
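As a sketch, a typical setup looks roughly like this (the account name www-run is an example; most distributions already ship a dedicated account such as www-data or apache):
# create a dedicated, unprivileged account with no login shell
sudo useradd --system --no-create-home --shell /usr/sbin/nologin www-run
# in httpd.conf / apache2.conf, tell Apache which identity to drop to
User www-run
Group www-run
Apache itself is still started as root (via systemd, init, or apachectl) so that it can bind to the privileged ports; the worker processes that handle requests then run as www-run.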

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm, following this guide. I ran into some issues, but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went and read the authentication page of the documentation, and I decided I want to add authentication via a Static Password File. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I do ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API was down).
After a reboot the whole system was working again.
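For reference, this is roughly what that change looks like. The static password file holds one password,user,uid entry per line (the path and credentials below are made up):
s0mepassw0rd,admin,1000
and the flag appended to the kube-apiserver entry in /etc/kubernetes/manifests/kube-apiserver.json is:
--basic-auth-file=/etc/kubernetes/pki/basic_auth.csv
The kubelet watches the manifests directory, so saving the file normally restarts the API server static pod on its own; a full reboot, as described above, also works.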
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
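For anyone following along, those steps boil down to (using the master address from the flag list above; adjust paths as needed):
scp root@192.168.1.137:/etc/kubernetes/admin.conf ~/.kube/config
kubectl proxy
# the dashboard is then reachable at http://127.0.0.1:8001/ui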
I just ran into this for a similar use case: the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
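For example, if the new option points at /etc/kubernetes/pki/basic_auth.csv (a made-up path), the static pod manifest needs roughly the following additions (shown in the YAML form used by newer kubeadm releases; the JSON manifest takes the equivalent fields):
spec:
  containers:
  - command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/pki/basic_auth.csv
    # ... existing flags stay as they are
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/pki/basic_auth.csv
      readOnly: true
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/pki/basic_auth.csv
      type: File
Without the volume, the file exists on the host but not inside the apiserver container, which is why the pod keeps crashing.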

Redis Cache Share Across Regions

I've got an application using Redis for caching, and it works well so far. However, we need to spread our application across different regions (through a dynamic DNS dispatcher based on user location, so that local users reach the nearest server).
Considering network limitations and bandwidth, building a centralised Redis is not feasible, so we have to assign a separate Redis to each region. The problem is how to handle the roaming case: a user opens the app in location 1, then continues using it in location 2, without missing the cache from location 1.
You will have to use a tiered architecture. This is how most CDNs, like Akamai or Amazon CloudFront, work.
Simply put, this is how it works:
When an object is requested, see if it exists in the Redis cache server S1 assigned to location L1.
If it does not exist in S1, check whether it exists in the caching servers of the other locations, i.e. S2, S3, ..., SN.
If it is found in S2...SN, store the object in S1 as well, and serve the object.
If it is not found in S2...SN either, fetch the object fresh from the backend, store it in S1, and serve it.
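A rough sketch of that lookup using redis-cli (the host names and TTL are made up; in practice this logic lives in the application code):
get_cached() {
  key="$1"
  local_cache="redis-region1.example.internal"
  remote_caches="redis-region2.example.internal redis-region3.example.internal"
  # 1. try the local cache (S1)
  value=$(redis-cli -h "$local_cache" GET "$key")
  [ -n "$value" ] && { echo "$value"; return 0; }
  # 2. fall back to the caches of the other regions (S2..SN)
  for host in $remote_caches; do
    value=$(redis-cli -h "$host" GET "$key")
    if [ -n "$value" ]; then
      # 3. back-fill the local cache so the next request stays local
      redis-cli -h "$local_cache" SET "$key" "$value" EX 3600 > /dev/null
      echo "$value"
      return 0
    fi
  done
  # 4. not cached anywhere: the caller fetches from the backend and SETs it in S1
  return 1
}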
If you are using memcached for caching, then Facebook's open-source mcrouter project will help, as it does centralized caching.

Can I change gerrit authentication type from openid to ldap?

Our team is planning to use Gerrit. So, to get introduced to it, I set up a server, used OpenID for authentication, and created some test users and test projects in it.
Now we are ready to use it, but we actually prefer LDAP for real use.
So, can I change my authentication system from OpenID to LDAP? What will happen to the current users then?
I want to clear the test projects and changes. How can I do that?
Can I completely delete the existing Gerrit setup and initiate a fresh setup on the same machine? (I tried extracting the jar in a different folder, but I faced some problems with it.)
I am using Ubuntu 12.04 as my server.
Please help.
Delete the database (you're not using the H2 database anymore, but some MySQL or PostgreSQL server, aren't you?) plus the directory where Gerrit is running (the -d parameter, see the docs). Additionally, remove the git repos if you configured them to be located on a different path.
Then all your data is gone and you can start from scratch.
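A rough sketch of those steps, assuming a MySQL review database and a site directory of /opt/gerrit (both are examples; the database name reviewdb is only the conventional default):
# stop Gerrit first
sudo /opt/gerrit/bin/gerrit.sh stop
# drop the review database (skip if you are still on the embedded H2 database)
mysql -u root -p -e "DROP DATABASE reviewdb;"
# remove the site directory and, if stored elsewhere, the git repositories
sudo rm -rf /opt/gerrit
sudo rm -rf /srv/git        # only if your repos live outside the site directory
# then initialize a fresh site
java -jar gerrit.war init -d /opt/gerrit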

Not able to back up log files during instance termination issued by an Auto Scaling policy

I have EC2 instances with Auto Scaling enabled on them.
Now, as part of the scale-down policy, when one of the instances is issued a termination, the log files remaining on that instance need to be backed up to S3, but I am not finding any way to upload that instance's log files to S3 before it is terminated. I have tried putting the needed script in the rc0.d directory through chkconfig with the highest priority. I also tried putting my script in /lib/systemd/system/halt.service (or reboot.service or poweroff.service), but no luck so far.
I have found some threads related to this on Stack Overflow and the AWS forums, but no proper solution so far.
Can any one please let me know the solution to this problem?
The only reliable way I have found of achieving this behaviour is to use rsyslog/syslog to transfer the log files to a central host as soon as they are written to the syslog subsystem.
This means you will need to run another instance that receives the log files and ships them to S3, or use an SQS-based system such as logstash.
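As a minimal sketch, the forwarding rule on each instance can be a single rsyslog line (the host name is a placeholder), e.g. in /etc/rsyslog.d/90-forward.conf:
# forward everything to the central log host over TCP (@@ = TCP, @ = UDP)
*.* @@loghost.example.internal:514
# then reload rsyslog
sudo service rsyslog restart
The central host can then rotate the received logs and ship them to S3 on a schedule that no longer depends on the lifetime of the auto-scaled instances.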
Unfortunately there is no other way to ensure all of your log messages will be stored on S3 - you cannot guarantee that your script will finish before autoscaling "pulls the plug".