I am in the process of adding Redis as a distributed cache to my application. When I run automated integration tests, I would like each test to start with a clean instance - so, create the DB if it does not exist, or clear it if it does.
When I do this for my Oracle instance, I just drop the configured user and recreate it, resulting in a clean slate. What would the Redis equivalent be? The only way I have found to create DBs is to use the web UI.
I believe you can do this, but (for obvious reasons) I have no intention of trying it!
redis-cli flushall
Documentation here.
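If your tests only need one of Redis' numbered logical databases cleared rather than the whole instance, a narrower sketch (the database index 3 is just a placeholder):
# Clear only logical database 3, leaving the other databases untouched
redis-cli -n 3 flushdb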
I am using Keycloak as my user management tool, and love it.
Keycloak's data is stored for me in a Postgres database. Over time, more clients are registered, and other alterations to the realms may be made. My question is: how do I properly keep track of that, and automatically propagate changes between my different environments? For databases, I use Liquibase for a purpose like this. I couldn't find anything similar for the Keycloak case.
So, I wanted to ask: How are you folks out there handling this? What am I missing?
It depends on how you're doing the management of those changes. There are generally two approaches:
Using the Keycloak admin console
Using the Keycloak CLI
If you're applying your changes via the admin console, then you can either rely on the database backup or set up a scheduled pipeline in your CI tool to export the Keycloak realm into a file and archive it somewhere.
If you're using the second approach, you can keep a git repository containing all the Keycloak CLI scripts that you run against your server (e.g. to add a client, to update a realm config, etc.), as sketched below. That way the changes can be reviewed, versioned and run as part of an automated pipeline, which also lets you apply the same script to different environments. But of course it comes at a price: you have to write a script for every single task that you could otherwise do in the admin console with a couple of clicks.
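For illustration, a minimal sketch of what such a versioned CLI script might look like, using Keycloak's kcadm.sh admin CLI (the install path, server URL, credentials, realm name and client id are all placeholders):
#!/bin/sh
# Authenticate the kcadm.sh session against the target environment
/opt/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin --password admin
# Register a new client in the realm
/opt/keycloak/bin/kcadm.sh create clients -r myrealm -s clientId=my-app -s enabled=true
# Update a realm setting
/opt/keycloak/bin/kcadm.sh update realms/myrealm -s sslRequired=external
The same script can then be pointed at another environment simply by changing the --server and credential arguments.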
I would like to ensure that clients cannot use Redis' Lua debugger.
I already have ACLs set to prevent use of SCRIPT and EVAL commands.
I see on https://redis.io/topics/ldb that remote debugging mode can be entered using a CLI flag. Is this also prevented by ACLs, or is there some other method to be aware of?
I do not use LDB at all, so disabling it at build time would be great, if possible.
The Redis Lua debugger is a remote debugger; it needs the Redis server and the client to work together.
You can disable the SCRIPT command, either with the rename-command configuration directive or with an ACL. Although the client can still enter debugging mode, the Redis server will refuse the SCRIPT DEBUG subcommand, so remote debugging won't work.
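A minimal sketch of both options (the user name and password are placeholders):
# Option 1 (redis.conf): disable the SCRIPT command entirely
#   rename-command SCRIPT ""
# Option 2 (Redis 6+ ACLs): deny the whole scripting command category to the application user
redis-cli ACL SETUSER appuser on '>s3cret' '~*' +@all -@scripting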
I do not use LDB at all, so disabling it at build time would be great, if possible.
You cannot disable it at build time, unless you modify the source code to remove the feature, and rebuild it.
I have implemented a Redis cache with a .NET Core 2.1 application. The issue is that I only have the development connection string. I want to configure and test the Redis cache somehow on my local PC. I have read somewhere that this is possible using Chocolatey, so can anybody refer me to a link?
PS: When I tried to use the Redis cache on the development server over VPN, it showed me a popup asking me to select a "ResultBox.cs" file. So I created a new ResultBox.cs file and gave it that path, but when I call rediscache.Get() it opens ResultBox.cs and then nothing happens. Can anybody tell me what ResultBox.cs is for?
I have found a way to configure Redis locally using Chocolatey. Use this link. If you face MISCONF issues while testing with redis-cli, this link will be helpful.
You can run a local Redis Docker image. See this and this for reference.
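A minimal sketch of that Docker option (the container name and image tag are assumptions):
# Start a throwaway Redis on the default port
docker run -d --name local-redis -p 6379:6379 redis:7
# Point the application's connection string at localhost:6379 and verify the server responds
docker exec -it local-redis redis-cli ping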
I am using a GitLab docker image for integration testing of a service I'm helping to develop. Ideally, the image would be a preconfigured snapshot of GitLab with different users and repos available to run tests against. So the problem ends up being, what is a good way to automate the creation of 'snapshots' of GitLab (that can then be versioned etc.)?
My current solution to this problem is to use GitLab's built in backup utility via gitlab-rake gitlab:backup:create after getting GitLab to a state that I want. This then lets me use GitLab's gitlab-rake gitlab:backup:restore in a hook when the container is starting up to get the container back to the state that I expect (the backup having been ADDed in the Dockerfile for the image). This has the advantage of being relatively lightweight (backups are on the order of MBs) and the backups can be checked in to version control.
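For reference, a rough sketch of the commands involved (the container name, backup directory and use of force=yes are assumptions, not necessarily the exact setup described above):
# Create a backup inside a configured GitLab container and copy it out so it can be versioned
docker exec gitlab gitlab-rake gitlab:backup:create
docker cp gitlab:/var/opt/gitlab/backups/. ./seed-backups/
# In the image's startup hook, restore the seeded backup (force=yes skips the interactive prompts)
gitlab-rake gitlab:backup:restore force=yes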
I have tried using docker export along with docker import to save the state of the container and then create an image based on that state. This has the advantage of being easy to automate since it is directly supported by Docker, but ends up being fairly expensive considering what the goal is (having users and repos available to test against). It also would require the images to be pushed to a registry of some kind in order to be easily distributed. Perhaps this is the best solution because it is well supported though.
I suppose my question is, what is the Docker way of approaching a problem like this?
I have an application with a caching backend and I want to clear the cache whenever the webserver is restarted.
Is there an Apache configuration directive, or any other way, to execute a shell script upon webserver (re)start?
Thanks,
Phil
Adding some more information, as asked for in some of the answers already:
The base system is of course Linux-based; in this exact situation: CentOS.
Modifying the startup script is unfortunately not an option, as pointed out by one of the comments already, because it is not marked as a configuration file within the respective RPM package and would therefore be replaced by updates. Also, I think modifying the startup script would be a bad thing in general.
I see that "restarting the webserver" and "clearing my app cache" are not exactly things that should be tied together. I will consider other alternatives.
My situation is as follows: I can define what the virtual host config looks like, but I cannot define what the rest of the server's configuration looks like.
The application is actually PHP-based (and runs on the Symfony framework). Symfony pre-compiles a lot of stuff into dynamic PHP files from what it finds in the static configuration files. We deploy our apps via RPM, and after deployment a webserver restart is already initiated, so I thought it might make sense to tie the cache cleanup to it. But after getting all your feedback, it looks like it is better to put the cache cleanup into the installation process itself.
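A minimal sketch of that installation-time cleanup, e.g. run from the RPM's %post scriptlet (the application path and the exact Symfony cache-clear command depend on the Symfony version, so both are assumptions):
# Run during package installation/upgrade instead of on Apache restart
rm -rf /var/www/myapp/app/cache/*
# or use Symfony's own console task, e.g.:
# /var/www/myapp/app/console cache:clear --env=prod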
You haven't provided a lot of detail here, so it's hard to give a concrete answer, but I would suggest that your best option is to write a script which handles restarting Apache and clearing your cache. It would look something like this:
#!/bin/sh
# restart apache
/etc/init.d/httpd graceful
# whatever needs to be done to clear cache
rm -rf /my/cache/dir
Ramy suggests modifying the system startup script for Apache -- this is a bad idea! If and when you update Apache on your server, there is a good chance that your change will be lost.
Dirk suggests that what you are trying to do is probably misguided, and I think he's right. You haven't told us what platform you are running, but I can think of few situations where restarting your webserver and clearing a cache actually need to happen together.
You can modify the startup script for the Apache web server in /etc/init.d/httpd and add your own commands inside it. To keep package updates from overwriting your changes, you can mark the file as immutable:
chattr +i /etc/init.d/httpd
If you have (root) access to the server you could do this with shell scripts, but I would consider whether relying on Apache restarts is really the best way to manage the cache.