I am using Keycloak as my user management tool, and love it.
Keycloak's data is stored in a Postgres database. Over time, more clients get registered, and other alterations are made to the realms. My question is: how do I properly keep track of that, and automatically propagate changes between my different environments? For databases I use Liquibase for this kind of purpose, but I couldn't find anything similar for the Keycloak case.
So, I wanted to ask: How are you folks out there handling this? What am I missing?
It depends on how you're managing those changes. There are generally two approaches:
Using the Keycloak admin console
Using the Keycloak CLI
If you're applying your changes via the admin console, then you can either rely on database backups or set up a scheduled pipeline in your CI tool that exports the Keycloak realm into a file and archives it somewhere.
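For the export itself, Keycloak ships with an export/import mechanism driven by system properties. Here is a minimal sketch of such a pipeline step, assuming the standalone (WildFly) distribution; the realm name and paths are placeholders (newer, Quarkus-based versions use a kc.sh export command instead):

#!/bin/sh
# Sketch only: boots the server once to export a single realm to JSON.
bin/standalone.sh \
  -Dkeycloak.migration.action=export \
  -Dkeycloak.migration.provider=singleFile \
  -Dkeycloak.migration.realmName=myrealm \
  -Dkeycloak.migration.file=/backup/myrealm-$(date +%F).json

The pipeline would then commit the resulting JSON file to an archive repository, giving you a diffable history of realm changes.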
If you're using the second approach, you can keep a git repository containing all the Keycloak CLI scripts that you run on your server (e.g. to add a client, or to update a realm config). That way they can be reviewed, versioned, and then run as part of an automated pipeline, which also lets you run the same script against different environments. Of course this comes at a price: you have to write a script for every single task that you could typically do in the admin console with a couple of clicks.
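For example, with the kcadm.sh client that ships with Keycloak, such a reviewed, versioned script could look like the following sketch (the server URL, realm, and client names are placeholders):

#!/bin/sh
# Authenticate once; the password comes from the environment, not the repo.
kcadm.sh config credentials --server http://localhost:8080/auth \
  --realm master --user admin --password "$KEYCLOAK_ADMIN_PASSWORD"

# Register a new client in the target realm.
kcadm.sh create clients -r myrealm -s clientId=my-app -s enabled=true

# Update a realm setting.
kcadm.sh update realms/myrealm -s sslRequired=external

Pointing the same script at another environment is then just a matter of changing the --server parameter.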
I am learning Puppet and am trying to write modules to install services such as TigerVNC and OpenVPN.
The problem is that TigerVNC requires the user to set an initial password. I have tried using:
"exec {'/usr/bin/echo password | /usr/bin/vncpasswd > ~/.vnc/passwd"
This works if I run it on the command line while logged in as the user, but it does not work when run via Puppet.
The problem with OpenVPN is that it requires a lot of user interaction to accept the default settings for certificate generation, the certificate authority, and key generation.
I have tried using execs with the "pkitool" commands, which work up to a point but not very reliably. I am also wary of using many execs if there is a better way to do it.
So, to sum up, my main question is: how do I deal with these user interactions when trying to automate installations with Puppet, and is there a better way than running lots of execs, which to me seem like a last resort?
Thanks
If setting up a piece of software requires user interaction, I don't really see a way around exec. Keeping its use to a minimum is indeed a sensible design goal.
An economical approach is to
create a script that does all the necessary lifting that Puppet resources cannot perform
make Puppet deploy that script to the agent
run it at appropriate times via exec (along with good creates or onlyif queries)
Scripts that drive installation wizards requiring interactive input should probably use expect and friends.
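As a concrete illustration of the script approach for the VNC password case, here is a minimal sketch. It assumes TigerVNC's vncpasswd supports -f to read the password on stdin and write the obfuscated password file to stdout (check your version); the user name and hard-coded password are placeholders:

#!/bin/sh
# Hypothetical helper, deployed by Puppet and run via exec.
# Sets a VNC password non-interactively for the given user.
set -e
VNC_USER="$1"
VNC_HOME=$(getent passwd "$VNC_USER" | cut -d: -f6)
mkdir -p "$VNC_HOME/.vnc"
# -f avoids the interactive prompt: password in on stdin, passwd file out on stdout.
echo "changeme" | vncpasswd -f > "$VNC_HOME/.vnc/passwd"
chmod 600 "$VNC_HOME/.vnc/passwd"
chown -R "$VNC_USER": "$VNC_HOME/.vnc"

The matching exec would then be guarded with creates => '/home/<user>/.vnc/passwd' so the script runs only once; wizards that genuinely cannot be driven this way are where expect comes in.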
Our team is planning to use Gerrit. So, to get acquainted, I set up a server, used OpenID for authentication, and created some test users and test projects in it.
Now we are ready to use it. But we actually prefer LDAP for real use.
So, can I change my authentication system from OpenID to LDAP? What will happen to the current users then?
I also want to clear the test projects and changes. How can I do that?
Can I completely delete the existing Gerrit setup and initiate a fresh setup on the same machine? (I tried extracting the jar in a different folder, but I ran into some problems with it.)
I am using Ubuntu 12.04 as my server.
Please help.
Delete the database (you're not using the H2 database anymore, but some MySQL or PostgreSQL server, aren't you?) plus the directory where Gerrit is running (the -d parameter, see the docs). Additionally, remove the git repos if you configured them to be located on a different path.
Then all your data is gone and you can start from scratch.
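A minimal sketch of that cleanup, assuming a MySQL review database with the default name reviewdb and placeholder paths (adjust them to your gerrit.config):

# Stop the daemon first.
/path/to/gerrit-site/bin/gerrit.sh stop

# Drop the review database (for PostgreSQL: dropdb reviewdb).
mysql -u root -p -e 'DROP DATABASE reviewdb;'

# Remove the site directory (whatever you passed to init with -d),
# plus the git repositories if gerrit.basePath points outside of it.
rm -rf /path/to/gerrit-site
rm -rf /path/to/git-repos

# Re-initialize from scratch; the init wizard lets you pick LDAP this time.
java -jar gerrit.war init -d /path/to/gerrit-site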
I would like Jenkins to comment on GitHub pull requests with whether a merge passes or fails (much like Travis CI). I understand this is a feature of BuildHive; however, I cannot find an option on BuildHive for using customer-provided slaves. My question is twofold:
Is there an option to limit builds to customer-provided slaves on BuildHive?
Is there a way I could enable comments on pull requests using DEV#cloud (the actual job must be run on a customer-provided slave)? If so, could you point me in the right direction to get this set up?
DEV#cloud can validate pull requests as BuildHive does, with some additional configuration. See http://wiki.cloudbees.com/bin/view/DEV/Github+Pull+Request+Validation
Answering in the order of your questions:
BuildHive uses the Validated Merge plugin for Git from Jenkins Enterprise, which lets Jenkins take in pull requests and run the builds before pushing to the main repo. That said, you currently cannot use Customer Provided Executors with BuildHive.
DEV#cloud: Normally, all Jenkins Enterprise plugins are available in the paid tier of DEV#cloud. However, this plugin is not, as it sets up a git server within Jenkins, which is not easily achievable in a cloud setup. I have created a ticket with CloudBees support requesting that the plugin be made available, and the engineering team will investigate delivering the feature.
Meanwhile, if you like, you can use Jenkins Enterprise to get the feature (it is, however, an on-premises solution).
I have a "central" Mercurial repository, which configured to use HTTPS and requires authentication to clone-pull-push changes. Developers has their own repositories on their computers. They configure their local settings freely, and for example add section like
[ui]
username = anyname
to their local mercurial.ini file.
When a user tries to push his changes to the "central" repository, he authenticates, but the authentication info is not stored in Mercurial. Mercurial stores the locally configured username as the revision's author in the central repository. So I cannot find out who really made the changes in the central repository, although I strongly wish to. The Mercurial developers do not care about this and consider the behavior to be correct.
But I want to keep the authentication info near the changesets. I think the best way to do it is to add one more field to the revision description, something like a "pusher id", and store the authentication data there.
The extensions I found do not implement similar functionality. Can you give me pointers to third-party extensions, hooks, code templates, or just ideas for how to do it? (I'm absolutely new to Python.)
The fundamental problem that makes Mercurial developers (like myself) reject this is that changesets are immutable. It is impossible for a server to add extra information to the changesets when they are pushed.
More concretely: a changeset is identified by its changeset hash. This hash is computed from all the information the changeset contains, such as the username, date, commit message, and the change itself. You cannot change any part of this without also changing the changeset hash; otherwise the integrity of the repository is destroyed.
This gives you security against accidental (or malicious!) changes made on the server: if Alice and Bob talk about "changeset X", they can be sure they really mean the same thing. If the server (or someone else) could change the content of a changeset without affecting its ID, then Alice and Bob would not be guaranteed that "X" really means the same thing in both their repositories. This property is of course also fundamental to the way Mercurial works when synchronizing repositories.
You have two options here:
You can let the server reject a push if Alice tries to push a changeset with Bob's name in it. This can be done with a pretxnchangegroup hook on the server: it compares the authenticated pusher (for example, the REMOTE_USER environment variable set by the web server) against the committer of all pushed changesets between HG_NODE and tip (see the sketches after this list).
You can let the server log the pusher. This is called a "pushlog". The Mozilla project uses one and the source appears to be here. That way you make your server store information about who pushed what; this is done in a changegroup hook that logs the necessary information in a small database.
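Minimal sketches of both options, assuming the repository is served through a web server that exposes the authenticated user as REMOTE_USER, and that committers' usernames match their HTTP accounts; hook names and paths are placeholders. In the served repository's .hg/hgrc:

[hooks]
pretxnchangegroup.checkpusher = /usr/local/bin/check_pusher.sh
changegroup.pushlog = /usr/local/bin/log_push.sh

check_pusher.sh rejects the push unless every incoming changeset was committed by the pusher (external hooks run from the repository root, and a non-zero exit rolls the transaction back):

#!/bin/sh
# Option 1: refuse changesets whose committer differs from the pusher.
PUSHER="${REMOTE_USER:?no authenticated user}"
# HG_NODE is the first incoming changeset; check everything up to tip.
# {author|user} extracts the user part, assumed here to contain no spaces.
for author in $(hg log -r "$HG_NODE:tip" --template '{author|user}\n' | sort -u); do
  if [ "$author" != "$PUSHER" ]; then
    echo "rejecting changesets by '$author': pusher is '$PUSHER'" >&2
    exit 1
  fi
done

log_push.sh is the pushlog variant; a real pushlog like Mozilla's writes to a small database rather than a flat file:

#!/bin/sh
# Option 2: append one line per push to a simple log.
echo "$(date -u +%FT%TZ) user=${REMOTE_USER:-local} first_node=$HG_NODE" >> .hg/pushlog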
If you want a push log, then take a look at Kallithea, which has this functionality built in. Kallithea is in general a great way to host Mercurial repositories! It has much more functionality than the normal hgweb CGI script.
I have various projects being built and tested periodically on a Hudson server, but I don't want every employee in the company to see published artifacts for every project.
Project-based matrix security seemed at first to be the key, but after many tests I find that granting Overall Read permission is mandatory if you want users to be able to read anything on the Hudson server.
So, in the end read permissions are binary: either you grant global read permission or you block everything, am I right?
I haven't tested it with the newest release, but I use the matrix setup. I gave Anonymous the Overall Read permission. This way users can see the login screen when they browse to http://servername:port/, but it does not give them access to the jobs. In the jobs themselves I configured the users that should actually see the job. Works like a charm.
UPDATE:
Meanwhile I found out that you can use authenticated instead of Anonymous. This enables access to Hudson/Jenkins through the links in the build-failed messages: everyone gets the logon dialog, and after signing in they land right at the job run of interest.
After trying to do something similar with Hudson's authorization settings, I came to the same conclusion you did.