Set up new extensions in Keycloak Kubernetes Helm charts - authentication

I have a Kubernetes cluster on Azure, where I use Helm to manage microservices and other tools, and Keycloak is one of them.
I need to use a magic link authenticator in one of my apps. I'm aware that I need to add an extension to my Keycloak chart, but I don't know how.
The image repository I'm using explains how to add custom themes via the extraInitContainers param in the chart configuration, and I think I can achieve what I want through it.
In this tutorial they describe the extension itself, but I have no idea how to add it to my Keycloak instance on k8s using Helm charts. How do I achieve that?
Just some more info about my config: I'm running louketo-proxy (as a sidecar) on the apps I want to protect.

To publish a theme while keeping the original image, first create an archive with the theme.
Then create a file custom-themes-values.yml with the following content:
extraInitContainers: |
  - name: theme-provider
    image: busybox
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "wgetting theme from maven..."
        wget -O /theme/keycloak-theme.jar https://repo1.maven.org/maven2/org/acme/keycloak-theme/1.0.0/keycloak-theme-1.0.0.jar
    volumeMounts:
      - name: theme
        mountPath: /theme

extraVolumeMounts: |
  - name: theme
    mountPath: /opt/jboss/keycloak/standalone/deployments

extraVolumes: |
  - name: theme
    emptyDir: {}
Run with:
helm install keycloak codecentric/keycloak --values custom-themes-values.yml
PS: In this example the theme was published to a Maven repository, but you can also copy in a local file instead.
In the same way you can adapt this for the magic-link extension, as sketched below.
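A minimal sketch of that adaptation, assuming the magic-link authenticator is packaged as a JAR (the download URL below is a placeholder; point it at the actual artifact of the extension you use). Extension JARs, like theme JARs, can be dropped into Keycloak's standalone deployments directory:

extraInitContainers: |
  - name: extension-provider
    image: busybox
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "downloading magic-link extension..."
        # placeholder URL - replace with the magic-link authenticator JAR you actually use
        wget -O /extensions/magic-link.jar https://example.com/path/to/magic-link-authenticator.jar
    volumeMounts:
      - name: extensions
        mountPath: /extensions

extraVolumeMounts: |
  - name: extensions
    mountPath: /opt/jboss/keycloak/standalone/deployments

extraVolumes: |
  - name: extensions
    emptyDir: {}

Install it the same way, e.g. helm install keycloak codecentric/keycloak --values custom-extensions-values.yml (values file name is just an example).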

Related

Falco not writing to logs about a file being edited

I have deployed Falco as a sidecar to my workload in EKS/Fargate. Falco is able to execute the script defined in the image, but it is not able to monitor the workload container at runtime, meaning that if I create a file at a location defined in the rules, nothing is written to the logs about the new file. Details below.
Created the Falco image using https://github.com/aws-samples/aws-fargate-falco-examples/blob/main/containerimages/sidecarptrace/Dockerfile.workload
Used this Falco image to deploy it as a sidecar along with our workload (containerimages/sidecarptrace/Dockerfile)
In the workload image, we are triggering Falco using the below:
##############################################################################################
COPY --from=falcosecurity/falco-userspace:latest /vendor/falco/bin/pdig /vendor/falco/bin/pdig
COPY ./myscript.sh /vendor/falco/scripts/myscript.sh
RUN chown -R nginx:nginx /vendor/falco/scripts/myscript.sh
RUN chmod 755 /vendor/falco/scripts/myscript.sh
CMD ["/vendor/falco/bin/pdig", "/bin/bash", "/vendor/falco/scripts/myscript.sh"]
##################################################################################################
For better understanding, below is how the pod.yaml looks:
containers:
  - name: falco
    image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxxx:sidecarptracefalco
    volumeMounts:
      - name: falco-config
        mountPath: "/data/falco.yaml"
        subPath: "falco.yaml"
        readOnly: true
      - name: falco-local-rules
        mountPath: "/data/falco_rules.local.yaml"
        subPath: "falco_rules.local.yaml"
        readOnly: true
  - name: nginx
    image: 11111111111111.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxxxxxxx:bfde44a3
Deployed falco.yaml and falco-local-rules.yaml from https://github.com/aws-samples/aws-fargate-falco-examples/tree/main/podspecs
What I am noticing is:
myscript.sh runs correctly, and only its events are being logged in the Falco container logs.
If I create a shell script under /etc/nginx/html and execute it using sh /etc/nginx/html/test.sh, nothing is logged.
What I want is:
Falco to continuously monitor the workload container and log its activity.
If the CMD in the workload image needs an update to implement the continuous monitoring, I need guidance on how to do that.
I am expecting any file creation or file edit to be logged by Falco.
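For reference, the kind of local rule that would be expected to fire on the test above might look like the following sketch (standard Falco rule syntax; the rule name and exact condition are illustrative and not necessarily what ships in the linked podspecs):

- rule: Write below nginx html
  desc: A file was created or opened for writing under /etc/nginx/html
  condition: >
    evt.type in (open, openat, creat) and evt.is_open_write=true
    and fd.name startswith /etc/nginx/html
  output: >
    File opened for writing under /etc/nginx/html
    (user=%user.name command=%proc.cmdline file=%fd.name)
  priority: WARNING

Whether it fires still depends on the workload's syscalls actually reaching Falco, which is the problem described above.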

Is it possible to bind a workspace from within a Pipeline (and not a PipelineRun), or to create a PipelineRun with bound workspaces using the GUI?

I'd like to be able to start my pipeline, which uses two workspaces, from the Tekton Dashboard (GUI), but from what I can see there is no option to provide workspaces there.
The only two options for creating a PipelineRun with bound workspaces that I can think of are to either:
Create it programmatically and apply it using either kubectl or the Tekton CLI (see the sketch after this question)
Create a TriggerTemplate with bound workspaces and run the pipeline by a webhook to an EventListener
My main problem is that both of those options require the developer to go through a very non-user-friendly process. Ideally, I'd like to run the pipelines with bound workspaces from the Tekton GUI. Can this somehow be achieved?
I've tried providing the binding in the workspaces section of the Pipeline definition as below:
workspaces:
  - name: source-dir
    persistentVolumeClaim:
      claimName: gradle-cache-pvc
  - name: ssh-creds
    secret:
      secretName: ssh-key-secret
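For reference, the "programmatic" option mentioned above amounts to a PipelineRun like the following minimal sketch, since workspace bindings belong to the PipelineRun rather than the Pipeline (the pipeline name is assumed):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: my-pipeline-run-
spec:
  pipelineRef:
    name: my-pipeline   # assumed name of the Pipeline declaring the two workspaces
  workspaces:
    - name: source-dir
      persistentVolumeClaim:
        claimName: gradle-cache-pvc
    - name: ssh-creds
      secret:
        secretName: ssh-key-secret

This is applied with kubectl create -f (create rather than apply, because of generateName).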

How can I use Image PipelineResource for input in tekton task

As documented in https://github.com/tektoncd/pipeline/blob/master/docs/resources.md I have configured an Image PipelineResource:
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-data-image
spec:
  type: image
  params:
    - name: url
      value: image-registry.openshift-image-registry.svc:5000/default/my-data
Now when I am using the above PipelineResource as an input to a task:
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: my-task
spec:
  inputs:
    resources:
      - name: my-data-image
        type: image
  steps:
    - name: print-info
      image: image-registry.openshift-image-registry.svc:5000/default/my-task-runner-image:latest
      imagePullPolicy: Always
      command: ["/bin/sh"]
      args:
        - "-c"
        - >
          echo "List the contents of the input image" &&
          ls -R "$(inputs.resources.my-data-image.path)"
I am not able to list the contents of the image, as I get the error:
[test : print-info] List the contents of the input image
[test : print-info] ls: cannot access '/workspace/my-data-image': No such file or directory
The documentation (https://github.com/tektoncd/pipeline/blob/master/docs/resources.md) states that an Image PipelineResource is usually used as a Task output for Tasks that build images.
How can I access the contents of my container data image from within the tekton task?
Currently Tekton does not support Image inputs in the way that OpenShift's build configs support them: https://docs.openshift.com/container-platform/4.2/builds/creating-build-inputs.html#image-source_creating-build-inputs
Image inputs are only useful for variable interpolation, for example $(inputs.resources.my-image.url), while ls "$(inputs.resources.my-image.path)" will always print empty content.
There are several ways to access the contents of the image though, including the following (a step sketch combining them with the url interpolation follows the list):
Export image to tar: podman export $(podman create $(inputs.resources.my-image.url) --tls-verify=false) > contents.tar
Copy files from the image: docker cp $(docker create $(inputs.resources.my-image.url)):/my/image/files ./local/copy. The tool skopeo can also copy files; however, it does not seem to offer sub-directory copy capabilities.
Copy a pod directory to a local directory (https://docs.openshift.com/container-platform/4.2/nodes/containers/nodes-containers-copying-files.html): oc rsync $(inputs.resources.my-image.url):/src /home/user/source
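As a rough sketch, such a command could be run from a Task step using the interpolated url (assuming a step image that ships skopeo and that the step is allowed to pull from the internal registry):

steps:
  - name: extract-image-contents
    image: quay.io/skopeo/stable    # assumption: any image providing skopeo would do
    command: ["/bin/sh"]
    args:
      - "-c"
      - >
        skopeo copy --src-tls-verify=false
        docker://$(inputs.resources.my-data-image.url)
        dir:/workspace/image-blobs &&
        ls /workspace/image-blobs

Note that the dir: transport stores the manifest and layer tarballs, so the layers would still need to be untarred to get at individual files.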
Having said the above, I decided to simply use OpenShift's built-in BuildConfig resources to create a chained build for my pipeline. The variety of build strategies supported by OpenShift out of the box is sufficient for my pipeline scenarios, and the fact that Image inputs are supported makes things much easier compared to Tekton pipelines (https://docs.openshift.com/container-platform/4.2/builds/creating-build-inputs.html#image-source_creating-build-inputs). The only advantage that Tekton pipelines seem to have is the ability to easily reuse tasks; however, the equivalent can be achieved by creating Operators for OpenShift resources.

Drone.IO - Using GitLab in SSH mode

We would like to disable http access on our GitLab instance and use SSH only. Can drone somehow communicate with GitLab over SSH?
The default clone plugin uses git+https to clone repositories. If you would like to change the default behavior and use git+ssh, you will have to create a custom clone plugin.
clone:
  custom:
    image: amazia/custom-git-plugin

pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
The above example demonstrates a YAML configuration that overrides the default clone step to use a custom plugin. Here are some resources for creating custom plugins:
http://docs.drone.io/creating-custom-plugins-bash/
http://docs.drone.io/creating-custom-plugins-golang/
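If the custom clone plugin needs the GitLab SSH key at clone time, one possible shape is to hand it over as a secret (a sketch assuming Drone 0.8-style per-step secrets; the plugin image is the one from the example above and the secret name is a placeholder):

clone:
  custom:
    image: amazia/custom-git-plugin
    secrets: [ git_ssh_key ]   # injected into the plugin as the GIT_SSH_KEY environment variable

Inside the plugin, the key can then be written to ~/.ssh and the repository cloned via git@<gitlab-host>:<repo>.git.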

docker-compose override application properties

We have a Spring Boot application and use an application.yml file to store properties. I got a task to give users the possibility to override some properties when starting the application. Considering that we have dockerised our app, the docker-compose file is, I believe, the right place for that. I found one option which actually works, env_file:
backend:
  build:
    context: backend
    dockerfile: Dockerfile.backend
  restart: always
  ports:
    - 3000:3000
  env_file:
    - backend/custom.env
  volumes:
    - ../m2_repo:/root/.m2/
    - ../{APP_NAME}/data_sources:/backend/data_sources/
  links:
    - database
  networks:
    main:
      aliases:
        - backend
This solves my task perfectly: all the KEY=VALUE pairs override the existing properties in application.yml. However, I have two questions:
It appears that, having multiple services in my docker-compose file, I need to specify a separate env_file for each service, which is not very convenient. Is there a possibility to have one common env_file for the whole docker-compose file?
I know that for the docker-compose run command there is a -e option where I can put key=value pairs of env variables. Is there a similar option for docker-compose up, i.e. so as not to use env_file at all?
Ad 1: It is not possible. I also believe this is intentional, to make the developer define which container has access to which .env data.
Ad 2: No, you cannot supply the variables using a runtime parameter of docker-compose's up command (run docker-compose help up to see the available runtime params). But you can define them using the environment clause within the compose file, like:
restart: always
ports:
  - 3000:3000
env_file:
  - backend/custom.env
environment:
  - DB_PASSWORD # <= #1
  - APP_ENV=production # <= #2
i.e.
either just the name of the env var (#1), whose value is then taken from the host machine
or the whole definition (#2), which creates a new variable available within the container
See the docs on the environment clause for more clarification.
Another thing you can do in order to override some settings is to extend the compose file using a "parent" one; see the docs on the extends clause.
Unfortunately, as of now, extends won't work with version 3 compose files, but it is being discussed in this GitHub issue, so hopefully it will be available soon :)
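A related workaround for overriding individual settings is Compose's merging of multiple -f files (or the automatically picked-up docker-compose.override.yml): a second file, written in the same format as the base one, can override just the environment entries of a service. A minimal sketch (file name and value are examples):

# docker-compose.override.yml - contains only the keys to override
backend:
  environment:
    - APP_ENV=production

Run with:

docker-compose -f docker-compose.yml -f docker-compose.override.yml up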