Falco not writing to logs about a file being edited

I have deployed Falco as a sidecar to my workload in EKS/Fargate. Falco is able to execute the script that is defined in the image, but it is not able to monitor the workload container at runtime: if I create a file at a location defined in the rules, nothing is written to the logs about the new file. Details below.
I created the Falco image using https://github.com/aws-samples/aws-fargate-falco-examples/blob/main/containerimages/sidecarptrace/Dockerfile.workload
I use this Falco image to deploy it as a sidecar along with our workload (containerimages/sidecarptrace/Dockerfile).
In the workload image, we are triggering Falco using the lines below:
##############################################################################################
COPY --from=falcosecurity/falco-userspace:latest /vendor/falco/bin/pdig /vendor/falco/bin/pdig
COPY ./myscript.sh /vendor/falco/scripts/myscript.sh
RUN chown -R nginx:nginx /vendor/falco/scripts/myscript.sh
RUN chmod 755 /vendor/falco/scripts/myscript.sh
CMD ["/vendor/falco/bin/pdig", "/bin/bash", "/vendor/falco/scripts/myscript.sh"]
##################################################################################################
For better understanding, below is how pod.yaml looks like
containers:
- name: falco
  image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxxx:sidecarptracefalco
  volumeMounts:
  - name: falco-config
    mountPath: "/data/falco.yaml"
    subPath: "falco.yaml"
    readOnly: true
  - name: falco-local-rules
    mountPath: "/data/falco_rules.local.yaml"
    subPath: "falco_rules.local.yaml"
    readOnly: true
- name: nginx
  image: 11111111111111.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxxxxxxx:bfde44a3
Deployed falco.yaml and falco-local-rules.yaml from https://github.com/aws-samples/aws-fargate-falco-examples/tree/main/podspecs
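For reference, the kind of local rule I am testing against looks roughly like this (an illustrative sketch, not necessarily the exact rule in the sample repo's falco_rules.local.yaml):

- rule: Write below nginx html
  desc: Detect a file being created or opened for writing under /etc/nginx/html
  condition: evt.type in (open, openat) and evt.is_open_write=true and fd.name startswith /etc/nginx/html
  output: "File opened for writing under /etc/nginx/html (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: WARNING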
What I am noticing:
myscript.sh runs correctly, and only its activity is being logged in the Falco container logs.
If I create a shell script under /etc/nginx/html and execute it with sh /etc/nginx/html/test.sh, nothing is logged.
What I want:
Falco to continuously monitor the workload container and log its activity.
If the CMD in the workload image needs an update to implement the continuous monitoring, I need guidance on how to do that (see my rough sketch below).
I am expecting any file creation or file edit to be logged by Falco.
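From what I understand, pdig only captures syscalls from the process tree it launches, so myscript.sh is the only thing being traced right now. My guess is that the workload CMD needs to wrap the actual nginx entrypoint in pdig instead; an untested sketch of what I have in mind (the nginx binary path is an assumption and would need to match the base image):

COPY --from=falcosecurity/falco-userspace:latest /vendor/falco/bin/pdig /vendor/falco/bin/pdig
# run the real workload under pdig so its syscalls are captured continuously
CMD ["/vendor/falco/bin/pdig", "/usr/sbin/nginx", "-g", "daemon off;"]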

EKS pod not starting

All,
I created a Neo4j cluster with a Helm template. First I installed it with the helm install command and it worked fine: the StatefulSet created the pods using the default gp2 volume.
But when I use Kustomize to tweak the Helm template and install it, the pods are not starting and give me the error below, even though the StorageClass and the volume claim are created.
The pods show a "Back-off restarting failed container" warning, and when I look at the events I see the error: "Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[kube-api-access-8mrwb init-script datadir plugins]: timed out waiting for the condition"
Below is my kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: neo4j-helm-master
  releaseName: ntd
  # Overriding values from charts/neo4j/values.yaml
  valuesInline:
    #name: neo4jcluster
    acceptLicenseAgreement: "yes"
Any help is appreciated.
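For anyone debugging the same error, the first thing worth checking is whether the claim ever binds; a quick command sketch (the claim name below is a guess based on the default StatefulSet naming, adjust to whatever kubectl get pvc shows):

kubectl get pvc
kubectl describe pvc datadir-ntd-neo4j-core-0
kubectl get events --sort-by=.lastTimestamp | grep -i volume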

Passing variables between Tekton Steps

I am trying to implement a basic Tekton CI pipeline. All the pipeline does is 1) get the source code 2) build an image with a new version and push it to an image registry.
The image version is generated by a Tekton Step. Images are built by another Tekton step that uses Kaniko as described here.
I am aware of using workspaces to pass variables between Tekton steps, meaning I can write the version to a file in the workspace. But I can't figure out a way to read this version from the file in the Kaniko build step below:
steps:
- name: build-and-push
  image: gcr.io/kaniko-project/executor:latest
  # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
  env:
  - name: "DOCKER_CONFIG"
    value: "/tekton/home/.docker/"
  command:
  - /kaniko/executor
  args:
  - --dockerfile=$(params.pathToDockerFile)
  - --destination=$(resources.outputs.builtImage.url):<IMAGE-VERSION-NEEDED-HERE>
  - --context=$(params.pathToContext)
  - --build-arg=BASE=alpine:3
There should be a common pattern to resolve this, but I am not sure if I am looking in the right places in the Tekton documentation.
Can anyone offer some pointers?
This is to confirm that I managed to resolve the issue by redesigning the steps as Tasks, as suggested by #Jonas.
Tekton Tasks can have outputs (results) which can be referred to in other Tasks. At the time of writing, Tekton Steps don't seem to have this feature.
For more details, refer to the links in #Jonas' comments above.
All steps in a Task share the same Pod and thus have access to a shared workspace implemented as an emptyDir volume:
Volumes:
  tekton-internal-workspace:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
A common way to share data between steps is to write to a file in /workspace and read it in the next step.
Alternatively, as suggested by #Jonas, if you use different Tasks you can write a result in the first Task and feed it into a parameter of the second Task in the Pipeline definition.
Using results this way implicitly creates a dependency between the two Tasks, so the Tekton controller won't schedule the second Task until the first one has successfully completed and results are available.
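A minimal sketch of that Task-result approach (the Task, result, and parameter names here are illustrative, not taken from the question):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-version          # illustrative name
spec:
  results:
  - name: image-version
    description: Version tag to apply to the built image
  steps:
  - name: generate
    image: ubuntu
    script: |
      #!/usr/bin/env bash
      # write the value (without a trailing newline) into the result file
      printf '1.0.%s' "$(date +%s)" > $(results.image-version.path)

In the Pipeline definition, the result then feeds a parameter of the build Task, for example value: $(tasks.generate-version.results.image-version).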
You can use the gcr.io/kaniko-project/executor:debug image, which has a shell at /busybox/sh.
And create something like this (passing the kaniko command via script):
steps:
- name: write-to-workspace
  image: ubuntu
  script: |
    #!/usr/bin/env bash
    echo "IMAGE_VERSION" > /workspace/FOO
- name: read-from-workspace
  image: gcr.io/kaniko-project/executor:debug
  script: |
    #!/busybox/sh
    export IMAGE_VERSION=$(cat /workspace/FOO)
    echo "$IMAGE_VERSION"
    /kaniko/executor \
      --dockerfile=$(params.pathToDockerFile) \
      --destination=$(resources.outputs.builtImage.url):"${IMAGE_VERSION}" \
      --context=$(params.pathToContext) \
      --build-arg=BASE=alpine:3
You can refer to this discussion: https://github.com/tektoncd/pipeline/issues/1476

Set up new extensions in Keycloak kubernetes helm charts

I have a Kubernetes cluster on Azure, where I use Helm to make it easier to manage micro-services and other tools on it, and Keycloak is one of them.
I need to use the magic link authenticator in one of my apps. I'm aware that I need to add an extension to my Keycloak chart, but I don't know how.
In the image repository I'm using, they explain how to add custom themes via the extraInitContainers param in the chart configuration. I think I can achieve what I want through it.
In this tutorial they point to the extension, but I have no idea how to add it to my Keycloak instance on k8s using Helm charts. How do I achieve that?
Just more info about my config: I'm running louketo-proxy (as a sidecar) on the apps I want to protect.
To publish a theme while keeping the original image, first create an archive (JAR) with the theme.
Then create a file custom-themes-values.yml with this content:
extraInitContainers: |
  - name: theme-provider
    image: busybox
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "wgetting theme from maven..."
        wget -O /theme/keycloak-theme.jar https://repo1.maven.org/maven2/org/acme/keycloak-theme/1.0.0/keycloak-theme-1.0.0.jar
    volumeMounts:
      - name: theme
        mountPath: /theme

extraVolumeMounts: |
  - name: theme
    mountPath: /opt/jboss/keycloak/standalone/deployments

extraVolumes: |
  - name: theme
    emptyDir: {}
Run with:
helm install keycloak codecentric/keycloak --values custom-themes-values.yml
P.S.: In this example the theme was published to a Maven repository, but you can also copy in a local file.
You can adapt this approach for the magic link extension; see the sketch below.
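A rough sketch of that adaptation (the JAR URL below is a placeholder; point it at wherever the magic link extension artifact is actually published):

extraInitContainers: |
  - name: extension-provider
    image: busybox
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "downloading magic link extension..."
        # placeholder URL - replace with the real location of the magic link extension JAR
        wget -O /extensions/keycloak-magic-link.jar https://example.com/keycloak-magic-link.jar
    volumeMounts:
      - name: extensions
        mountPath: /extensions

extraVolumeMounts: |
  - name: extensions
    mountPath: /opt/jboss/keycloak/standalone/deployments

extraVolumes: |
  - name: extensions
    emptyDir: {}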

How can I use Image PipelineResource for input in tekton task

As documented in https://github.com/tektoncd/pipeline/blob/master/docs/resources.md I have configured an Image PipelineResource:
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-data-image
spec:
  type: image
  params:
  - name: url
    value: image-registry.openshift-image-registry.svc:5000/default/my-data
Now when I am using the above PipelineResource as an input to a task:
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: my-task
spec:
  inputs:
    resources:
    - name: my-data-image
      type: image
  steps:
  - name: print-info
    image: image-registry.openshift-image-registry.svc:5000/default/my-task-runner-image:latest
    imagePullPolicy: Always
    command: ["/bin/sh"]
    args:
    - "-c"
    - >
      echo "List the contents of the input image" &&
      ls -R "$(inputs.resources.my-data-image.path)"
I am not able to list the contents of the image, as I get the error:
[test : print-info] List the contents of the input image
[test : print-info] ls: cannot access '/workspace/my-data-image': No such file or directory
The documentation (https://github.com/tektoncd/pipeline/blob/master/docs/resources.md) states that an Image PipelineResource is usually used as a Task output for Tasks that build images.
How can I access the contents of my container data image from within the tekton task?
Currently Tekton does not support Image inputs in the way that OpenShift's build configs support them: https://docs.openshift.com/container-platform/4.2/builds/creating-build-inputs.html#image-source_creating-build-inputs
Image inputs are only useful for variable interpolation, for example "$(inputs.resources.my-image.url)", while ls "$(inputs.resources.my-image.path)" will always print empty content.
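For illustration, a step like the following (a sketch using the resource name from the question) only receives the URL as a string; nothing from the image is mounted at the path:

steps:
- name: print-image-url
  image: busybox
  command: ["/bin/sh"]
  args:
  - "-c"
  - >
    echo "URL resolved by variable interpolation:" &&
    echo "$(inputs.resources.my-data-image.url)"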
There are several ways to access the contents of the image though, including:
Export the image to a tar archive: podman export $(podman create --tls-verify=false $(inputs.resources.my-image.url)) > contents.tar
Copy files from the image: docker cp $(docker create $(inputs.resources.my-image.url)):/my/image/files ./local/copy. The tool skopeo can also copy files, however it does not seem to offer sub-directory copy capabilities.
Copy a pod directory to a local directory (https://docs.openshift.com/container-platform/4.2/nodes/containers/nodes-containers-copying-files.html): oc rsync $(inputs.resources.my-image.url):/src /home/user/source
Having said the above, I decided to simply use OpenShift's built-in BuildConfig resources to create a chained build for my pipeline. The variety of build strategies supported by OpenShift out of the box is sufficient for my pipeline scenarios, and the fact that image inputs are supported makes it much easier compared to Tekton pipelines (https://docs.openshift.com/container-platform/4.2/builds/creating-build-inputs.html#image-source_creating-build-inputs). The only advantage that Tekton pipelines seem to have is the ability to easily reuse Tasks; however, the equivalent can be achieved by creating Operators for OpenShift resources.

Debugging CrashLoopBackOff for an image running as root in openshift origin

I wanted to start an Ubuntu container on OpenShift Origin. I have my local registry and pulling from it is successful. The container starts but immediately goes into CrashLoopBackOff and stops. The Ubuntu image that I have runs as root.
Started container with docker id 28250a528e69
Created container with docker id 28250a528e69
Successfully pulled image "ns1.myregistry.com:5000/ubuntu@sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
pulling image "ns1.myregistry.com:5000/ubuntu@sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
Error syncing pod, skipping: failed to "StartContainer" for "ubuntu" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=ubuntu pod=ubuntu-2-suy6p_testproject(69af5cd9-5dff-11e6-940e-0800277bbed5)"
The container runs with restricted privileges. I don't know how to start the pod in privileged mode, so I edited the restricted SCC as follows so that my image with root access will run:
NAME         PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
restricted   true   []     RunAsAny   RunAsAny    RunAsAny   RunAsAny   <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]
But I still couldn't successfully start my container.
There are two commands that are helpful for CrashLoopBackOff debugging.
oc debug pod/your-pod-name will create a very similar pod and exec into it. Look at the different options for launching it; some deal with SCC options. You can also use dc, rc, is, and most other resources that can stamp out pods.
oc logs -p pod/your-pod-name will retrieve the logs from the last run of the pod, which may contain useful information too.
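For example, with the pod name from the events above (the -n flag for the project is an assumption):

# spin up a debug copy of the failing pod and open a shell in it
oc debug pod/ubuntu-2-suy6p -n testproject

# show the logs from the previous (crashed) run of the container
oc logs -p pod/ubuntu-2-suy6p -n testproject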