EKS pod not starting - amazon-eks

All,
I created a Neo4j cluster with a Helm template. First I created it with the helm install command and it worked fine: it created the pods with StatefulSets using the default gp2 volume.
But when I use Kustomize to tweak the Helm template and install, the pods do not start and give me the error below, even though the StorageClass and volume claim are created.
There is a warning on the pods, "backoff: restarting", and when I look at the events I see the error "Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[kube-api-access-8mrwb init-script datadir plugins]: timed out waiting for the condition".
Below is my kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: neo4j-helm-master
    releaseName: ntd
    # Overriding values from charts/neo4j/values.yaml
    valuesInline:
      # name: neo4jcluster
      acceptLicenseAgreement: "yes"
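For reference, with the helmCharts field the chart is usually rendered and applied along these lines (a sketch, assuming kustomize v4+ with Helm support enabled; adjust the path to your layout):

# Render the chart through Kustomize (Helm support must be enabled explicitly) and apply it
kustomize build . --enable-helm | kubectl apply -f -

# Check whether the claim actually bound before looking at the pod itself
kubectl get storageclass,pvc,pv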
Any help is appreciated.

Related

Falco not writing to logs about a file being edited

I have deployed Falco as a sidecar to my workload in EKS/Fargate. Falco is able to execute the script defined in the image, but it is not able to monitor the workload container at runtime, meaning that if I create a file at a location defined in the rules, nothing is written to the logs about the new file. Details below:
Created the Falco image using https://github.com/aws-samples/aws-fargate-falco-examples/blob/main/containerimages/sidecarptrace/Dockerfile.workload
Used this Falco image to deploy it as a sidecar along with our workload (containerimages/sidecarptrace/Dockerfile)
In the workload image, we trigger Falco using the snippet below:
##############################################################################################
COPY --from=falcosecurity/falco-userspace:latest /vendor/falco/bin/pdig /vendor/falco/bin/pdig
COPY ./myscript.sh /vendor/falco/scripts/myscript.sh
RUN chown -R nginx:nginx /vendor/falco/scripts/myscript.sh
RUN chmod 755 /vendor/falco/scripts/myscript.sh
CMD ["/vendor/falco/bin/pdig", "/bin/bash", "/vendor/falco/scripts/myscript.sh"]
##################################################################################################
For better understanding, this is what pod.yaml looks like:
containers:
  - name: falco
    image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxxx:sidecarptracefalco
    volumeMounts:
      - name: falco-config
        mountPath: "/data/falco.yaml"
        subPath: "falco.yaml"
        readOnly: true
      - name: falco-local-rules
        mountPath: "/data/falco_rules.local.yaml"
        subPath: "falco_rules.local.yaml"
        readOnly: true
  - name: nginx
    image: 11111111111111.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxxxxxxx:bfde44a3
Deployed falco.yaml and falco-local-rules.yaml from https://github.com/aws-samples/aws-fargate-falco-examples/tree/main/podspecs
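For completeness, volumeMounts named falco-config and falco-local-rules imply a volumes section roughly like the one below (a sketch; the actual ConfigMap names come from the podspecs in that repo and may differ):

volumes:
  - name: falco-config
    configMap:
      name: falco-config           # assumed name of the ConfigMap holding falco.yaml
  - name: falco-local-rules
    configMap:
      name: falco-local-rules      # assumed name of the ConfigMap holding falco_rules.local.yaml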
What I am noticing:
myscript.sh runs correctly, and only its events are logged in the Falco container logs.
If I create a shell script under /etc/nginx/html and execute it with sh /etc/nginx/html/test.sh, nothing is logged.
What I want:
Falco to continuously monitor the workload container and log its activity.
If the CMD in the workload image needs an update to implement this continuous monitoring, guidance on how to do that.
I expect any file creation or file edit to be logged by Falco.
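As an illustration of the kind of detection being asked for, a local rule in falco_rules.local.yaml along these lines would flag writes under /etc/nginx/html (a sketch using standard Falco fields; the rule name and path are only examples):

- rule: Write below nginx html
  desc: Detect files opened for writing under /etc/nginx/html
  condition: >
    evt.type in (open, openat, creat) and evt.is_open_write=true
    and fd.name startswith /etc/nginx/html
  output: "File opened for writing under /etc/nginx/html (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: WARNING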

Is it possible to bind a workspace from within a Pipeline (and not a PipelineRun) or create a PipelineRun with bound workspaces using GUI?

I'd like to be able to start my pipeline, which uses two workspaces, from the Tekton Dashboard (GUI), but from what I can see there is no option to provide workspaces there.
The only two options for creating a PipelineRun with bound workspaces I can think of are to either:
Create it programmatically and apply it using either kubectl or tekton-cli (see the PipelineRun sketch below)
Create a TriggerTemplate with bound workspaces and run the pipeline through a webhook to an EventListener
My main problem is that both of those options require the developer to go through a very non-user-friendly process. Ideally, I'd like to run pipelines with bound workspaces from the Tekton GUI. Can this somehow be achieved?
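For reference, the first option essentially boils down to applying a PipelineRun like the following (a sketch; the pipeline name is a placeholder, and the workspace and claim names match the Pipeline snippet further down):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: my-pipeline-run-
spec:
  pipelineRef:
    name: my-pipeline          # placeholder pipeline name
  workspaces:
    - name: source-dir
      persistentVolumeClaim:
        claimName: gradle-cache-pvc
    - name: ssh-creds
      secret:
        secretName: ssh-key-secret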
I've tried providing the binding in the workspaces section of the Pipeline definition as below:
workspaces:
  - name: source-dir
    persistentVolumeClaim:
      claimName: gradle-cache-pvc
  - name: ssh-creds
    secret:
      secretName: ssh-key-secret

Tekton task fails with `more than one PersistentVolumeClaim is bound`

I'm trying to run a Tekton pipeline with a task that needs to access multiple PersistentVolumeClaim workspaces. When I run the pipeline, the task fails with the message "more than one PersistentVolumeClaim is bound". As far as I can tell, there's nothing that forbids having more than one PersistentVolumeClaim bound in the same task, so why am I getting this error and how can I fix it?
Have you tried disabling the Tekton affinity assistant?
$ kubectl edit configmap feature-flags -n tekton-pipelines
Look for disable-affinity-assistant and change its value to true.
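Equivalently, the same change can be made non-interactively (a sketch of the one-liner, assuming the default tekton-pipelines namespace):

kubectl patch configmap feature-flags -n tekton-pipelines \
  -p '{"data":{"disable-affinity-assistant":"true"}}'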
See:
https://github.com/tektoncd/pipeline/issues/3480
https://github.com/tektoncd/pipeline/issues/3085
Also: make sure your Tekton stack is relatively up to date, as there may have been some regression (unconfirmed) in 0.14.3.

Set up new extensions in Keycloak kubernetes helm charts

I have a Kubernetes cluster on Azure, where I use Helm to make it easier to manage microservices and other tools, and Keycloak is one of them.
I need to use a magic link authenticator in one of my apps. I'm aware that I need to add an extension to my Keycloak chart, but I don't know how.
In the image repository I'm using, they explain how to add custom themes via the extraInitContainers param in the chart configuration. I think I can achieve what I want through it.
In this tutorial they say that's the extension, but I have no idea how to add it to my Keycloak instance on k8s using Helm charts. How do I achieve that?
Just some more info about my config: I'm running louketo-proxy (as a sidecar) on some of the apps I want to protect.
To publish a theme while keeping the original image, first create an archive with the theme.
Create a file custom-themes-values.yml with the following content:
extraInitContainers: |
  - name: theme-provider
    image: busybox
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "wgetting theme from maven..."
        wget -O /theme/keycloak-theme.jar https://repo1.maven.org/maven2/org/acme/keycloak-theme/1.0.0/keycloak-theme-1.0.0.jar
    volumeMounts:
      - name: theme
        mountPath: /theme
extraVolumeMounts: |
  - name: theme
    mountPath: /opt/jboss/keycloak/standalone/deployments
extraVolumes: |
  - name: theme
    emptyDir: {}
Run with:
helm install keycloak codecentric/keycloak --values custom-themes-values.yml
P.S.: In this example the theme was published to a Maven repository, but you can also copy a local file instead.
You can adapt this same approach to the magic-link extension.
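A sketch of that adaptation: swap the theme JAR for the magic-link extension JAR in the init container (the URL below is a placeholder; point it at wherever the extension you are using is published):

extraInitContainers: |
  - name: extension-provider
    image: busybox
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "downloading magic-link extension..."
        # placeholder URL - replace with the actual location of the extension JAR
        wget -O /extensions/magic-link.jar https://example.com/path/to/magic-link-extension.jar
    volumeMounts:
      - name: extensions
        mountPath: /extensions
extraVolumeMounts: |
  - name: extensions
    mountPath: /opt/jboss/keycloak/standalone/deployments
extraVolumes: |
  - name: extensions
    emptyDir: {}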

Debugging CrashLoopBackOff for an image running as root in openshift origin

I wanted to start an Ubuntu container on OpenShift Origin. I have my local registry and pulling from it is successful. The container starts but immediately goes into CrashLoopBackOff and stops. The Ubuntu image that I have runs as root.
Started container with docker id 28250a528e69
Created container with docker id 28250a528e69
Successfully pulled image "ns1.myregistry.com:5000/ubuntu@sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
pulling image "ns1.myregistry.com:5000/ubuntu@sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
Error syncing pod, skipping: failed to "StartContainer" for "ubuntu" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=ubuntu pod=ubuntu-2-suy6p_testproject(69af5cd9-5dff-11e6-940e-0800277bbed5)"
The container runs with restricted privileges. I don't know how to start the pod in privileged mode, so I edited my restricted SCC as follows so that my image with root access will run:
NAME         PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
restricted   true   []     RunAsAny   RunAsAny    RunAsAny   RunAsAny   <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]
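For context, the usual ways to get there are either editing the SCC directly or, as is more commonly recommended, granting an SCC to the pod's service account; both are sketched below (the service account name is an assumption, the project name is taken from the event above):

# Edit the restricted SCC in place (what was done above)
oc edit scc restricted

# Alternative often recommended instead: let the pod's service account use anyuid
oc adm policy add-scc-to-user anyuid -z default -n testproject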
But I still could not successfully start my container. Why?
There are two commands that are helpful for CrashLoopBackOff debugging.
oc debug pod/your-pod-name will create a very similar pod and exec into it. You can look at the different options for launching it; some deal with SCC options. You can also use dc, rc, is, and most other things that can stamp out pods.
oc logs -p pod/your-pod-name will retrieve the logs from the last run of the pod, which may contain useful information too.
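A quick usage sketch with the names from this question (the dc name is an assumption based on the pod name):

# Launch a copy of the failing pod and open a shell in it
oc debug pod/ubuntu-2-suy6p

# Or start the debug copy from the deployment config, forcing it to run as root
oc debug dc/ubuntu --as-root

# Retrieve the logs of the previous (crashed) run of the container
oc logs -p pod/ubuntu-2-suy6p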