How to set up Kubernetes (v1.9.5) audit logging on CoreOS (1688.5.3)

Team,
Are there any special requirements for setting up Kubernetes (v1.9.5) auditing on CoreOS 1688.5.3 (Kubespray v2.5.0), following the docs?
Nothing is generated at /var/log/kube-audit, although it should be. Any tips? Thanks!
```
$ sudo vi /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
$ sudo vi kube-apiserver
...
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kube-audit
--audit-log-format=json
```

It turns out the log is written inside the kube-apiserver pod, at kube-apiserver:/var/log/kube-audit.
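Since the file lives inside the pod rather than on the host, it can be read with kubectl exec; a sketch of the commands (the apiserver pod name suffix is node-specific, so it is a placeholder here):

```
# Find the exact apiserver pod name (the suffix varies per node)
kubectl -n kube-system get pods | grep kube-apiserver

# Tail the audit log from inside that pod
kubectl -n kube-system exec kube-apiserver-<node-name> -- tail -f /var/log/kube-audit
```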

Kubernetes rolling update by updating a value in the deployment file

I wanted to share a solution I built with Kubernetes and get your opinion on the best practice for such a case. I'm still new to Kubernetes.
I wanted to be able to update my application by restarting my deployment's pod, which already executes all the necessary update actions in its start command.
I'm using MicroK8s, and I wanted to simply go to the right folder, run microk8s kubectl apply -f myfilename, and let Kubernetes handle the rest with a rolling update.
My issue was how to set a dynamic value inside my .yaml file so that the command would detect a change and start the rollout.
I planned to do it with a bash script like the following:
```
file="my-file-deployment.yaml"
oldstr=$(grep 'my' "$file" | xargs)
timestamp="$(date +"%Y-%m-%d-%H:%M:%S")"
newstr="value: my-version-$timestamp"
sed -i "s/$oldstr/$newstr/g" "$file"
echo "old version : $oldstr"
echo "Replaced String : $newstr"
sudo microk8s kubectl apply -f "$file"
```
In my deployment.yaml file I set the following env:
```
env:
- name: version
  value: my-version-2022-09-27-00:57:15
```
I switch the value to a new timestamp, then run:
```
microk8s kubectl apply -f myfilename
```
It is working great for the moment. I still have to configure a startupProbe for a smoother rolling update, because I'm seeing a few seconds of downtime, which isn't great.
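As an aside, the downtime during a rollout is usually addressed with a readinessProbe, which keeps traffic away from a new pod until it reports ready; a minimal sketch, assuming a hypothetical HTTP health endpoint and port for your application:

```
readinessProbe:
  httpGet:
    path: /healthz   # hypothetical health endpoint
    port: 8080       # assumed application port
  initialDelaySeconds: 5
  periodSeconds: 5
```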
Is there a better way to do rolling updates with MicroK8s?
If you are trying to trigger a rolling update on your deployment (assuming it is a deployment), you can patch the deployment and let the cluster handle the rollout. Here's a trick I use and it's literally a one-liner:
```
kubectl -n {namespace} patch deployment {name-of-your-deployment} \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
```
This will patch your deployment, adding an annotation to the template block. In this way, the cluster thinks there is a change requiring an update to the deployment's pods, and will cycle them while following the rollingUpdate clause.
The date +'%s' will resolve to a different number each time so every time you run this, it will cause the cluster to cycle the deployment's pods.
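To illustrate the point, this short, self-contained sketch shows that successive runs of date +'%s' produce strictly increasing values, which is what makes each patched annotation value unique:

```
# Two samples of epoch seconds, one second apart;
# the second is always greater than the first.
first=$(date +'%s')
sleep 1
second=$(date +'%s')
echo "first=$first second=$second"
```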
We use this trick to force a rolling update when we have done an update that requires our pods to be restarted.
You can accompany this with the rollout status command to wait for the update to complete:
```
kubectl rollout status deployment/{name-of-your-deployment} -n {namespace}
```
So a complete line would look like this if I wanted to rolling-update my nginx deployment and wait for it to complete:
```
kubectl -n nginx patch deployment nginx \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" \
  && kubectl rollout status deployment/nginx -n nginx
```
One caveat, though: kubectl patch does not change the YAML on disk, so if you want a copy of the change recorded locally (for example for auditing purposes, similar to what you are doing at the moment), you can adapt this to do a dry run and redirect the output to a file:
```
kubectl -n nginx patch deployment nginx \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" \
  --dry-run=client \
  -o yaml > patched-nginx.yaml
```

kubectl version error: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1

I was setting up my new Mac for my EKS environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown below.
My cluster uses the v1alpha1 client auth API version, so I wanted to use the same one on my Mac.
I tried the latest version of kubectl (1.23.0) as well, with the same error. With aws-iam-authenticator (version 0.5.5), I was not able to download a lower version.
Can someone help me resolve it?
```
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
```
Thanks and Regards,
Saravana
I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed its behaviour in 0.5.4 to require v1beta1.
Depending on your configuration, you can try changing the K8s context you're using to v1beta1 by editing your kubeconfig file (usually ~/.kube/config) from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Otherwise, switch back to aws-iam-authenticator 0.5.3. You might need to build it from source if you're on the M1 architecture, as there's no darwin-arm64 binary built for it.
This worked for me on an M1 chip:
```
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
```
I fixed the issue with the command below:
```
aws eks update-kubeconfig --name mycluster
```
I also solved this by updating the apiVersion value in my kubeconfig file (~/.kube/config) from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
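For reference, the relevant exec block in ~/.kube/config looks roughly like this after the change (the user and cluster names below are placeholders, and the exact args depend on how the kubeconfig was generated):

```
users:
- name: my-eks-cluster        # placeholder user name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # was v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-eks-cluster      # placeholder cluster name
```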
Also make sure the AWS CLI version is up to date; otherwise, aws-iam-authenticator might not work with v1beta1:
```
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
```
This might be helpful for those who hit this issue in GitHub Actions.
In my situation I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to each step using kodermax/kubectl-aws-eks, to pin them to fixed versions:
```
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"
```
Using kubectl 1.21.9 fixed it for me, with asdf:
```
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
```
And I would recommend having a .tool-versions file with:
```
kubectl 1.21.9
```
This question is a duplicate of: error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" CircleCI.
Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old:
```
apiVersion: client.authentication.k8s.io/v1alpha1
```
New:
```
apiVersion: client.authentication.k8s.io/v1beta1
```
Sometimes this can happen if the kube cache is corrupted (which happened in my case).
Deleting and recreating the folder below worked for me:
```
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube
```

How to enable bitnami/redis-cluster logs deployed using helm?

I have deployed the Bitnami redis-cluster using its Helm chart, following this link:
https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
I want to enable Redis logs, but I'm not sure how, since in the current Bitnami Redis image the configuration file at /opt/bitnami/redis/etc/redis.conf has the logfile parameter set to an empty string (logfile "").
Is there any way to enable the Redis server logs on each pod?
You can use the helm --set flag to overwrite the default values in the redis.conf file:
```
# Add your custom configuration
$ export CUSTOM_CONFIG="logfile /data/logs/file.log"

# Apply it while installing
$ helm install redis bitnami/redis-cluster --set redis.configmap=$CUSTOM_CONFIG
```
You can check it from inside the pod:
```
$ kubectl exec -it redis-redis-cluster-0 -- cat /opt/bitnami/redis/etc/redis.conf
...
logfile /data/logs/file.log
```

HAProxy pods keep entering CrashLoopBackOff

I'm setting up redis-ha in my Kubernetes cluster, installed with Helm, but the HAProxy pods keep going into CrashLoopBackOff.
I installed it with: helm install -f develop-redis-values.yaml stable/redis-ha --namespace=develop -n=develop-redis
In develop-redis-values.yaml, I set haproxy.enabled to true.
These are the logs from the crashing pod:
```
[ALERT] 268/104750 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:34] : 'tcp-check expect string' expects <string> as an argument.
[ALERT] 268/104750 (1) : Error(s) found in configuration file : /usr/local/etc/haproxy/haproxy.cfg
[ALERT] 268/104750 (1) : Fatal errors found in configuration.
```
I expected the HAProxy pods to be running.
CrashLoopBackOff can have several possible causes:
- the application inside your pod is not starting due to an error;
- the image your pod is based on is not present in the registry, or the node where your pod has been scheduled cannot pull it from the registry;
- some parameters of the pod have not been configured correctly.
In your case, it seems that there are errors in your HAProxy configuration file.
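To narrow down which cause applies, the usual first steps are to inspect the pod's events and the logs of the previously crashed container (the pod name below is a placeholder):

```
# Events show scheduling, image-pull and mount problems
kubectl describe pod <haproxy-pod-name> -n develop

# Logs from the last crashed container instance
kubectl logs <haproxy-pod-name> -n develop --previous
```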
Have you tried pulling the image locally and starting a container to verify it?
You can enter the container and check the configuration with:
```
haproxy -c -V -f /usr/local/etc/haproxy/haproxy.cfg
```
For more information and debugging ways:
https://pillsfromtheweb.blogspot.com/2020/05/troubleshooting-kubernetes.html

Setting up SSL between Helm and Tiller

I am following these instructions to set up SSL between Helm and Tiller.
When I run helm init like this, I get an error:
```
helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
$HELM_HOME has been configured at /Users/Koustubh/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
```
When I check my pods, I get:
```
tiller-deploy-6444c7d5bb-chfxw   0/1   ContainerCreating   0   2h
```
and after describing the pod, I get:
```
Warning FailedMount 7m (x73 over 2h) kubelet, gke-myservice-default-pool-0198f291-nrl2 Unable to mount volumes for pod "tiller-deploy-6444c7d5bb-chfxw_kube-system(3ebae1df-e790-11e8-98ae-42010a9800f9)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"tiller-deploy-6444c7d5bb-chfxw". list of unmounted volumes=[tiller-certs]. list of unattached volumes=[tiller-certs default-token-9x886]
Warning FailedMount 1m (x92 over 2h) kubelet, gke-myservice-default-pool-0198f291-nrl2 MountVolume.SetUp failed for volume "tiller-certs" : secrets "tiller-secret" not found
```
If I try to delete the running Tiller pod like this, it just gets stuck:
```
helm reset --debug --force
```
How can I solve this issue? I have also tried the --upgrade flag with helm init, but that doesn't work either.
I had this issue but resolved it by deleting both the Tiller deployment and the service and re-initialising.
I'm also using RBAC, so I've added those commands too:
```
# Remove existing tiller:
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system

# Re-init with your certs
helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem

# Add RBAC service account and role
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

# Re-initialize
helm init --service-account tiller --upgrade

# Test the pod is up
kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-69775bbbc7-c42wp   1/1     Running   0          5m

# Copy the certs to ~/.helm
cp tiller.cert.pem ~/.helm/cert.pem
cp tiller.key.pem ~/.helm/key.pem
```
Validate that Helm only responds via TLS:
```
$ helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: cannot connect to Tiller

$ helm version --tls
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
```
Thanks to:
https://github.com/helm/helm/issues/4691#issuecomment-430617255
https://medium.com/@pczarkowski/easily-install-uninstall-helm-on-rbac-kubernetes-8c3c0e22d0d7