ImagePullBackOff from a local registry (MicroK8s) hosted on another virtual machine

I followed all the instructions from the MicroK8s registry page, but when I try to pull the image from my Helm chart (hosted on another virtual machine), it returns an ImagePullBackOff.
On my virtual machine I've added insecure-registries: 192.168.56.11:32000, and the command docker pull 192.168.56.11:32000/image:registry works fine.
My Helm chart's values.yaml file looks like this:
image:
  repository: 192.168.56.11:32000/image
  pullPolicy: Always
  tag: "registry"

Well, my case turned out to be rather special. I was using two VMs: one with OSM (Open Source MANO) and another with MicroK8s. Although I configure my Helm charts on the OSM machine, it is the MicroK8s machine that pulls the image, so I had to put localhost:32000/image:registry in the Helm chart.
This is a special case caused by running OSM and MicroK8s on different machines.
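For reference, the image section of values.yaml then ends up looking roughly like this (same image name and tag as above, only the registry host changes):
image:
  repository: localhost:32000/image
  pullPolicy: Always
  tag: "registry"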
I hope this is helpful to someone.

Verify Image
Take the URL from the error, e.g. http://192.168.56.11:32000/v2/vnf-image/manifests/registry, and access it via a web browser to make sure the image is correct, e.g. that there is no "manifest unknown" error.
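The same check can be done from the command line, for example with curl (URL taken from the error above; this only verifies that the tag exists in the registry):
curl -i http://192.168.56.11:32000/v2/vnf-image/manifests/registry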
Configure microk8s properly
Follow https://microk8s.io/docs/registry-private.
You can run snap info microk8s to check the installed version.

Related

Permission denied error for Bitnami Redis Cluster TLS connection using Helm chart?

I recently tried to deploy redis-cluster on a Kubernetes cluster using a Helm chart. I am following the link below:
https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
For the Helm deployment I used values-production.yaml. The default deployment was successful and I was able to create a Redis cluster with three masters and three replicas.
I am currently checking on two things:
How to enable container logs: per the official docs they should be written to /opt/bitnami/redis/logs, but I haven't seen any logs there.
From the official docs I learned that the log file name should be set in redis.conf, but currently it is an empty string (""); I'm not sure how or where to pass the log file name so that it ends up in redis.conf.
I have also tried to enable TLS. I generated the certificates as described in the official Redis TLS docs. After that I created the Secret mentioned in the chart's TLS section and put the certificates into it.
Then I passed the Secret name and the certificate file names in values-production.yaml and deployed the Helm chart, but it gave me a permission denied error for libfile.sh at line 37.
When I checked the pod status, out of 6 pods three were Running (2/2) and three were in CrashLoopBackOff (1/2).
After logging into a running pod I was able to verify that the certificates were placed at /opt/bitnami/redis/certs/ and that the certificate changes were reflected in redis.conf.
Please let me know how to make configuration changes to redis.conf using the Bitnami Redis Helm chart, and how to resolve the two issues above.
My understanding is that for any redis.conf-related change I have to pass values in values-production.yaml. Please confirm, thank you.
Bitnami developer here
My first recommendation for you is to open an issue at https://github.com/bitnami/charts/issues if you are struggling with the Redis Cluster chart.
Regarding the logs, as it's mentioned at https://github.com/bitnami/bitnami-docker-redis-cluster#logging:
The Bitnami Redis-Cluster Docker image sends the container logs to stdout
Therefore, you can simply access the logs by running (substitute "POD_NAME" with the actual name of any of your Redis pods):
kubectl logs POD_NAME
Finally, with respect to the TLS configuration, I guess you're following this guide, right?
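If so, the TLS settings are passed through the chart values, along the lines of the following sketch (the value names follow the chart's README and may differ between chart versions; the Secret and file names here are just examples):
tls:
  enabled: true
  certificatesSecret: redis-tls-secret
  certFilename: redis.crt
  certKeyFilename: redis.key
  certCAFilename: ca.crt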

From custom Dockerfile to Kubernetes deployment with Apache started

I have a Dockerfile where I build an Apache web server with some custom configuration.
Building the Dockerfile, I create an image that can be used in a Kubernetes deployment YAML file.
Everything works properly, but after deployment my Apache service is down in every container of every pod.
Obviously I can exec into every container and run /etc/init.d/apache2 start, but this solution is not very smart.
So my question is: how can I make my custom Apache run when the deployment YAML is applied?
PS: I tried this solution: with the Dockerfile I created a Docker container, accessed it, and started Apache. Then I created a new image from that container (docker commit + gcloud image push), but when I deploy the application I still find Apache down.
Well, first things first - I would very much recommend just using the official httpd image and then making your custom configuration from there. Their documentation states this in the following paragraph:
Configuration
To customize the configuration of the httpd server, just COPY your custom configuration in as /usr/local/apache2/conf/httpd.conf.
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
However, if you're dead-set on building everything yourself, you'll notice that inside the Dockerfile for the official image they copy in a Bash script and set it as the CMD. This works because a Docker container should run a single foreground process; this is why, as you stated, starting Apache from its init service is a bad idea.
You can find the script they're running here; it's very short at 7 lines, so you shouldn't have too much trouble figuring out where to go from here.
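If you do roll your own image instead, the equivalent is to make Apache itself the container's foreground process. A minimal sketch for a Debian/Ubuntu-based image (package and paths assume Debian's apache2 layout; the config file name is an example):
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*
COPY ./my-site.conf /etc/apache2/sites-available/000-default.conf
EXPOSE 80
# keep Apache in the foreground so it stays the container's main process
CMD ["apache2ctl", "-D", "FOREGROUND"]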
Best of luck!

Docker for Win acme.json permissions

Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder mapped to a host volume which contains the file acme.json. With the Traefik 1.3.1 update, I noticed that Traefik gets stuck in an infinite loop complaining that "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, if I need to restart the container, I have to remove acme.json again or I'm stuck with the same issue.
My guess is that the issue lies with the Windows volume mapped to Docker but I was wondering what the recommended workaround would even be for this?
Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point, Docker for Windows does not enable you to control (chmod) the Unix-style permissions on shared volumes for deployed containers, but rather sets permissions to a default value of 0755 (read, write, execute permissions for user, read and execute for group) which is not configurable.
Traefik is not compatible with regular Windows due to the POSIX permissions check. It may work in the Windows Subsystem for Linux since that has a Unix-style permission system.
I stumbled across this issue when trying to get Traefik running on Docker for Windows, and ended up getting it working by adding a few lines to a Dockerfile that create acme.json and set its permissions. I then built the image, and despite it throwing the "Docker image from Windows against a non-Windows Docker host" security warning, when I checked permissions on the acme.json file it worked!
I set up a repo and have it auto-building on Docker Hub for further testing (the general approach is sketched after the links):
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
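The general idea is to create acme.json inside the image with 0600 permissions before Traefik starts; a rough sketch (not the linked Dockerfile; the base-image tag is an assumption, and it needs a shell-capable, e.g. alpine-based, Traefik image):
FROM traefik:1.3-alpine
# create acme.json up front with the permissions Traefik insists on
RUN mkdir -p /etc/traefik/acme && touch /etc/traefik/acme/acme.json && chmod 600 /etc/traefik/acme/acme.json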
Once I got that built, I switched the image out in my docker-compose file, and my Cloudflare DNS challenge worked like a charm according to the logs.
I hope this helps someone!

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I got some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I read the authentication page of the documentation and decided to add authentication via a Static Password File. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I run ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the options I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file that contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API was shut down).
After a reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
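In shell terms that amounts to roughly the following (the master hostname is a placeholder, and ~/.kube/config as the destination is an assumption):
# copy the admin kubeconfig from the master node to the laptop
scp root@MASTER_NODE:/etc/kubernetes/admin.conf ~/.kube/config
# proxy the API server (and the dashboard) to localhost
kubectl proxy
# the dashboard is then reachable at http://127.0.0.1:8001/ui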
I just found this for a similar use case, where the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
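For example, if the password file lives at /etc/kubernetes/auth/basic_auth.csv on the host, the static pod manifest needs roughly the following additions (a sketch in YAML; the older JSON manifest needs the equivalent fields, and the file path is just an example):
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/auth/basic_auth.csv
    # ...the existing flags stay as they are...
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/auth
      readOnly: true
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/auth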

Vagrant synced file not updating

I have set up a Vagrant box with Ubuntu 12.04 and Apache2 (all very vanilla, as per Vagrant's tutorial). I've been testing it for web development and stumbled across a weird issue (not sure if bug or feature):
I have set up a synced folder between my machine and the VM. Apache has been serving the files mostly fine, except (so far) for a JSON file I'm using.
If I edit it locally, it seemingly syncs it to the VM folder. Both copies are the same.
However, if I fetch it via XHR from the browser after modifying it, I still get the previously served version of the file.
At first I thought the browser had cached it, but after trying two different browsers (Chrome(ium) and Firefox) and clearing their respective caches, the issue remained.
I finally managed to work around it by reloading the VM (vagrant reload).
What I was wondering is whether this is a bug or a feature, and how I can work around it. Can Apache be configured not to cache server-side for a specific folder/file/file type?
Vagrant uses the previous settings until you provision again, so after every change in Vagrant, run a provision to see the change reflected. There is no Apache2 cache problem.
For that, use the command
vagrant reload vmname --provision
If your VM name is default, then use
vagrant reload default --provision
It will reboot the Vagrant VM and apply the changes to it. After provisioning you will be able to see the changes.
I finally figured it out. This relates to an issue that occurs with both Apache and Nginx: the sendfile option in the server configuration.
Basically, a new file wasn't being sent/updated client-side even when it had been changed server-side by Vagrant's sync mechanism.
Check this answer for a solution: here.
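The workaround boils down to a single directive in the Apache configuration (the Nginx equivalent is sendfile off;):
# make Apache re-read files changed on the VirtualBox shared folder instead of serving stale copies via sendfile
EnableSendfile Off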