I recently started working a bit with OpenShift and it looks promising so far, but I keep running into issues, and mostly I'm finding outdated documentation or looking in the completely wrong place.
For example, I currently have an OpenShift installation of ~150 cores spread across a couple of servers; some of these nodes have only 4 cores and others have 48.
I would like to modify all my nodes to have pods = 1.5 * cores or so.
Is this possible?
I tried to use:
oc edit node node0
and to change pods from the default of 40 to, say, 6, but sadly oc never saves my values and always resets back to the default of 40.
kind regards
my openshift information:
oc v1.0.7-2-gd775557-dirty
kubernetes v1.2.0-alpha.1-1107-g4c8e6f4
installation done using ansible, single master, external dns.
Max pods per node is set on the node - you can add the following stanza to the node config YAML file to set it:
kubeletArguments:
  max-pods:
  - "100"
The quoted string is important - this stanza passes arguments directly to the Kubelet invocation (so any arg you can pass to a Kubelet you can pass via this config).
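For example, on a 4-core node where you want pods = 1.5 * cores, the node config (typically /etc/origin/node/node-config.yaml or /etc/openshift/node/node-config.yaml, depending on the version your Ansible install laid down) would end up looking roughly like this sketch:

# node-config.yaml (path varies by install)
kubeletArguments:
  max-pods:
  - "6"   # e.g. 1.5 * 4 cores; the value must be a quoted string

Then restart the node service on that host (the unit is named openshift-node, origin-node or atomic-openshift-node depending on the release) and verify with oc describe node node0, which should show the new pods capacity.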
I followed all the instructions from the MicroK8s registry page, but when I try to pull the image from my Helm chart (located on another virtual machine), it returns an ImagePullBackOff.
I've added insecure-registries: 192.168.56.11:32000 on my virtual machine, and the command docker pull 192.168.56.11:32000/image:registry works fine.
My helm chart values.yaml file looks like this:
image:
  repository: 192.168.56.11:32000/image
  pullPolicy: Always
  tag: "registry"
Well, my case was significantly different and rather special. I was using two VMs: one with OSM (Open Source MANO) and another with MicroK8s. Although I configure my Helm charts on the OSM machine, it is the MicroK8s machine that pulls the image, so I had to put localhost:32000/image:registry in the Helm chart.
This is a special case due to the use of OSM and MicroK8s on different machines.
I hope this will be helpful to someone.
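In other words, the values.yaml above ended up the same except for the repository host:

image:
  repository: localhost:32000/image   # the registry as seen from the MicroK8s machine
  pullPolicy: Always
  tag: "registry"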
Verify Image
Take the URL from the error, e.g. http://192.168.56.11:32000/v2/vnf-image/manifests/registry, and access it via a web browser to make sure the image is correct, e.g. that there is no "manifest unknown" issue.
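If you prefer the terminal, the same check can be done against the registry's v2 API with curl, for example:

# fetch the manifest the error message refers to
curl http://192.168.56.11:32000/v2/vnf-image/manifests/registry

# list the repositories the registry actually holds
curl http://192.168.56.11:32000/v2/_catalog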
Configure microk8s properly
Follow https://microk8s.io/docs/registry-private.
You can run snap info microk8s to check the version.
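For recent MicroK8s releases (containerd 1.5 and later, hence the version check), that page describes declaring the insecure registry via a hosts.toml, roughly along these lines, with the IP/port adjusted to your registry (older releases use the containerd-template.toml approach instead):

# /var/snap/microk8s/current/args/certs.d/192.168.56.11:32000/hosts.toml
server = "http://192.168.56.11:32000"

[host."http://192.168.56.11:32000"]
capabilities = ["pull", "resolve"]

Then restart MicroK8s (microk8s stop followed by microk8s start) so containerd picks up the change.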
Recently I have tried to deploy redis-cluster on a Kubernetes cluster using a Helm chart. I am following the link below:
https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
For the Helm deployment I have used values-production.yaml. The default deployment was successful and I was able to create a Redis cluster with three masters and three slaves.
I am checking on two things currently:
How to enable container logs: as per the official docs they should be written to "/opt/bitnami/redis/logs", but I haven't seen any logs there.
From the official docs I got to know that the log file name should be mentioned in redis.conf, but currently it is "" (an empty string). I am not sure how and where to pass the log file name so that it ends up in redis.conf.
I have tried to enable TLS as well. I have generated the certificates as per the redis.io/tls official docs. After that I created the secret mentioned in the Bitnami TLS section and put the certificates into the secret.
Then I passed the secret name and all the certificate file names in values-production.yaml and deployed the Helm chart, and it gave me a permission-denied error message for libfile.sh at line number 37...
When I checked the pod status, out of 6 pods three were in the Running 2/2 state and three were in a 1/2 CrashLoopBackOff state.
After logging into a running pod I was able to verify that the certificates were placed at "/opt/bitnami/redis/certs/", and the certificate changes were also reflected in redis.conf.
Please let me know how to make configuration changes in redis.conf using the Bitnami Redis Helm chart and how to resolve the two issues above.
My understanding is that for any redis.conf-related changes I have to pass values in the values-production.yaml file... Please let me know. Thank you.
Bitnami developer here
My first recommendation for you is to open an issue at https://github.com/bitnami/charts/issues if you are struggling with the Redis Cluster chart.
Regarding the logs, as it's mentioned at https://github.com/bitnami/bitnami-docker-redis-cluster#logging:
The Bitnami Redis-Cluster Docker image sends the container logs to stdout
Therefore, you can simply access the logs by running (substitute "POD_NAME" with the actual name of any of your Redis pods):
kubectl logs POD_NAME
Finally, with respect to the TLS configuration, I guess you're following this guide, right?
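In case it helps, the TLS block in values-production.yaml generally looks something like the sketch below. The key names here are taken from the chart documentation of that era and can change between chart versions, so double-check them against the chart's README; the file names are just examples matching what the redis.io TLS guide generates:

tls:
  enabled: true
  certificatesSecret: redis-tls-secret   # the secret holding your certificates
  certFilename: redis.crt
  certKeyFilename: redis.key
  certCAFilename: ca.crt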
I have created a new app on OpenShift using this image: https://hub.docker.com/r/luiscoms/openshift-rabbitmq/
It runs successfully and I can use it. I have added a persistent volume to it.
However, every time a pod is restarted, I lose all my data. This is because RabbitMQ uses the hostname to create the database directory.
For example:
node : rabbit#openshift-rabbitmq-11-9b6p7
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : BsUC9W6z5M26164xPxUTkA==
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbit#openshift-rabbitmq-11-9b6p7
How can I set RabbitMQ to always use the same database dir?
You should be able to set the environment variable RABBITMQ_MNESIA_DIR to override the default configuration. This can be done via the OpenShift console by adding an entry to the environment in the deployment config, or via the oc tool, for example:
oc set env dc/my-rabbit RABBITMQ_MNESIA_DIR=/myDir
You would then need to mount the persistent volume inside the pod at the required path. Since you have said it is already created, you just need to update it, for example:
oc volume dc/my-rabbit --add --overwrite --name=my-pv-name --mount-path=/myDir
You will need to make sure you have the correct read/write access on the provided mount path.
EDIT: Some additional workarounds based on issues in comments
The issues caused by the dynamic hostname could be solved in a number of ways:
1. (Preferred, IMO) Move the deployment to a StatefulSet. A StatefulSet provides stability in the naming, and hence the network identity, of the Pod, which must be fronted by a headless service; see the sketch after this list. This feature is out of beta as of Kubernetes 1.9 and has been a tech preview in OpenShift since version 3.5.
2. Set the hostname for the Pod if StatefulSets are not an option. This can be done by setting the HOSTNAME environment variable (e.g. oc set env dc/example HOSTNAME=example) to make the hostname static, and setting RABBITMQ_NODENAME to do likewise.
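A minimal sketch of option 1, assuming a recent API version (apps/v1; older clusters may need apps/v1beta1) - the names, image and storage size are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-headless
spec:
  clusterIP: None                  # headless service fronting the StatefulSet
  selector:
    app: rabbitmq
  ports:
  - name: amqp
    port: 5672
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq-headless   # gives the pod a stable name, e.g. rabbitmq-0
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3
        ports:
        - containerPort: 5672
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq   # the mnesia database dir lives under here
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

With a stable pod name (rabbitmq-0) the mnesia directory name no longer changes across restarts, and the volume claim template keeps the data on a persistent volume.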
I was able to get it to work by setting the HOSTNAME environment variable. OSE normally sets that value to the pod name, so it changes every time the pod restarts. By setting it explicitly, the pod's hostname doesn't change when the pod restarts.
Combined with a persistent volume, the queues, messages, users and, I assume, whatever other configuration there is are persisted through pod restarts.
This was done on an OSE 3.2 server. I just added an environment variable to the deployment config. You can do it through the UI or with the oc CLI:
oc set env dc/my-rabbit HOSTNAME=some-static-name
This will probably be an issue if you run multiple pods for the service, but in that case you would need to set up proper RabbitMQ clustering, which is a whole different beast.
The easiest and production-safest way to run RabbitMQ on K8s including OpenShift is the RabbitMQ Cluster Operator.
See this video on how to deploy RabbitMQ on OpenShift.
I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I got some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went to read the authentication page of the documentation, and I decided I want to add authentication via a Static Password File. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I run ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and it contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API server was shut down).
After a reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
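Roughly, the steps were (the host name here is just my setup):

# copy the admin kubeconfig from the master to the laptop
scp root@master-node:/etc/kubernetes/admin.conf ~/.kube/config

# proxy the cluster API (and with it the dashboard) to localhost
kubectl proxy

# then open http://127.0.0.1:8001/ui in a browser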
I just found this for a similar use case, where the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
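For example, for a basic-auth file the relevant fragment of the manifest could look roughly like this (newer kubeadm versions write a YAML manifest named kube-apiserver.yaml; the /etc/kubernetes/auth path and file name are placeholders):

spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/auth/basic-auth.csv
    # ...existing flags...
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/auth   # make the host file visible inside the pod
      readOnly: true
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/auth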
We are testing OpenShift Origin (latest) for a PoC. We have researched pod scaling and it has worked very well for us. This talks about adding nodes, which can be done with Ansible/Puppet, but how does one completely automate it?
Using OpenShift, how does one achieve the following:
1. Create a new EC2 node when the current nodes are at capacity
(e.g. when a pod which is guaranteed certain resources cannot be scheduled; see the sketch below)
2. Add this node to the current cluster
3. Scale pods onto this node
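For reference, by "a pod which is guaranteed certain resources" I mean a spec where requests equal limits (the Guaranteed QoS class); the name, image and sizes below are just placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: big-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "4"
        memory: 8Gi
      limits:            # requests == limits -> Guaranteed QoS
        cpu: "4"
        memory: 8Gi

When no node has 4 free cores, this pod stays Pending, and that is exactly the point where we would like a new EC2 node to be created automatically.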