The solution for deploying the OMS Agent into a Kubernetes (OpenShift) cluster creates a ReplicaSet and a DaemonSet in the kube-system namespace. I am using the Helm method to deploy, and the -n parameter I specify gets ignored.
I suppose it's not a massive problem, but I would like to keep things tidy and namespaces easily identifiable, as we are a large team.
https://learn.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-hybrid-setup
The OMS agent is a managed component and should be part of the kube-system namespace. If it is moved to any other namespace, there will be issues with data collection and monitoring. This is by design at the moment, because we use the same agent for AKS and non-AKS.
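If you want to confirm where the agent components actually landed after the Helm install, a quick check like the following should show them in kube-system (the grep pattern is an assumption about the workload names):
# List the agent workloads across namespaces; expect them in kube-system
kubectl get deployment,daemonset --all-namespaces | grep -i omsagent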
I am a newbie with GKE. I created a GKE cluster with a very simple setup: it only has one GPU node, and everything else was left at the defaults. After the cluster was up, I was able to list the nodes and SSH into them. But I have two questions here.
I tried to install the NVIDIA driver using the command:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
It output:
daemonset.apps/nvidia-driver-installer configured
But 'nvidia-smi' cannot be found at all. Should I do something else to make it work?
On the worker node, there was no .kube directory and no 'config' file. I had to copy them from the master node to the worker node to make things work. And since the config file on the master node updates automatically, I have to copy it again and again. Did I miss some steps in the creation of the cluster, or how do I resolve this problem?
I would appreciate it if someone could shed some light on this. It has been driving me crazy after working on it for several days.
Tons of thanks.
Alex.
For the DaemonSet to work, your worker node needs to carry the cloud.google.com/gke-accelerator label. The DaemonSet checks for this label on a node before scheduling any driver-installer pods onto it. I'm guessing the default node pool you created did not have this label on it. You can find more details in the GKE docs on running GPUs.
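To check whether your nodes actually carry that label, a quick check like this should work:
# Show the accelerator label for every node; an empty column means the label is missing
kubectl get nodes -L cloud.google.com/gke-accelerator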
The worker nodes are, by design, just that: worker nodes. They do not need privileged access to the Kubernetes API, so they don't need any kubeconfig files. The communication between worker nodes and the API is strictly controlled through the kubelet binary running on each node. Therefore, you will never find kubeconfig files on a worker node, and you should never put them there either: if a node gets compromised, the keys in that file can be used to damage the API server. Instead, make it a habit to either use the master nodes for kubectl commands or, better yet, keep the kubeconfig on your local machine, keep it safe, and issue commands to your cluster remotely.
After all, all you need is access to an API endpoint for your Kubernetes API server, and it shouldn't matter where you access it from, as long as the endpoint is reachable. So, there is no need whatsoever to have kubeconfig on the worker nodes :)
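On GKE specifically, a common pattern is to pull the cluster credentials onto your local machine and manage the cluster remotely; the cluster name and zone below are placeholders:
# Write kubeconfig credentials for the cluster to your local machine
gcloud container clusters get-credentials my-gpu-cluster --zone us-central1-a
# Then issue commands remotely from your workstation
kubectl get nodes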
Kubernetes namespaces are configured before Spinnaker is even deployed, so Spinnaker should be able to deploy into them in a namespace-restricted enterprise environment. But this answer says Spinnaker will not run in that setting: Spinnaker with restricted namespace access
Why does Spinnaker require read access to namespaces when those names are already known to it?
Why does the error response contain the name of the namespace that it is trying to list?
I forked Halyard so that it uses client.pods().list() to verify the Kubernetes connection, and it is able to deploy Spinnaker. Spinnaker seems to work as long as it takes namespace names from its cache. When it uses live-manifest-calls or refreshes its cache, the namespace pulldowns stop working.
You don't actually need it; just proper configuration for Halyard and Spinnaker.
See the instructions:
Configure Spinnaker to install in Kubernetes
Important: This will by default limit your Spinnaker to deploying to the namespace specified. If you want to be able to deploy to other namespaces, either add a second cloud provider target or remove the --namespaces flag.
Use the Halyard hal command-line tool to configure Halyard to install Spinnaker in your Kubernetes cluster:
hal config deploy edit \
--type distributed \
--account-name ${ACCOUNT_NAME} \
--location ${NAMESPACE}
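If you do need to scope the deployment to specific namespaces (or add more later), the Kubernetes provider account is the place to do it. A sketch, assuming the hal account-edit command behaves as in current Halyard releases; the namespace names are placeholders:
hal config provider kubernetes account edit ${ACCOUNT_NAME} \
  --namespaces team-a,team-b    # restrict Spinnaker to these namespaces; omit the flag to allow all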
Objective
I want to access a Redis database running in Kubernetes from a function inside IBM Cloud Functions, using JavaScript.
Question
How do I get the right URI when Redis is running in a Pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes: This is the link to the sample in Kubernetes
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, no password is configured by default.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for the help... I know this is maybe too simple a question, but currently I can't see the forest for the trees ;-)
As far as I understand, no password is configured by default.
Yes, you are right: there is no default password in that Redis image.
If you follow the instructions you mentioned, you will use port forwarding, which forwards the port of your in-cluster Redis to your local machine by calling kubectl port-forward redis-master 6379:6379.
So in that case, Redis will be available at redis://localhost:6379 on your PC.
If you want to make it available directly from outside of the cluster, you need to create a Service of type NodePort, a Service of type LoadBalancer (if you are in a cloud), or a Service exposed through an Ingress.
Inside the cluster, you can create a Service with a cluster IP (which is actually simply a Service, because it always has a cluster IP) for your Redis pod, and it will be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is good official documentation about connecting applications with Services.
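As a concrete illustration of the in-cluster option, here is a minimal sketch of a ClusterIP Service for the Redis master; it assumes the pod carries the app=redis, role=master labels used in the sample you linked:
apiVersion: v1
kind: Service
metadata:
  name: redis-master        # in-cluster DNS name becomes redis-master.<namespace>.svc.cluster.local
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: 6379
With that in place, clients inside the cluster would reach Redis at redis://redis-master:6379 (no user or password in the default setup).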
Take this famous guestbook example:
https://github.com/kubernetes/examples/tree/master/guestbook
It creates the Redis master/slave deployments and services. It also has a subfolder named redis-slave, which is used to build a Docker image and run the Redis replication command.
Dockerfile
run.sh
The question is: once the Redis master and slave are deployed to the k8s cluster, how is that command run? By deploying a new container? That would not be related to the slave container already deployed.
Is there a better way to do Redis replication between a master and slaves running in a k8s cluster?
One option you have is using Helm to deploy the redis-ha chart.
Info about helm: https://github.com/kubernetes/helm
The redis-ha helm app page: https://hub.kubeapps.com/charts/stable/redis-ha
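A sketch of what that would look like with the Helm 2 CLI, assuming the stable repository referenced above is configured; the release name is a placeholder:
# Deploy the redis-ha chart from the stable repository
helm install --name my-redis stable/redis-ha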
Redis Sentinel is often suggested for simple master-slave replication and high availability.
Unfortunately, Sentinel does not fit the Kubernetes world well, and it also requires a Sentinel-aware client to talk to Redis.
You could try the Redis operator, which can be considered a Kubernetes-native replacement for Sentinel and lets you create a Redis deployment that survives most kinds of failures without human intervention.
Here is how you can set up a Redis HA master-slave cluster in Kubernetes/OpenShift OKD.
Basically, you have to use a ConfigMap and a StatefulSet in combination with volume claims:
https://reachmnadeem.wordpress.com/2020/10/01/redis-ha-master-slave-cluster-up-and-running-in-openshift-okd-kubernetes/
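As a rough sketch of the StatefulSet piece of that approach (the ConfigMap name, image tag, and storage size are placeholder assumptions; the replication settings themselves would live in the mounted redis.conf):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6.2
        command: ["redis-server", "/etc/redis/redis.conf"]   # config comes from the ConfigMap
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: config
          mountPath: /etc/redis
        - name: data
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: redis-config        # assumed ConfigMap holding redis.conf
  volumeClaimTemplates:             # one persistent volume per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi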
For end-to-end DevOps automation I want to have an environment on demand. For this I need to spin up an environment on Kubernetes, which is in turn hosted on GCP.
My Use case
1. Developer checks in the code on a feature branch
2. Environment is spun up on Google Cloud with Kubernetes
3. Application gets deployed on Kubernetes
4. Application gets tested and then the environment gets destroyed.
I am able to do everything with Spinnaker except #2, i.e. creating a Kubernetes cluster on GCP using Spinnaker.
Any help please
Thanks,
Amol
I'm not sure Spinnaker was meant for doing what you describe in the second point of your list. Spinnaker assumes a collection of resources (VMs or a Kubernetes cluster) and then works with that. So instead of spinning up a new GKE cluster, Spinnaker makes use of existing clusters. I think it'd be better (for your costs as well ;) if you separate the environments using Kubernetes namespaces.
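A sketch of that namespace-per-feature-branch idea, driven from a pipeline step; the namespace name and manifest directory are placeholders:
# Create a throwaway namespace for the feature branch
kubectl create namespace feature-xyz
# Deploy the application manifests into it
kubectl apply -n feature-xyz -f k8s/
# ...run the tests against the deployed app...
# Tear the whole environment down when the pipeline finishes
kubectl delete namespace feature-xyz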