How do you create an autoscaling GCE server group in Spinnaker? The Capacity section only has a number of instances, and I cannot see anything that mentions autoscaling. This should be possible, since GCE instance groups can be configured to autoscale.
When you select a GCE server group, the right-hand side details pane has a link to 'Create new autoscaling policy' in the Autoscaling section.
Once you create the policy, you can edit it via the same set of controls.
A configured autoscaler will be inherited when you clone a server group as well.
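For reference, the policy created here maps to a regular GCE autoscaler attached to the server group's managed instance group, so you can also inspect or adjust it with gcloud. A rough sketch (the group name, zone, and thresholds below are made-up placeholders):

# Inspect the autoscaler attached to the server group's managed instance group
gcloud compute instance-groups managed describe myapp-dev-v003 --zone us-central1-f

# Roughly what the 'Create new autoscaling policy' controls configure
gcloud compute instance-groups managed set-autoscaling myapp-dev-v003 \
  --zone us-central1-f \
  --min-num-replicas 2 \
  --max-num-replicas 10 \
  --target-cpu-utilization 0.6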
Thanks,
-Matt
Related
I have an EC2 instance in region A and an EKS cluster in region B. The EKS worker nodes need access to a port exposed by the EC2 instance, and I am manually maintaining the EC2 security group that decides which public IPs (the EKS worker IPs) can reach it. The issue is that I need to update the EC2 security group by hand every time the EKS node group scales or is upgraded; there should be a smarter way. I have some ideas. Can anyone give some guidance or best practices?
Solution 1: use a scheduled Lambda job to monitor the auto scaling group behind the EKS node group, then update the security group (a sketch follows below).
Solution 2: in Kubernetes, watch for node changes and update the security group from inside the cluster using an OIDC-backed IAM role.
Note: the EC2 instance and the EKS cluster are in different regions.
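For solution 1, a minimal sketch of what the scheduled job would run, assuming a managed node group (which tags its instances with eks:nodegroup-name); the security group ID, port, node group name, regions, and IP below are made-up placeholders:

# List the public IPs of the running EKS worker nodes (region B)
aws ec2 describe-instances --region eu-west-1 \
  --filters "Name=tag:eks:nodegroup-name,Values=my-nodegroup" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" --output text

# For each IP returned, allow it in the EC2 instance's security group (region A);
# the job would also need to revoke rules for nodes that no longer exist
aws ec2 authorize-security-group-ingress --region us-east-1 \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 203.0.113.10/32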
We have created our Kubernetes cluster with Advanced Networking via the Azure Management UI.
Some time later we've run into the limitation of pods per node described here:
https://learn.microsoft.com/fi-fi/azure/aks/container-service-quotas
We need to change the limit of 30 pods per node, as it is a very puzzling restriction for us. Before advanced networking was available at all there was no such limitation, and it was still undocumented at the moment we created the cluster. Could someone explain how to change the maximum pod count without recreating the whole cluster?
Regards, Gena
You cannot change the max number of pods per node on an existing cluster.
https://learn.microsoft.com/fi-fi/azure/aks/networking-overview#configure-maximum---existing-clusters
You will need to redeploy a new cluster and specify the new max number during provisioning.
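For completeness, a rough sketch of that redeploy with a higher limit baked in; the resource group, cluster name, and node count are made-up placeholders, and advanced networking also needs the usual subnet configuration:

az aks create -g my-resource-group -n my-aks-cluster --network-plugin azure --max-pods 60 --node-count 3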
Late to the party, but with a different solution.
As of June 2021 there is no need to redeploy the cluster; we can add a new node pool with the required pods-to-node ratio.
The dialog box allows us to set new values, and then we can use node labels to redirect pods or just shut down the previous pool.
You can create a new nodepool and set max-pods:
az aks nodepool add --name mypool --node-vm-size Standard_E4s_v3 --node-count 1 -g AKS-resource-name --cluster-name cluster-name --max-pods 100
https://learn.microsoft.com/en-us/cli/azure/aks/nodepool?view=azure-cli-latest
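If you go the new-pool route, a rough sketch of moving workloads over and retiring the old pool (the node, pool, and cluster names are placeholders; older kubectl versions use --delete-local-data instead of --delete-emptydir-data):

# Cordon and drain each node in the old pool so pods reschedule onto the new one
kubectl cordon old-pool-node-0
kubectl drain old-pool-node-0 --ignore-daemonsets --delete-emptydir-data

# Then remove the old pool entirely
az aks nodepool delete --name oldpool -g AKS-resource-name --cluster-name cluster-name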
No one mentioned it, so I will.
Be careful if you are using Azure CNI and want to set a high max-pods per node.
Azure CNI reserves IP addresses in your subnet up front for every pod a node can run, so this value should be calculated carefully to avoid IP exhaustion issues.
See https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni
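As a rough illustration of the sizing math (assuming Azure CNI pre-allocates roughly one subnet IP per pod slot plus one per node; the numbers are made up):

# Hypothetical example: 3 nodes created with --max-pods 100
nodes=3
max_pods=100
echo "IPs reserved: $(( nodes * (max_pods + 1) ))"   # 303, so a /24 subnet (251 usable IPs) is already too small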
I have a running Kubernetes cluster on Google Cloud Platform.
I want to deploy a postgres image to my cluster.
When selecting the image and my cluster, I get the error:
insufficient OAuth scope
I have been reading about it for a few hours now and couldn't get it to work.
I managed to set the scope of the VM to allow all Cloud APIs:
Cloud API access scopes
Allow full access to all Cloud APIs
But from the GKE cluster details, I see that everything is disabled except Stackdriver.
Why is it so difficult to deploy an image or to change the scope?
How can I modify the cluster permissions without deleting and recreating it?
The easiest way is to delete and recreate the cluster, because there is no direct way to modify the scopes of an existing cluster. However, there is a workaround: create a new node pool with the correct scopes and make sure to delete the old node pools. The cluster's scopes will change to reflect the new node pool.
More details can be found in this post.
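A rough sketch of that node-pool workaround with gcloud; the cluster, zone, and pool names below are made-up placeholders:

# Create a replacement pool with broader scopes
gcloud container node-pools create new-pool \
  --cluster my-cluster --zone us-central1-a \
  --num-nodes 3 --scopes cloud-platform

# Once the workloads have rescheduled onto it, remove the old pool
gcloud container node-pools delete default-pool \
  --cluster my-cluster --zone us-central1-a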
Objective
I want to access the Redis database in Kubernetes from a function inside IBM Cloud Functions, using JavaScript.
Question
How do I get the right URI when Redis is running in a Pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes. This is the link to the sample in the Kubernetes documentation.
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, no password is configured by default.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for the help ... I know this is maybe too simple a question, but currently I cannot see the forest for the trees ;-)
As far as I understand, no password is configured by default.
Yes, you are right; there is no default password in that Redis image.
If you follow the instructions you mentioned, you will use kubectl port forwarding, which forwards the port of your in-cluster Redis to your local machine when you call kubectl port-forward redis-master 6379:6379.
So in that case, Redis will be available at redis://localhost:6379 on your PC.
If you want to make it available directly from outside of the cluster, you need to create a Service of type NodePort, a Service of type LoadBalancer (if you are in a cloud), or simply a Service behind an Ingress.
Inside the cluster, you can create a Service with a ClusterIP (which is actually just a plain Service, because a Service always has a ClusterIP) for your Redis pod, and it will be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is the good official documentation about connecting applications with Services.
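A minimal sketch of both options, assuming the pod from the sample is named redis-master and listens on 6379 (the Service names are made-up placeholders):

# In-cluster only: expose the Redis pod as a plain ClusterIP Service
kubectl expose pod redis-master --port=6379 --name=redis-service
# other pods can then reach it at redis://redis-service.default.svc.cluster.local:6379

# Reachable from outside the cluster (e.g. from IBM Cloud Functions): use a NodePort Service
kubectl expose pod redis-master --port=6379 --type=NodePort --name=redis-external
kubectl get service redis-external   # note the mapped node port, then use redis://WORKER-NODE-PUBLIC-IP:NODE-PORT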
For end-to-end DevOps automation I want to have an environment on demand. For this I need to spin up an environment on Kubernetes, which in turn is hosted on GCP.
My Use case
1. Developer Checks in the code in feature branch
2. Environment is spun up on Google Cloud with Kubernetes
3. Application gets deployed on Kubernetes
4. Gets tested and then the environment gets destroyed.
I am able to do everything with Spinnaker except #2, i.e. creating the Kubernetes cluster on GCP using Spinnaker.
Any help please
Thanks,
Amol
I'm not sure Spinnaker was meant for doing what the second point in your list describes. Spinnaker assumes a collection of resources (VMs or a Kubernetes cluster) and then works with that, so instead of spinning up a new GKE cluster, Spinnaker makes use of existing clusters. I think it'd be better (for your costs as well ;) if you separate the environments using Kubernetes namespaces.
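A rough sketch of that namespace-per-branch approach on the existing cluster; the namespace and manifest path are made-up placeholders:

# Spin up an isolated environment for the feature branch
kubectl create namespace feature-my-branch

# Deploy and test the application inside it (Spinnaker's deploy stage can target this namespace)
kubectl apply -n feature-my-branch -f k8s/

# Tear the whole environment down afterwards
kubectl delete namespace feature-my-branch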