Unable to create AKS cluster in westeurope location - azure-container-service

Trying to set up an AKS cluster in the westeurope location using this guide, but it keeps failing at this step.
When executing this command:
az aks create --location westeurope --resource-group <myResourceGroup> --name <myAKSCluster> --node-count 1 --generate-ssh-keys
I continuously get the following error message:
Operation failed with status: 'Bad Request'. Details: The VM size of Agent is not allowed in your subscription in location 'westeurope'. Agent VM size 'Standard_DS1_v2' is available in locations: australiaeast,australiasoutheast,brazilsouth,canadacentral,canadaeast,centralindia,centralus,centraluseuap,eastasia,eastus,eastus2euap,japaneast,japanwest,koreacentral,koreasouth,northcentralus,northeurope,southcentralus,southindia,uksouth,ukwest,westcentralus,westindia,westus,westus2.
Even when I explicitly set the VM size to a different type of VM I still get a similar error. For example:
az aks create --location westeurope --resource-group <myResourceGroup> --name <myAKSCluster> --node-vm-size Standard_B1s --node-count 1 --generate-ssh-keys
results in:
Operation failed with status: 'Bad Request'. Details: The VM size of Agent is not allowed in your subscription in location 'westeurope'. Agent VM size 'Standard_B1s' is available in locations: australiaeast,australiasoutheast,brazilsouth,canadacentral,canadaeast,centralindia,centralus,centraluseuap,eastasia,eastus,eastus2euap,japaneast,japanwest,koreacentral,koreasouth,northcentralus,northeurope,southcentralus,southindia,uksouth,ukwest,westcentralus,westindia,westus,westus2.
It looks like creating an AKS cluster in westeurope is forbidden / not possible at all. Has anybody created a cluster in this location successfully?

This is a common problem at the moment for westeurope; it looks like a bug in Azure AKS. The VMs can be created through "Virtual machines" but not through AKS.
Here is a different thread on this topic: https://github.com/Azure/AKS/issues/280

You just need to add --node-vm-size Standard_D2s_v3 to your command; that resolved my issue.
Note: you need to pass a size that your region actually supports. For example, my region (West US) supports Standard_D16ads_v5. The error message returned by the command in the question lists the available VM sizes.
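If you want to check ahead of time which VM sizes a region offers, the Azure CLI can list them; a minimal sketch (the sizes returned vary by region, and the list does not always reflect per-subscription restrictions):
az vm list-sizes --location westeurope --output table
Pick one of the returned names and pass it via --node-vm-size.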

Related

Trivy Scan with Openshift internal registry | how to authenticate against openshift registry with trivy

I am currently using the Trivy scanner to scan images in the pipeline. This has worked very well until now, but recently it has become necessary to scan an image from an internal OpenShift registry.
Unfortunately, I do not know how to authenticate Trivy against the internal registry. The documentation does not give any information regarding OpenShift; it only covers Azure, AWS, and GitHub.
My scan command currently looks like this in groovy:
trivy image --ignore-unfixed --format template --template \"path for output\" --output trivy_image_report.html --skip-update --offline-scan $image
Output:
INFO Vulnerability scanning is enabled
INFO Secret scanning is enabled
INFO If your scanning is slow, please try '--security-checks vuln' to disable secret scanning
INFO Please see also https://aquasecurity.github.io/trivy/v0.31.3/docs/secret/scanning/#recommendation for faster secret detection
FATAL image scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:
* unable to inspect the image (openshiftregistry/namespace/imagestream:tag): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
* containerd socket not found: /run/containerd/containerd.sock
* GET https://openshiftregistry/v2/namespace/imagestream/manifests/tag: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:namespace/imagestream Type:repository]]
The image is stored within an ImageStream in OpenShift. Is there something I can add to the Trivy command to authenticate against the registry, or is there something else that has to be done before I use the command in Groovy?
Thanks for the help
Thanks to Will Gordon in the comments. This link was very helpful: Access the Registry (OpenShift).
These lines helped me (more information can be found on the linked site):
oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443
And
podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Thanks
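As an alternative to going through podman, Trivy itself accepts registry credentials via environment variables; a minimal sketch, assuming the in-cluster registry hostname used above and that your OpenShift token grants pull access:
export TRIVY_USERNAME=kubeadmin
export TRIVY_PASSWORD=$(oc whoami -t)
trivy image --skip-update --offline-scan image-registry.openshift-image-registry.svc:5000/namespace/imagestream:tag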

How to connect to an MSK cluster from an EKS cluster

I am having difficulties connecting to my MSK cluster from my EKS cluster even though both clusters share the same VPC and the same subnets.
The security group used by the MSK cluster has the following inbound rules
type          protocol   port range   source
all traffic   all        all          custom: SG_ID
all traffic   all        all          anywhere IPv4: 0.0.0.0/0
Where SG_ID is the EKS cluster's security group (the one labeled "EKS created security group applied...").
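For completeness, the equivalent of that first rule expressed with the AWS CLI, narrowed to the plaintext broker port (sg-MSK and sg-EKS are placeholders for the MSK and EKS cluster security group IDs):
aws ec2 authorize-security-group-ingress --group-id sg-MSK --protocol tcp --port 9092 --source-group sg-EKS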
In the EKS cluster, I am using the following commands to test connectivity:
kubectl run kafka-consumer \
-ti \
--image=quay.io/strimzi/kafka:latest-kafka-2.8.1 \
--rm=true \
--restart=Never \
-- bin/kafka-topics.sh --create --topic test --bootstrap-server b-1.test.z35y0w.c4.kafka.us-east-1.amazonaws.com:9092 --replication-factor 2 --partitions 1 --if-not-exists
With the following result
Error while executing topic command : Call(callName=createTopics, deadlineMs=1635906680860, tries=1, nextAllowedTryMs=1635906680961) timed out at 1635906680861 after 1 attempt(s)
[2021-11-03 02:31:20,865] ERROR org.apache.kafka.common.errors.TimeoutException: Call(callName=createTopics, deadlineMs=1635906680860, tries=1, nextAllowedTryMs=1635906680961) timed out at 1635906680861 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: createTopics
(kafka.admin.TopicCommand$)
pod "kafka-consumer" deleted
pod default/kafka-consumer terminated (Error)
Sadly, the second bootstrap server displayed on the MSK page gives the same result.
nc eventually times out:
kubectl run busybox -ti --image=busybox --rm=true --restart=Never -- nc b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com
nslookup fails as well
kubectl run busybox -ti --image=busybox --rm=true --restart=Never -- nslookup b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com
If you don't see a command prompt, try pressing enter.
*** Can't find b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com: No answer
Could anyone please give me a hint?
Thanks
I needed to connect to MSK from my EKS pod, so I searched the docs. I want to share my solution in the hope that it helps others.
This is my config file:
root@kain:~/work# cat kafkaconfig
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
This is my command:
./kafka-topics.sh --list --bootstrap-server <My MSK bootstrap server>:9098 --command-config ./kafkaconfig
For this command to work there are two preconditions:
First, you need access to AWS MSK (I access MSK from my EKS pod, and my EKS pod has an OIDC role that gives it access to AWS).
Second, you need the AWS MSK IAM auth JAR file, aws-msk-iam-auth.jar, from https://github.com/aws/aws-msk-iam-auth/releases. Put it in the Kafka client's libs directory, or export CLASSPATH=/aws-msk-iam-auth-1.1.4-all.jar.
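Putting it together, a minimal sketch of a full invocation (the JAR path and the bootstrap host are placeholders):
export CLASSPATH=/path/to/aws-msk-iam-auth-1.1.4-all.jar
./kafka-topics.sh --create --topic test --partitions 1 --replication-factor 2 --bootstrap-server <My MSK bootstrap server>:9098 --command-config ./kafkaconfig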
reference doc: https://aws.amazon.com/blogs/big-data/securing-apache-kafka-is-easy-and-familiar-with-iam-access-control-for-amazon-msk/

How to access an EKS cluster from a local machine

I have created an EKS cluster and I am able to run kubectl commands from my EC2 instance. I then copied the config file from ~/.kube/config to my local machine, but there I am not able to run kubectl commands and I get an authentication error.
What is the right way to access an EKS cluster from a local machine?
Try looking into the users section in ~/.kube/config: check the user under the name of the cluster, and make sure your local machine has the same working AWS profile as the EC2 instance.
...
command: aws
env:
- name: AWS_PROFILE
  value: <make sure this entry is valid on your local machine>
If this doesn't work, please briefly describe in your question how you configured kubeconfig on the EC2 instance.
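If the profile looks right and it still fails, regenerating the kubeconfig locally with the AWS CLI is usually the simplest fix; a sketch, assuming your local credentials belong to a principal that is allowed to access the cluster (region and name are placeholders):
aws eks update-kubeconfig --region <region> --name <cluster-name>
kubectl get nodes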

How can I set up the kube-apiserver to allow kubectl from outside the cluster

I have a single-master, multi-node Kubernetes cluster running. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorisation documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations for all the Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node, the easiest way is to copy it to $HOME/.kube/config on your local machine.
You can choose another location and then specify it with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command line parameter instead.
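For example, a minimal sketch of copying the file and checking that it works (the host and paths are assumptions):
scp root@<master-ip>:/root/.kube/config $HOME/.kube/config
kubectl get nodes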
Cloud providers often let you download the config to your local machine from their web interface or with a cloud management command.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
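For newer AKS clusters (rather than ACS), the analogous command should be:
az aks get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>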
If the cluster was created using Kops utility, you could get the config file by:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to the C:\Users\.kube location on your laptop.
kubectl will pick up the certificate from the config file automatically.
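To verify, you can point kubectl at that file explicitly (the path is an assumption based on a default Windows user profile):
kubectl --kubeconfig C:\Users\<username>\.kube\config get nodes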

SSH into Kubernetes cluster running on Amazon

Created a 2-node Kubernetes cluster with:
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh
This shows the output as:
Found 2 node(s).
NAME STATUS AGE
ip-172-20-0-226.us-west-2.compute.internal Ready 57s
ip-172-20-0-227.us-west-2.compute.internal Ready 55s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.33.9.1
Elasticsearch is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
I can see the instances in the EC2 console. How do I SSH into the master node?
Here is the exact command that worked for me:
ssh -i ~/.ssh/kube_aws_rsa admin@<masterip>
kube_aws_rsa is the default key that gets generated; otherwise the key is controlled with the AWS_SSH_KEY environment variable. For AWS, it is specified in the file cluster/aws/config-default.sh.
More details about the cluster can be found using kubectl.sh config view.
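If you would rather find the master's public IP from the CLI than from the EC2 console, something like this should work (the tag filter is an assumption about how kube-up names the master instance):
aws ec2 describe-instances --filters "Name=tag:Name,Values=*master*" --query "Reservations[].Instances[].PublicIpAddress" --output text
ssh -i ~/.ssh/kube_aws_rsa admin@<master-ip>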
"Creates an AWS SSH key named kubernetes-. Fingerprint here is the OpenSSH key fingerprint, so that multiple users can run the script with different keys and their keys will not collide (with near-certainty). It will use an existing key if one is found at AWS_SSH_KEY, otherwise it will create one there. (With the default Ubuntu images, if you have to SSH in: the user is ubuntu and that user can sudo"
https://github.com/kubernetes/kubernetes/blob/master/docs/design/aws_under_the_hood.md
You should see the SSH key fingerprint locally in your SSH config, or set the environment variable (AWS_SSH_KEY) and recreate the cluster.
If you are bringing up your cluster on AWS with kops and using CoreOS as your image, then the login name would be "core".