Adding vnet rule through AZ cli fails with 500 errors - azure-container-service

I'm experimenting with Azure AKS, and I'm running into problems with adding a vnet rule for my SQL server via the Azure CLI. It dies with an error:
Error occurred in request., RetryError:
HTTPSConnectionPool(host='management.azure.com', port=443): Max
retries exceeded with url:
/subscriptions/...path omitted.../mysql/virtualNetworkRules/my-vnet-rule?api-version=2015-05-01-preview
(Caused by ResponseError('too many 500 error responses',))
This is what I've done so far:
az group create --name myrg --location centralus
az aks create -n mycluster -g myrg --generate-ssh-keys
az aks get-credentials -g myrg -n mycluster
az sql server create --name mysql -g myrg --location centralus --admin-user myuser --admin-password mypassword
At this point I end up with two RGs, one named "myrg" and one named "MC_myrg_mycluster_centralus". My SQL server is in "myrg", and there is a vnet "aks-vnet-1234567" in the MC_* group. The vnet contains a subnet "aks-subnet".
I then try to add the vnet rule:
az sql server vnet-rule create --name my-vnet-rule --server mysql --vnet "MC_myrg_mycluster_centralus/aks-vnet" -g myrg --subnet "aks-subnet"
And get the error above.
I also tried specifying the vnet including the numeric postfix (e.g. aks-vnet-1234567), but got the same error.
This probably means I'm not using the right syntax somewhere. Could someone clarify?
AZ CLI 2.0.21
Linux (Ubuntu)

I solved it this way:
Before the rule can be created, I needed to add Microsoft.Sql to the subnet's service endpoints:
az network vnet subnet update -n aks-subnet -g myrg --vnet-name aks-vnet-xxx --service-endpoints "Microsoft.Sql"
Then I reworked the command to pass the full subnet resource ID to --subnet instead of using --subnet NAME together with --vnet-name. It should probably be doable with the previous syntax as well.
The subnet ID will be something like /subscriptions/.../resourceGroups/.../aks-subnet
Your rule should now be created. You can also pass -i to ignore missing service endpoints during rule creation, but I believe that will end up with a disabled rule.
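For illustration, a sketch of the reworked call; the subscription ID and the full resource path are placeholders, not values taken from the question:
az sql server vnet-rule create --name my-vnet-rule --server mysql -g myrg \
  --subnet "/subscriptions/<subscription-id>/resourceGroups/MC_myrg_mycluster_centralus/providers/Microsoft.Network/virtualNetworks/aks-vnet-1234567/subnets/aks-subnet"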

Related

How to connect to MSK cluster from EKS cluster

I am having difficulties connecting to my MSK cluster from my EKS cluster even though both clusters share the same VPC and the same subnets.
The security group used by the MSK cluster has the following inbound rules:
type        | protocol | port range | source
all traffic | all      | all        | custom: SG_ID
all traffic | all      | all        | anywhere IPv4: 0.0.0.0/0
Where SG_ID is the EKS cluster security group (the one labeled "EKS created security group applied...").
In the EKS cluster, I am using the following commands to test connectivity:
kubectl run kafka-consumer \
-ti \
--image=quay.io/strimzi/kafka:latest-kafka-2.8.1 \
--rm=true \
--restart=Never \
-- bin/kafka-topics.sh --create --topic test --bootstrap-server b-1.test.z35y0w.c4.kafka.us-east-1.amazonaws.com:9092 --replication-factor 2 --partitions 1 --if-not-exists
With the following result
Error while executing topic command : Call(callName=createTopics, deadlineMs=1635906680860, tries=1, nextAllowedTryMs=1635906680961) timed out at 1635906680861 after 1 attempt(s)
[2021-11-03 02:31:20,865] ERROR org.apache.kafka.common.errors.TimeoutException: Call(callName=createTopics, deadlineMs=1635906680860, tries=1, nextAllowedTryMs=1635906680961) timed out at 1635906680861 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: createTopics
(kafka.admin.TopicCommand$)
pod "kafka-consumer" deleted
pod default/kafka-consumer terminated (Error)
Sadly, the second bootstrap server displayed on the MSK page gives the same result.
nc eventually times out
kubectl run busybox -ti --image=busybox --rm=true --restart=Never -- nc b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com
nslookup fails as well
kubectl run busybox -ti --image=busybox --rm=true --restart=Never -- nslookup b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com
If you don't see a command prompt, try pressing enter.
*** Can't find b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com: No answer
Could anyone please give me a hint?
Thanks
I needed to connect to MSK from my EKS pod, so I searched the docs. I want to share my solution in the hope that it helps others.
This is my config file:
root#kain:~/work# cat kafkaconfig
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
This is my command:
./kafka-topics.sh --list --bootstrap-server <My MSK bootstrap server>:9098 --command-config ./kafkaconfig
For this command to work, there are two preconditions:
One is that you have access to AWS MSK (I access MSK from my EKS pod, and my EKS pod has an OIDC identity that grants it access to AWS).
The second is that you have the AWS auth jar file aws-msk-iam-auth.jar, available from https://github.com/aws/aws-msk-iam-auth/releases
Put it in the Kafka client's libs directory, or export CLASSPATH=/aws-msk-iam-auth-1.1.4-all.jar
reference doc: https://aws.amazon.com/blogs/big-data/securing-apache-kafka-is-easy-and-familiar-with-iam-access-control-for-amazon-msk/
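Putting the pieces together, a minimal sketch run from inside the pod; the jar version, paths, and bootstrap host are placeholders, so check the releases page for the current file name:
# Fetch the IAM auth jar and put it on the Kafka client's classpath
curl -LO https://github.com/aws/aws-msk-iam-auth/releases/download/v1.1.4/aws-msk-iam-auth-1.1.4-all.jar
export CLASSPATH=$PWD/aws-msk-iam-auth-1.1.4-all.jar
# List topics over the IAM-authenticated listener (port 9098) using the config file above
./kafka-topics.sh --list --bootstrap-server <My MSK bootstrap server>:9098 --command-config ./kafkaconfig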

How do I get to my spinnaker dashboard after Installing minnaker on my aws ec2

I installed Spinnaker on my AWS EC2 instance and logged into the dashboard the first time, but immediately after, when I logged out and logged in again using the same base URL, I was directed to a different person's GitHub account. What might have happened? Does it mean my account is hacked? Somebody advise please.
I am being directed to the link attached below instead of the IP address taking me to the Spinnaker dashboard, even though I am using the correct base address.
These are the instructions I follow for Minnaker on EC2 (ap-southeast-2):
Pre-requisites
Obtain an AWS Elastic IP
From the AWS EC2 console choose a region (preferably ap-southeast-2) and
launch an EC2 instance with at least 16 GB memory, 4 CPUs and 60 GB disk.
An initial deployment can be performed using an m4.xlarge instance.
Attach the AWS Elastic IP to the Spinnaker Instance
Access the instance through SSH
Get minnaker
curl -LO https://github.com/armory/minnaker/releases/latest/download/minnaker.tgz
Untar
tar -xzvf minnaker.tgz
Go to minnaker directory
cd minnaker
Use the public IP value from the Elastic IP as $PUBLIC_IP.
Obtain the private IP of the instance with hostname -I, and add both to local environment variables:
export PRIVATE_IP=$(hostname -I)
export PUBLIC_IP=AWS_ELASTIC_IP
Execute the command below to install Open Source Spinnaker
./scripts/install.sh -o -P $PRIVATE_IP
Validate installation
UI
Validate the installation by going to the generated URL https://PUBLIC_IP
Use the user admin and get the password from /etc/spinnaker/.hal/.secret/spinnaker_password
The UI should load
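For example, the admin password can be read on the instance, assuming the default Minnaker secret location noted above:
sudo cat /etc/spinnaker/.hal/.secret/spinnaker_password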
Kubernetes Deployment
Minnaker is deployed inside an EC2 as a lightweight Kubernetes K3S cluster
Run kubectl version
Get info from cluster kubectl cluster-info
Tweak bash completion and enable a simple alias.
kubectl completion bash
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
Validate Spinnaker is running
k -n spinnaker get pods -o wide
Halyard Config
Validate that a default Halyard config has been set up:
sudo chmod 755 /usr/local/bin/hal
#!/bin/bash
set -x
HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
kubectl -n spinnaker exec -it ${HALYARD} -- hal "$@"
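With that wrapper saved as /usr/local/bin/hal, hal invocations from the host are forwarded into the Halyard pod, for example:
hal version list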
Minnaker repo
Clone the repository: git clone https://github.com/armory/minnaker
Go to the scripts directory: cd minnaker/scripts
Add permissions to the installation script: chmod 775 all.sh
References
armory/minnaker

How can I setup kubeapi server to allow kubectl from outside the cluster

I have a single master, multinode kubernetes going. It works great. However I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorization documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations for all Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node,
the easy way is to copy it to $HOME/.kube/config on your local machine.
You can choose another location, and then specify it with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command line parameter instead.
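For example, with the path used above:
kubectl get nodes --kubeconfig /etc/kubernetes/config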
Cloud providers often give you the possibility to download the config to your local machine from the web interface or via their management CLI.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to C:\Users\<your user>\.kube on your laptop.
kubectl will pick up the certificate from the config file automatically.
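A minimal sketch of that copy step on a Linux/macOS laptop; the master host is a placeholder, and on Windows the target path would be C:\Users\<your user>\.kube\config instead:
# Copy the admin kubeconfig from the master, then test the connection
scp root@<master-ip>:/root/.kube/config ~/.kube/config
kubectl get nodes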

Unable to create AKS cluster in westeurope location

Trying to set up an AKS cluster using this guide in the westeurope location, but it keeps failing at this step.
When executing this command az aks create --location westeurope --resource-group <myResourceGroup> --name <myAKSCluster> --node-count 1 --generate-ssh-keys
I continuously get the following error message:
Operation failed with status: 'Bad Request'. Details: The VM size of Agent is not allowed in your subscription in location 'westeurope'. Agent VM size 'Standard_DS1_v2' is available in locations: australiaeast,australiasoutheast,brazilsouth,canadacentral,canadaeast,centralindia,centralus,centraluseuap,eastasia,eastus,eastus2euap,japaneast,japanwest,koreacentral,koreasouth,northcentralus,northeurope,southcentralus,southindia,uksouth,ukwest,westcentralus,westindia,westus,westus2.
Even when I explicitly set the VM size to a different type of VM I still get a similar error. For example:
az aks create --location westeurope --resource-group <myResourceGroup> --name <myAKSCluster> --node-vm-size Standard_B1s --node-count 1 --generate-ssh-keys
results in:
Operation failed with status: 'Bad Request'. Details: The VM size of Agent is not allowed in your subscription in location 'westeurope'. Agent VM size 'Standard_B1s' is available in locations: australiaeast,australiasoutheast,brazilsouth,canadacentral,canadaeast,centralindia,centralus,centraluseuap,eastasia,eastus,eastus2euap,japaneast,japanwest,koreacentral,koreasouth,northcentralus,northeurope,southcentralus,southindia,uksouth,ukwest,westcentralus,westindia,westus,westus2.
It looks like creating an AKS cluster in westeurope is forbidden / not possible at all. Has anybody created a cluster in this location successfully?
This is a common problem at the moment for westeurope; it looks like a bug in Azure AKS. The VMs can be created through "Virtual machines" but not AKS.
Here is a different thread on this topic: https://github.com/Azure/AKS/issues/280
You just need to add --node-vm-size Standard_D2s_v3 to your command. It resolved my issue.
Note: you need to pass a VM size that is supported in your region; for example, my region WestUS supports Standard_D16ads_v5. The command in the question will return the available VM sizes in the exception message.
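For example, a sketch of the adjusted command from the question; pick a size that the error message lists as available for your subscription and region:
az aks create --location westeurope --resource-group <myResourceGroup> --name <myAKSCluster> --node-vm-size Standard_D2s_v3 --node-count 1 --generate-ssh-keys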

Azure ACS - Kubernetes inter-pod communication

I've made an ACS instance.
az acs create --orchestrator-type=kubernetes \
--resource-group $group \
--name $k8s_name \
--dns-prefix $kubernetes_server \
--generate-ssh-keys
az acs kubernetes get-credentials --resource-group $group --name $k8s_name
I ran helm init and it provisioned the tiller pod fine. I then ran helm install stable/redis and got a Redis deployment up and running (seemingly).
I can kubectl exec -it into the Redis pod and can see it's binding on 0.0.0.0. I can log in with redis-cli -h localhost and redis-cli -h <pod_ip>, but not redis-cli -h <service_ip> (from kubectl get svc).
If I run up another pod (which is how I ran into this issue) I can ping redis.default and it shows the DNS resolving to the correct service IP but gives no response. When I telnet <service_ip> 6379 or redis-cli -h <service_ip> it hangs indefinitely.
I'm at a bit of a loss as to how to debug further. I can't ssh into the node to see what docker is doing.
Also, I'd initially tried this with a standard Alpine Redis image, so the Helm chart was a fallback. I tried it yesterday and the Helm one worked but the manual one didn't; today, on a newly built ACS cluster, it's not working with either.
I'm going to spin up the cluster again to see if it reproduces consistently, but I'm pretty confident something fishy is going on.
PS - I have a VNet with an overlapping subnet (10.0.0.0/16) in a different region; when I go into the address range I do get a warning there that there is a clash. Could that affect it?
<EDIT>
Some new insight... It's something to do with Alpine-based images (which we've been aiming to use).
So kubectl run a --image=nginx (which is Ubuntu-based) works: I can shell in, install telnet and connect to the Redis service.
But with, e.g., kubectl run c --image=rlesouef/alpine-redis, I shell in and telnet to the same Redis service doesn't work.
</EDIT>
There was a similar issue https://github.com/Azure/acs-engine/issues/539 that has been fixed recently. One thing to verify is whether nslookup works in the container.
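A quick way to check, reusing the kubectl run pattern from the question; the service name redis and the default namespace are assumptions, so substitute your release's service name:
kubectl run dns-test -ti --rm=true --restart=Never --image=busybox -- nslookup redis.default.svc.cluster.local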