AKS create failed - azure-container-service

I'm trying to use Azure Container Service, but when I run
az aks create --resource-group DockerEEcluster --name myK8sCluster --node-count 1 --generate-ssh-keys
I get the following error:
Deployment failed. Correlation ID: 9ccd3cf6-02f2-4147-9333-c50ba948e248. Timeout while polling for control plane provisioning status
The resource group location is West Europe.
Any help?

Related

EMR Serverless cannot connect to S3 in another region

I have an EMR Serverless app that cannot connect to an S3 bucket in another region. Is there a workaround for that, perhaps a parameter to set in the job parameters or Spark parameters when submitting a new job?
The error is this:
ExitCode: 1. Last few exceptions: Caused by: java.net.SocketTimeoutException: connect timed out Caused by: com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.conn.ConnectTimeoutException
In order to connect to an S3 bucket in another region or access external services, the EMR Serverless application needs to be created with a VPC.
This is mentioned on the considerations page:
Without VPC connectivity, a job can access some AWS service endpoints in the same AWS Region. These services include Amazon S3, AWS Glue, Amazon DynamoDB, Amazon CloudWatch, AWS KMS, and AWS Secrets Manager.
Here's an example AWS CLI command to create an application in a VPC - you need to provide a list of Subnet IDs and Security Group IDs. More details can be found in configuring VPC access.
aws emr-serverless create-application \
    --type SPARK \
    --name etl-jobs \
    --release-label "emr-6.6.0" \
    --network-configuration '{
        "subnetIds": ["subnet-01234567890abcdef","subnet-01234567890abcded"],
        "securityGroupIds": ["sg-01234566889aabbcc"]
    }'
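If the application already exists, it may also be possible to attach the network configuration afterwards with update-application instead of recreating it. This is a sketch using a placeholder application ID; the application typically has to be in a stopped state before it can be updated, and the subnets should be private subnets with outbound (NAT) routing.
# <application-id> is a placeholder for your application's ID
aws emr-serverless update-application \
    --application-id <application-id> \
    --network-configuration '{
        "subnetIds": ["subnet-01234567890abcdef","subnet-01234567890abcded"],
        "securityGroupIds": ["sg-01234566889aabbcc"]
    }'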

OMS Agent for Kubernetes deploying into kube-system namespace

Deploying the OMS Agent into a Kubernetes (OpenShift) cluster creates a ReplicaSet and DaemonSet in the kube-system namespace. I am using the Helm method to deploy, and the -n parameter I specify gets ignored.
I suppose it's not a massive problem, but I would like to keep things tidy, with easily identifiable namespaces, as we are a large team.
https://learn.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-hybrid-setup
The OMS agent is the managed component, so it should be part of the kube-system namespace. If it is moved to any other namespace, there will be issues with data collection and monitoring. This is by design at the moment, because we use the same agent for AKS and non-AKS clusters.
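As a quick sanity check (a sketch, not part of the original answer; the omsagent object names are the defaults the chart uses), you can confirm where the agent components landed:
# list the agent's DaemonSet, ReplicaSet, and pods in kube-system
kubectl get ds,rs,pods -n kube-system | grep -i omsagent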

Spinnaker AWS Provider not allowing create cluster

I deployed Spinnaker in AWS to run a test in the same account. However, I am unable to configure server groups. If I click Create, the task is queued with the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
The storage backend needs to be configured correctly.
https://www.spinnaker.io/setup/install/storage/
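For example, to point Spinnaker at S3 as its storage backend with Halyard (a sketch with placeholder bucket and region values):
# configure Front50's S3 settings (bucket name and region are placeholders)
hal config storage s3 edit \
    --bucket my-spinnaker-bucket \
    --root-folder front50 \
    --region us-east-1
# select s3 as the storage type and apply the change
hal config storage edit --type s3
hal deploy apply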

Creating Docker Swarm from Cloud in Azure fails

Creating a Docker Swarm from Docker Cloud fails every time during creation.
Error log from Docker Cloud:
https://gist.github.com/254813f30376c4ef1da20f320b29f815.git
Error log from the Azure Portal:
https://i.imgur.com/LvBwauB.png
Extended error log in Azure:
https://gist.github.com/c95bc0e24129c43341d874397609f550.git
The error log indicates that you provided an invalid SSH public key. It should look something like ssh-rsa ...
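If you don't yet have a key in that format, you can generate one with standard OpenSSH tooling and paste in the contents of the .pub file (generic commands, not specific to this thread):
# generate a 4096-bit RSA key pair (path is the usual default)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
# print the public key; the value to paste starts with "ssh-rsa AAAA..."
cat ~/.ssh/id_rsa.pub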

Failing to scale an ACS cluster due to missing ServicePrincipalProfile

I'm trying to scale an ACS cluster that is running k8s. From the Azure CLI I get the error below, and the Azure Portal shows a similar error message. It seems my k8s cluster somehow isn't set up with a service principal correctly?
"ServicePrincipalProfile must be specified with Orchestrator
Kubernetes"
I find this odd, because I did use the az ad sp create-for-rbac command to create a service principal for the subscription. I then used the resulting appId and password with the az acs create command (in the --service-principal and --client-secret options).
Example:
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/my-subscription-guid"
az acs create -n=myk8skube -g=myresgrp --orchestrator-type=kubernetes --agent-count=2 --generate-ssh-keys --windows --admin-username=myuser --admin-password=mypassword --service-principal=appId --client-secret=password
The cluster is running fine, I can scale pods, but I can't scale nodes. How did I get in this state and more importantly how do I fix it?
There was an issue with scaling ACS clusters in preview regions. The fix rolled out worldwide by 8/31/2017.
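With the fix rolled out, scaling the agent pool should work again with the standard command (a sketch reusing the names from the question):
# scale the cluster's agent pool to 3 nodes
az acs scale -g myresgrp -n myk8skube --new-agent-count 3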