How do I upgrade the EKS default node group version using CDK?

Please note that this is a different question from the linked one. A custom EKS node group is defined by me and exposes configuration for me to modify; this question is specifically about the default node group, which lacks any props or exposed configuration for me to modify.
I would like to upgrade the default node group that was created by CDK when I provisioned the EKS cluster.
this.cluster = new eks.Cluster(this, 'eks-cluster', {
  vpc: props.vpc,
  clusterName: props.clusterName,
  version: eks.KubernetesVersion.V1_22,
  albController: {
    version: eks.AlbControllerVersion.V2_4_1,
  },
  defaultCapacity: 5,
});
However, I do not see any option to modify the version of the default node group. I have already bumped the cluster version to v1.22, as well as my custom node groups, but the default node group still uses v1.21.
How can I upgrade the default node group version using CDK?

From my experiments and observations, there is nothing special about the default node group that EKS creates (this.cluster.defaultNodegroup). It is just a typical EKS node group with auto-scaling and no taints. I created my own default node group and used an updated AMI release version.
I disabled the default capacity by setting defaultCapacity to 0, and then created my own custom node group with the same configuration as the one EKS creates by default.
this.cluster = new eks.Cluster(this, 'eks-cluster', {
  vpc: props.vpc,
  clusterName: props.clusterName,
  version: eks.KubernetesVersion.V1_23,
  kubectlLayer: new KubectlV23Layer(this, 'kubectl'),
  albController: {
    version: eks.AlbControllerVersion.V2_4_1,
  },
  // set this to 0 to disable the default node group created by EKS
  defaultCapacity: 0,
});
const defaultNodeGroup = new eks.Nodegroup(this, 'default-node-group', {
  cluster: this.cluster,
  releaseVersion: '<updated AMI release version>',
  nodegroupName: 'eks-default-nodegroup',
});
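If you would rather not hard-code the release version, one option (my own sketch, not part of the original setup) is to read the recommended AMI release version that EKS publishes as a public SSM parameter and pass it through:
import * as ssm from 'aws-cdk-lib/aws-ssm';

// Sketch: resolve the recommended AMI release version for the cluster's
// Kubernetes version (1.23 here) at deploy time from the public SSM parameter
// that EKS publishes for its Amazon Linux 2 optimized AMIs.
const releaseVersion = ssm.StringParameter.valueForStringParameter(
  this,
  '/aws/service/eks/optimized-ami/1.23/amazon-linux-2/recommended/release_version',
);
const defaultNodeGroup = new eks.Nodegroup(this, 'default-node-group', {
  cluster: this.cluster,
  releaseVersion,
  nodegroupName: 'eks-default-nodegroup',
});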

Related

Engine_version: Redis versions must match <major>.x when using version 6 or higher, or <major>.<minor>.<bug-fix>

I have the following elasticache resource:
resource "aws_elasticache_subnet_group" "main" {
name = "${var.identifier}-sng"
subnet_ids = var.subnet_ids
}
resource "aws_elasticache_cluster" "main" {
cluster_id = var.identifier
engine = "redis"
node_type = var.node_type
num_cache_nodes = var.nodes_count
parameter_group_name = var.parameter_group_name
engine_version = var.engine_version
port = 6379
security_group_ids = var.security_group_ids
subnet_group_name = aws_elasticache_subnet_group.main.name
tags = {
"redis" = "Auto managed by TF"
}
}
I run AWS ElastiCache Redis 6.0.5 and my var.engine_version is set to 6.0.5 as well. It worked quite well until I upgraded from Terraform 1.3 to 1.4, after which I received the following error:
engine_version: Redis versions must match <major>.x when using version 6 or higher,
or <major>.<minor>.<bug-fix>
Is anyone else experiencing this issue after upgrading? What would be a workaround for this problem?
I just ran into this problem and was able to fix it by setting the parameter_group_name family to 6.x and engine_version to 6.0. When I set the engine version to 6.0.5, it threw the error you listed above. The 6.0 engine version defaults to 6.0.5.
I was using ElastiCache Redis 6.2.6 and 7.0.4 for two different projects.
To make them work, I had to set engine_version to 6.2 and 7.0 respectively.
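For reference, here is a minimal sketch of values that satisfy the constraint, assuming the module interface above (the default parameter group names are my assumption):
# Values passed to the module above; for Redis 6+ the patch level must be
# omitted because ElastiCache manages it.
engine_version       = "7.0"            # not "7.0.4"
parameter_group_name = "default.redis7" # family must match the engine major version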

Problem reading some plugins: duplicate_plugin

I have a three-node RabbitMQ cluster deployed in Kubernetes following [1]. The server starts up and runs, but I get continuous error logs as follows.
Problem reading some plugins: [{"/opt/rabbitmq/plugins/prometheus-4.3.0.ez",
duplicate_plugin}]
Problem reading some plugins: [{"/opt/rabbitmq/plugins/prometheus-4.3.0.ez",
duplicate_plugin}]
Problem reading some plugins: [{"/opt/rabbitmq/plugins/prometheus-4.3.0.ez",
duplicate_plugin}]
When I check the plugins folder inside the pod, there are two versions of the Prometheus plugin.
I removed prometheus-4.3.0.ez from the plugins folder and checked the logs again; the error no longer appears.
Image tag: 3.8
How do I solve this issue? Does it affect the functioning of the RabbitMQ server?
At the very least, the continuous logging should stop, because we export logs to Google Cloud log storage, so log storage size and cost are increasing rapidly.
[1] https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/rabbitmq/README.md#installation
There should not be a problem with deleting prometheus-4.3.0.ez from the plugins folder. The image itself needs to be updated so that it no longer adds that plugin, since RabbitMQ now includes a version of it by default.
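If you cannot wait for the upstream image to be fixed, one workaround (a sketch; the base image name is a placeholder) is to build a derived image that drops the duplicate archive:
# Hypothetical derived image; <rabbitmq-image-from-the-deployment> is a placeholder.
FROM <rabbitmq-image-from-the-deployment>:3.8
# Remove the bundled copy; RabbitMQ 3.8 already ships the rabbitmq_prometheus plugin.
RUN rm -f /opt/rabbitmq/plugins/prometheus-4.3.0.ez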

Vue CLI 3 service worker fails to register out of box

I built my app using Vue CLI 3 with PWA support. When I build for production, the service worker fails to register.
I then decided to check whether it was something I did or Vue CLI 3 out of the box. I created a brand new app, built it, and deployed it to AWS S3 with CloudFront. Even the brand new app without any changes fails to register the service worker, with the errors "The script has an unsupported MIME type ('text/plain')." and "Error during service worker registration: DOMException".
I've tried quite a few things beyond what is listed below that Google search results suggested, but I end up with the same error.
I tried using vue.config.js to load a custom service worker, into which I just copied the contents of the one that Vue produces during a build.
// vue.config.js
module.exports = {
  pwa: {
    workboxPluginMode: 'InjectManifest',
    workboxOptions: {
      swSrc: 'public/service-worker.js'
    },
    themeColor: '#ffffff'
  }
}
I have also tried loading it from index.html.
If I host it locally, it registers without any issues.
The file does get created and it is accessible from the console, but for some reason unknown to me it does not want to register at all.
Has anyone had this problem before, and how did you resolve it?
It is hosted on AWS S3 and CloudFront with HTTPS enabled, using the default AWS certificates for testing.
$ vue --version
3.9.3
$ node --version
v12.7.0
$ npm --version
6.10.0
UPDATE
I found that when I upload to S3 using the AWS CLI's s3 sync, it changes the Content-Type of all .js files.
Once I resolve this, I will update my question again.
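One possible workaround while that is being sorted out (a sketch; the bucket name is a placeholder) is to re-upload the JavaScript files with an explicit Content-Type, so the service worker is no longer served as text/plain:
# Re-upload only the .js files with the correct MIME type.
aws s3 sync dist/ s3://<your-bucket> --exclude "*" --include "*.js" --content-type "application/javascript"
You may also need a CloudFront invalidation so cached copies pick up the new header.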

How to change the path for local backend state when using workspaces in terraform?

What is the expected configuration for using terraform workspaces with the local backend?
The local backend supports workspaces, but it does not appear that you have much control over where the actual state is stored.
When you are not using workspaces, you can supply a path parameter to the local backend to control where the state files are stored.
# Either in main.tf
terraform {
  backend "local" {
    path = "/path/to/terraform.tfstate"
  }
}

# Or as a flag
terraform init -backend-config="path=/path/to/terraform.tfstate"
I expected analogous functionality when using workspaces, in that you would supply a directory for path and the workspaces would be created under that directory.
For example:
terraform workspace new first
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
terraform workspace new second
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
would result in the state
/path/to/terraform.tfstate.d/first/terraform.tfstate
/path/to/terraform.tfstate.d/second/terraform.tfstate
This does not appear to be the case however. It looks like the local backend ignores the path parameter and puts the workspace configuration in the working directory.
Am I missing something or are you unable to control local backend workspace state?
There is an undocumented setting for the local backend, workspace_dir, that solves this issue.
The documentation task is tracked here.
terraform {
  backend "local" {
    workspace_dir = "/path/to/terraform.tfstate.d"
  }
}
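Presumably the same setting can also be passed as a flag, analogous to the path example in the question (an untested sketch on my part):
terraform init -backend-config="workspace_dir=/path/to/terraform.tfstate.d"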

How to configure applications in AMI 4

The documentation says:
In Amazon EMR releases 4.0 and greater, the only accepted parameter is the application name. To pass arguments to applications, you supply a configuration for each application.
But I cannot find an example that shows how to pass arguments in AMI 4. All I can find are examples configuring exports, such as the one below. I am trying to figure out how to set the version of Spark to use.
[
  {
    "Classification": "hadoop-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "HADOOP_USER_CLASSPATH_FIRST": "true",
          "HADOOP_CLASSPATH": "/path/to/my.jar"
        }
      }
    ]
  }
]
You cannot set an arbitrary version of Spark to use like you could with 3.x AMI versions. Rather, the version of Spark (and other apps, of course) is determined by the release label. For example, the latest release is currently emr-5.2.1, which includes Spark 2.0.2. If you want a 1.x version of Spark, the latest version available is Spark 1.6.3 on release emr-4.8.3.
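To make that concrete, here is a sketch of how the release label (and therefore the Spark version) is selected at cluster creation; the cluster name and instance settings are placeholders:
# Spark 2.0.2 comes from the emr-5.2.1 release label; there is no separate Spark version flag.
aws emr create-cluster \
  --name "my-cluster" \
  --release-label emr-5.2.1 \
  --applications Name=Spark \
  --instance-type m4.large \
  --instance-count 3 \
  --use-default-roles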