How to get an existing EKS cluster's OIDCProvider using CDK?

I have an existing EKS cluster (created by a separate CloudFormation stack) and I want to extract the OIDC provider URL associated with that cluster using CDK.
Here's my code snippet:
const k8sCluster = <eks.Cluster>(eks.Cluster.fromClusterAttributes(this, "k8scluster", {
  clusterName: "k8s-sample"
}))
const oidcprovider = k8sCluster.clusterOpenIdConnectIssuerUrl
When I execute cdk synth, the oidcprovider value is undefined. The documentation for Cluster.fromClusterAttributes mentions that the output will be "undefined" if the cluster is not kubectl-enabled. I am not sure what "kubectl-enabled" means. Can anyone let me know how I can get the cluster's OIDC provider using CDK?

I have a very simple cluster definition here. CloudFormation returns the OIDC issuer URL for the cluster with both of the output values I have defined there.
cdk.CfnOutput(
    self,
    "oidcendpointurl",
    value=_cluster.cluster_open_id_connect_issuer_url
)
cdk.CfnOutput(
    self,
    "oidcendpoint",
    value=_cluster.cluster_open_id_connect_issuer
)
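If the cluster lives in another stack, one option (a sketch only, not tested against the OP's setup) is to import the cluster's existing IAM OIDC provider by ARN and hand it to fromClusterAttributes via the openIdConnectProvider attribute; the issuer can then be read from the imported provider. The stack name, construct ids and provider ARN below are placeholders:

import * as cdk from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

export class OidcLookupStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Placeholder ARN of the IAM OIDC provider created for the existing cluster.
    const oidcProviderArn =
      'arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLEOIDCID';

    // Import the existing provider and pass it to fromClusterAttributes.
    const provider = iam.OpenIdConnectProvider.fromOpenIdConnectProviderArn(
      this, 'ImportedOidcProvider', oidcProviderArn);

    const cluster = eks.Cluster.fromClusterAttributes(this, 'k8scluster', {
      clusterName: 'k8s-sample',
      openIdConnectProvider: provider,
    });

    // The issuer is now available from the imported provider.
    new cdk.CfnOutput(this, 'oidcendpointurl', {
      value: cluster.openIdConnectProvider.openIdConnectProviderIssuer,
    });
  }
}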

Related

error creating Application AutoScaling Target: ValidationException: Unsupported service namespace, resource type or scalable dimension

I'm trying to enable ECS autoscaling for some Fargate services and run into the error in the title:
error creating Application AutoScaling Target: ValidationException: Unsupported service namespace, resource type or scalable dimension
The error happens on line 4 here:
resource "aws_appautoscaling_target" "autoscaling" {
max_capacity = var.max_capacity
min_capacity = 1
resource_id = var.resource_id
// <snip... a bunch of other vars not relevant to question>
I call the custom autoscaling module like so:
module "myservice_autoscaling" {
source = "../autoscaling"
resource_id = aws_ecs_service.myservice_worker.id
// <snip... a bunch of other vars not relevant to question>
My service is a normal ECS service block starting with:
resource "aws_ecs_service" "myservice_worker" {
After poking around online, I thought maybe I should construct the "service/clusterName/serviceName" string manually, like so:
resource_id = "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}"
But that leads to a different error:
The argument "cluster_name" is required, but no definition was found.
I created cluster_name in the variables.tf of my calling module (i.e. the myservice ECS module that calls my new autoscaling module). And I have cluster_name in the outputs.tf of our cluster module, where we set up the ECS cluster. I must still be missing some linking.
Any ideas? Thanks!
Edit: here's the solution that got it working for me
Yes, you do need to construct the resource_id in the form of "service/yourClusterName/yourServiceName". Mine ended up looking like: "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}"
You need to make sure you have access to the cluster name and service name variables. In my case, though I had the variable defined in my ECS service's variables.tf and had added it to my cluster module's outputs.tf, I was failing to pass it down from the root module to the service module. This fixed that:
module "myservice" {
source = "./modules/myservice"
cluster_name = module.cluster.cluster_name // the line I added
(the preceding snippet goes in the main.tf of your root module (a level above your service module)
You are on the right track constructing the "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}" string. It looks like you simply aren't referencing the cluster name correctly.
And I have cluster_name in the outputs.tf of our cluster module
So you need to reference that module output instead of referencing a nonexistent variable:
"service/${module.my_cluster_module.cluster_name}/${aws_ecs_service.myservice_worker.name}"
Change "my_cluster_module" to whatever name you gave the module that is creating your ECS cluster.

Fluentbit Cloudwatch templating with EKS and Fargate

I have an EKS cluster running purely on Fargate and I'm trying to set up logging to CloudWatch.
I have a lot of [OUTPUT] sections that could be unified using some variables. I'd like to unify the logs of each deployment into a single log_stream and separate the log_streams by environment (namespace). With a couple of variables I would only need to write a single [OUTPUT] section.
From what I understand, the new Fluentbit plugin, cloudwatch_logs, doesn't support templating, but the old cloudwatch plugin does.
I've tried to setup a section like in the documentation example:
[OUTPUT]
    Name cloudwatch
    Match *container_name*
    region us-east-1
    log_group_name /eks/$(kubernetes['namespace_name'])
    log_stream_name test_stream
    auto_create_group on
This generates a log_group called fluentbit-default, which according to the README.md is the fallback name used when the variables cannot be parsed.
The old cloudwatch plugin is supported (though not mentioned in the AWS documentation), because if I replace the variable $(kubernetes['namespace_name']) with any plain string it works perfectly.
Fluentbit on Fargate manages the [INPUT] section automatically, so I don't really know which variables are sent to the [OUTPUT] section; I suppose the kubernetes variable is not there, or it has a different name or a different array structure.
So my questions are:
Is there a way to get the list of the variables (or input) that Fargate + Fluentbit are generating?
Can I solve this in a different way? (I don't want to write more than 30 different [OUTPUT] sections, one for each service/log_stream_name. It would also be difficult to maintain.)
Thanks!
After a few days of tests, I realised that you need to enable the kubernetes filter so that the kubernetes variables reach the cloudwatch plugin.
This is the result: now I can generate the log_group based on the environment label and the log_stream based on the namespace and container names.
filters.conf: |
    [FILTER]
        Name kubernetes
        Match *
        Merge_Log Off
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s
output.conf: |
    [OUTPUT]
        Name cloudwatch
        Match *
        region eu-west-2
        log_group_name /aws/eks/cluster/$(kubernetes['labels']['app.environment'])
        log_stream_name $(kubernetes['namespace_name'])-$(kubernetes['container_name'])
        default_log_group_name /aws/eks/cluster/others
        auto_create_group true
        log_key log
Please note that app.environment is not a "standard" value; I've added it to all my deployments. The default_log_group_name is necessary in case that value is not present.
Please also note that if you use log_retention_days and new_log_group_tags, the setup does not work. To be honest, log_retention_days never worked for me even with the new cloudwatch_logs plugin.

Init CodeCommit repository with seed-code stored in S3 using CDK

I'm trying to convert the "MLOps template for model building, training, and deployment" CloudFormation template into a CDK project so I can easily update the definitions, synth the template, and upload it to Service Catalog so it can be used as a project template in SageMaker Studio.
I'm quite new to CDK though, and I'm having some trouble trying to initialize a CodeCommit repository with the SageMaker pipeline seed code stored in S3, which was accomplished as follows in the original template:
'ModelBuildCodeCommitRepository':
  'Type': 'AWS::CodeCommit::Repository'
  'Properties':
    'RepositoryName':
      'Fn::Sub': 'sagemaker-${SageMakerProjectName}-${SageMakerProjectId}-modelbuild'
    'RepositoryDescription':
      'Fn::Sub': 'SageMaker Model building workflow infrastructure as code for the Project ${SageMakerProjectName}'
    'Code':
      'S3':
        'Bucket': 'sagemaker-servicecatalog-seedcode-sa-east-1'
        'Key': 'toolchain/model-building-workflow-v1.0.zip'
      'BranchName': 'main'
The CDK API docs do refer to the code parameter of codecommit.Repository as an initialization option, but it only covers local files that get compressed and uploaded to S3, and so on. That's because it assumes the CDK project will be deployed, whereas I only want the template generated by cdk synth.
Of course I could always use codecommit.CfnRepository and its code parameter to point at S3, but then I cannot pass it to the repository parameter of the pipeline stage's codepipeline_actions.CodeCommitSourceAction, because that expects an IRepository object.
I also want to stick to aws-cdk-lib.aws_codepipeline to grasp the fundamental logic of CodePipeline (which I'm also quite new to) and avoid the high-level aws-cdk-lib.pipelines.
Any ideas on how can I accomplish this?
Construct a Repository without a code prop, get an escape-hatch reference to its L1 CfnRepository layer, and set the Code property manually to point at the existing S3 location:
const repo = new codecommit.Repository(this, 'Repo', { repositoryName: 'my-great-repo' });

const cfnRepo = repo.node.defaultChild as codecommit.CfnRepository;

cfnRepo.addPropertyOverride('Code', {
  S3: {
    Bucket: 'sagemaker-servicecatalog-seedcode-sa-east-1',
    Key: 'toolchain/model-building-workflow-v1.0.zip',
  },
  BranchName: 'main',
});
The above code will synth the YAML output shown in the OP. Pass repo to the pipeline's source action, as sketched below.
Don't forget to grant the necessary IAM permissions on the S3 bucket.
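For completeness, a rough sketch of what passing repo to the source action might look like with the plain aws-cdk-lib.aws_codepipeline constructs the OP wants to use (the pipeline and action names are made up):

import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as codepipeline_actions from 'aws-cdk-lib/aws-codepipeline-actions';

// `repo` is the Repository construct from the snippet above; it still
// implements IRepository, so the source action accepts it as-is.
const sourceOutput = new codepipeline.Artifact();

const pipeline = new codepipeline.Pipeline(this, 'ModelBuildPipeline');

pipeline.addStage({
  stageName: 'Source',
  actions: [
    new codepipeline_actions.CodeCommitSourceAction({
      actionName: 'ModelBuildSource',
      repository: repo,
      branch: 'main',
      output: sourceOutput,
    }),
  ],
});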

How to add a fargate profile to an existing cluster with CDK

I want to add a new Fargate profile to an existing EKS cluster.
The cluster is created in another stack, and in my tenant-specific stack I am importing the EKS cluster via attributes.
self.cluster: Cluster = Cluster.from_cluster_attributes(
    self, 'cluster', cluster_name=cluster,
    open_id_connect_provider=eks_open_id_connect_provider,
    kubectl_role_arn=kubectl_role
)
The error is:
Object of type @aws-cdk/core.Resource is not convertible to @aws-cdk/aws-eks.Cluster
and it appears on this line:
FargateProfile(self, f"tenant-{self.tenant}", cluster=self.cluster, selectors=[Selector(namespace=self.tenant)])
If I try calling
self.cluster.add_fargate_profile(f"tenant-{self.tenant}", selectors=[Selector(namespace=self.tenant)])
I get an error saying that the object self.cluster does not have the attribute add_fargate_profile.
While you might think that something is off with importing the cluster, adding manifests and Helm charts works just fine:
self.cluster.add_manifest(...) <-- this is working
This is not currently possible in CDK.
As per the docs, eks.Cluster.fromClusterAttributes returns an ICluster, while FargateProfile expects a Cluster explicitly.
A FargateCluster can only currently be created in CDK, not imported.
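That said, if dropping to the CloudFormation layer is acceptable, the same escape-hatch idea as in the CodeCommit answer above may work here, since the L1 CfnFargateProfile only needs the cluster name (plus a pod execution role and subnets), not a Cluster construct. A TypeScript sketch under those assumptions; the role ARN, subnet id and construct ids are placeholders:

import * as eks from 'aws-cdk-lib/aws-eks';

// `cluster` is the ICluster returned by eks.Cluster.fromClusterAttributes(...).
new eks.CfnFargateProfile(this, 'TenantFargateProfile', {
  clusterName: cluster.clusterName,
  podExecutionRoleArn: 'arn:aws:iam::111122223333:role/eks-fargate-pod-execution-role',
  selectors: [{ namespace: 'tenant-a' }],
  subnets: ['subnet-0123456789abcdef0'],  // private subnets of the cluster VPC
});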

Not able to create node level local actors in Akka.Net cluster

We are trying to create a couple of node-level actors [pool routers] for app-level administration, local routing, and throttling purposes.
A node-specific role is set as the target role for these actors to enforce STRICTLY local routing.
Below is the sample code and HOCON.
// In App Start - the actor is initialized and stored in a static container
var props = Props.Create(() => new ThrottlerActor()).WithRouter(FromConfig.Instance);
actorSystem.ActorOf(props, "ThrottlerActor");
## hocon ##
/ThrottlerActor {
  router = round-robin-pool
  nr-of-instances = 100
  cluster {
    enabled = on
    allow-local-routees = on
    max-nr-of-instances-per-node = 10
    use-role = node1
  }
}
But when we send a message to this actor, it behaves like a cluster actor: it redirects the (n+1)th [n = max-nr-of-instances-per-node] message to the corresponding actor on a different node.
It looks as if the role setting was ignored.
We even tried disabling clustering [cluster -> enabled = off, and also by removing the cluster configuration from the HOCON]. But it didn't work. The moment this router is created below the user guardian, the actor behaves as if it were a cluster actor.
Please advise.
We even tried disabling clustering [cluster -> enabled = off, and also by removing the cluster configuration from the HOCON]. But it didn't work. The moment this router is created below the user guardian, the actor behaves as if it were a cluster actor.
So this smells to me like your HOCON isn't being loaded correctly. You can't have a router that routes to cluster routees on other nodes with cluster.enabled = off inside its deployment. The code needed to listen to the cluster in the first place gets elided with that off.
Try removing the cluster section in its entirety and work backwards. Your issue here seems to be which config is being loaded / where it's coming from - not a bug with Akka.NET.