I have an application that needs to be deployed on EKS, and I'm having trouble setting up an ingress ALB.
I am using the following as a sample for how this should be set up.
https://github.com/aws-samples/nexus-oss-on-aws/blob/d3a092d72041b65ca1c09d174818b513594d3e11/src/lib/sonatype-nexus3-stack.ts#L207-L242
It's in TypeScript, and I'm converting it to Python. My code is below.
from aws_cdk import (
    Stack,
    aws_eks as eks,
    aws_ec2 as ec2,
    aws_iam as iam,
    Duration
)
from constructs import Construct

class TestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "test-vpc",
            vpc_name="test-vpc",
            cidr="10.0.0.0/16"
        )

        eks_role = iam.Role(
            self, 'test-eks-role',
            role_name='test-eks-role',
            assumed_by=iam.CompositePrincipal(
                iam.ServicePrincipal('eks.amazonaws.com')
            ),
            managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name('AmazonEKSClusterPolicy')],
        )

        cluster = eks.Cluster(
            self, "test-cluster",
            cluster_name="test-cluster",
            masters_role=eks_role,
            version=eks.KubernetesVersion.V1_21,
            vpc=vpc,
            vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)]
        )

        alb_service_account = cluster.add_service_account(
            'test-cluster-service-account',
            name='test-cluster-service-account'
        )

        import requests
        alb_controller_url = 'https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.0/docs/install/iam_policy.json'
        policy_json = requests.get(url=alb_controller_url).json()
        for statement in policy_json['Statement']:
            alb_service_account.add_to_principal_policy(iam.PolicyStatement.from_json(statement))

        cluster.add_helm_chart(
            'aws-load-balancer-controller-helm-chart',
            chart='aws-load-balancer-controller',
            repository='https://aws.github.io/eks-charts',
            release='aws-load-balancer-controller',
            version='1.4.1',
            wait=True,
            timeout=Duration.minutes(15),
            values={
                "clusterName": cluster.cluster_name,
                "image": {
                    "repository": "602401143452.dkr.ecr.ap-southeast-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.1",
                },
                "serviceAccount": {
                    "create": False,
                    "name": alb_service_account.service_account_name,
                },
            },
        )
Right now I'm getting the following cryptic error message.
Received response status [FAILED] from custom resource. Message returned: Error: b'Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress\n'
Any advice would be greatly appreciated!
There is an AlbController construct available in the CDK; you could try that and see if it works for you.
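For example, a minimal sketch of the cluster definition using that option (assuming aws-cdk-lib v2; the controller version below is my assumption, chosen to match the v2.4.1 image you are pulling):

cluster = eks.Cluster(
    self, "test-cluster",
    cluster_name="test-cluster",
    masters_role=eks_role,
    version=eks.KubernetesVersion.V1_21,
    vpc=vpc,
    vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)],
    # alb_controller makes the cluster install the AWS Load Balancer Controller
    # itself, replacing the manual service account + IAM policy + Helm chart wiring.
    alb_controller=eks.AlbControllerOptions(
        version=eks.AlbControllerVersion.V2_4_1
    ),
)

With this in place you can drop the add_service_account call, the requests-based policy download, and the add_helm_chart call from your stack.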
I am actually using that construct myself but am facing the same error message. There is a GitHub issue regarding the Helm error itself; however, the rollback solution mentioned there is not applicable for me, as there appears to be no state for the Helm release despite the error. I have raised this as an issue on the CDK repo.
Related
We are not able to create a Neptune cluster using the Python boto3 library; the boto3 call is given below.
**Function is:**
import boto3

client = boto3.client('neptune')

response = client.create_db_cluster(
    AvailabilityZones=[
        'us-west-2c', 'us-west-2b',
    ],
    BackupRetentionPeriod=1,
    DatabaseName='testdcluster',
    DBClusterIdentifier='testdb',
    DBClusterParameterGroupName='default.neptune1',
    VpcSecurityGroupIds=[
        'sg-xxxxxxxxx',
    ],
    DBSubnetGroupName='profilex',
    Engine='neptune',
    EngineVersion='1.0.1.0',
    Port=8182,
    Tags=[
        {
            'Key': 'purpose',
            'Value': 'test'
        },
    ],
    StorageEncrypted=False,
    EnableIAMDatabaseAuthentication=False,
    DeletionProtection=False,
    SourceRegion='us-west-2'
)
The error message is also given below.
**Error message:**
when calling the CreateDBCluster operation: The parameter DatabaseName is not valid for engine: neptune
Could you please help to fix this?
Rather than using DatabaseName, just use DBClusterIdentifier; that will become the name of your cluster. The DatabaseName parameter is not valid for the neptune engine, so drop it when creating a Neptune cluster.
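For example, the same call from the question with DatabaseName removed:

import boto3

client = boto3.client('neptune')

response = client.create_db_cluster(
    AvailabilityZones=['us-west-2c', 'us-west-2b'],
    BackupRetentionPeriod=1,
    DBClusterIdentifier='testdb',  # this becomes the cluster name
    DBClusterParameterGroupName='default.neptune1',
    VpcSecurityGroupIds=['sg-xxxxxxxxx'],
    DBSubnetGroupName='profilex',
    Engine='neptune',
    EngineVersion='1.0.1.0',
    Port=8182,
    Tags=[{'Key': 'purpose', 'Value': 'test'}],
    StorageEncrypted=False,
    EnableIAMDatabaseAuthentication=False,
    DeletionProtection=False,
    SourceRegion='us-west-2'
)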
I use Terraform as the infrastructure framework in my application. Below is the configuration I use to deploy Python code to Lambda. It does three steps: 1. zip all dependencies and source code into a zip file; 2. upload the zipped file to an S3 bucket; 3. deploy it to the Lambda function.
But the deploy command terraform apply fails with the error below:
Error: Error modifying Lambda Function Code quote-crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
status code: 400, request id: 2db6cb29-8988-474c-8166-f4332d7309de
on config.tf line 48, in resource "aws_lambda_function" "test_lambda":
48: resource "aws_lambda_function" "test_lambda" {
Error: Error modifying Lambda Function Code praw_crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
status code: 400, request id: e01c83cf-40ee-4919-b322-fab84f87d594
on config.tf line 67, in resource "aws_lambda_function" "praw_crawler":
67: resource "aws_lambda_function" "praw_crawler" {
It means the deployment file doesn't exist in the S3 bucket, yet it succeeds the second time I run the command. It seems like a timing issue: right after uploading the zip file, the object isn't in the S3 bucket yet, which is why the first deploy fails; a few seconds later, the second run finishes successfully and very quickly. Is there anything wrong in my configuration file?
The full terraform configuration file can be found: https://github.com/zhaoyi0113/quote-datalake/blob/master/config.tf
You need to declare the dependencies properly to achieve this; otherwise, it will fail.
First, zip the files:
# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
}
Then upload it to S3, referencing the archive via source = "${data.archive_file.source.output_path}"; this makes the upload depend on the zip.
# Upload zip to S3 and then update the Lambda function from S3
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "lambda-functions/loadbalancer-to-es.zip"
  source = "${data.archive_file.source.output_path}" # depends on the zip archive
}
Then you are good to deploy the Lambda. To make it depend on the upload, this one line does the magic: s3_key = "${aws_s3_bucket_object.file_upload.key}"
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
function_name = "alb-logs-to-elk"
description = "elb-logs-to-elasticsearch"
s3_bucket = "${var.env_prefix_name}${var.s3_suffix}"
s3_key = "${aws_s3_bucket_object.file_upload.key}" # its mean its depended on upload key
memory_size = 1024
timeout = 900
timeouts {
create = "30m"
}
runtime = "nodejs8.10"
role = "${aws_iam_role.role.arn}"
source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
handler = "index.handler"
}
You may find that the source_code_hash changes even when the code hasn't changed when using Terraform's archive_file. If this is an issue for you, I created a module to fix it: lambda-python-archive.
This is a response to the top answer:
You need to use the archive's .output_base64sha256 attribute for the source_code_hash instead of calling base64sha256 on the path, or else terraform plan never settles on a "no changes / up-to-date" message.
For example:
source_code_hash = "${data.archive_file.source.output_base64sha256}"
I have created a user pool and am trying to migrate users from RDS, which invokes a Lambda function that returns the updated event object, but it's not working for me.
I have followed the provided solution by removing the two fields below, but it's still not working:
"desiredDeliveryMediums": "EMAIL",
"forceAliasCreation": "false"
Here is the response object that I am sending from the Lambda; I'm still facing the same issue: Exception during user migration.
Please let me know what I am missing here. Thanks in advance.
def lambda_handler(event, context):
    print(event)
    event["response"] = {
        "userAttributes": {
            "email": event["userName"],
            "email_verified": "true",
        },
        "finalUserStatus": "CONFIRMED",
        "messageAction": "SUPPRESS",
        "desiredDeliveryMediums": "EMAIL",
        "forceAliasCreation": "false"
    }
    print(event)
    return event
I was having this problem, and I overcame it by increasing the memory allocated to the Lambda from the default 128 MB to 1024 MB. I am using CDK to deploy, so I did this in the Lambda creation:
const nodeUserMigration = new NodejsFunction(this, 'myLambdaName', {
  entry: path.join(__dirname, 'userMigration.ts'),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.minutes(5),
  memorySize: 1024, // This is what I added to overcome the `UserNotFoundException: Exception migrating user in app client (redactedClientId)`
  environment: {
    // redacted environment variables
  },
});
Instead of
return event
You need
context.succeed(event)
It is probably possible to use return event directly; however, there would be other properties required for Cognito to recognize it (things such as isBase64Encoded), and I don't know what they might be. Nor does Amazon have any documentation on them.
Oh, and desiredDeliveryMediums should be an array of strings.
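Putting both suggestions together, a sketch of the migration handler might look like this (the response fields are taken unchanged from the question, apart from the two changes described above):

def lambda_handler(event, context):
    event["response"] = {
        "userAttributes": {
            "email": event["userName"],
            "email_verified": "true",
        },
        "finalUserStatus": "CONFIRMED",
        "messageAction": "SUPPRESS",
        "desiredDeliveryMediums": ["EMAIL"],  # an array of strings, not a plain string
        "forceAliasCreation": "false"
    }
    context.succeed(event)  # hand the event back via the context instead of return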
I have built a Java RESTful API and I want to access it from Angular 5. For development I have the Angular dev server running on port 4200 while my backend runs on 8080. I am trying to set up a proxy in Angular in order to communicate with the backend.
So I have created the following proxy.conf.json file:
{
  "/**": {
    "target": "http://localhost:8080",
    "changeOrigin": true,
    "secure": false,
    "logLevel": "debug"
  }
}
Then I run in the terminal:
ng serve --proxy-conf proxy.conf.json
and I get the message:
Proxy created: /** -> http://localhost:8080
[HPM] Subscribed to http-proxy events: [ 'error', 'close' ]
So far everything is fine.
When I type a URL into the browser, for example
http://localhost:4200/incidents/all
everything works fine and I get the response:
[{"id":55,"protocolNo":121212222,"date":1525122000000,"isPayed":true,"yliko":"fdasdfasfsaf","makro":"fdsa","anoso":"fdsa","mikro":"fdsafds","symperasma":"dfsdfsadf","klinikesPlirofories":"fsdafds","histo":"dfsadffga","simpliromatikiEkthesi":"fdsadfsdfa","patient":{"id":1,"firstName":"Μιχαήλ","lastName":"Τουτουδάκης","birthday":"1975-08-19","fatherName":"Δημήτριος","telephone":"6948571893","email":"mixtou#gmail.com","sex":true,"city":{"id":1,"name":"Χανιά"}},"doctor":{"id":1,"firstName":"όνομα","lastName":"επίθετο","telephone":"2821074737","email":"papa#papa.com","fatherName":"πατέρας","specialty":{"id":1,"name":"Αγγειοχειρουργική"},"city":{"id":1,"name":"Χανιά"}},"clinic":{"id":1,"name":"Κλινική Τσεπέτη","telephone":"123456789","address":"Παπαναστασίου","email":"test#test.com"},"signingDoctor":{"id":1,"lastName":"Δασκαλάκη","firstName":"Άννα"},"mikroskopikaSymperasma":null,"anosoEkthesi":null,"cancer":true},{"id":56,"protocolNo":11111,"date":1525640400000,"isPayed":false,"yliko":"dfsadfs","makro":"dfsfsdfdsdfsadfsa","anoso":"dfsdfsfdssfafasd","mikro":"dsfdfsadfsadfssdf","symperasma":"σδγσδγσαδγσγσασδγ","klinikesPlirofories":"δδφφγφγαφαγ","histo":"fdsasdfd","simpliromatikiEkthesi":"γαγσαδσγσδγσδαγ","patient":{"id":5,"firstName":"Στέφανος","lastName":"Μαριόλος","birthday":"2018-03-26","fatherName":"","telephone":"4838583845","email":"","sex":true,"city":{"id":2,"name":"Ρέθυμνο"}},"doctor":{"id":3,"firstName":"Χαράλαμπος","lastName":"Πρωτοπαπαδάκης","telephone":"4343454345","email":"","fatherName":"","specialty":{"id":29,"name":"Πνευμονολογία - Φυματιολογία"},"city":{"id":1,"name":"Χανιά"}},"clinic":{"id":1,"name":"Κλινική Τσεπέτη","telephone":"123456789","address":"Παπαναστασίου","email":"test#test.com"},"signingDoctor":{"id":1,"lastName":"Δασκαλάκη","firstName":"Άννα"},"mikroskopikaSymperasma":"γσαγδσασαδγδασγαγσγσ","anosoEkthesi":"φδαφγφγφδσγ","cancer":true}]
However, when visiting a backend endpoint from Angular I get nothing.
For incidents, for example, I have the following service:
getIncidents(): Observable<Incident[]> {
  console.log('getting incidents');
  const incidentsUrl = '/incidents/all';
  return this.http.get<Incident[]>(incidentsUrl)
    .pipe(catchError(ErrorHandler.handleError));
}
Which generates the following error in the JavaScript console:
core.js:1440 ERROR Error: Uncaught (in promise): TypeError: Cannot read property 'push' of undefined
TypeError: Cannot read property 'push' of undefined
At the following line:
this.subscriptions.push(this.incidentsService.getIncidents().subscribe((results) => {
  this.comService.sendIncidents(results);
  this.spinner.hide();
}));
Which means that this.incidentsService.getIncidents() returns nothing?
Any ideas?
I removed the this.subscriptions.push(...) call and everything works fine. Why is this happening when using subscriptions? I need them because in ngOnDestroy I call subscription.unsubscribe() to avoid memory leaks.
Note that this happens only when using the proxy. If I don't use the proxy and deploy the project to Tomcat, everything works OK and I don't get null inside push().
Any ideas?
The problem was in the declaration of subscriptions. In the beginning it was:
subscriptions: Subscription[];
I changed it to
subscriptions: Subscription[] = [];
and everything worked fine: subscriptions wasn't initialized.
I am having trouble trying to create S3 event notifications. Does anyone know the resolution to this?
Error is:
Error applying plan:
1 error(s) occurred:
* module.Test-S3-Bucket.aws_s3_bucket_notification.s3-notification: 1 error(s) occurred:
* aws_s3_bucket_notification.s3-notification: Error putting S3 notification configuration: InvalidArgument: Unable to validate the following destination configurations
status code: 400, request id: AD9B5BF2FF84A6CB, host id: ShUVJ+TdkpqAZfpeDM3grkF9Vue3Q/AF0LydchperKTF6XdQyDM6BisZi/38pGAh/ZqS+gNyrSM=
Below is the code that gives me the error:
resource "aws_s3_bucket" "s3-bucket" {
bucket = "${var.bucket_name}"
acl = ""
lifecycle_rule {
enabled = true
prefix = ""
expiration {
days = 45
}
}
tags {
CostC = "${var.tag}"
}
}
resource "aws_s3_bucket_notification" "s3-notification" {
bucket = "${var.bucket_name}"
topic {
topic_arn = "arn:aws:sns:us-east-1:1223445555:Test"
events = [ "s3:ObjectCreated:*", "s3:ObjectRemoved:*" ]
filter_prefix = "test1/"
}
}
If you haven't done it already, you need to specify a policy on the topic that grants the SNS:Publish permission to S3 (only from the bucket specified in the Condition attribute). If you are also provisioning the topic via Terraform, then something like this should do it (we know, as it caught us out just a few days ago too!):
resource "aws_sns_topic" "my-sns-topic" {
name = "Test"
policy = <<POLICY
{
"Version":"2012-10-17",
"Statement":[{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:us-east-1:1223445555:Test",
"Condition":{
"ArnLike":{"aws:SourceArn":"${aws_s3_bucket.s3-bucket.arn}"}
}
}]
}
POLICY
}
Hope that helps.
Well, I know that this is not your exact case, but I had the same error and didn't manage to find an answer here, and because this post is the first that Google gave me, I will leave the answer to my case here in the hope that it helps someone else.
So, I noticed after terraform apply that I had this error, went to the UI to see what happened, and found this message:
The Lambda console can't validate one or more event sources for this trigger. The most common cause is when a source ARN includes a wildcard (*) character. You can manage unvalidated triggers using the AWS CLI or AWS SDK.
And guess what? I really did have a wildcard (*) character in the ARN, like this:
source_arn = "${aws_s3_bucket.bucket.arn}/*"
So I changed it to:
source_arn = aws_s3_bucket.bucket.arn
And it worked. So if you're reading this, there might be the same mistake in your case.