We are not able to create a Neptune cluster using the Python boto3 library. The boto3 call is given below.
**Function is:**
```python
import boto3

client = boto3.client('neptune')

response = client.create_db_cluster(
    AvailabilityZones=[
        'us-west-2c', 'us-west-2b',
    ],
    BackupRetentionPeriod=1,
    DatabaseName='testdcluster',
    DBClusterIdentifier='testdb',
    DBClusterParameterGroupName='default.neptune1',
    VpcSecurityGroupIds=[
        'sg-xxxxxxxxx',
    ],
    DBSubnetGroupName='profilex',
    Engine='neptune',
    EngineVersion='1.0.1.0',
    Port=8182,
    Tags=[
        {
            'Key': 'purpose',
            'Value': 'test'
        },
    ],
    StorageEncrypted=False,
    EnableIAMDatabaseAuthentication=False,
    DeletionProtection=False,
    SourceRegion='us-west-2'
)
```
The error message is also given below.
**Error message:**
```
when calling the CreateDBCluster operation: The parameter DatabaseName is not valid for engine: neptune
```
Could you please help fix this?
Rather than using DatabaseName, just use DBClusterIdentifier; that will become the name of your cluster. The DatabaseName parameter is not needed when creating a Neptune cluster.
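For example, a minimal sketch of the corrected call with `DatabaseName` dropped (the identifiers and security group are the placeholders from the question):

```python
import boto3

client = boto3.client('neptune')

response = client.create_db_cluster(
    AvailabilityZones=['us-west-2c', 'us-west-2b'],
    BackupRetentionPeriod=1,
    DBClusterIdentifier='testdb',  # this becomes the cluster's name
    DBClusterParameterGroupName='default.neptune1',
    VpcSecurityGroupIds=['sg-xxxxxxxxx'],
    DBSubnetGroupName='profilex',
    Engine='neptune',
    Port=8182,
)
```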
Using Terraform scripts, I create a new EC2 instance, add a policy to access an S3 bucket, and supply a userdata script that runs `aws s3 cp s3://bucket-name/file-name .` to copy a file from that S3 bucket, among other commands.
In /var/log/cloud-init-output.log I see `fatal error: Unable to locate credentials`, presumably caused by the `aws s3 cp ...` line. When I execute the same command manually on the EC2 instance after it's been created, it works fine (which means the EC2 policy for bucket access is correct).
Any ideas why the `aws s3 cp` command doesn't work during userdata execution but works once the EC2 instance is already created? Could it be that the S3 access policy is only applied to the EC2 instance after it has been fully created (and after userdata has been run)? What should be the correct workaround?
data "aws_iam_policy_document" "ec2_assume_role" {
statement {
effect = "Allow"
actions = [
"sts:AssumeRole",
]
principals {
type = "Service"
identifiers = [
"ec2.amazonaws.com",
]
}
}
}
resource "aws_iam_role" "broker" {
name = "${var.env}-broker-role"
assume_role_policy = data.aws_iam_policy_document.ec2_assume_role.json
force_detach_policies = true
}
resource "aws_iam_instance_profile" "broker_instance_profile" {
name = "${var.env}-broker-instance-profile"
role = aws_iam_role.broker.name
}
resource "aws_iam_role_policy" "rabbitmq_ec2_access_to_s3_distro" {
name = "${env}-rabbitmq_ec2_access_to_s3_distro"
role = aws_iam_role.broker.id
policy = data.aws_iam_policy_document.rabbitmq_ec2_access_to_s3_distro.json
}
data "aws_iam_policy_document" "rabbitmq_ec2_access_to_s3_distro" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:GetObjectVersion"
]
resources = ["arn:aws:s3:::${var.distro_bucket}", "arn:aws:s3:::${var.distro_bucket}/*"]
}
}
resource "aws_instance" "rabbitmq_instance" {
iam_instance_profile = ${aws_iam_instance_profile.broker_instance_profile.name}
....
}
This sounds like a timing issue where cloud-init runs before the EC2 instance profile is set up and ready to use. In your cloud-init script, I would add a loop that retries a particular AWS CLI command, or query the metadata server to retrieve information about the IAM credentials of the EC2 instance.
As the documentation states, you receive the following response when querying the endpoint http://169.254.169.254/latest/meta-data/iam/security-credentials/iam_role_name:
```json
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2017-05-17T15:09:54Z"
}
```
So your cloud-init/user-data script could wait until the Code attribute equals Success and then proceed with the other operations.
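A minimal sketch of that wait, assuming the script runs as user data with Python available; the role name is a placeholder for whatever `aws_iam_role.broker` resolves to:

```python
import json
import time
import urllib.request

# Placeholder: the actual role name comes from aws_iam_role.broker.
ROLE_NAME = "dev-broker-role"
URL = ("http://169.254.169.254/latest/meta-data/iam/"
       f"security-credentials/{ROLE_NAME}")

# Poll the metadata endpoint until the instance profile credentials
# are delivered; after that it is safe to run `aws s3 cp ...`.
while True:
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            creds = json.load(resp)
        if creds.get("Code") == "Success":
            break
    except Exception:
        pass  # endpoint not ready yet
    time.sleep(5)
```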
I have an application that needs to be deployed on EKS, and I'm having trouble setting up an ingress ALB.
I am using the following as a sample for how this should be set up.
https://github.com/aws-samples/nexus-oss-on-aws/blob/d3a092d72041b65ca1c09d174818b513594d3e11/src/lib/sonatype-nexus3-stack.ts#L207-L242
It's in TypeScript, and I'm converting it to Python. My code is below.
```python
import requests
from aws_cdk import (
    Stack,
    aws_eks as eks,
    aws_ec2 as ec2,
    aws_iam as iam,
    Duration
)
from constructs import Construct


class TestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "test-vpc",
            vpc_name="test-vpc",
            cidr="10.0.0.0/16"
        )

        eks_role = iam.Role(
            self, 'test-eks-role',
            role_name='test-eks-role',
            assumed_by=iam.CompositePrincipal(
                iam.ServicePrincipal('eks.amazonaws.com')
            ),
            managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name('AmazonEKSClusterPolicy')],
        )

        cluster = eks.Cluster(
            self, "test-cluster",
            cluster_name="test-cluster",
            masters_role=eks_role,
            version=eks.KubernetesVersion.V1_21,
            vpc=vpc,
            vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)]
        )

        alb_service_account = cluster.add_service_account(
            'test-cluster-service-account',
            name='test-cluster-service-account'
        )

        alb_controller_url = 'https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.0/docs/install/iam_policy.json'
        policy_json = requests.get(url=alb_controller_url).json()
        for statement in policy_json['Statement']:
            alb_service_account.add_to_principal_policy(iam.PolicyStatement.from_json(statement))

        cluster.add_helm_chart(
            'aws-load-balancer-controller-helm-chart',
            chart='aws-load-balancer-controller',
            repository='https://aws.github.io/eks-charts',
            release='aws-load-balancer-controller',
            version='1.4.1',
            wait=True,
            timeout=Duration.minutes(15),
            values={
                "clusterName": cluster.cluster_name,
                "image": {
                    "repository": "602401143452.dkr.ecr.ap-southeast-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.1",
                },
                "serviceAccount": {
                    "create": False,
                    "name": alb_service_account.service_account_name,
                },
            },
        )
```
Right now I'm getting the following cryptic error message.
```
Received response status [FAILED] from custom resource. Message returned: Error: b'Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress\n'
```
Any advice would be greatly appreciated!
There is an AlbController construct available in the CDK; you could try that and see if it works for you.
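A minimal sketch of that approach, replacing the manual service account and Helm chart wiring inside the stack above (the controller version here is an assumption; pick the one matching your CDK release):

```python
# Inside TestStack.__init__, instead of add_service_account /
# add_helm_chart, let the CDK install and manage the controller:
cluster = eks.Cluster(
    self, "test-cluster",
    cluster_name="test-cluster",
    masters_role=eks_role,
    version=eks.KubernetesVersion.V1_21,
    vpc=vpc,
    alb_controller=eks.AlbControllerOptions(
        version=eks.AlbControllerVersion.V2_4_1
    ),
)
```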
I am actually using the construct myself but am facing the same error message. There is this GitHub issue regarding the Helm error itself; however, the rollback solution mentioned there is not applicable for me, as there appears to be no state of the Helm release despite the error. I have raised this as an issue on the CDK repo.
I have a CloudFormation template to create a SQL Server DB in RDS and want to enable the DELAYED_DURABILITY feature by default by running this query:
```sql
ALTER DATABASE dbname SET DELAYED_DURABILITY = FORCED;
```
Is there a way to run this query right after the DB instance is created through the CF template?
My CF template looks like this:
"Type":"AWS::RDS::DBInstance",
"Properties":{
"AllocatedStorage":"200",
"AutoMinorVersionUpgrade":"false",
"BackupRetentionPeriod":"1",
"DBInstanceClass":"db.m4.large",
"DBInstanceIdentifier":"mydb",
"DBParameterGroupName": {
"Ref": "MyDBParameterGroup"
},
"DBSubnetGroupName":{
"Ref":"dbSubnetGroup"
},
"Engine":"sqlserver-web",
"EngineVersion":"13.00.4422.0.v1",
"LicenseModel":"license-included",
"MasterUsername":"prod_user",
"MasterUserPassword":{ "Ref" : "dbpass" },
"MonitoringInterval":"60",
"MonitoringRoleArn": {
"Fn::GetAtt": [
"RdsMontioringRole",
"Arn"
]
},
"PreferredBackupWindow":"09:39-10:09",
"PreferredMaintenanceWindow":"Sun:08:58-Sun:09:28",
"PubliclyAccessible": false,
"StorageType":"gp2",
"StorageEncrypted": true,
"VPCSecurityGroups":[
{
"Fn::ImportValue":{
"Fn::Sub":"${NetworkStackName}-RDSSecGrp"
}
}
],
"Tags":[
{
"Key":"Name",
"Value":"my-db"
}
]
}
}
> Is there a way to run this query right after the DB instance is created through the CF template?
Depends. If you want to do it from within CloudFormation (CFN), then sadly you can't do this using plain CFN. To do it from CFN, you would have to develop a custom resource. The resource would take the form of a Lambda function: you would pass the DB details to the function in your CFN, and it would connect and execute your query. It could also return any results you want to your CFN for further use.
In contrast, if you create your CFN stack using the AWS CLI or an SDK, then once the create-stack call completes, you can run your query from bash or whatever programming language you use to deploy your stack.
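For the second route, a minimal sketch, assuming a `my-db-stack` stack name, the `mydb` identifier from the template, and `pymssql` as the driver (all of these are placeholders to adapt):

```python
import os

import boto3
import pymssql  # assumption: any SQL Server driver would work here

# Wait for the stack to finish creating, then find the DB endpoint.
cfn = boto3.client("cloudformation")
cfn.get_waiter("stack_create_complete").wait(StackName="my-db-stack")

rds = boto3.client("rds")
endpoint = rds.describe_db_instances(DBInstanceIdentifier="mydb")[
    "DBInstances"][0]["Endpoint"]["Address"]

# Connect and run the ALTER DATABASE statement from the question.
conn = pymssql.connect(server=endpoint, user="prod_user",
                       password=os.environ["DB_PASSWORD"],  # hypothetical env var
                       database="master")
conn.autocommit(True)
cur = conn.cursor()
cur.execute("ALTER DATABASE dbname SET DELAYED_DURABILITY = FORCED;")
conn.close()
```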
I am new to AWS and am trying to update a Lambda function. The function was initially created using a CloudFormation template, with S3Key set to the name of my zip file in the S3 bucket.
"LambdaFunction":{
"Type" : "AWS::Lambda::Function",
"Properties" : {
"Code" : {
"S3Bucket" : {
"Ref":"myBucket"
},
"S3Key" : "lambdaFunction.zip"
},
"FunctionName" : "HandleUserRequests",
"Handler" : "index.handler",
"Role" : {"Fn::GetAtt" : ["LambdaIamRole", "Arn"] },
"Runtime" : "nodejs10.x",
Now I have updated the function locally and triggered CI/CD to upload the updated code zip to the S3 bucket.
I need to update my Lambda function with this new zip uploaded to S3. Can you please explain how deployment works for a Lambda function?
Ideally we wouldn't want to deploy Lambda functions using raw CloudFormation; we should be using the AWS Serverless Application Model (AWS SAM).
This lets us write and keep our code locally, and when you build, package, and deploy the template, the code is automatically uploaded to S3 and linked to the Lambda function.
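If you stay with the current CloudFormation setup, note that re-uploading the zip under the same S3Key will not redeploy the function on a stack update, since CloudFormation only reacts when the Code properties change (e.g. a versioned key). As a stopgap you can point the function at the new zip directly; a minimal boto3 sketch, assuming a placeholder bucket name for whatever the myBucket parameter resolves to:

```python
import boto3

# Point the existing function at the freshly uploaded zip in S3.
# "my-bucket" is a placeholder for the actual bucket name.
client = boto3.client("lambda")
client.update_function_code(
    FunctionName="HandleUserRequests",
    S3Bucket="my-bucket",
    S3Key="lambdaFunction.zip",
    Publish=True,  # publish a new version after the code update
)
```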
I'm working on a Python 3 script designed to get S3 space utilization statistics from AWS CloudWatch using the Boto3 library.
I started with the AWS CLI and found I could get what I'm after with a command like this:
```
aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes --namespace AWS/S3 --start-time 2017-03-06T00:00:00Z --end-time 2017-03-07T00:00:00Z --statistics Average --unit Bytes --r
```
Here is the boto3 equivalent I came up with:
```python
from datetime import datetime, timedelta

import boto3

seconds_in_one_day = 86400  # used for granularity

cloudwatch = boto3.client('cloudwatch')
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    Dimensions=[
        {
            'Name': 'BucketName',
            'Value': 'foo-bar'
        },
        {
            'Name': 'StorageType',
            'Value': 'StandardStorage'
        }
    ],
    MetricName='BucketSizeBytes',
    StartTime=datetime.now() - timedelta(days=7),
    EndTime=datetime.now(),
    Period=seconds_in_one_day,
    Statistics=[
        'Average'
    ],
    Unit='Bytes'
)
print(response)
```
If I execute the above code it returns JSON output, but I want to wrap everything from the cloudwatch client onwards in a function and make it parameterized. The problem is that when I define a function, I get an error saying that the response variable is not defined.
Please suggest how to structure this as a function.
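That error is a scoping issue: response only exists inside the function, so it has to be returned and bound at the call site. A minimal sketch of the parameterized version (the function and parameter names are illustrative):

```python
from datetime import datetime, timedelta

import boto3

seconds_in_one_day = 86400  # used for granularity


def get_bucket_size_stats(bucket_name, storage_type='StandardStorage', days=7):
    """Return daily-average BucketSizeBytes statistics for a bucket."""
    cloudwatch = boto3.client('cloudwatch')
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        Dimensions=[
            {'Name': 'BucketName', 'Value': bucket_name},
            {'Name': 'StorageType', 'Value': storage_type},
        ],
        MetricName='BucketSizeBytes',
        StartTime=datetime.now() - timedelta(days=days),
        EndTime=datetime.now(),
        Period=seconds_in_one_day,
        Statistics=['Average'],
        Unit='Bytes',
    )
    return response  # return it so the caller can bind the result


print(get_bucket_size_stats('foo-bar'))
```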