Terraform tfstate s3 not creating - amazon-s3

I am trying to set up a remote backend for my Terraform workflow. My backend block is as follows:
terraform {
  backend "s3" {
    bucket = "terraform-aws-007"
    key    = "global/bananadev/s3/terraform.tfstate"
    region = "eu-west-2"
  }
}
Terraform initialization is successful; however, the state file is being created locally instead of in my S3 bucket.
Any ideas what may be wrong?
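One thing worth checking (an assumption, since the question doesn't show the init output): if a local terraform.tfstate already existed before the backend block was added, Terraform keeps using it until the state is migrated. A minimal sketch of that fix is to re-run init with state migration:

terraform init -migrate-state   # offers to copy the existing local state into the S3 backend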

Related

How to add a behavior to an existing AWS CloudFront distribution for an API Gateway using AWS CDK (TypeScript preferred)?

I am trying to implement a CDK project that will deploy a static website in an S3 bucket along with a CloudFront distribution. I also have an API Gateway that I need to access via the same CloudFront URL. I am able to do this from the AWS Management Console, but when I try to implement it using CDK, I get circular dependency errors.
const cdn = new cloudfront.Distribution(this, "websitecdn", {
  defaultBehavior: { origin: new origins.S3Origin(s3_bucket) },
});

const api = new apigw.RestApi(this, 'someapi', { defaultCorsPreflightOptions: enableCors });
const loginApi = api.root.addResource('login', { defaultCorsPreflightOptions: enableCors });
loginApi.addMethod('POST',
  new apigw.LambdaIntegration(loginLambda, {
    proxy: false,
    integrationResponses: [LambdaIntegrationResponses],
  }),
  {
    methodResponses: [LambdaMethodResponses],
  });

const apiOrigin = new origins.RestApiOrigin(api);
cdn.addBehavior("/prod/*", apiOrigin, {
  allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
  cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
  viewerProtocolPolicy: ViewerProtocolPolicy.HTTPS_ONLY,
});
Everything works fine until I try to add the behavior for the API Gateway to the CDN; at that point it starts throwing circular dependency errors.
What I am trying to do using AWS CDK (TypeScript):
deploy a static s3 website
create a CloudFront Distribution for this website -> let's call it cdn_x
deploy backend API (Lambda functions with API Gateway)
Add the API gateway URL as a behavior to cdn_x so that I can use the same URL for API calls as well (I do not have a custom domain)
I was expecting the deployment to go through fine, as I was able to do it in the AWS Management Console (the web UI of AWS), but trying to do the same using AWS CDK throws circular dependency errors.
It is unclear from your example how the stacks and resources in your CDK project are created and related. I'm unable to use your code examples.
In the meantime, I created a TypeScript example using multiple behaviors in CloudFront, with Amazon API Gateway under the /api/* path and an S3 bucket as the default behavior serving static assets under /*.
The final CDK structure is the following. The codebase uses multiple stacks:
cloudfront-stack.ts
rest-api-stack.ts
s3-stack.ts
waf-stack.ts
Resources are passed between stacks as references in bin/infra.ts:
const app = new cdk.App();

const s3Stack = new S3Stack(app, "S3Stack");
const restApiStack = new RestApiStack(app, "RestApiStack");
const wafStack = new WafStack(app, "WafStack", {
  restApi: restApiStack.restApi,
});
const cloudFrontStack = new CloudFrontStack(app, "CloudFrontStack", {
  bucketAssets: s3Stack.bucketAssets,
  restApi: restApiStack.restApi,
  wafCloudFrontAclArn: wafStack.wafCloudFrontAclArn,
  wafRestApiOriginVerifyHeader: wafStack.wafRestApiOriginVerifyHeader,
  wafRestApiOriginVerifyHeaderValue: wafStack.wafRestApiOriginVerifyHeaderValue,
});
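As a rough sketch of what cloudfront-stack.ts can look like (simplified from the approach above and omitting the WAF wiring, so everything beyond the bucketAssets and restApi props is an assumption, not the repository's exact code): the RestApi is passed in as a prop and the /api/* behavior is declared at construction time instead of being added later with addBehavior(), which is what avoids the circular dependency.

import * as cdk from "aws-cdk-lib";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as apigw from "aws-cdk-lib/aws-apigateway";
import { Construct } from "constructs";

interface CloudFrontStackProps extends cdk.StackProps {
  bucketAssets: s3.Bucket;
  restApi: apigw.RestApi;
}

export class CloudFrontStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: CloudFrontStackProps) {
    super(scope, id, props);

    new cloudfront.Distribution(this, "websitecdn", {
      // Default behavior: serve the static site from the S3 bucket.
      defaultBehavior: { origin: new origins.S3Origin(props.bucketAssets) },
      additionalBehaviors: {
        // API traffic goes to API Gateway under /api/*, declared up front
        // rather than added to an already-constructed distribution.
        "/api/*": {
          origin: new origins.RestApiOrigin(props.restApi),
          allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
          cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
          viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.HTTPS_ONLY,
        },
      },
    });
  }
}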
GitHub repository:
https://github.com/oieduardorabelo/cdk-cloudfront-behavior-api-gateway-waf-protection
I trust the example above will clarify some of your questions.

Apache Flink S3 file system credentials do not work

I am trying to read a CSV file from Amazon S3 and I need to set the credential info at runtime, but I can't get past the credentials check.
Is there any alternative or suggestion?
object AwsS3CSVTest {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    conf.setString("fs.s3a.access.key", "***")
    conf.setString("fs.s3a.secret.key", "***")

    val env = ExecutionEnvironment.createLocalEnvironment(conf)
    val datafile = env.readCsvFile("s3a://anybucket/anyfile.csv")
      .ignoreFirstLine()
      .fieldDelimiter(";")
      .types(classOf[String], classOf[String], classOf[String], classOf[String], classOf[String], classOf[String])

    datafile.print()
  }
}
00:49:55.558|DEBUG| o.a.h.f.s.AWSCredentialProviderList No credentials from TemporaryAWSCredentialsProvider: org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: Session credentials in Hadoop configuration: No AWS Credentials
00:49:55.558|DEBUG| o.a.h.f.s.AWSCredentialProviderList No credentials from SimpleAWSCredentialsProvider: org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: SimpleAWSCredentialsProvider: No AWS credentials in the Hadoop configuration
00:49:55.558|DEBUG| o.a.h.f.s.AWSCredentialProviderList No credentials provided by EnvironmentVariableCredentialsProvider: com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
As explained at https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/filesystems/s3/#configure-access-credentials, you should use IAM roles or access keys, which you configure in flink-conf.yaml. You can't set the credentials in code, because the S3 filesystem implementations are loaded via Flink's plugin mechanism.
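A minimal flink-conf.yaml sketch based on the linked documentation (assuming the S3 filesystem, e.g. flink-s3-fs-hadoop, is already installed under the plugins/ directory):

s3.access-key: your-access-key
s3.secret-key: your-secret-key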

Public S3 objects with Terraform

I've been attempting to recreate existing infrastructure using Terraform, and one of the required services is an S3 bucket which should contain publicly accessible images.
Here is the Terraform code for the bucket:
resource "aws_s3_bucket" "foo_icons" {
bucket = join("-", [local.prefix, "foo", "icons"])
tags = {
Name = join("-", [local.prefix, "foo", "icons"])
Environment = var.environment
}
}
resource "aws_s3_bucket_acl" "icons_bucket_acl" {
bucket = aws_s3_bucket.foo_icons.id
acl = "public-read"
}
The bucket is populated as follows:
resource "aws_s3_object" "icon_repository_files" {
for_each = fileset("../files/icon-repository/", "**")
bucket = aws_s3_bucket.foo_icons.id
key = each.value
source = "../files/icon-repository/${each.value}"
etag = filemd5("../files/icon-repository/${each.value}")
}
The result I can see in the console is that the bucket is in fact publicly accessible, but each object in the bucket is not public according to the ACL shown. I also can't reach the S3 objects with the displayed URL; this results in Access Denied.
So I guess the question is: what is the best way to create a bucket with publicly accessible objects in Terraform?
Thanks in advance.
P.S. I read that ACLs are no longer the "modern" approach, so if there is a better way to achieve this, I'd be happy to hear it.
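A commonly suggested alternative to object ACLs is a bucket policy that grants public read on all objects. The sketch below is an assumption based on that pattern (reusing the question's aws_s3_bucket.foo_icons resource), not something from this thread, and account-level public access settings can still override it:

# Allow public bucket policies on this bucket.
resource "aws_s3_bucket_public_access_block" "foo_icons" {
  bucket = aws_s3_bucket.foo_icons.id

  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = false
  restrict_public_buckets = false
}

# Grant anonymous read access to every object in the bucket.
resource "aws_s3_bucket_policy" "foo_icons_public_read" {
  bucket = aws_s3_bucket.foo_icons.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadGetObject"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.foo_icons.arn}/*"
    }]
  })

  depends_on = [aws_s3_bucket_public_access_block.foo_icons]
}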

JMeter - How to copy files from one AWS S3 bucket to another bucket?

I have tar.zip files placed in the newbucket AWS S3 location. I have a script which will cut each file and place it in another S3 bucket. Every time, I need to upload the files from local to newbucket using a JSR223 PreProcessor. Can I copy and paste a file in S3 from one bucket to another bucket?
I think the "official" way is to use AWS CLI in general and aws s3 sync command in particular:
aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE s3://DOC-EXAMPLE-BUCKET-TARGET
The command can be kicked off either from a JSR223 Sampler or from an OS Process Sampler.
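For instance, a minimal JSR223 Sampler sketch in Groovy that just shells out to the CLI (the bucket names are the placeholders from the command above):

// Run the AWS CLI sync from a JSR223 Sampler and log its output.
def proc = ['aws', 's3', 'sync', 's3://DOC-EXAMPLE-BUCKET-SOURCE', 's3://DOC-EXAMPLE-BUCKET-TARGET'].execute()
proc.waitFor()
log.info(proc.in.text)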
If you prefer doing this programmatically, check out the Copy an Object Using the AWS SDK for Java article; here is the code snippet just in case:
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;

import java.io.IOException;

public class CopyObjectSingleOperation {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String sourceKey = "*** Source object key *** ";
        String destinationKey = "*** Destination object key ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            // Copy the object into a new object in the same bucket.
            CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName, sourceKey, bucketName, destinationKey);
            s3Client.copyObject(copyObjRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}

Terraform use backend on module

I need to optimize the structure of my Terraform project.
On the root path I have variables which I import like a module:
/variables.tf
variable "aws_profile" { default = "default" }
variable "aws_region" { default = "us-east-1" }
Then I have a module folder:
/ec2_instance/main.tf
module "global_vars" {
source = "../"
}
provider "aws" {
region = module.global_vars.aws_region
profile = module.global_vars.aws_profile
}
terraform {
backend "s3" {
encrypt = true
bucket = "some_bucket"
key = "path_to_statefile/terraform.tfstate"
region = "region"
profile = "profile"
}
}
module "instances_cluster" {
some actions
}
It's working, but I need to move the backend and provider parts to main.tf in the root folder and then include them like a module.
How can I do this?
I have tried to create /main.tf in the root folder with the backend block, but it does not work and the backend writes state files locally.
You'd have to do a bit of refactoring, but these are the steps I would take:
Run terraform plan in root and ec2_instance modules to verify zero changes so refactoring can begin
Comment out the backend for ec2_instance/main.tf
Place the backend from ec2_instance/main.tf into root main.tf
In the root main.tf, make a reference to ec2_instance module
Run terraform plan in the root module and note the creations and deletions
For each creation and deletion pair, create a terraform state mv statement and run it (see the sketch after this list)
Verify the terraform plan has zero changes
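For example, for a hypothetical pair (the resource addresses below are made up for illustration; use the exact addresses shown in your own plan output):

# Move the existing state entry to its new address under the root-level module block.
terraform state mv \
  'module.instances_cluster.aws_instance.this' \
  'module.ec2_instance.module.instances_cluster.aws_instance.this'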