Public S3 objects with Terraform

I've been attempting to recreate an existing infrastructure using Terraform and one of the required services is an S3 bucket which should contain publicly accessible images.
Here is the Terraform code for the bucket:
resource "aws_s3_bucket" "foo_icons" {
bucket = join("-", [local.prefix, "foo", "icons"])
tags = {
Name = join("-", [local.prefix, "foo", "icons"])
Environment = var.environment
}
}
resource "aws_s3_bucket_acl" "icons_bucket_acl" {
bucket = aws_s3_bucket.foo_icons.id
acl = "public-read"
}
The bucket is populated as follows:
resource "aws_s3_object" "icon_repository_files" {
for_each = fileset("../files/icon-repository/", "**")
bucket = aws_s3_bucket.foo_icons.id
key = each.value
source = "../files/icon-repository/${each.value}"
etag = filemd5("../files/icon-repository/${each.value}")
}
What I see in the console is that the bucket is indeed publicly accessible, but each object in the bucket is not public according to the ACL shown. I also can't reach the S3 objects at the displayed URL; this results in an access-denied error.
So, I guess the question is what is the best way to create a bucket with publicly accessible objects in Terraform?
Thanks in advance.
PS: I read that ACLs are no longer the "modern" approach, so if there is a better way to achieve this, I'd be happy to hear it.
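For reference, the approach that has generally superseded ACLs is a bucket policy combined with a public-access-block resource. A minimal sketch against the bucket above (the resource names and policy wording here are illustrative, not taken from the original setup):

resource "aws_s3_bucket_public_access_block" "foo_icons" {
  bucket = aws_s3_bucket.foo_icons.id

  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_policy" "foo_icons_public_read" {
  bucket = aws_s3_bucket.foo_icons.id

  # Allow anonymous reads of the objects, not of the bucket configuration.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadGetObject"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.foo_icons.arn}/*"
    }]
  })

  depends_on = [aws_s3_bucket_public_access_block.foo_icons]
}

Note that a bucket ACL of public-read applies to the bucket itself, not to the objects in it, which is consistent with the behavior described above; a bucket policy on .../* avoids that per-object problem entirely.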

Related

Terraform tfstate not being created in S3

I am trying to set up a remote backend for my Terraform workflow. My backend block is as follows:
terraform {
  backend "s3" {
    bucket = "terraform-aws-007"
    key    = "global/bananadev/s3/terraform.tfstate"
    region = "eu-west-2"
  }
}
Terraform initialization is successful; however, the state file is being created locally rather than in my S3 bucket.
Any ideas what may be wrong?
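One thing worth checking (an assumption, since the snippet alone can't confirm it): if the backend "s3" block was added after the configuration had already been initialized, Terraform keeps using the previously configured local backend until you re-run terraform init and let it migrate the existing state (newer versions require an explicit flag):

terraform init -migrate-state   # re-initialize and copy the local state into the S3 backend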

Can I version control Cloudflare configuration?

I am utilizing Cloudflare for a public website. Recently, I have been adjusting many different configuration values via the website/UI. Is there a way to download/upload the configuration so that it can be version-controlled?
You can configure Cloudflare using Terraform; check out the Terraform Cloudflare Provider.
You can also use cf-terraforming, a tool from Cloudflare that exports your existing Cloudflare setup as Terraform configuration.
Here is a quick demo of what it would look like using Terraform:
locals {
  domain   = "example.com"
  hostname = "example.com" # TODO: Varies by environment
}

variable "CLOUDFLARE_ACCOUNT_ID" {}
variable "CLOUDFLARE_API_TOKEN" { sensitive = true }

provider "cloudflare" {
  account_id = var.CLOUDFLARE_ACCOUNT_ID
  api_token  = var.CLOUDFLARE_API_TOKEN
}

resource "cloudflare_zone" "default" {
  zone = local.domain
  plan = "free"
}

resource "cloudflare_record" "a" {
  zone_id = cloudflare_zone.default.id
  name    = local.hostname
  value   = "192.0.2.1"
  type    = "A"
  ttl     = 1
  proxied = true
}
For a complete example, see the Terraform Starter Kit:
infra/dns-zone.tf
infra/dns-records.tf
It works especially well with the Terraform Cloud backend, which offers free accounts.
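As a rough illustration of cf-terraforming (treat the exact flags as an assumption; they vary between releases, so check cf-terraforming --help for your version):

export CLOUDFLARE_API_TOKEN="..."
cf-terraforming generate --resource-type "cloudflare_record" --zone <zone-id> > dns-records.tf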

How to save and get plain text data in an Amazon AWS S3 bucket using ASP.NET MVC?

I am trying to save plain text data in an AWS S3 bucket using ASP.NET MVC. Can you help me achieve this?
Saving and getting data in an AWS S3 bucket with ASP.NET MVC:
To save plain text data to an Amazon S3 bucket:
1. First, you need a bucket created on AWS.
2. You need your AWS credentials: a) AWS key, b) AWS secret key, c) region.
// Code to save data to AWS. Note: you may get an access-denied error; if so, check your AWS account and grant read and write rights.
Namespaces to add (from the AWSSDK.S3 NuGet package):
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
try
{
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);

    // Simple object put
    PutObjectRequest request = new PutObjectRequest()
    {
        ContentBody = "put your plain text here",
        ContentType = "text/plain",
        BucketName = "put your bucket name here",
        // Use a unique key to identify your data, e.g. a primary key from your DB.
        Key = "1"
    };
    PutObjectResponse response = client.PutObject(request);
}
catch (Exception ex)
{
    // Handle or log the error here.
}
Now go to your AWS account and check the bucket: you should see an object named "1". Note: if you run into any other issue, ask a question here and I will try to resolve it.
To get data from the AWS S3 bucket:
try
{
    var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    GetObjectRequest request = new GetObjectRequest()
    {
        BucketName = bucketName,
        Key = "1" // Because we passed "1" as the unique key when saving
    };
    using (GetObjectResponse response = client.GetObject(request))
    using (StreamReader reader = new StreamReader(response.ResponseStream))
    {
        vccEncryptedData = reader.ReadToEnd();
    }
}
catch (AmazonS3Exception)
{
    throw;
}

How to download a file from a private S3 bucket without the AWS CLI

Is it possible to download a file from AWS S3 without the AWS CLI? On my production server I need to download a config file that lives in an S3 bucket.
I was thinking of having AWS Systems Manager run a script that would download the config (YAML files) from S3, but we do not want to install the AWS CLI on the production machines. How can I go about this?
You would need some sort of program to call the Amazon S3 API to retrieve the object. For example, a PowerShell script (using AWS Tools for Windows PowerShell) or a Python script that uses the AWS SDK.
You could alternatively generate an Amazon S3 pre-signed URL, which would allow a private object to be downloaded from Amazon S3 via a normal HTTPS call (e.g. curl). This can be done easily using the AWS SDK for Python, or you could code it yourself without using libraries (it's a bit more complex).
In all examples above, you would need to provide the script/program with a set of IAM Credentials for authenticating with AWS.
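As an illustration of the pre-signed URL route (the bucket and key names here are made up): the URL can be generated on any machine that already has credentials and tooling, so the production server needs nothing more than a plain HTTPS client:

# On a machine that has AWS credentials (not the production server):
aws s3 presign s3://my-config-bucket/app/config.yaml --expires-in 3600

# On the production server, fetch the printed URL before it expires:
curl -o config.yaml "<pre-signed URL>"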
Just adding notes for any C# lovers solving this problem with .NET.
First, write the C# code to download the private file as a string:
public string DownloadPrivateFileS3(string fileKey)
{
    // Requires: using Amazon; using Amazon.S3; using Amazon.S3.Transfer; using System.IO;
    string accessKey = "YOURVALUE";
    string accessSecret = "YOURVALUE";
    string bucket = "YOURVALUE";
    using (var s3Client = new AmazonS3Client(accessKey, accessSecret, RegionEndpoint.GetBySystemName("YOURVALUE")))
    {
        var folderPath = "AppData/Websites/Cases";
        var fileTransferUtility = new TransferUtility(s3Client);
        using (Stream stream = fileTransferUtility.OpenStream(bucket, folderPath + "/" + fileKey))
        using (var memoryStream = new MemoryStream())
        {
            stream.CopyTo(memoryStream);
            var response = memoryStream.ToArray();
            return Convert.ToBase64String(response);
        }
    }
}
Second, write the jQuery code to download the Base64 string as a file:
function downloadPrivateFile() {
    $.ajax({
        url: 'DownloadPrivateFileS3?fileKey=' + fileName, // parameter name must match the C# action's "fileKey"
        success: function (result) {
            var link = document.createElement('a');
            link.download = fileName;
            link.href = "data:application/octet-stream;base64," + result;
            document.body.appendChild(link);
            link.click();
            document.body.removeChild(link);
        }
    });
}
Call the downloadPrivateFile method from anywhere in your HTML/C#/jQuery.
Enjoy, and happy coding!

Terraform: use a backend in a module

I need to optimize the structure of my Terraform configuration.
On the root path I have variables which I import like a module:
/variables.tf
variable "aws_profile" { default = "default" }
variable "aws_region" { default = "us-east-1" }
Then I have a module folder:
/ec2_instance/main.tf

module "global_vars" {
  source = "../"
}

provider "aws" {
  region  = module.global_vars.aws_region
  profile = module.global_vars.aws_profile
}

terraform {
  backend "s3" {
    encrypt = true
    bucket  = "some_bucket"
    key     = "path_to_statefile/terraform.tfstate"
    region  = "region"
    profile = "profile"
  }
}

module "instances_cluster" {
  # some actions
}
It's working, but I need to move the backend and provider parts to main.tf in the root folder and then include the rest as a module.
How can I do this?
I have tried creating /main.tf in the root folder with the backend block, but it does not work and the backend writes state files locally.
You'd have to do a bit of refactoring, but these are the steps I would take:
1. Run terraform plan in the root and ec2_instance modules to verify zero changes, so refactoring can begin.
2. Comment out the backend block in ec2_instance/main.tf.
3. Place the backend from ec2_instance/main.tf into the root main.tf.
4. In the root main.tf, add a reference to the ec2_instance module.
5. Run terraform plan in the root module and note the creations and deletions.
6. For each creation/deletion pair, create a terraform state mv statement and run it (see the sketch after this list).
7. Verify that terraform plan now shows zero changes.
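For step 6, a move typically looks like this (the resource addresses below are hypothetical; take the real ones from the plan output):

terraform state mv \
  'module.instances_cluster.aws_instance.this[0]' \
  'module.ec2_instance.module.instances_cluster.aws_instance.this[0]'

Each successful move re-homes the existing object in the state, so Terraform no longer wants to destroy and recreate it.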