serverless-s3-local writing to real S3 bucket

I am using the Serverless Framework with the serverless-s3-local plugin to test my code during development. However, despite running in offline mode, writes go to the real S3 bucket. How can I alter my configuration so that a local fake S3 bucket is used when in offline mode?
Relevant serverless.yml sections:
plugins:
  - serverless-stack-output
  - serverless-plugin-include-dependencies
  - serverless-layers
  - serverless-deployment-bucket
  - serverless-s3-local
  - serverless-offline

custom:
  #...
  s3:
    bucketName: test-s3-buck
    host: localhost
  serverless-offline:
    ignoreJWTSignature: true
    httpPort: 4000
    noAuth: true
  directory: /tmp

resources:
  Resources:
    #...
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.s3.bucketName}
Endpoint Calling S3:
import boto3

def post(event, context):
    s3_path = "/test.txt"
    body = "test"
    encoded_string = body.encode("utf-8")

    s3 = boto3.resource("s3")
    bucket_name = "test-s3-buck"
    s3.Bucket(bucket_name).put_object(Key=s3_path, Body=encoded_string)

    response = {
        "statusCode": 200,
        "body": "Created."
    }
    return response
Launching Serverless Offline:
serverless offline start

In the serverless-s3-local README we have:
const S3 = new AWS.S3({
  s3ForcePathStyle: true,
  accessKeyId: 'S3RVER', // This specific key is required when working offline
  secretAccessKey: 'S3RVER',
  endpoint: new AWS.Endpoint('http://localhost:4569'),
});
You can achieve the same with boto3:
import boto3

client = boto3.client(
    's3',
    aws_access_key_id='S3RVER',
    aws_secret_access_key='S3RVER'
)
This means that when you run serverless offline start you need to set the AWS access key ID to S3RVER and the AWS secret access key to S3RVER; otherwise, the real bucket will be used.
The README also has instructions for setting up an s3local AWS profile: https://github.com/ar90n/serverless-s3-local#triggering-aws-events-offline
Another way to achieve this is to run your command with environment variables:
AWS_ACCESS_KEY_ID=S3RVER AWS_SECRET_ACCESS_KEY=S3RVER serverless offline start
That way, the aws-sdk inside your code will read the correct values for offline mode.
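For completeness, you can also point boto3 at the local endpoint explicitly, mirroring the JavaScript snippet from the README. This is a minimal sketch assuming serverless-s3-local is listening on its default port 4569 (adjust host/port if you have overridden them under custom.s3):

import boto3

# Sketch: talk to the local S3 emulator instead of real AWS.
# 'http://localhost:4569' is serverless-s3-local's default endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4569",
    aws_access_key_id="S3RVER",      # dummy credentials expected by the plugin
    aws_secret_access_key="S3RVER",
)
s3.put_object(Bucket="test-s3-buck", Key="test.txt", Body=b"test")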

Related

When deploying new AWS S3 Buckets via Terraform I receive an error that resource already exists

I've created a little test environment, using Gitlab and Terraform to deploy infrastructure to AWS. I'm still very new to this so apologies if this is a stupid question.
I have a file called s3.tf. I created a new AWS bucket in there with no issues at all, pushed the branch to Gitlab and merged it, the pipeline ran, and Terraform deployed the new S3 bucket to AWS.
Now I wanted to test creating another S3 bucket, so I duplicated the code in s3.tf and just adjusted the name of the bucket etc. When I merge this merge request in Gitlab the pipeline still succeeds and the new bucket is created, but I get this warning/error:
│ Error: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
│ status code: 409, request id: TEZR4YQMYQPF6QYD, host id: HXt04y7lxANMaIp94g5rnovGwHduElooxrMGDMCdIfuswmtBAsmRdah3Rkx5cBkzaMfRwbid3l6sGHBPoOG7ew==
│
│ with aws_s3_bucket.terraform-state-storage-s3-witcher,
│ on s3.tf line 1, in resource "aws_s3_bucket" "terraform-state-storage-s3-witcher":
│ 1: resource "aws_s3_bucket" "terraform-state-storage-s3-witcher" {
Is it expected that I will see this error every single time I deploy a new S3 bucket? Or can I adjust where the terraform apply runs from that s3.tf file?
Thanks!
(Current contents of s3.tf):
resource "aws_s3_bucket" "terraform-state-storage-s3-witcher" {
bucket = "gb-terraform-state-s3-witcher"
versioning {
# enable with caution, makes deleting S3 buckets tricky
enabled = false
}
lifecycle {
prevent_destroy = false
}
tags = {
name = "S3 Remote Terraform State Store"
}
}
resource "aws_s3_bucket" "terraform-state-storage-s3-witcherv2" {
bucket = "gb-terraform-state-s3-witcherv2"
versioning {
# enable with caution, makes deleting S3 buckets tricky
enabled = false
}
lifecycle {
prevent_destroy = false
}
tags = {
name = "S3 Remote Terraform State Store"
}
}

Terraform init Error: Failed to get existing workspaces: S3 bucket does not exist

Hi, I have an issue with Terraform not being able to see the S3 bucket when I specify it as a backend.
aws --profile terraform s3api create-bucket --bucket "some_name_here" --region "eu-west-2" \
--create-bucket-configuration LocationConstraint="eu-west-2"
terraform init
Initializing modules...
Initializing the backend...
Error: Failed to get existing workspaces: S3 bucket does not exist.
The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.
Error: NoSuchBucket: The specified bucket does not exist
status code: 404, request id: QYJT8KP0W4TM986A, host id: a7R1EOOnIhP6YzDcKd66zdyCJ8wk6lVom/tohsc0ipUe5yEJK1/V4bLGX9khi4q4/J7d4BgYXCc=
backend.tf
terraform {
  backend "s3" {
    bucket = "some_name_here"
    key    = "networking/terraform.tfstate"
    region = "eu-west-2"
  }
}
provider.tf
provider "aws" {
region = "eu-west-2"
shared_credentials_file = "$HOME/.aws/credentials"
profile = "terraform"
}
I can see the bucket in the dashboard
It looks like you are using a profile in the command to create the bucket. Therefore, you probably need to export a variable in the environment running Terraform so that it uses this same profile. I imagine that without this profile (or another with sufficient permissions) Terraform is unable to read from the bucket.
export AWS_PROFILE=terraform
terraform init
Alternatively, you can pass the profile into the backend configuration, like:
terraform {
  backend "s3" {
    bucket  = "some_name_here"
    key     = "networking/terraform.tfstate"
    profile = "terraform"
    region  = "eu-west-2"
  }
}
To summarize, the simplest configuration is:
terraform {
  backend "s3" {
    bucket = "some_name_here"
    key    = "networking/terraform.tfstate"
    region = "eu-west-2"
  }
}

provider "aws" {
  region = "eu-west-2"
}
then:
export AWS_PROFILE=terraform
aws s3api create-bucket --bucket "some_name_here" --region "eu-west-2" --create-bucket-configuration LocationConstraint="eu-west-2"
terraform init
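If you want to double-check that the profile actually resolves to credentials that can see the bucket, here is a small diagnostic sketch in boto3 (illustration only, not part of the original answer; the bucket placeholder is reused from above):

import boto3

# Sketch: confirm the 'terraform' profile resolves to an account that can see the backend bucket.
session = boto3.Session(profile_name="terraform")
print(session.client("sts").get_caller_identity()["Account"])  # which account these credentials belong to
session.client("s3").head_bucket(Bucket="some_name_here")      # raises ClientError if the bucket is not visible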

How do you obtain an aws-iam-token to access S3 using IRSA?

I've created an IRSA role in Terraform so that the associated service account can be used by a K8s job to access an S3 bucket, but I keep getting an AccessDenied error within the job.
I first enabled IRSA in our EKS cluster with enable_irsa = true in our eks module.
I then created a simple aws_iam_policy as:
resource "aws_iam_policy" "eks_s3_access_policy" {
name = "eks_s3_access_policy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:*",
]
Effect = "Allow"
Resource = "arn:aws:s3:::*"
},
]
})
}
and an iam-assumable-role-with-oidc:
module "iam_assumable_role_with_oidc_for_s3_access" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "~> 3.0"
create_role = true
role_name = "eks-s3-access"
role_description = "Role to access s3 bucket"
tags = { Role = "eks_s3_access_policy" }
provider_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
role_policy_arns = [aws_iam_policy.eks_s3_access_policy.arn]
number_of_role_policy_arns = 1
oidc_fully_qualified_subjects = ["system:serviceaccount:default:my-user"]
}
I created a K8s service account using Helm like:
Name:                my-user
Namespace:           default
Labels:              app.kubernetes.io/managed-by=Helm
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::111111:role/eks-s3-access
                     meta.helm.sh/release-name: XXXX
                     meta.helm.sh/release-namespace: default
Image pull secrets:  <none>
Mountable secrets:   my-user-token-kwwpq
Tokens:              my-user-token-kwwpq
Events:              <none>
Finally, jobs are created using the K8s API from a job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
  namespace: default
spec:
  template:
    spec:
      serviceAccountName: my-user
      containers:
        - name: {{ .Chart.Name }}
          env:
            - name: AWS_ROLE_ARN
              value: arn:aws:iam::746181457053:role/eks-s3-access
            - name: AWS_WEB_IDENTITY_TOKEN_FILE
              value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
          volumeMounts:
            - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
              name: aws-iam-token
              readOnly: true
      volumes:
        - name: aws-iam-token
          projected:
            defaultMode: 420
            sources:
              - serviceAccountToken:
                  audience: sts.amazonaws.com
                  expirationSeconds: 86400
                  path: token
When the job attempts to get the specified credentials, however, the specified token is not there:
2021-08-03 18:02:41 Refreshing temporary credentials failed during mandatory refresh period.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 291, in _protected_refresh
    metadata = await self._refresh_using()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 345, in fetch_credentials
    return await self._get_cached_credentials()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 355, in _get_cached_credentials
    response = await self._get_credentials()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 410, in _get_credentials
    kwargs = self._assume_role_kwargs()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 420, in _assume_role_kwargs
    identity_token = self._web_identity_token_loader()
  File "/usr/local/lib/python3.7/site-packages/botocore/utils.py", line 2365, in __call__
    with self._open(self._web_identity_token_path) as token_file:
FileNotFoundError: [Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token'
From what is described in https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/ a webhook typically creates these credentials when the pod is created. However, since we're creating the new K8s Job on demand from within the cluster, I suspect that the webhook is not creating any such credentials.
How can I request the correct credentials to be created from within a K8s cluster? Is there a way to instantiate the webhook from within the cluster?
There are a couple of things that could cause this to fail.
Check all settings for the IRSA role. For the trust relationship, check that the namespace name and the service account name are correct. The role can only be assumed if these settings match.
While the pod is running, try to access it with a shell. Check the content of the AWS_* environment variables. Check that AWS_ROLE_ARN points to the correct role. Check whether the file that AWS_WEB_IDENTITY_TOKEN_FILE points to is in place and readable; just try to cat the file.
If you are running your pod as non-root (which is recommended for security reasons), make sure the user running the pod has access to the file. If not, adjust the securityContext for the pod; the fsGroup setting may help here. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context
Make sure the SDK your pod is using supports IRSA. If you are using older SDKs, IRSA may not be supported. Look into the IRSA documentation for supported SDK versions. https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html
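If the token file is present and readable, one way to confirm the trust relationship itself is to exchange the token for credentials manually. A minimal boto3 sketch, run from inside the pod, using the role ARN and token path from the environment variables defined in the job spec above:

import os
import boto3

# Sketch: manually exchange the projected service-account token for AWS credentials.
with open(os.environ["AWS_WEB_IDENTITY_TOKEN_FILE"]) as f:
    token = f.read()

sts = boto3.client("sts")
resp = sts.assume_role_with_web_identity(
    RoleArn=os.environ["AWS_ROLE_ARN"],
    RoleSessionName="irsa-debug",   # arbitrary session name for this test
    WebIdentityToken=token,
)
print(resp["AssumedRoleUser"]["Arn"])  # should show the eks-s3-access role

If this call fails, the problem is likely in the role's trust policy rather than in the pod configuration.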

Access denied for s3 bucket for terraform backend

My terraform code is as below:
# PROVIDERS
provider "aws" {
  profile = var.aws_profile
  region  = var.region
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0.4"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "terraform-backend-20200102"
    key    = "test.tfstate"
  }
}

# DATA
data "aws_availability_zones" "available" {}

data "template_file" "public_cidrsubnet" {
  count    = var.subnet_count
  template = "$${cidrsubnet(vpc_cidr,8,current_count)}"

  vars = {
    vpc_cidr      = var.network_address_space
    current_count = count.index
  }
}

# RESOURCES
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  name    = var.name
  version = "2.62.0"

  cidr            = var.network_address_space
  azs             = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)
  public_subnets  = []
  private_subnets = data.template_file.public_cidrsubnet[*].rendered

  tags = local.common_tags
}
However, when I run terraform init, it gives me an error.
$ terraform.exe init -reconfigure
Initializing modules...
Initializing the backend...
region
AWS region of the S3 Bucket and DynamoDB Table (if used).
Enter a value: ap-southeast-2
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: A2EB50094A12E22F, host id: JFwXo11eiAW3N0JL1Yoi/i1k03aqzSIwj34NOgMT/ScgmBEC/nncjsK/GKik0SFIT6Ym8Mr/j6U=
/vpc_create
$ aws s3 ls --profile=tcp-aws-sandbox-31
2020-11-02 23:05:48 terraform-backend-20200102
Note that I can list my bucket with the aws s3 ls command, so why does Terraform have any issue?
P.S.: I tried falling back to the local state file, hence I commented out the backend block, but it still gives me an error. Please assist.
# terraform {
#   backend "s3" {
#     bucket = "terraform-backend-20200102"
#     key    = "test.tfstate"
#   }
# }
Ran aws configure and then it worked.
For some reason it was picking up the wrong account even though I had set the correct AWS profile in the ~/.aws/credentials file.
The way I realized it was using the wrong account was by running terraform apply after export TF_LOG=DEBUG.
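If you want to see which account your credentials resolve to without digging through debug logs, a small boto3 sketch can help (illustration only; the profile name is the one shown in the question, adjust to your own):

import boto3

# Sketch: print the account and ARN that the default credentials and a named profile resolve to.
for profile in (None, "tcp-aws-sandbox-31"):
    session = boto3.Session(profile_name=profile)
    ident = session.client("sts").get_caller_identity()
    print(profile or "default", ident["Account"], ident["Arn"])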

Amazon S3 : The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256 [duplicate]

I get an error AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.
Script:
backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'

s3 = AWS::S3.new(
  access_key_id: AMAZONS3['access_key_id'],
  secret_access_key: AMAZONS3['secret_access_key']
)
s3_bucket = s3.buckets['test-frankfurt']

# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"

file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)
aws-sdk (1.56.0)
How to fix it?
Thank you.
AWS4-HMAC-SHA256, also known as Signature Version 4, ("V4") is one of two authentication schemes supported by S3.
All regions support V4, but US-Standard¹, and many -- but not all -- other regions, also support the other, older scheme, Signature Version 2 ("V2").
According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html ... new S3 regions deployed after January, 2014 will only support V4.
Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.
I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written,
"Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.
With node, try
var s3 = new AWS.S3({
  endpoint: 's3-eu-central-1.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-central-1'
});
You should set signatureVersion: 'v4' in the config to use the new signature version:
AWS.config.update({
  signatureVersion: 'v4'
});
Works for JS sdk.
For people using boto3 (the Python SDK), use the code below:
import boto3
from botocore.client import Config

s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    config=Config(signature_version='s3v4')
)
I have been using Django, and I had to add these extra config variables to make this work. (in addition to settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html).
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Similar issue with the PHP SDK, this works:
$s3Client = S3Client::factory(array('key'=>YOUR_AWS_KEY, 'secret'=>YOUR_AWS_SECRET, 'signature' => 'v4', 'region'=>'eu-central-1'));
The important bits are the signature and the region.
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
This also saved me time after searching for 24 hours.
Code for Flask (boto3)
Don't forget to import Config. Also, if you have your own Config class, change its name.
from botocore.client import Config

s3 = boto3.client(
    's3',
    config=Config(signature_version='s3v4'),
    region_name=app.config["AWS_REGION"],
    aws_access_key_id=app.config['AWS_ACCESS_KEY'],
    aws_secret_access_key=app.config['AWS_SECRET_KEY']
)
s3.upload_fileobj(file, app.config["AWS_BUCKET_NAME"], file.filename)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': app.config["AWS_BUCKET_NAME"], 'Key': file.filename},
    ExpiresIn=10000
)
In Java I had to set a property
System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true");
and add the region to the s3Client instance.
s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1));
With boto3, this is the code:
s3_client = boto3.resource('s3', region_name='eu-central-1')
or
s3_client = boto3.client('s3', region_name='eu-central-1')
For thumbor-aws, which used the boto config, I needed to put this in the $AWS_CONFIG_FILE:
[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3
So this may be useful for anything that uses boto directly without changes.
Supernova's answer for django/boto3/django-storages worked for me:
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Just add them to your settings.py and change the region code accordingly. You can look up the AWS region codes in the AWS Regions documentation.
For Android SDK, setEndpoint solves the problem, although it's been deprecated.
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
        context, "identityPoolId", Regions.US_EAST_1);

AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");
Basically, the error was because I was using an old version of the aws-sdk and I updated the version, so this error occurred.
In my case with Node.js I was using signatureVersion in the params object like this:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    signatureVersion: 'v4',
    region: process.env.AWS_S3_REGION
  }
});
Then I moved signatureVersion out of the params object and it worked like a charm:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    region: process.env.AWS_S3_REGION
  },
  signatureVersion: 'v4'
});
Check your AWS S3 bucket region and pass the proper region in the connection request.
In my scenario I have set 'APSouth1' for Asia Pacific (Mumbai):
using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
    GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
    {
        BucketName = bucketName,
        Key = keyName,
        Expires = DateTime.Now.AddMinutes(50),
    };
    urlString = client.GetPreSignedURL(request1);
}
In my case, the request type was wrong. I was using GET (dumb); it must be PUT.
Here is the function I used with Python:
def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client(
        's3',
        endpoint_url=settings.BUCKET_ENDPOINT_URL,
        aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
        aws_secret_access_key=settings.BUCKET_SECRET_KEY,
        region_name=settings.BUCKET_REGION_NAME
    )
    try:
        s3.upload_file(
            filePath,
            settings.BUCKET_NAME,
            s3FileName
        )
        # remove file from local to free up space
        os.remove(filePath)
        return True
    except Exception as e:
        logger.error('uploadFileToS3#Error')
        logger.error(e)
        return False
Sometimes the default version will not update. Add this setting
AWS_S3_SIGNATURE_VERSION = "s3v4"
in settings.py.
For Boto3, use this code:
import boto3
from botocore.client import Config

s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    region_name='us-south-1',
    config=Config(signature_version='s3v4')
)
Try this combination.
const s3 = new AWS.S3({
  endpoint: 's3-ap-south-1.amazonaws.com', // Bucket region
  accessKeyId: 'A-----------------U',
  secretAccessKey: 'k------ja----------------soGp',
  Bucket: 'bucket_name',
  useAccelerateEndpoint: true,
  signatureVersion: 'v4',
  region: 'ap-south-1' // Bucket region
});
I was stuck for 3 days and finally, after reading a ton of blogs and answers, I was able to configure an Amazon AWS S3 bucket.
On the AWS Side
I am assuming you have already
Created an s3-bucket
Created a user in IAM
Steps
Configure CORS settings
your bucket > permissions > CORS configuration
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Generate a bucket policy
your bucket > permissions > bucket policy
It should be similar to this one
{
    "Version": "2012-10-17",
    "Id": "Policy1602480700663",
    "Statement": [
        {
            "Sid": "Stmt1602480694902",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
        }
    ]
}
PS: Bucket policy should say `public` after this
Configure Access Control List
your bucket > permissions > access control list
give public access
PS: Access Control List should say public after this
Unblock public Access
your bucket > permissions > Block Public Access
Edit and turn all options Off
On a side note, if you are working on Django, add the following lines to the settings.py file of your project:
#S3 BUCKETS CONFIG
AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# look for files first in aws
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Also coming from: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
For me this was the solution:
AWS_S3_REGION_NAME = "eu-central-1"
AWS_S3_ADDRESSING_STYLE = 'virtual'
This needs to be added to settings.py in your Django project
Using the PHP SDK, follow below:
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$client = S3Client::factory(
    array(
        'signature' => 'v4',
        'region'    => 'me-south-1',
        'key'       => YOUR_AWS_KEY,
        'secret'    => YOUR_AWS_SECRET
    )
);
Nodejs
var aws = require("aws-sdk");
aws.config.update({
region: process.env.AWS_REGION,
secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
});
var s3 = new aws.S3({
signatureVersion: "v4",
});
let data = await s3.getSignedUrl("putObject", {
ContentType: mimeType, //image mime type from request
Bucket: "MybucketName",
Key: folder_name + "/" + uuidv4() + "." + mime.extension(mimeType),
Expires: 300,
});
console.log(data);
AWS S3 Bucket Permission Configuration
Deselect Block All Public Access
Add Below Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::MybucketName/*"]
        }
    ]
}
Then take the returned URL and make a PUT request to it with the binary image file.
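As an illustration only (the answer above is Node.js), the PUT to the presigned URL could look like this from Python, assuming the requests library and a local image file; the URL placeholder stands for whatever getSignedUrl returned:

import requests

url = "https://...presigned-url-from-getSignedUrl..."  # placeholder

# Sketch: PUT the raw image bytes to the presigned URL.
# The Content-Type must match the ContentType used when the URL was signed.
with open("photo.jpg", "rb") as f:  # hypothetical local file
    resp = requests.put(url, data=f, headers={"Content-Type": "image/jpeg"})
print(resp.status_code)  # 200 on success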
Full working nodejs version:
const AWS = require('aws-sdk');

var s3 = new AWS.S3({
  endpoint: 's3.eu-west-2.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-west-2'
});

const getPreSignedUrl = async () => {
  const params = {
    Bucket: 'some-bucket-name/some-folder',
    Key: 'some-filename.json',
    Expires: 60 * 60 * 24 * 7
  };
  try {
    const presignedUrl = await new Promise((resolve, reject) => {
      s3.getSignedUrl('getObject', params, (err, url) => {
        err ? reject(err) : resolve(url);
      });
    });
    console.log(presignedUrl);
  } catch (err) {
    if (err) {
      console.log(err);
    }
  }
};

getPreSignedUrl();