500 Internal Server Error when uploading to an AWS S3 bucket from Nuxt - vue.js

I am trying to upload a file to AWS S3 using aws-sdk v3 from a Nuxt app's Vue Component.
Here's how I upload it.
<script>
export default {
  ...
  methods: {
    onSubmit(event) {
      event.preventDefault()
      this.addPhoto()
    },
    addPhoto() {
      // Load the required clients and packages
      const { CognitoIdentityClient } = require('@aws-sdk/client-cognito-identity')
      const { fromCognitoIdentityPool } = require('@aws-sdk/credential-provider-cognito-identity')
      const {
        S3Client,
        PutObjectCommand,
        ListObjectsCommand,
        DeleteObjectCommand,
      } = require('@aws-sdk/client-s3')

      const REGION = 'us-east-1' // REGION
      const albumBucketName = 'samyojya-1'
      const IdentityPoolId = 'XXXXXXX'

      const s3 = new S3Client({
        region: REGION,
        credentials: {
          accessKeyId: this.$config.CLIENT_ID,
          secretAccessKey: this.$config.CLIENT_SECRET,
          sessionToken: localStorage.getItem('accessToken'),
        },
      })

      var file = this.formFields[0].fieldName
      var fileName = this.formFields[0].fieldName.name
      var photoKey = 'user-dp/' + fileName
      var s3Response = s3.send(
        new PutObjectCommand({
          Bucket: albumBucketName,
          Key: photoKey,
          Body: file,
        }),
      )
      s3Response
        .then((response) => {
          console.log('Successfully uploaded photo.' + JSON.stringify(response))
        })
        .catch((error) => {
          console.log(
            'There was an error uploading your photo: Error stacktrace' + JSON.stringify(error.message),
          )
          const { requestId, cfId, extendedRequestId } = error.$metadata
          console.log({ requestId, cfId, extendedRequestId })
        })
    },
    ...
  },
}
</script>
The issue now is that the browser complains about CORS.
This is my CORS configuration on AWS S3
I suspect something is wrong in how the upload request is created with the SDK (I'm open to using a better API than the one I'm using). Other things I'm considering:
A Nuxt setting that allows CORS.
Something else in the S3 CORS configuration under Permissions.
The Network tab in Chrome dev tools shows an Internal Server Error (500) for the preflight request. (I don't know why we see 2 entries here.)
Appreciate any pointers on how to debug this.

I was having the same issue today. The S3 logs said it returned a 200 response, but Chrome saw a 500 response. In Safari, the error showed up as:
received 'us-west-1'; expected 'eu-west-1'
Adding region: 'eu-west-1' (i.e. the region where the bucket was created) to the parameters when creating the S3 service solved the issue for me.
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-region.html#setting-region-constructor
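For reference, a minimal sketch of pinning the client to the bucket's region (the region value is a placeholder); the first form matches the v2 constructor in the linked docs, the second is the equivalent for the v3 S3Client used in the question:

// aws-sdk v2: pass the bucket's region to the service constructor
const AWS = require('aws-sdk');
const s3v2 = new AWS.S3({ region: 'eu-west-1' });

// @aws-sdk/client-s3 (v3): same idea, pass region when constructing the client
const { S3Client } = require('@aws-sdk/client-s3');
const s3v3 = new S3Client({ region: 'eu-west-1' });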

In the bucket policy use this
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://example/*"
        }
      }
    }
  ]
}
and use the region of your bucket
const s3 = new aws.S3({
  apiVersion: 'latest',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID_CUSTOM,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY_CUSTOM,
  region: 'us-west-1',
})

I am having the same problem, but according to the docs you should be using Cognito Identity to access the bucket. In v3, for clients to access buckets from the browser, you must use Cognito Identity to authenticate users before they can use the bucket/object commands. I am currently trying to implement this myself, so I am not 100% sure of the details, just the process. Feel free to take a look; I hope this helps.
Cognito SDK: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
Example: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-browser-credentials-cognito.html
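A minimal sketch of that pattern, using the v3 packages the question already imports (the region and identity pool ID below are placeholders):

const { S3Client } = require('@aws-sdk/client-s3')
const { CognitoIdentityClient } = require('@aws-sdk/client-cognito-identity')
const { fromCognitoIdentityPool } = require('@aws-sdk/credential-provider-cognito-identity')

const REGION = 'us-east-1' // placeholder region
const s3 = new S3Client({
  region: REGION,
  // Credentials are vended by the Cognito identity pool instead of
  // hard-coding an access key / secret in the browser code.
  credentials: fromCognitoIdentityPool({
    client: new CognitoIdentityClient({ region: REGION }),
    identityPoolId: 'us-east-1:xxxx-xxxx-xxxx', // placeholder identity pool ID
  }),
})

The client is then used with PutObjectCommand exactly as in the question.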

The error needs to be fixed on the backend, since it's CORS. It clearly states that the Access-Control-Allow-Origin header is missing.
So, checking it in the official AWS docs gives you the answer: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html

I was doing multiple things wrong here. Every answer on this post helped me make a little progress while debugging. Can't thank you enough!
My bucket policy was not using role-based ALLOW/DENY statements corresponding to the authenticated role of my Cognito identity pool.
I needed to correctly configure the authentication provider as a Cognito user pool.
Make sure the region is right; the Cognito region can be different from the S3 region.
Make sure the CORS policy includes the relevant headers, such as "Access-Control-Allow-Origin".
Double-check that the token includes the right credentials; cognito decode-verify comes in very handy here.
I was doing stand-alone testing from the browser, but this is not a good approach. Use an API server to take the file and push it to S3 from there (see the sketch below).
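As an illustration of that last point, here is a rough sketch of a server-side upload endpoint; Express, multer, and the /upload route are assumptions for the example, while the bucket name and key scheme come from the question:

// Hypothetical Express endpoint that receives the file and pushes it to S3 server-side
const express = require('express')
const multer = require('multer') // multipart parser; default storage keeps the file in memory as a Buffer
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3')

const app = express()
const upload = multer()
const s3 = new S3Client({ region: 'us-east-1' }) // credentials come from the server environment

app.post('/upload', upload.single('photo'), async (req, res) => {
  try {
    await s3.send(new PutObjectCommand({
      Bucket: 'samyojya-1',                    // bucket name from the question
      Key: 'user-dp/' + req.file.originalname, // same key scheme as the question
      Body: req.file.buffer,
    }))
    res.json({ ok: true })
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
})

app.listen(3000)

This keeps long-lived credentials out of the browser entirely and sidesteps the CORS preflight against S3.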

Related

AWS S3 getBucketLogging fails when called from lambda function

I am trying in an AWS lambda to get the bucket logging settings for my buckets. For this I enumerate the buckets with S3.listBuckets() - which works just fine. I then iterate over the bucket names like this (Typescript):
const bucketNames = await getBucketNames() // <- works without problems

for (const bucketName of bucketNames) {
  try {
    console.log(`get logging for bucket ${bucketName}`) // <-- getting to this log
    const bucketLogging: GetBucketLoggingOutput = await s3.getBucketLogging({
      Bucket: bucketName,
      ExpectedBucketOwner: accountId
    }).promise()
    // check logging setup and adjust if necessary
  } catch (error) {
    console.log(JSON.stringify(error))
  }
}
The call to getBucketLogging() fails
{
  "message": "Access Denied",
  "code": "AccessDenied",
  "region": null,
  "time": "2022-07-19T11:16:26.671Z",
  "requestId": "****",
  "extendedRequestId": "****",
  "statusCode": 403,
  "retryable": false,
  "retryDelay": 70.19937788683632
}
The accountId that is passed in is definitely right (it's optional anyway); the lambda is in the same account as the bucket owner (which is the sole condition described in the docs at https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getBucketLogging-property).
When making this call from the CLI in a terminal I have no problem getting results; it only fails when running from the Lambda.
What am I missing or overlooking?
You should make sure to attach the respective IAM permissions to your Lambda function's role. Just because it has the s3:ListBuckets permission doesn't mean it is also permitted to read the bucket logging information (that requires s3:GetBucketLogging). Please refer to the following docs for more details on S3 IAM actions: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
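For illustration, a minimal policy statement that would cover both calls when attached to the Lambda's execution role (treat this as a sketch; scope the Resource down to your bucket ARNs as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLogging"
      ],
      "Resource": "*"
    }
  ]
}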

Why is Lambda@Edge not forwarding to the correct bucket?

I have 3 S3 buckets:
my-routing-test-ap-southeast-2
my-routing-test-eu-west-2
my-routing-test-us-east-1
They are all configured as a static website, with block all public access turned off and (example) this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Demo",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::my-routing-test-us-east-1/*"
    }
  ]
}
I have configured a cloudfront distribution with one origin:
my-routing-test-us-east-1.s3.us-east-1.amazonaws.com
And a behaviour configured for the origin above, with the Legacy cache settings header option set to include the CloudFront-Viewer-Country header.
I should point out here that the documentation for caching based on request header states:
Specify whether you want CloudFront to cache objects based on the values of specified headers:
Whitelist – CloudFront caches your objects based only on the values of the specified headers. Use Whitelist Headers to choose the headers that you want CloudFront to base caching on.
However, the Edit behaviour section of the CloudFront console shows the "Cache key and origin requests" options as:
Legacy cache settings > Headers > Include the following headers > CloudFront-Viewer-Country
Which, of course, does not appear to include the "Whitelist" option.
The distribution also has the Origin request set to the Lambda@Edge function (where the code is pulled from this documentation page):
'use strict';

exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const countryToRegion = {
    'US': 'us-east-1',
    'AU': 'ap-southeast-2',
    'GB': 'eu-west-2'
  };

  if (request.headers['cloudfront-viewer-country']) {
    const countryCode = request.headers['cloudfront-viewer-country'][0].value;
    const region = countryToRegion[countryCode];
    console.log('countryCode: ' + countryCode + ' region: ' + region);
    if (region) {
      console.log('region: ' + region);
      request.origin.s3.region = region;
      const domainName = `my-routing-test-${region}.s3.${region}.amazonaws.com`;
      request.origin.s3.domainName = domainName;
      console.log('request.origin.s3.domainName: ' + domainName);
      request.headers['host'] = [{ key: 'host', value: domainName }];
    }
  }
  callback(null, request);
};
When I call the cloudfront URL to retrieve my test file for my region (eu-west-2) I see this in my region's log group:
countryCode: GB region: eu-west-2
region: eu-west-2
request.origin.s3.domainName: origin-routing-eu-west-2.s3.eu-west-2.amazonaws.com
But the file is always the same image served from the us-east-1 region. This should not be the case as each bucket contains a different image for each region.
What is missing or incorrect in this configuration?
It turns out that the object caching TTL under "Cache key and origin requests" was set to the default (1 year), so my initial test retrieval stored the primary origin's file in the cache, and it could not be replaced by the regional version. Check your cache settings, folks!

slingshot meteor s3 error

I’m afraid I don’t understand how this is supposed to work at all. How does slingshot know the address to find my s3 bucket? Is this completely determined by the access keys?
This is the code I have in my server/files.js:
var imageDetails = new Mongo.Collection('images');

Slingshot.fileRestrictions("myImageUploads", {
  allowedFileTypes: ["image/png", "image/jpeg", "image/gif"],
  maxSize: 2 * 1024 * 1024,
});

Slingshot.createDirective("myImageUploads", Slingshot.S3Storage, {
  AWSAccessKeyId: "AWSAccessKeyId",
  AWSSecretAccessKey: "AWSSecretAccessKey",
  bucket: "mybucketname",
  acl: "public-read",
  region: "us-west-1",

  authorize: function () {
    if (!this.userId) {
      var message = "Please login before posting images";
      throw new Meteor.Error("Login Required", message);
    }
    return true;
  },

  key: function (file) {
    var currentUserId = Meteor.user().emails[0].address;
    return currentUserId + "/" + file.name;
  }
});
And this is my settings.json file
{
  "AWSAccessKeyId": "my access key",
  "AWSSecretAccessKey": "my secret access key",
  "AWSBucket": "mybucketname"
}
I get this error in my browser:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://mybucketname.s3-us-west-1.amazonaws.com/. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
But I have a CORS configuration in my theportdata bucket.
The first step, I guess: is there any way to check whether my application is making contact with my S3 bucket at all? Like I said, I don’t really understand how Slingshot finds the bucket.
SOLVED
Changed "region: us-west-1" to "region: us-west-2" and it works.
There is also no need for the AWSAccessKeyId and AWSSecretAccessKey, since Slingshot finds these automatically from settings.json.
Apparently all that's needed for an address is the bucket name and the region.
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
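Putting that together, a sketch of the corrected directive based on the code from the question (only the region changes and the explicit keys are dropped; everything else is the question's own setup):

Slingshot.createDirective("myImageUploads", Slingshot.S3Storage, {
  bucket: "mybucketname",
  acl: "public-read",
  region: "us-west-2", // must match the region the bucket actually lives in

  authorize: function () {
    if (!this.userId) {
      throw new Meteor.Error("Login Required", "Please login before posting images");
    }
    return true;
  },

  key: function (file) {
    return Meteor.user().emails[0].address + "/" + file.name;
  }
});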

Amazon S3 : The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256 [duplicate]

I get the error AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.
Script:
backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'

s3 = AWS::S3.new(
  access_key_id: AMAZONS3['access_key_id'],
  secret_access_key: AMAZONS3['secret_access_key']
)
s3_bucket = s3.buckets['test-frankfurt']

# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"

file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)
aws-sdk (1.56.0)
How to fix it?
Thank you.
AWS4-HMAC-SHA256, also known as Signature Version 4, ("V4") is one of two authentication schemes supported by S3.
All regions support V4, but US-Standard¹, and many -- but not all -- other regions, also support the other, older scheme, Signature Version 2 ("V2").
According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html ... new S3 regions deployed after January, 2014 will only support V4.
Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.
I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written,
"Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.
With node, try
var s3 = new AWS.S3({
  endpoint: 's3-eu-central-1.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-central-1'
});
You should set signatureVersion: 'v4' in the config to use the new signature version:
AWS.config.update({
  signatureVersion: 'v4'
});
Works for the JS SDK.
For people using boto3 (Python SDK), use the below code:
import boto3
from botocore.client import Config

s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    config=Config(signature_version='s3v4')
)
I have been using Django, and I had to add these extra config variables to make this work (in addition to the settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html):
AWS_S3_REGION_NAME = "ap-south-1"
Or, prior to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Similar issue with the PHP SDK, this works:
$s3Client = S3Client::factory(array(
    'key' => YOUR_AWS_KEY,
    'secret' => YOUR_AWS_SECRET,
    'signature' => 'v4',
    'region' => 'eu-central-1'
));
The important bit is the signature and the region
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
This also saved my time after searching for 24 hours.
Code for Flask (boto3)
Don't forget to import Config. Also, if you have your own config class, change its name.
from botocore.client import Config

s3 = boto3.client(
    's3',
    config=Config(signature_version='s3v4'),
    region_name=app.config["AWS_REGION"],
    aws_access_key_id=app.config['AWS_ACCESS_KEY'],
    aws_secret_access_key=app.config['AWS_SECRET_KEY']
)
s3.upload_fileobj(file, app.config["AWS_BUCKET_NAME"], file.filename)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': app.config["AWS_BUCKET_NAME"], 'Key': file.filename},
    ExpiresIn=10000
)
In Java I had to set a property
System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")
and add the region to the s3Client instance.
s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))
With boto3, this is the code:
s3_client = boto3.resource('s3', region_name='eu-central-1')
or
s3_client = boto3.client('s3', region_name='eu-central-1')
For thumbor-aws, which uses the boto config, I needed to put this in the $AWS_CONFIG_FILE:
[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3
So this may be useful for anything that uses boto directly without changes.
Supernova's answer for django/boto3/django-storages worked for me:
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Just add them to your settings.py and change the region code accordingly. You can look up the AWS region codes in the AWS documentation.
For Android SDK, setEndpoint solves the problem, although it's been deprecated.
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
        context, "identityPoolId", Regions.US_EAST_1);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");
Basically, I had been using an old version of the aws-sdk, and this error occurred once I updated the version.
In my case with Node.js, I was using signatureVersion inside the params object like this:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    signatureVersion: 'v4',
    region: process.env.AWS_S3_REGION
  }
});
Then I moved signatureVersion out of the params object and it worked like a charm:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    region: process.env.AWS_S3_REGION
  },
  signatureVersion: 'v4'
});
Check your AWS S3 bucket region and pass the proper region in the connection request.
In my scenario I have set APSouth1 for Asia Pacific (Mumbai):
using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
    GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
    {
        BucketName = bucketName,
        Key = keyName,
        Expires = DateTime.Now.AddMinutes(50),
    };
    urlString = client.GetPreSignedURL(request1);
}
In my case, the request type was wrong. I was using GET (dumb); it must be PUT.
Here is the function I used with Python:
def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client(
        's3',
        endpoint_url=settings.BUCKET_ENDPOINT_URL,
        aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
        aws_secret_access_key=settings.BUCKET_SECRET_KEY,
        region_name=settings.BUCKET_REGION_NAME
    )
    try:
        s3.upload_file(
            filePath,
            settings.BUCKET_NAME,
            s3FileName
        )
        # remove file from local to free up space
        os.remove(filePath)
        return True
    except Exception as e:
        logger.error('uploadFileToS3#Error')
        logger.error(e)
        return False
Sometimes the default version will not update. Add this setting in settings.py:
AWS_S3_SIGNATURE_VERSION = "s3v4"
For Boto3, use this code:
import boto3
from botocore.client import Config

s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    region_name='us-south-1',
    config=Config(signature_version='s3v4')
)
Try this combination.
const s3 = new AWS.S3({
  endpoint: 's3-ap-south-1.amazonaws.com', // Bucket region
  accessKeyId: 'A-----------------U',
  secretAccessKey: 'k------ja----------------soGp',
  Bucket: 'bucket_name',
  useAccelerateEndpoint: true,
  signatureVersion: 'v4',
  region: 'ap-south-1' // Bucket region
});
I was stuck for 3 days and finally, after reading a ton of blogs and answers, I was able to configure an Amazon AWS S3 bucket.
On the AWS side
I am assuming you have already:
Created an S3 bucket
Created a user in IAM
Steps
Configure CORS settings
your bucket > Permissions > CORS configuration
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Generate A bucket policy
your bucket > permissions > bucket policy
It should be similar to this one
{
  "Version": "2012-10-17",
  "Id": "Policy1602480700663",
  "Statement": [
    {
      "Sid": "Stmt1602480694902",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
    }
  ]
}
PS: Bucket policy should say `public` after this
Configure Access Control List
your bucket > Permissions > Access control list
give public access
PS: Access Control List should say public after this
Unblock public Access
your bucket > permissions > Block Public Access
Edit and turn all options Off
On a side note, if you are working with Django, add the following lines to the settings.py file of your project:
#S3 BUCKETS CONFIG
AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# look for files first in aws
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Also coming from: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
For me this was the solution:
AWS_S3_REGION_NAME = "eu-central-1"
AWS_S3_ADDRESSING_STYLE = 'virtual'
This needs to be added to settings.py in your Django project
Using the PHP SDK, follow the below:
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$client = S3Client::factory(
    array(
        'signature' => 'v4',
        'region' => 'me-south-1',
        'key' => YOUR_AWS_KEY,
        'secret' => YOUR_AWS_SECRET
    )
);
Nodejs
var aws = require("aws-sdk");

aws.config.update({
  region: process.env.AWS_REGION,
  secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
  accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
});

var s3 = new aws.S3({
  signatureVersion: "v4",
});

let data = await s3.getSignedUrl("putObject", {
  ContentType: mimeType, // image mime type from request
  Bucket: "MybucketName",
  Key: folder_name + "/" + uuidv4() + "." + mime.extension(mimeType),
  Expires: 300,
});
console.log(data);
AWS S3 Bucket Permission Configuration
Deselect Block All Public Access
Add Below Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::MybucketName/*"]
    }
  ]
}
Then take the returned URL and make a PUT request to it with the binary image file.
Full working nodejs version:
const AWS = require('aws-sdk');

var s3 = new AWS.S3({
  endpoint: 's3.eu-west-2.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-west-2'
});

const getPreSignedUrl = async () => {
  const params = {
    Bucket: 'some-bucket-name/some-folder',
    Key: 'some-filename.json',
    Expires: 60 * 60 * 24 * 7
  };
  try {
    const presignedUrl = await new Promise((resolve, reject) => {
      s3.getSignedUrl('getObject', params, (err, url) => {
        err ? reject(err) : resolve(url);
      });
    });
    console.log(presignedUrl);
  } catch (err) {
    if (err) {
      console.log(err);
    }
  }
};

getPreSignedUrl();

Fine Uploader S3 - Return Uploaded URL on Complete callback

How can I get the full image URL of the uploaded image in my Amazon S3 bucket with Fine Uploader?
My javascript code is:
jQuery(document).ready(function () {
  jQuery("#fine-uploader").fineUploaderS3({
    debug: true,
    request: {
      endpoint: 'bucket.s3.amazonaws.com',
      accessKey: 'xxxxxxxx'
    },
    signature: {
      endpoint: 'end.php'
    },
    uploadSuccess: {
      endpoint: 'success.php'
    },
    iframeSupport: {
      localBlankPagePath: 'success.html'
    },
    retry: {
      enableAuto: true
    },
    validation: {
      allowedExtensions: ['jpeg', 'jpg', 'png'],
      sizeLimit: 1048576
    }
  }).on('complete', function (event, id, name, response) {
    console.log(response.tempLink);
  });
});
UPDATE
Following the S3 demo, I am using response.tempLink and just trying to log it to the console; I will use it later on. The upload always works fine, but my console logs undefined every time.
From finding this q&a: having trouble displaying an image uploaded to Amazon s3 by fine-uploader
It seems like my IAM user/policy settings and $serverPublicKey and $serverPrivateKey might be the cause? My setup is:
Exact copy of this file for my end.php file:
https://github.com/Widen/fine-uploader-server/blob/master/php/s3/s3demo-cors.php
with the following changes:
// changed to match the secret access key for the FIRST IAM user as discussed in the docs
$clientPrivateKey = 'user_secret_key...';

// bucket name
$expectedBucketName = "my.bucket.name";

// changed to match the access and secret of the SECOND IAM 'server' user
$serverPublicKey = 'server_user_access_key...';
$serverPrivateKey = 'server user secret key...';

// updated to my website
function handlePreflightedRequest() {
    header('Access-Control-Allow-Origin: http://www.mywebsite.com');
}
In my Amazon IAM console I have my SECOND IAM 'server' user set up as:
Group:
grp-server
Group Policy: (is GetObject the correct action?)
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my.bucket.name/*"
  }]
}
or I've tried the following, which gives full admin access, just to check:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}
User:
user-server
added to grp-server (which inherits group policy)
user-server access key becomes $serverPublicKey in end.php
user-server secret key becomes $serverPrivateKey in end.php
Am I missing anything from this?
Have your uploadSuccess.endpoint return a pre-signed URL that you then handle in the onComplete handler. Note that pre-signed URLs can only be generated server-side. See this PHP server for details.
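For illustration only, a minimal sketch of that idea in Node instead of the referenced PHP server; the /s3/success route, the region, and the tempLink property are assumptions mirroring the question's uploadSuccess endpoint and its response.tempLink usage:

// Hypothetical Node/Express uploadSuccess endpoint: Fine Uploader POSTs the upload's
// bucket and key here after a successful upload, and whatever JSON we return is exposed
// on `response` in the client's 'complete' callback (e.g. response.tempLink).
const express = require('express');
const AWS = require('aws-sdk');

const app = express();
app.use(express.urlencoded({ extended: true }));

const s3 = new AWS.S3({ signatureVersion: 'v4', region: 'us-west-1' }); // assumed region

app.post('/s3/success', (req, res) => {
  const params = {
    Bucket: req.body.bucket, // sent by Fine Uploader on upload success
    Key: req.body.key,
    Expires: 60 * 60, // signed link valid for one hour
  };
  s3.getSignedUrl('getObject', params, (err, url) => {
    if (err) return res.status(500).json({ error: 'Could not sign URL' });
    res.json({ tempLink: url }); // picked up as response.tempLink in the complete handler
  });
});

app.listen(3000);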