I manually uploaded a file to S3 and added the metadata key x-amz-meta-alt-name to the object.
Using the AWS JavaScript SDK I tried to read the metadata, but I got an empty object.
var params = {
  Bucket: "mybucket",
  Key: "myfile.txt"
};
s3.headObject(params, function (err, data) {
  console.log(data.Metadata['x-amz-meta-alt-name']);
});
Output:
undefined
Do you have any ideas on how to solve it? Maybe I need to configure some policies.
I think you have to expose the value in the CORS settings, like this:
<CORSRule>
  <AllowedOrigin>*</AllowedOrigin>
  <AllowedMethod>HEAD</AllowedMethod>
  <AllowedHeader>*</AllowedHeader>
  <ExposeHeader>x-amz-meta-description</ExposeHeader>
</CORSRule>
But I am not sure whether you can get these values in the callback.
This thread will help you understand what is and isn't possible: https://github.com/aws/aws-sdk-js/issues/232
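As a side note (an observation about the SDK, not something from the linked thread): as far as I know, the keys in data.Metadata come back with the x-amz-meta- prefix stripped, so it may be worth logging the whole map first:
s3.headObject({ Bucket: "mybucket", Key: "myfile.txt" }, function (err, data) {
  if (err) {
    return console.error(err);
  }
  // See which keys are actually present; user metadata is typically exposed
  // without the "x-amz-meta-" prefix, e.g. "alt-name" instead of "x-amz-meta-alt-name".
  console.log(Object.keys(data.Metadata));
  console.log(data.Metadata['alt-name']);
});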
This is a Shopify shop pulling images from a public S3 bucket. A JavaScript function checks via AJAX whether the images exist before pushing them onto an array that is used when rendering the product:
function set_gallery(sku) {
  var bucket = 'https://[xbucket].s3.amazonaws.com/img/sku/';
  var folder = sku.slice(0, 4);
  var nombre = sku.replace(' SG OPT ', '');
  nombre = nombre.replace(' ', '');
  var idx = '';
  var ciclo = variant_gallery.attempts;
  var fallos = variant_gallery.failed;
  if (ciclo > 0) {
    idx = '-' + ciclo;
  }
  var picURL = bucket + folder + '/' + nombre + idx + '.jpg';
  $.ajax({
    url: picURL,
    type: 'GET',
    error: function () {
      fallos++;
      ciclo++;
      variant_gallery.failed = fallos;
      variant_gallery.attempts = ciclo;
      if (fallos < 2) {
        set_gallery(sku);
      } else {
        variant_gallery.isReady = true;
        build_gallery();
      }
    },
    success: function () {
      ciclo++;
      variant_gallery.attempts = ciclo;
      variant_gallery.gallery_urls.push(picURL);
      if (ciclo < 15) {
        set_gallery(sku);
      } else {
        variant_gallery.isReady = true;
        build_gallery();
      }
    }
  });
}
This is what the Bucket Policy looks like...
{
  "Version": "2012-10-17",
  "Id": "Policy1600291283718",
  "Statement": [
    {
      "Sid": "Stmt1600291XXXXXX",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[xbucket]/img/sku/*"
    }
  ]
}
...and CORS Configuration...
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://shopifystore.myshopify.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>https://shopifystore.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
The problem is that, on Chrome, it renders as expected around 98% of the time (an error every 50 attempts), but in Safari I'm getting a CORS error about once every two or three attempts:
Origin https://shopifystore.com is not allowed by Access-Control-Allow-Origin.
XMLHttpRequest cannot load https://[bucket].s3.amazonaws.com/img/sku/image-to-load.jpg due to access control checks.
What can I do to make it as consistent in Safari as it is in Chrome? Ideally even more reliable than that.
I have already checked these other SO questions:
AWS S3 bucket: CORS Configuration
AWS S3 CORS Error: Not allowed access
Fix CORS "Response to preflight..." header not present with AWS API gateway and amplify
Chrome is ignoring Access-Control-Allow-Origin header and fails CORS with preflight error when calling AWS Lambda
Intermittent 403 CORS Errors (Access-Control-Allow-Origin) With Cloudfront Using Signed URLs To GET S3 Objects
Cross-origin requests AJAX requests to AWS S3 sometimes result in CORS error
Cached non CORS response conflicts with new CORS request
Some of those won't apply to this scenario. Some others I tried without success.
After reading several possible solutions I finally solved it with a mix of them. It turns out this was a cache problem, as illustrated here:
Cross-origin requests AJAX requests to AWS S3 sometimes result in CORS error
Cached non CORS response conflicts with new CORS request
I tried that solution first, but I didn't implement it right with jQuery, so I took another route.
A few hours later I tried this solution to avoid caching in jQuery AJAX:
How to prevent a jQuery Ajax request from caching in Internet Explorer?
In the end I only added one line of code and got it solved:
$.ajax({
  url: picURL,
  type: 'GET',
  cache: false, // <- do this to avoid CORS errors on AWS S3
  error: function () {
    ...
  }
});
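For what it's worth, cache: false just makes jQuery append a throwaway timestamp parameter to GET requests; a hand-rolled equivalent (a sketch, not code from my shop) would be:
// Manual cache-buster: append "_=<timestamp>" so the browser cannot reuse a
// response that was cached without CORS headers.
var bustedURL = picURL + (picURL.indexOf('?') === -1 ? '?' : '&') + '_=' + Date.now();
$.ajax({
  url: bustedURL,
  type: 'GET',
  error: function () { /* ... */ },
  success: function () { /* ... */ }
});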
The Slingshot package is used with Meteor to upload images to S3 directly from the client. The same code has proved to work in other projects I've used it in. Even in my local setup I can upload images to the cloud, but not with the deployed version, which is identical. The error is as follows:
Failed to upload file to cloud storage [Bad Request - 400]
the region 'us-east-1' is wrong; expecting 'eu-central-1'
(but it doesn't tell me where...)
Any ideas?
This is the initialisation of the Meteor Slingshot directive:
const s3Settings = Meteor.settings.private.S3settings;

Slingshot.createDirective("userProfileImages", Slingshot.S3Storage, {
  AWSAccessKeyId: s3Settings.AWSAccessKeyId,
  AWSSecretAccessKey: s3Settings.AWSSecretAccessKey,
  bucket: s3Settings.AWSBucket,
  region: s3Settings.AWSRegion,
  acl: "public-read",

  authorize: function () {
    if (!this.userId) {
      const message = "Please login before posting images";
      throw new Meteor.Error("Login Required", message);
    }
    return true;
  },

  key: function (file) {
    const user = Meteor.users.findOne(this.userId);
    return user.username + "/" + file.name;
  }
});
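For reference, the client-side call that triggers the upload is not shown here; based on the Slingshot README it presumably looks something like this (a sketch; the file input element is my assumption):
// Hypothetical client-side upload against the "userProfileImages" directive.
var uploader = new Slingshot.Upload("userProfileImages");

uploader.send(document.getElementById("fileInput").files[0], function (error, downloadUrl) {
  if (error) {
    // This is where the "Failed to upload file to cloud storage" error appears.
    console.error(error);
  } else {
    console.log("Uploaded to", downloadUrl);
  }
});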
This is my Amazon S3 CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>10000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
I have no bucket policy.
Access control is all public.
Help appreciated.
The problem was me. I defined the region in my settings as AWSregion (lowercase r), whereas I called it AWSRegion (capital R) in my setup code, so the value was undefined and it didn't work.
The solution is to make sure the key names are typed with matching case.
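In other words, the mismatch looked roughly like this (values made up; only the key casing matters):
// What Meteor.settings.private.S3settings effectively contained,
// with the region key spelled with a lowercase "r":
const s3Settings = {
  AWSBucket: "my-bucket",
  AWSregion: "eu-central-1" // <- lowercase "r"
};

// What the directive read (capital "R"), which is therefore undefined,
// so the upload presumably fell back to the default us-east-1 region:
console.log(s3Settings.AWSRegion); // undefined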
I'm using AWS S3 and I've configured my Bucket to use CORS:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
I'm requesting SVG images from the Bucket, in a client-side React application. I'm rendering them inline so the response needs to have CORS headers enabled. Sometimes this works, and sometimes it doesn't. I can't isolate exactly what is causing the issue. I was retrieving one image fine; then I uploaded a new image to the bucket, and that image, once downloaded, was giving me the error:
XMLHttpRequest cannot load https://s3.amazonaws.com/.../example.svg. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access.
I've tried adding <AllowedHeader>*</AllowedHeader> and <ExposeHeader>ETAG</ExposeHeader>, and clearing my cache with every change, to no effect. I'm confused. Why aren't the headers coming through?
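The question doesn't show the loading code; a minimal sketch of the kind of inline-SVG fetch described above (assuming fetch and an element with id "icon", both my assumptions) would be:
// Fetch the SVG markup and inject it inline; the response must carry
// Access-Control-Allow-Origin or the browser blocks it with the error above.
fetch('https://s3.amazonaws.com/YOUR_BUCKET/example.svg') // placeholder URL
  .then(function (res) { return res.text(); })
  .then(function (svgMarkup) {
    document.getElementById('icon').innerHTML = svgMarkup;
  })
  .catch(function (err) {
    console.error('Could not load SVG (possibly a CORS failure):', err);
  });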
S3 doesn't always return CORS headers; it seems you need to provide an Origin header in the request, and your requests don't always do so.
To make it consistent and always return the CORS headers, you can add one Lambda@Edge function (a CloudFront Origin Response trigger) that sets the Vary header:
'use strict';

// If the response lacks a Vary: header, fix it in a CloudFront Origin Response trigger.
exports.handler = (event, context, callback) => {
  const response = event.Records[0].cf.response;
  const headers = response.headers;

  if (!headers['vary']) {
    headers['vary'] = [
      { key: 'Vary', value: 'Access-Control-Request-Headers' },
      { key: 'Vary', value: 'Access-Control-Request-Method' },
      { key: 'Vary', value: 'Origin' },
    ];
  }

  callback(null, response);
};
See the full answer and more details here: https://serverfault.com/a/856948/46223
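As a quick way to see this behaviour (a hypothetical diagnostic in Node.js, not part of the original answer; the object URL is a placeholder):
const https = require('https');

// S3 generally includes Access-Control-Allow-Origin only when the request
// carries an Origin header that matches one of the bucket's CORS rules.
const objectUrl = 'https://s3.amazonaws.com/YOUR_BUCKET/example.svg'; // placeholder

function check(origin) {
  const options = { method: 'HEAD', headers: origin ? { Origin: origin } : {} };
  https.request(objectUrl, options, (res) => {
    console.log(origin || '(no Origin header)', '->',
      res.headers['access-control-allow-origin'] || 'no CORS header in response');
  }).on('error', console.error).end();
}

check(null);
check('http://localhost:3000');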
In three.js, I load an image from Amazon S3 using the following code:
var loader = new THREE.TextureLoader();
loader.setCrossOrigin('');

loader.load(
  image_url,
  function ( texture ) {
    // do something with the texture
    var sphere = new THREE.Mesh(
      new THREE.SphereGeometry(radius, 20, 20),
      new THREE.MeshBasicMaterial({
        map: texture
      })
    );
    sphere.scale.x = -1;
    scene.add(sphere);
  },
  // Function called when download progresses
  function ( xhr ) {
    // console.log( (xhr.loaded / xhr.total * 100) + '% loaded' );
  },
  // Function called when download errors
  function ( xhr ) {
    console.log( 'An error happened' );
  }
);
On the Amazon S3 bucket, I also configured CORS to allow all origins to send cross-origin requests:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
According to some answers to similar questions, I just need to set loader.setCrossOrigin('');
But it does not work; the CORS error still occurs.
XMLHttpRequest cannot load https://s3-ap-southeast-1.amazonaws.com/.../test_image.jpg
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost' is therefore not allowed access.
I'm using the latest version of three.js.
Can anyone help? Thanks for any support.
[Fixed]
It seems the photos I uploaded to S3 earlier were still affected by the old CORS settings, which did not allow cross-origin requests.
So I removed the old photos and uploaded new ones, and the problem was solved.
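If re-uploading every object is not practical, a related workaround (my assumption, in line with the cache-related answers above, not part of the original fix) would be to bust the cache on the image URL:
// Append a throwaway query parameter so a response cached without CORS
// headers cannot be reused; image_url is the same variable as in the question.
var freshUrl = image_url + (image_url.indexOf('?') === -1 ? '?' : '&') + 'cb=' + Date.now();

var loader = new THREE.TextureLoader();
loader.setCrossOrigin('');
loader.load(freshUrl, function (texture) {
  // use the texture as before
});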
Open your config.php and find your base URL.
If you wrote it like this: http://yoururl
change it to this: http://www.yoururl.com
Hope this simple solution works! ;)
I am using Amazon S3 as a backend. I have the bucket correctly configured to allow CORS for anything from my domain, and I have tested that it works for regular files (i.e. files uploaded via the Amazon AWS console or with the S3 command-line tools).
My app also uploads JSON files itself to the S3 bucket. Interestingly, the upload needs CORS correctly configured to succeed. It does succeed, and my JSON file is placed in the bucket.
The problem is that when I make a CORS GET request (jQuery $.ajax) for the files I previously uploaded, the request fails with the typical message
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Note that for any other file in the same bucket and path that was not uploaded by the application, but from the console or command-line tools, the request succeeds.
Why is this happening?
My CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>http://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Somewhere in the jQuery documentation you'll find an option for $.ajax.
jQuery.support.cors = true;
...
$.ajax(
  url,
  {
    crossDomain: true,
    data: {
      sampleData
    },
    success: function () {
      alert('Yeaaahhh');
    },
    error: function (a, b, c) {
      alert('failed');
    },
    type: 'post'
  }
);
But it's better to use an XMLHttpRequest for that. Like:
var xhr = new XMLHttpRequest();
xhr.open('POST', url);
xhr.onreadystatechange = function () {
  // do something with xhr.readyState / xhr.status
};
xhr.onload = function (event) {
  // do something
};
xhr.onerror = function (event) {
  // do something
};
xhr.send(data);
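For the asker's case (a cross-origin GET of the uploaded JSON), the same pattern would look roughly like this; the URL is a placeholder, not one from the question:
// Rough sketch for the GET case described in the question.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://s3.amazonaws.com/YOUR_BUCKET/path/data.json'); // placeholder URL
xhr.onload = function () {
  if (xhr.status === 200) {
    var json = JSON.parse(xhr.responseText);
    console.log('Loaded', json);
  } else {
    console.error('Request failed with status', xhr.status);
  }
};
xhr.onerror = function () {
  // A CORS failure typically surfaces here, with status 0 and no response body.
  console.error('Network or CORS error');
};
xhr.send();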
Greets.