Fine Uploader cannot draw thumbnail from Amazon S3

I have a form with a Fine Uploader and I am loading an initial file list (as described here).
For the list of initial files I am also returning the thumbnailUrl, which points to my files in Amazon S3.
Now I see that Fine Uploader actually makes an HTTP request to S3 and gets a 200 OK, but the thumbnail is not displayed, and this is what I see in the console:
[Fine Uploader 5.1.3] Attempting to update thumbnail based on server response.
[Fine Uploader 5.1.3] Problem drawing thumbnail!
Response from my server:
{"name": 123, "uuid": "...", "thumbnailUrl": "...."}
Now Fine Uploader makes a GET request to S3 to the URL specified in the thumbnailUrl property. The request goes like this:
curl "HERE_IS_MY_URL" -H "Host: s3.eu-central-1.amazonaws.com" -H "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0" -H "Accept: image/png,image/;q=0.8,/*;q=0.5" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "DNT: 1" -H "Referer: http://localhost:9000/edititem/65" -H "Origin: http://localhost:9000" -H "Connection: keep-alive" -H "Cache-Control: max-age=0"
Response: 200 OK with Content-Type application/octet-stream
Is there any configuration option for Fine Uploader that I am missing? Could it be that this is a CORS-related issue?

Fine Uploader loads thumbnails at the URL returned by your initial file list endpoint using an ajax request (XMLHttpRequest) in modern browsers. It does this so it can scale and properly orient the image preview.
You'll need a CORS rule on your S3 bucket that allows JS access via a GET request. It will look something like this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>http://example.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
</CORSConfiguration>
Of course, you may need to allow other origins/headers/methods depending on whatever else you are doing with S3.
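If you prefer to apply the rule from the command line rather than the S3 console, something like this should work (a sketch using the AWS CLI, which expects the CORS configuration as JSON rather than XML; the bucket name and origin here are placeholders):
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration '{
  "CORSRules": [
    { "AllowedOrigins": ["http://example.com"], "AllowedMethods": ["GET"] }
  ]
}'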

Related

Download pdf files with curl using SOAP request

I am trying to download a PDF file with a curl command by sending an HTTP POST request.
When I send my curl request it downloads a PDF file, but the file is not readable.
This is the request I send: curl -H "Content-Type: text/xml; charset=UTF-8" -H "http://xxxxxxx/webservice/content/getContent:" -d @content_request.txt -X POST http://xxxxxx/xxxx/ContentService?wsdl/ -o sortieContent.pdf
(I replaced the real address with xxxxx for privacy reasons.)
The thing is that even though I download a PDF file, it is not readable, as if it were corrupted.
What I understand is that curl answers with a file (a cat of the content is below), but it contains different pieces of information that are not in the same format, so the file I receive ends up corrupted.
--uuid:b47a2d96-bf98-4de9-99ae-9308d18ae599
Content-Id: rootpart*b47a2d96-bf98-4de9-99ae-9308d18ae599#example.jaxws.sun.com
Content-Type: application/xop+xml;charset=utf-8;type="text/xml"
Content-Transfer-Encoding: binary
79740application/pdfxxxxxxxFUOBqIAILPaDCmTvBRDXPhWnQQliV0ygEYrgPFVvDXw=
--uuid:b47a2d96-bf98-4de9-99ae-9308d18ae599
Content-Id: 5c3a7832-7ce4-4405-9cf6-20cb304972ca#example.jaxws.sun.com
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
%PDF-1.5
I tried replacing Content-Type: text/xml with Content-Type: application/pdf or Content-Type: application/octet-stream, but then it doesn't even download the content.
Is it possible to download only the PDF, without the other information, so that my file will be readable? How can I do it?
Thank you
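For what it's worth, the body coming back is an MTOM/XOP multipart message, so the PDF bytes sit between MIME boundaries rather than at the start of the file. A very rough sketch of stripping that envelope from the downloaded file (assuming GNU sed, which tolerates binary input; a proper MIME/MTOM-aware parser would be more robust):
# Keep everything from the first "%PDF" line onward, then drop the
# closing MIME boundary on the last line.
sed -n '/^%PDF/,$p' sortieContent.pdf | sed '$d' > extracted.pdf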

Make cross domain request for JSON with new Fetch API

I'm having CORS issues with fetch and have exhausted Google. In the CodePen below, I'm trying to hit Flickr's open API for some images. You'll see two buttons. "Search with jquery" works fine, using $.getJSON.
Of course, I'd like to use Fetch. "Search with Fetch" doesn't work. When I try to send the same request, I get this error:
Fetch API cannot load http://api.flickr.com/services/feeds/photos_public.gne?tags=dog&tagmode=any. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://s.codepen.io' is therefore not allowed access. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
console_runner-ba402f0a8d1d2ce5a72889b81a71a979.js:1 TypeError: Failed to fetch(…)
When I add mode: 'no-cors', then all I get back is an opaque response that doesn't contain any data.
Try for yourself! http://codepen.io/morgs32/pen/OMGEpm?editors=0110
Would love a hand. Thanks.
Looks like the Flickr API is using JSONP.
If you run curl 'http://api.flickr.com/services/feeds/photos_public.gne?tags=asd&tagmode=any&format=json' -H 'Host: api.flickr.com' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0' -H 'Accept: */*' in your console, you don't get a plain JSON response:
jsonFlickrFeed({
"title": "Recent Uploads tagged asd",
"link": "http://www.flickr.com/photos/tags/asd/",
"description": "",
"modified": "2016-02-16T18:34:00Z",
"generator": "http://www.flickr.com/",
"items": [
{
...
})
JSONP is automatically supported by getJSON, but not by fetch.
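If you only need plain JSON from that feed, Flickr's feed endpoint is supposed to drop the jsonFlickrFeed wrapper when you add nojsoncallback=1 to the query string (a Flickr-specific parameter, not something fetch gives you), which you can check with curl:
curl 'http://api.flickr.com/services/feeds/photos_public.gne?tags=dog&tagmode=any&format=json&nojsoncallback=1'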

My CORS rule doesn't fix my CORS error

I have some CORS rules on my S3 bucket.
This is what it looks like:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://prod-myapp.herokuapp.com/</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>http://prod-myapp.herokuapp.com/</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
When I am in my app and I try to upload a file (i.e. do a POST request), I see this error in my JS console:
XMLHttpRequest cannot load https://myapp.s3.amazonaws.com/. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://prod-myapp.herokuapp.com' is therefore not allowed access. The response had HTTP status code 403.
I attempted to do a POST from my CLI and I got this:
$ curl -v -H "Origin: http://prod-myapp.herokuapp.com" -X POST https://myapp.s3.amazonaws.com
* Rebuilt URL to: https://myapp.s3.amazonaws.com/
* Trying XX.XXX.XX.153...
* Connected to myapp.s3.amazonaws.com (XX.XXX.XX.153) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
* Server certificate: *.s3.amazonaws.com
* Server certificate: VeriSign Class 3 Secure Server CA - G3
* Server certificate: VeriSign Class 3 Public Primary Certification Authority - G5
> POST / HTTP/1.1
> Host: myapp.s3.amazonaws.com
> User-Agent: curl/7.43.0
> Accept: */*
> Origin: http://prod-myapp.herokuapp.com
>
< HTTP/1.1 412 Precondition Failed
< x-amz-request-id: SOME_ID
< x-amz-id-2: SOME_ID_2
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Thu, 17 Sep 2015 04:43:28 GMT
< Server: AmazonS3
<
<?xml version="1.0" encoding="UTF-8"?>
* Connection #0 to host myapp.s3.amazonaws.com left intact
<Error><Code>PreconditionFailed</Code><Message>At least one of the pre-conditions you specified did not hold</Message><Condition>Bucket POST must be of the enclosure-type multipart/form-data</Condition><RequestId>SOME_ID</RequestId><HostId>SOME_HOST_ID</HostId></Error>
I added the CORS rule that applies to the domain I am testing from about 10-15 minutes ago, but I was under the impression that it should take effect immediately.
Is there some remote cache that I need to bust to get my browser to work? I tried it both in normal mode and in Incognito Mode.
Also, based on the results from curl, it seems as if I am no longer getting an Access-Control-Allow-Origin header error, right? So, theoretically, it should be working in my browser.
Am I misreading what is happening at the command-line?
What else am I missing?
This is a slightly different solution, based on what I have done.
I set up a policy in S3 to allow putting content into the bucket only when the request comes from the restricted domain, identified by the Referer header:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::myapp/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "http://prod-myapp.herokuapp.com/*"
                }
            }
        }
    ]
}
So you can test the PUT method with:
curl -v -H "Referer: http://prod-myapp.herokuapp.com/index.php" -H "Content-Length: 0" -X PUT https://myapp.s3.amazonaws.com/testobject.jpg
The error message from curl says:
At least one of the pre-conditions you specified did not hold
Bucket POST must be of the enclosure-type multipart/form-data
You can make curl use the content-type "multipart/form-data" by using the -F option (e.g. "-F name=value"). You can use this multiple times to add all of the form parameters that you need. This page lists the parameters expected by S3:
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html
Specifying "file" and "key" gets you to the point where it fails with an "Access Denied" error. I assume that you've set it to be private, so you probably need the "access-key-id" or similar to get beyond this point.
curl -v -H "Origin: http://prod-myapp.herokuapp.com" -X POST \
https://myapp.s3.amazonaws.com -F key=wibble -F file=value
Also, based on the results from curl, it seems as if I am no longer getting an Access-Control-Allow-Origin header error, right? So, theoretically, it should be working in my browser.
It seems actually to make no difference whether you specify the -H origin option, so I'm not sure if your CORS setting is actually having any effect.
Check which requests you actually send to the server: before the POST request, an OPTIONS (preflight) request may be sent (Chrome does this).
I got the Precondition Failed error for CORS because only the POST method was allowed; allowing the OPTIONS method resolved the problem.
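To see how S3 answers that preflight, you can reproduce roughly what Chrome sends with curl (the origin and requested headers below are only examples):
curl -v -X OPTIONS https://myapp.s3.amazonaws.com/ \
  -H "Origin: http://prod-myapp.herokuapp.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: content-type"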

GitHub API 502 error

I'm trying to add a user to a GitHub repository via their API, but I always get a 502 Bad Gateway error.
With curl I send a request like this (<...> replaced by a real owner, repo, etc.):
curl -i -H 'Authorization: token xxxxxxxxxx' -XPUT https://api.github.com/repos/<owner>/<repo>/collaborators/<username>
I also tried it with this url:
curl -i -H 'Authorization: token xxxxxxxxxx' -XPUT https://api.github.com/teams/<id>/members/<username>
As the token I used a newly created personal access token.
But both times I get this back:
HTTP/1.0 502 Bad Gateway
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
A GET on each URL works fine, but a DELETE doesn't work either. So maybe it has to do with curl.
Quoting the reply from GitHub's support (with minor changes):
You're just getting trolled by HTTP and curl.
When you make a PUT request with no body, curl doesn't explicitly set a Content-Length header for that request. However, PUT requests with no Content-Length confuse servers and they respond in weird ways.
Can you please try explicitly setting the Content-Length header to 0, or supplying an empty body when making that request (so that curl can set the header for you)? You can accomplish that by adding -d "" to your command.
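Applied to the original command, that would look something like this (same token placeholder and path placeholders as above):
curl -i -H 'Authorization: token xxxxxxxxxx' -X PUT -d "" \
  https://api.github.com/repos/<owner>/<repo>/collaborators/<username>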

Does Amazon S3 need time to update CORS settings? How long?

Recently I enabled Amazon S3 + CloudFront to serve as a CDN for my Rails application. In order to use font assets and display them in Firefox or IE, I have to enable CORS on my S3 bucket.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Then I ran curl -I https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz, and I got:
HTTP/1.1 200 OK
x-amz-id-2: Ovs0D578kzW1J72ej0duCi17lnw+wZryGeTw722V2XOteXOC4RoThU8t+NcXksCb
x-amz-request-id: 52E934392E32679A
Date: Tue, 04 Jun 2013 02:34:50 GMT
Cache-Control: public, max-age=31557600
Content-Encoding: gzip
Expires: Wed, 04 Jun 2014 08:16:26 GMT
Last-Modified: Tue, 04 Jun 2013 02:16:26 GMT
ETag: "723791e0c993b691c442970e9718d001"
Accept-Ranges: bytes
Content-Type: text/javascript
Content-Length: 39140
Server: AmazonS3
Should I see 'Access-Control-Allow-Origin' somewhere? Does S3 take time to update CORS settings? Can I force the headers to expire if it is caching them?
Try sending the Origin header:
$ curl -v -H "Origin: http://example.com" -X GET https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz > /dev/null
The output should then show the CORS response headers you are looking for:
< Access-Control-Allow-Origin: http://example.com
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Credentials: true
< Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
Additional information about how to debug CORS requests with cURL can be found here:
How can you debug a CORS request with cURL?
Note that there are different types of CORS requests (simple and preflight), a nice tutorial about the differences can be found here:
http://www.html5rocks.com/en/tutorials/cors/
Hope this helps!
Try these:
Try to scope-down the domain names you want to allow access to. S3 doesn't like *.
CloudFront + S3 doesn't handle the CORS configuration correctly out of the box. A kludge is to append a query string containing the name of the referring domain, and explicitly enable support for query strings in your CloudFront distribution settings.
To answer the actual question in the title:
No, S3 does not seem to take any time to propagate the CORS settings. (as of 2019)
However, if you're using Chrome (and maybe others), then CORS settings may be cached by the browser so you won't necessarily see the changes you expect if you just do an ordinary browser refresh. Instead right click on the refresh button and choose "Empty Cache and Hard Reload" (as of Chrome 73). Then the new CORS settings will take effect within <~5 seconds of making the change in the AWS console. (It may be much faster than that. Haven't tested.) This applies to a plain S3 bucket. I don't know how CloudFront affects things.
(I realize this question is 6 years old and may have involved additional technical issues that other people have long since answered, but when you search for the simple question of propagation times for CORS changes, this question is what pops up first, so I think it deserves an answer that addresses that.)
You have a few problems with the way you test CORS.
Your CORS configuration does not have a HEAD method.
Your curl command does not send an Origin header with -H.
I am able to get your data by using curl as follows. However, it dumped garbage on my screen because your data is compressed binary.
curl --request GET https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz -H "Origin: http://google.com"