aws cli sync json file issue while setting content-type - amazon-s3

I'm trying to sync my JSON files to S3 with --content-type application/json, but when I inspect the response header it is content-type: binary/octet-stream.
sh "aws s3 sync ./public ${mybucket} --exclude '*' --include '*.json' --content-type 'application/json' --cache-control public,max-age=31536000,immutable"
Any help is appreciated.
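One likely cause, offered as a guess: aws s3 sync only transfers files whose size or timestamp changed, so objects that were uploaded earlier keep whatever metadata they were created with. A hedged sketch that forces the new headers onto the existing objects by copying them over themselves (s3://my-bucket/public is an assumed destination; substitute your own):

```shell
# Sketch: rewrite metadata on already-uploaded JSON objects in place.
# --metadata-directive REPLACE discards the old binary/octet-stream type.
aws s3 cp s3://my-bucket/public/ s3://my-bucket/public/ \
  --recursive --exclude '*' --include '*.json' \
  --content-type application/json \
  --cache-control 'public,max-age=31536000,immutable' \
  --metadata-directive REPLACE
```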

Related

Download pdf files with curl using SOAP request

I am trying to download a PDF file with curl by sending an HTTP POST request.
When I send the request it downloads a PDF file, but the file is not readable.
This is the request I send: curl -H "Content-Type: text/xml; charset=UTF-8" -H "http://xxxxxxx/webservice/content/getContent:" -d @content_request.txt -X POST http://xxxxxx/xxxx/ContentService?wsdl/ -o sortieContent.pdf
(I replaced the real address with xxxxx for privacy reasons.)
The downloaded PDF is not readable, as if it were corrupted.
What I understand is that curl answers with a single file (cat of the content below) that mixes several parts in different formats, so the saved file ends up corrupted.
--uuid:b47a2d96-bf98-4de9-99ae-9308d18ae599
Content-Id: rootpart*b47a2d96-bf98-4de9-99ae-9308d18ae599#example.jaxws.sun.com
Content-Type: application/xop+xml;charset=utf-8;type="text/xml"
Content-Transfer-Encoding: binary
79740application/pdfxxxxxxxFUOBqIAILPaDCmTvBRDXPhWnQQliV0ygEYrgPFVvDXw=
--uuid:b47a2d96-bf98-4de9-99ae-9308d18ae599
Content-Id: 5c3a7832-7ce4-4405-9cf6-20cb304972ca#example.jaxws.sun.com
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
%PDF-1.5
I tried replacing Content-Type: text/xml with Content-Type: application/pdf or Content-Type: application/octet-stream, but then it doesn't download the content at all.
Is it possible to download only the PDF, without the other parts, so the file is readable? How can I do it?
Thank you
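The response shown above is an MTOM/XOP multipart message, so the saved file is the whole envelope, not just the PDF. The proper fix is a MIME-aware parser, but as a rough sketch (assuming the PDF is the last part, and that your viewer tolerates any trailing MIME boundary after %%EOF) you can carve the bytes out starting at the first '%PDF' marker:

```shell
# Rough sketch: find the byte offset of the first '%PDF' marker in the
# multipart response, then keep everything from there onward.
start=$(grep -abo '%PDF' sortieContent.pdf | head -1 | cut -d: -f1)
tail -c +$((start + 1)) sortieContent.pdf > extracted.pdf
```

If the trailing boundary confuses your reader, trim it off the end of extracted.pdf as well.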

Uploading .html file to S3 Static website hosted bucket causes download of .html file in browser

I have an S3 bucket with 'Static Website Hosting' enabled. If I upload an HTML file to the bucket via the AWS Console, the file opens successfully in the browser. If I upload the same file using the AWS CLI, the browser downloads it rather than displaying it. Why?
The first file is available here: https://s3.amazonaws.com/test-bucket-for-stackoverflow-post/page1.html
The second file is available here: https://s3.amazonaws.com/test-bucket-for-stackoverflow-post/page2.html
I uploaded the first file in the AWS Console, the second was uploaded using the following command:
aws s3api put-object --bucket test-bucket-for-stackoverflow-post --key page2.html --body page2.html
The second file is downloaded because of its 'Content-Type' header. That header is:
Content-Type: binary/octet-stream
If you want it to display, it should be:
Content-Type: text/html
Try adding --content-type text/html to your put-object command.
Reference: https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html
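Putting it together, the fixed upload command, using the same bucket and key as in the question:

```shell
# Re-upload page2.html with an explicit Content-Type so the
# browser renders it instead of downloading it.
aws s3api put-object \
  --bucket test-bucket-for-stackoverflow-post \
  --key page2.html \
  --body page2.html \
  --content-type text/html
```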

Fine Uploader cannot draw thumbnail from amazon S3

I have a form with a Fine Uploader and I am loading an initial file list (as described here)
For the list of initial files I am also returning the thumbnailUrl which points to my files in Amazon's S3.
Now I see that Fine Uploader is actually making an HTTP request to S3 and gets a 200 OK but the thumbnail is not displayed and this is what I see in the console:
[Fine Uploader 5.1.3] Attempting to update thumbnail based on server response.
[Fine Uploader 5.1.3] Problem drawing thumbnail!
Response from my server:
{"name": 123, "uuid": "...", "thumbnailUrl": "...."}
Now Fine Uploader makes a GET request to S3 to the URL specified in the thumbnailUrl property. The request goes like this:
curl "HERE_IS_MY_URL" -H "Host: s3.eu-central-1.amazonaws.com" -H "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0" -H "Accept: image/png,image/*;q=0.8,*/*;q=0.5" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "DNT: 1" -H "Referer: http://localhost:9000/edititem/65" -H "Origin: http://localhost:9000" -H "Connection: keep-alive" -H "Cache-Control: max-age=0"
Response: 200 OK with Content-Type application/octet-stream
Is there any configuration option for Fine Uploader that I am missing? Could it be that this is a CORS-related issue?
Fine Uploader loads thumbnails at the URL returned by your initial file list endpoint using an ajax request (XMLHttpRequest) in modern browsers. It does this so it can scale and properly orient the image preview.
You'll need a CORS rule on your S3 bucket that allows JS access via a GET request. It will look something like this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>http://example.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
</CORSRule>
</CORSConfiguration>
Of course, you may need to allow other origins/headers/methods depending on whatever else you are doing with S3.
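For example, an illustrative (not prescriptive) broader rule that also allows HEAD requests, any origin, and any request header might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```

A wildcard origin is convenient for testing, but for production you will usually want to restrict it back to your site's origin.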

Change the default content type on multiple files that have been uploaded to a AWS S3 bucket

Using aws-cli I uploaded 5 GB of files to an Amazon S3 bucket that I have made a static website. Some of the files the site references are .shtml files, but S3 has defaulted their metadata content type to binary/octet-stream; I want those files to have a Content-Type of text/html, since otherwise they don't work in the browser.
Is there an aws-cli s3api command I can use to change the content type for all files with a .shtml extension?
You can set the content type on specific file types like the following:
aws s3 sync ${BASE_DIR} s3://${BUCKET_NAME} --exclude '*.shtml'
aws s3 sync ${BASE_DIR} s3://${BUCKET_NAME} --exclude '*' --include '*.shtml' --no-guess-mime-type --content-type text/html
The first command syncs everything except the .shtml files; the second syncs only the .shtml files, with the content type forced to text/html.
To modify the metadata on an Amazon S3 object, copy the object to itself and specify the metadata.
From StackOverflow: How can I change the content-type of an object using aws cli?:
$ aws s3api copy-object --bucket archive --content-type "application/rss+xml" \
--copy-source archive/test/test.html --key test/test.html \
--metadata-directive "REPLACE"
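Combining the two answers, a hypothetical loop (the bucket name is a placeholder) that re-copies every existing .shtml key over itself with the desired content type:

```shell
# Hypothetical batch fix: BUCKET is a placeholder for your bucket name.
BUCKET=my-static-site
# List all .shtml keys, then copy each object over itself with new
# metadata (REPLACE discards the old Content-Type).
aws s3api list-objects-v2 --bucket "$BUCKET" \
  --query "Contents[?ends_with(Key, '.shtml')].Key" --output text |
  tr '\t' '\n' |
  while read -r key; do
    aws s3api copy-object --bucket "$BUCKET" \
      --copy-source "$BUCKET/$key" --key "$key" \
      --content-type text/html \
      --metadata-directive REPLACE
  done
```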

How to serve gzipped assets from Amazon S3

I am currently serving all of my static assets from Amazon S3. I would like to begin using gzipped components. I have gzipped the files and confirmed that Amazon is setting the correct headers. However, the styles are not loading.
I am new to gzipping components, so I may be missing something. I can't find much information about this for Amazon S3.
For future reference to anyone else with this problem:
Gzip your components. Then remove the .gz extension leaving only the .css or .js extension. Upload the files to your bucket.
From your S3 dashboard, pull up the properties for the file that you just uploaded. Under the 'Metadata' header enter this information:
'content-type' : 'text/css' or 'text/javascript'
'content-encoding' : 'gzip'
These values are not offered by default, so you must type them in manually.
I also found a way to do this using the CLI, which is very useful when working with multiple files:
aws s3api put-object \
--bucket YOUR_BUCKET \
--key REMOTE_FILE.json \
--content-encoding gzip \
--content-type application/json \
--body LOCAL_FILE.json.gz
Notes:
Set content-type appropriately for what you're uploading
The file name on the server doesn't need to have the .gz extension
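The single put-object above generalizes to a batch. A sketch, in which ./assets and YOUR_BUCKET are assumed placeholders:

```shell
# Sketch: gzip each CSS file and upload it WITHOUT the .gz suffix,
# setting both Content-Encoding and Content-Type on every object.
for f in ./assets/*.css; do
  gzip -c "$f" > "$f.gz"
  aws s3api put-object \
    --bucket YOUR_BUCKET \
    --key "css/$(basename "$f")" \
    --content-encoding gzip \
    --content-type text/css \
    --body "$f.gz"
done
```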