How to serve a static html that has a php extension with S3? - amazon-s3

I can't change the extension of the file, but I need to serve it as a html file. No php inside.

That will work fine. Set the Content-Type to text/html.
You can do this in the S3 console (see How do I add metadata to an S3 object?) or when you initially upload the file.
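As a sketch with the AWS CLI (the bucket and file names here are hypothetical), the header can be set at upload time:

```shell
# Upload the .php file but tell S3 (and browsers) that it is HTML.
# --content-type sets the object's Content-Type response header;
# no PHP interpreter is involved, S3 just serves the bytes as-is.
aws s3 cp page.php s3://my-bucket/page.php --content-type text/html
```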

Related

uncompress gzipped assets with custom extension in browser

We want to serve gzipped files to the browser and it seems to be working with common file extensions such as css, txt, js etc.
However, when we change the extension to something else, e.g. filename.abc, the browser does not decompress the gzipped file even though the Content-Encoding header is gzip. We have tried various combinations of the Content-Type header.
How can we keep a custom file extension (say .abc) and still have browsers decompress the gzipped file automatically based on the header info?
We were adding the Content-Encoding and Content-Type headers via the AWS CLI's s3 cp --metadata option, which only sets user-defined x-amz-meta-* metadata. Setting them with --content-encoding and --content-type instead worked fine.
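A sketch of the working invocation (bucket and file names hypothetical): the dedicated flags set the real response headers, whereas --metadata only creates x-amz-meta-* entries that browsers ignore.

```shell
# Browsers decompress based on Content-Encoding, regardless of the
# file extension; Content-Type can be whatever fits the payload.
aws s3 cp filename.abc s3://my-bucket/filename.abc \
  --content-encoding gzip \
  --content-type text/plain
```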

Is there a way to reupload a file to S3 without having to reset a custom MIME type?

I have two xml files that need (or really want) to be served with specific MIME types that S3 doesn't serve by default. The files are sitemap.xml and rss.xml served as application/xml and application/rss+xml respectively.
I am able to set the Content-Type header for these files no problem.
The problem is that these files change every time my site changes. I should say that my site is completely static from a web server's perspective: I build the files locally and upload them to S3. When I upload my updated sitemap.xml and rss.xml files, though, S3 nukes my custom Content-Type settings.
Is there a way to get it to associate these settings with the name of the file as opposed to the instance of the file?
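One way to keep the types stable (a sketch; the bucket name is hypothetical) is to have the deploy step set Content-Type explicitly on every upload, so the mapping lives in the script, keyed by file name, rather than in state stored on the S3 object:

```shell
# Re-setting the Content-Type on every upload means a fresh copy
# can never "forget" it; the mapping travels with the deploy script.
aws s3 cp sitemap.xml s3://my-bucket/sitemap.xml --content-type application/xml
aws s3 cp rss.xml     s3://my-bucket/rss.xml     --content-type application/rss+xml
```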

s3_website setting MIME type for files without extension using

I'm trying to deploy a Jekyll site. Here's the flow:
Content is added and pushed to BitBucket
BitBucket Pipelines builds the site
Finds all HTML files in _site/ and removes their extension
Uses s3_website push (s3_website) to push contents to the designated S3 bucket
I'm removing the extension from the HTML files since I need clean URLs. However, there's an additional step required to set the MIME type on these files to ensure S3 serves them correctly.
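The extension-stripping step above can be sketched as a small shell loop (the sample paths are illustrative; the real pipeline would run this against _site/ inside BitBucket Pipelines):

```shell
# Create a sample _site tree with one HTML file to operate on.
mkdir -p _site/blog
echo '<h1>hi</h1>' > _site/blog/post.html

# Strip the .html extension from every HTML file under _site/,
# yielding clean URLs once the files are pushed to S3.
find _site -type f -name '*.html' | while read -r f; do
  mv "$f" "${f%.html}"
done
```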
Somehow, the MIME type is being detected by itself as of now, and the site works, but I'm uncomfortable not having control over it. So, I tried to add the following to s3_website.yml to set the MIME type:
content_type:
  "*": text/html
But that breaks the site.
How do I set s3_website to pick only those files that do not have an extension, and set the MIME type only to them?
The site works without setting the MIME type if Tika correctly detects the content type by itself.
In case users need more control over this, the s3_website gem supports it via a YAML configuration option as of version 1.15.0.
Add this to s3_website.yml:
extensionless_mime_type: text/html
This sets the MIME type to text/html for all files that don't have an extension.

Access js and css files from API gateway and lambda

I have an API Gateway in AWS that calls a Lambda function that returns some HTML. That HTML is then rendered properly on the screen, but without any styles or JS files included. How do I get those to the client as well? Is there a better method than creating /js and /css GET endpoints on the API Gateway to go get those files? I was hoping I could just store them in S3 and they'd get autoloaded from there.
Store them on S3, and enable S3 static website hosting. Then include the correct URL to those assets in the HTML.
I put in the exact address of each js/css file I wanted to include in my HTML. You need to use the https address, not the http address of the bucket. Each file has its own https address, which can be found by following Mark B's instructions above. Specifically: in the AWS console, navigate to the file in the S3 bucket, click the "Properties" button in the upper right, copy the "Link" field, and paste that into the HTML file (which was also hosted in S3 in my case). The HTML looks like this:
<link href="https://s3-us-west-2.amazonaws.com/my-bucket-name/css/bootstrap.min.css" rel="stylesheet">
I don't have static website hosting enabled on the bucket. I don't have any CORS permissions allowing reading from a certain host.

Handling file uploads with Restler

What is the best practice for implementing file uploads using the Restler framework?
I'd like to have an API call that gets the file, saves it to a CDN, and returns the CDN file URL to the caller. What is the best way to implement this?
File upload to CDN using our API
This requires two steps. The first is to get the file onto the API server:
Add UploadFormat to the supported formats.
Adjust the static properties of UploadFormat to suit your needs.
From your API method, use $_FILES and move_uploaded_file to move the file to the desired folder. This step is common to any PHP upload process.
Now that you have the file on the server:
Upload it to the CDN. You can use any means provided by your CDN; it can be FTP or an SDK.
Construct the CDN URL and return it to the client.
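Assuming the CDN is backed by S3 (plausible given this site's context, but the bucket and paths here are hypothetical), the last two steps might look like:

```shell
# The file has already been moved out of the PHP upload area by
# move_uploaded_file(), e.g. to /var/uploads/photo.jpg.
# Push it to the bucket backing the CDN, readable by anyone:
aws s3 cp /var/uploads/photo.jpg s3://my-cdn-bucket/uploads/photo.jpg \
  --acl public-read

# Construct the URL to return to the API caller:
echo "https://my-cdn-bucket.s3.amazonaws.com/uploads/photo.jpg"
```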