How to make the browser re-download HTML when its content changes in S3? - amazon-s3

I am using an S3 bucket to host my web site. Whenever I release a new version of the site, I want all clients to download it from S3 instead of reading it from their browser cache. I know I can set an expiry time on the objects saved in the S3 bucket, but that is not an ideal solution, since users would still be served the cached content for a period of time. Is there a way to force the browser to download content when it has changed in the S3 bucket?

Irrespective of whether you are using an S3 bucket or any other hosting server, caching can be controlled by appending a content hash to the file name.
For example, your JS bundle should be named something like bundle.7e2c49a622975ebd9b7e.js.
When you deploy again, the name changes to some other hash value, e.g. bundle.205199ab45963f6a62ec.js.
This way the browser automatically knows that a new file has arrived and downloads it again.
This can easily be done with any popular bundler such as Grunt, Gulp, or webpack.
webpack example:
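A minimal sketch of such a webpack configuration is shown below; the entry point and output path are placeholders for your own project, and output.clean assumes webpack 5.

// webpack.config.js: minimal sketch, adjust entry/output for your project
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    // [contenthash] changes whenever the bundle's content changes,
    // so browsers treat every release as a brand new file
    filename: 'bundle.[contenthash].js',
    clean: true // remove stale hashed bundles from earlier builds (webpack 5)
  }
};

Your index.html then has to reference the new hashed name on each release; plugins such as html-webpack-plugin normally take care of that for you.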

Related

Migrate Moodle data to Amazon S3

I have migrated the Moodle data directory to Amazon S3. Now I am trying to access all the files from S3 storage using the moodle-tool_objectfs plugin.
I have attached a screenshot of my settings. I want to serve all media files from Amazon S3 instead of from the server file system, for example the site logo, course materials in PDF format, etc.
Thanks for the shout-out Russell!
It sounds like you have manually migrated the content to S3 rather than relying on this plugin to do the work for you. I'd guess that your manual migration has put the files into a structure/path that the plugin isn't expecting, especially if you have copied your complete moodledata folder into S3 and not just the uploaded user files. (The tool_objectfs plugin does not replace the need for a normal moodledata directory; it just allows the majority of your files to be stored in S3.)
Usually you would have a Moodle site set up with a normal moodledata directory, and then you would install our tool_objectfs plugin, which would migrate files from moodledata to your S3 storage, relying on the plugin to perform the migration for you.

CloudFront caching React website pages despite using file versioning

So, to explain the problem: I have a static website hosted on S3 with CloudFront as the CDN. I used create-react-app (CRA) to create the React package for my website. CRA versions the webpack build files by default, and the versioned file names are visible in the S3 bucket as well.
Still, when I do a deployment, the latest changes don't show up (I have even waited a day hoping they would). I am not sure what is causing this issue. Can anyone please help?
I have added screenshots of my CloudFront Behaviors tab and of the S3 bucket files showing the build versions.
PS: if it is a case of browser caching, how can I disable it so that my clients always see the latest version of my website?
Hi, you have to invalidate the cache in the distribution's Invalidations tab. I usually invalidate everything by passing /*, but you can also specify a folder or file to clear the cache for.
Example: /index.html
Even in your CI/CD pipeline you can have the deploy agent invalidate the cache by passing the distribution ID and path.
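For example, with the AWS SDK for JavaScript (v2) an "invalidate everything" call could look roughly like the sketch below; the distribution ID is a placeholder, and the aws cloudfront create-invalidation CLI command does the same job inside a deploy script.

// Sketch: invalidate all cached paths for a distribution with aws-sdk v2.
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' }); // CloudFront is global; the SDK still wants a region
var cloudfront = new AWS.CloudFront();

cloudfront.createInvalidation({
  DistributionId: 'E1234567890ABC', // placeholder: your distribution ID
  InvalidationBatch: {
    CallerReference: Date.now().toString(), // must be unique per invalidation
    Paths: {
      Quantity: 1,
      Items: ['/*'] // invalidate every cached path
    }
  }
}, function (err, data) {
  if (err) console.error(err);
  else console.log('Invalidation started:', data.Invalidation.Id);
});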

AWS S3 configuration to avoid waiting time for multiple requests

I have static content uploaded to an S3 bucket.
When I hit the URL for the first time, the content takes a while to load. It is a single HTML page with multiple CSS and JS files.
So is there any kind of configuration needed at the S3 level to optimize this?
I am trying to figure out whether there are settings such as the number of connections, like we have in Apache.
There are no configurations available for Amazon S3. It just works!
Some ideas for speeding up your download:
Create a bucket that is located closer to you/your users (less latency)
Compress (gzip) your files before uploading them to Amazon S3 and serve them with a Content-Encoding: gzip header so browsers can still read them (smaller, faster downloads); see the sketch after this list
Check the Network console in your web browser to determine where the time is being taken
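As a sketch of the compression idea, assuming the AWS SDK for JavaScript (v2) and placeholder bucket/file names: pre-compress the asset and upload it with a Content-Encoding header, since S3 serves the bytes exactly as stored and will not compress them for you.

// Sketch: gzip a static asset and upload it so browsers decompress it transparently.
var fs = require('fs');
var zlib = require('zlib');
var AWS = require('aws-sdk');

var s3 = new AWS.S3();
var gzipped = zlib.gzipSync(fs.readFileSync('dist/app.js')); // placeholder file

s3.putObject({
  Bucket: 'my-static-site',              // placeholder bucket name
  Key: 'app.js',
  Body: gzipped,
  ContentType: 'application/javascript',
  ContentEncoding: 'gzip'                // tells the browser to decompress automatically
}, function (err) {
  if (err) console.error(err);
  else console.log('uploaded');
});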

s3fs disable cache

I have a problem with viewing video from my bucket on S3.
I'm using an EC2 instance. The bucket is mounted as a folder via s3fs. When I try to load a big file there is a pause before the download starts. During this pause I can see the file being downloaded (cached) to EC2. Once it has been cached, the file starts to download in the browser.
I tried to configure s3fs to disable the cache, but the option -o use_cache="" doesn't work. I also tried s3fslite, but it too caches files before sending them to the user.
How do I disable caching? Or is there a faster solution that lets me use an S3 bucket like a folder on EC2?
You don't need to download the files; either serve them directly from S3, or use CloudFront.
If you are trying to control access to the files, use signed URLs, which give the user a certain amount of time to access the file before the link expires.
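A sketch of generating such a link with the AWS SDK for JavaScript (v2); the bucket name, key, and expiry below are placeholders.

// Sketch: create a time-limited link the browser can stream the video from
// directly, so the file never has to pass through the EC2 instance.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var url = s3.getSignedUrl('getObject', {
  Bucket: 'my-video-bucket',    // placeholder bucket name
  Key: 'videos/big-file.mp4',   // placeholder object key
  Expires: 3600                 // seconds the link stays valid
});

console.log(url); // hand this URL to the browser / video player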

How to receive an uploaded file using node.js formidable library and save it to Amazon S3 using knox?

I would like to upload a form from a web page and directly save the file to S3 without first saving it to disk. This node.js app will be deployed to Heroku, where there is no local disk to save the file to.
The node-formidable library provides a great way to upload files and save them to disk. I am not sure how to stop formidable (or connect-form) from saving the file first. The Knox library, on the other hand, provides a way to read a file from disk and save it to Amazon S3.
1) Is there a way to hook into formidable's events (e.g. the part's 'data' events) to send the stream to Knox, so that I can directly save the uploaded file in my Amazon S3 bucket?
2) Are there any libraries or code snippets that allow me to take the uploaded file and save it directly to Amazon S3 using node.js?
There is a similar question here but the answers there do not address NOT saving the file to disk.
It looks like there is no good way to do it. One reason might be that the node-formidable library saves the uploaded file to disk; I could not find any option to do otherwise. The knox library then takes the saved file from disk and uploads it to Amazon S3 using your credentials.
Since I cannot save files locally on Heroku, I ended up using the transloadit service. Though their authentication docs have a bit of a learning curve, I found the service useful.
For those who want to use transloadit with node.js, the following code sample may help (the transloadit page had only Ruby and PHP examples):
// Compute the transloadit request signature: an HMAC-SHA1 of the string to
// sign (see transloadit's authentication docs for what goes in it), keyed
// with your auth secret and hex-encoded.
var crypto = require('crypto');

var signature = crypto.createHmac('sha1', 'auth secret')
  .update('some string')
  .digest('hex');

console.log(signature);
This is Andy, creator of AwsSum:
https://github.com/appsattic/node-awssum/
I just released v0.2.0 of this library. It uploads the files that were created by Express' bodyParser(), though as you say, this won't work on Heroku:
https://github.com/appsattic/connect-stream-s3
However, I shall be looking at adding the ability to stream from formidable directly to S3 in the next (v0.3.0) version. For the moment though, take a look and see if it can help. :)
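Until that lands, here is a rough sketch of the direction such streaming could take, using formidable's onPart hook together with knox's putStream. The credentials and bucket are placeholders, and knox needs a Content-Length up front, so the request's overall content-length is used here as an approximation (it also counts the multipart boundaries and other fields), which makes this a starting point rather than a finished implementation.

var formidable = require('formidable');
var knox = require('knox');

// Placeholder credentials and bucket
var s3 = knox.createClient({
  key: 'AWS_ACCESS_KEY',
  secret: 'AWS_SECRET_KEY',
  bucket: 'my-uploads-bucket'
});

function handleUpload(req, res) {
  var form = new formidable.IncomingForm();

  form.onPart = function (part) {
    if (!part.filename) {
      // Not a file: let formidable handle ordinary form fields as usual
      return form.handlePart(part);
    }

    var headers = {
      // Approximation: the whole request body length, not just this part
      'Content-Length': req.headers['content-length'],
      'Content-Type': part.mime
    };

    // Pipe the incoming file part straight to S3, never touching disk
    s3.putStream(part, '/' + part.filename, headers, function (err, s3res) {
      if (err) {
        res.writeHead(500);
        return res.end('upload failed');
      }
      res.end('uploaded with status ' + s3res.statusCode);
    });
  };

  form.parse(req);
}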