Prevent Amazon S3 bucket forcing download

I'm using an Amazon S3 bucket to store user uploads for public access.
When you click a link to a resource, it forces a download even when the file could be viewed in the browser (e.g. JPGs).

As TheZuck pointed out, the problem was that the content type wasn't being set when the file was uploaded.
I'm using an Amazon S3 PHP class (http://undesigned.org.za/2007/10/22/amazon-s3-php-class), so I simply had to pass the content type (mime_content_type($file) in PHP) as the last argument of the putObjectFile method:
$s3class->putObjectFile($file, S3BUCKET, $target_location, S3::ACL_PUBLIC_READ, NULL, mime_content_type($file));
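
For files that were already uploaded without a content type, the object can be fixed in place. Here is a minimal boto3 sketch (the bucket and key names are hypothetical) that copies the object onto itself while replacing its metadata:

import boto3

s3 = boto3.client('s3')

# Copying the object onto itself with MetadataDirective='REPLACE' rewrites
# its stored Content-Type; without REPLACE, S3 keeps the old metadata.
s3.copy_object(
    Bucket='my-uploads-bucket',    # hypothetical bucket name
    Key='uploads/photo.jpg',       # hypothetical key
    CopySource={'Bucket': 'my-uploads-bucket', 'Key': 'uploads/photo.jpg'},
    ContentType='image/jpeg',
    MetadataDirective='REPLACE',
    ACL='public-read',  # a copy resets the ACL, so re-grant public read
)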

Related

How to set up S3 with Pocketbase

I am new to Pocketbase. I found that Pocketbase offers an S3 file system config, but I don't know how to set it up completely. Currently, I am uploading my images to S3 separately and then saving the link in the DB.
My bucket is currently publicly accessible; if possible, do you know if I can make the bucket accessible only to Pocketbase?

After saving a file in S3 via boto3, it is downloaded instead of viewed

I am saving some HTML content to Amazon S3 from my Flask API, using the boto3 module with this code:
s3.Object(BUCKET_NAME, PREFIX + file_name+'.html').put(Body=html_content)
The file is stored in S3, but when I go to view it, it is downloaded instead of displayed. I would rather view the file in the browser than download it. How can I fix this from the boto3 call?
Go to the S3 bucket and browse to the file > Properties > Metadata. There is a key called Content-Type that tells the browser what kind of content it is; it is probably set to a binary type, which is why it is only downloaded at the moment.
If you change this value to "text/html" (or "text/plain", for example), the browser will attempt to display the file instead.
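
To fix it from boto3 itself, the put call accepts a ContentType parameter; here is a minimal sketch reusing the names from the question (BUCKET_NAME, PREFIX, file_name, html_content):

import boto3

s3 = boto3.resource('s3')

# With ContentType set at upload time, S3 serves the object as text/html
# and the browser renders it instead of downloading it.
s3.Object(BUCKET_NAME, PREFIX + file_name + '.html').put(
    Body=html_content,
    ContentType='text/html',
)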

How to make the browser download HTML when its content changes in S3?

I am using an S3 bucket to host my website. Whenever I release a new version of the website, I want all clients to download it from S3 instead of reading it from their browser cache. I know I can set an expiry time for the objects saved in the S3 bucket, but that is not an ideal solution, since users would still see the cached content for a period of time. Is there a way to force the browser to download content when it has changed in the S3 bucket?
Irrespective of whether you are using an S3 bucket or any other hosting server, caching can be controlled by appending a content hash to the file name.
For example, your JS bundle name should look like bundle.7e2c49a622975ebd9b7e.js.
When you deploy again, it will change to some other hash value, e.g. bundle.205199ab45963f6a62ec.js.
Because the name changes, the browser automatically knows that a new file has arrived and downloads it again.
This can easily be done with popular bundlers such as Grunt, Gulp, or webpack; webpack, for example, supports a [contenthash] placeholder in its output file names.
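
To illustrate the same idea outside a bundler, here is a minimal Python sketch (assuming boto3; the bucket name is hypothetical) that names the uploaded file after a hash of its contents and marks it cacheable long-term, so each new deploy produces a new URL:

import hashlib
import boto3

s3 = boto3.client('s3')

def upload_hashed(path, bucket):
    with open(path, 'rb') as f:
        data = f.read()
    digest = hashlib.md5(data).hexdigest()[:20]   # short content hash
    key = 'bundle.%s.js' % digest                 # e.g. bundle.7e2c49a6....js
    # The same content always gets the same name, so it can be cached
    # indefinitely; changed content gets a new name and a new URL.
    s3.put_object(Bucket=bucket, Key=key, Body=data,
                  ContentType='application/javascript',
                  CacheControl='public, max-age=31536000, immutable')
    return key

print(upload_hashed('bundle.js', 'my-site-bucket'))   # hypothetical bucket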

s3fs disable cache

I have a problem with viewing video from my bucket on S3.
I'm using an EC2 instance, with the bucket mounted as a folder via s3fs. When I try to load a big file, there is a pause before the download starts; during this pause, I can see the file being downloaded (cached) to EC2. Only once it is cached does the file start downloading in the browser.
I tried configuring s3fs to disable the cache, but the option -o use_cache="" doesn't work. I also tried s3fslite, but it likewise caches files before sending them to the user.
How can I disable caching? Or is there a faster solution that lets me use an S3 bucket like a folder on EC2?
You don't need to download the files; either serve them directly from S3, or use CloudFront.
If you are trying to control access to the files, use signed URLs, which give the user a certain amount of time to access the file before the link expires.
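
As a sketch of that pattern (assuming boto3 and Flask; the bucket name is hypothetical), the app on EC2 can hand out a short-lived presigned URL and redirect the browser to it, so the video streams straight from S3 and nothing is cached on the instance:

import boto3
from flask import Flask, redirect

app = Flask(__name__)
s3 = boto3.client('s3')

@app.route('/video/<path:key>')
def video(key):
    # Presigned GET URL, valid for one hour; the browser fetches the
    # file directly from S3 instead of going through the EC2 instance.
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-video-bucket', 'Key': key},  # hypothetical bucket
        ExpiresIn=3600,
    )
    return redirect(url)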

Parallel Download from S3 to EC2

I was reading this blog entry about parallel upload into S3 using boto. Near the end it suggests a few tools for downloading using multiple connections (axel, aria2, and lftp). How can I use these with S3? I don't know how to pass the authentication keys to Amazon to access the file. I could make the file public temporarily, but that solution is not ideal.
Generate a signed URL using the AWS API and use that for your downloads. Only someone with the signed URL (which expires after the given timeout) can download the file.
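
A minimal Python sketch of that approach (assuming boto3 and aria2 are installed; the bucket and key names are hypothetical): generate the signed URL, then hand it to aria2c for a multi-connection download:

import subprocess
import boto3

s3 = boto3.client('s3')

# Signed GET URL, valid for 15 minutes; no need to make the file public.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-data-bucket', 'Key': 'big-file.bin'},  # hypothetical
    ExpiresIn=900,
)

# aria2c opens up to 16 connections to the server (-x) and splits the
# download into that many pieces (-s), fetching them in parallel.
subprocess.run(['aria2c', '-x', '16', '-s', '16', '-o', 'big-file.bin', url],
               check=True)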