s3 never-ending pending audio requests

I have an mp3 file on S3 (and have seen this with many other mp3 files) that is not playing in Chrome (or other browsers either: Firefox, Safari, etc.). The network panel in Chrome shows a pending request that is seemingly never answered by S3; however, if I do a wget to the URL, I get an immediate response.
Additionally, if I serve the exact same file off a server running nginx, I can access the URL in Chrome instantaneously. I know that S3 supports byte-range requests, so Chrome's byte-range queries should not be an issue. I've also verified that the file is accessible and that its content type is audio/mpeg.
Here is the file in question:
http://s3.amazonaws.com/josh-tmdbucket/23/talks/ffcc525a0761cd9e7023ab51c81edb781077377d.mp3
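For reference, S3's range-request support can be checked from the shell against the URL above; a 206 Partial Content response confirms byte ranges work for this object:
# Request only the first two bytes and print the response headers
curl -s -D - -o /dev/null -H "Range: bytes=0-1" \
    "http://s3.amazonaws.com/josh-tmdbucket/23/talks/ffcc525a0761cd9e7023ab51c81edb781077377d.mp3"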
[Screenshot: Chrome's network panel for that URL, showing the request stuck in a pending state.]

I solved this by creating a CloudFront distribution in front of the bucket. For example, if you have a bucket named example-bucket, go to CloudFront and click Create Distribution; the bucket appears in Origin Domain Name as example-bucket.s3.amazonaws.com
Once the distribution is deployed, load the content through the distribution's domain name (the dxxxxxxxxxxxx.cloudfront.net-style URL that CloudFront assigns) rather than the raw S3 URL.
This worked for me, but I am not sure it will work for others.
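The same distribution can also be created from the CLI; a minimal sketch, assuming a hypothetical bucket named example-bucket:
# Put a CloudFront distribution in front of the bucket; the JSON output
# includes the dxxxxxxxxxxxx.cloudfront.net domain name to use in URLs.
aws cloudfront create-distribution \
    --origin-domain-name example-bucket.s3.amazonaws.com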

Had the same exact issue with files.
The original URL looked like this:
https://my-bucket-name.s3-eu-west-1.amazonaws.com/EIR.mp4
Adding a CloudFront distribution solved all my issues.
The URL changed only a bit:
https://my-bucket-name.s3.amazonaws.com/EIR.mp4
(but you can modify it a little while creating the distribution, or even set up your own DNS if you wish).

Related

Chromium without disable-web-security flag and cross origin for specific host

Working with the aws-chrome-lambda lib, I'm using headless Chromium to render data saved on S3 via a presigned URL.
I'd like to block access to local files while still being able to access the presigned URL.
aws-chrome-lambda sets --disable-web-security by default. I do want web security, so I removed that flag to block access to the local filesystem. The problem is that this also blocks any other origin, so I cannot access the data in S3.
I've also tried adding the flag --unsafely-treat-insecure-origin-as-secure with the remote origin, without success.
Running in a sandbox is not possible with this lib.
Any idea how to tackle this?
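For what it's worth, the flag combination described above can be reproduced against a local Chromium outside the lib; a sketch with placeholder bucket/object names (note that Chromium often ignores the origin flag unless a fresh --user-data-dir is also given):
# Run headless WITHOUT --disable-web-security and try marking the presigned
# S3 origin as secure, as the question describes. URLs are placeholders.
chromium --headless --disable-gpu --user-data-dir=/tmp/chrome-test \
    --unsafely-treat-insecure-origin-as-secure="https://my-bucket.s3.amazonaws.com" \
    --dump-dom "https://my-bucket.s3.amazonaws.com/report.html?X-Amz-Signature=abc" > rendered.html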

How to make the browser re-download html when its content changes in s3?

I am using an s3 bucket to host my web site. Whenever I release a new version of the site, I want all clients to download it from s3 instead of reading their browser cache. I know I can set an expiry time on the objects saved in the s3 bucket, but that is not an ideal solution, since users would still see the cached content for a period of time. Is there a way to force the browser to re-download content when it changes in the s3 bucket?
Irrespective of whether you are using an s3 bucket for hosting or any other server, caching can be controlled by appending a content hash to the file name.
For example, your js bundle name should look like bundle.7e2c49a622975ebd9b7e.js.
When you deploy again, it will change to some other hash value, e.g. bundle.205199ab45963f6a62ec.js.
By doing this, the browser automatically knows that a new file has arrived and downloads it again.
This is easily done with any popular bundler like grunt, gulp, or webpack; in webpack, for instance, it is the [contenthash] placeholder in output.filename. A manual equivalent is sketched below.
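If you are not using a bundler, the same cache-busting trick can be scripted by hand at upload time; a rough sketch, where example-bucket and bundle.js are placeholders:
# Name the uploaded file after a hash of its content, so any change produces
# a new URL and stale cached copies are simply never requested again.
HASH=$(md5sum bundle.js | cut -c1-20)
aws s3 cp bundle.js "s3://example-bucket/bundle.${HASH}.js" \
    --cache-control "public, max-age=31536000"
echo "Now reference bundle.${HASH}.js from your HTML"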

Akamai CDN Issue with URL Query Parameter

I am working on a client project where the Akamai CDN is configured. They use Amazon S3 for hosting.
Problem:
I've committed the code in a branch and can see the changes deployed in the codebase on the server.
Now I am hitting the server URL in a browser to verify my code change.
I could not see the expected UI change.
I observed that the CSS file URL comes with query parameters (i.e.: server.com/css/filename.css??browserId=other&themeId=AbcTheme_WAR_abctheme&?t=125786954258&languageId=en_US&b=8569&t=1259648753695)
When I open the same file URL with the query parameters removed,
I can see my changes.
Questions:
Is this an issue related to the CDN?
Is the CDN managing different versions of the same file to be served?
If so, shouldn't my changes show up in the latest version of the file referenced by the page, the one requested with query parameters?
I know a CDN takes time to refresh pages, but I am verifying my changes more than 48 hours after the deployment.
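One way to check whether Akamai is caching per query string is to request the CSS URL with Akamai's debug Pragma headers; a sketch (these headers only work if debugging is enabled for the property, and the https scheme here is an assumption):
# X-Cache and X-Cache-Key in the response show the cache status and the exact
# cache key, revealing whether the query string is part of the key.
curl -sI -H "Pragma: akamai-x-cache-on, akamai-x-get-cache-key, akamai-x-check-cacheable" \
    "https://server.com/css/filename.css??browserId=other&themeId=AbcTheme_WAR_abctheme&?t=125786954258&languageId=en_US&b=8569&t=1259648753695"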
Any help would be appreciated.
Thanks.

AWS s3 configuration to avoid waiting time for multiple requests

I have static content uploaded to an S3 bucket.
When I hit the URL for the first time, the content takes a while to load. It is a single HTML page with multiple CSS and JS files.
Is there any kind of configuration needed at the S3 level to optimize this?
I am trying to find settings such as the number of connections, like we have in Apache.
There are no configurations available for Amazon S3. It just works!
Some ideas for speeding up your downloads:
Create the bucket in a region closer to you/your users (less latency)
Compress (gzip) your files before uploading to Amazon S3 (faster download)
Check the Network console in your web browser to determine where the time is being taken
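To illustrate the compression idea: gzip the asset and tell S3 the encoding, so browsers transparently decompress it. The bucket and file names below are placeholders:
# Upload a gzipped copy under the original name with the matching
# Content-Encoding, so clients download the smaller payload.
gzip -9 -c app.js > app.js.gz
aws s3 cp app.js.gz s3://example-bucket/app.js \
    --content-encoding gzip \
    --content-type application/javascript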

Does Amazon S3 support symlinks?

I have an object which I would like to address using different keys without actually copying the object itself, like a symlink in Linux. Does Amazon S3 provide such a thing?
S3 does not support the notion of a symlink, where one object key is treated as an alias for a different object key. (You've probably heard this before: S3 is not a filesystem. It's an object store).
If you are using the static web site hosting feature, there is a partial emulation of this capability, with object-level redirects:
http://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html
This causes requests for "object-a" to be greeted with a 301 Moved Permanently response, with the URL for "object-b" in the Location: header, which serves a similar purpose, but is of course still quite different. It only works if the request arrives at the website endpoint (not the REST endpoint).
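For illustration, the redirect can be attached when uploading the alias object; a sketch with placeholder bucket and key names, assuming the bucket has website hosting enabled:
# Upload a zero-byte alias object whose only job is to 301-redirect
# website-endpoint requests for object-a to /object-b.
touch empty
aws s3 cp empty s3://example-bucket/object-a --website-redirect /object-b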
If you use a reverse proxy (haproxy, nginx, etc.) in EC2 to handle incoming requests and forward them to the bucket, then of course you have the option, at the proxy layer, of rewriting the request URL before forwarding to S3. That lets you translate the incoming request path into whatever you need to present to S3. How practical this is depends on your application and motivation, but it is one of the strategies I use to decouple where an object appears in a bucket from where it is actually stored, rewriting paths based on other attributes of the request.
I had a similar question and needed a solution, which I describe below. While S3 does not support symlinks, you can approximate one with the following:
# Store a pointer object whose body is just the URL of the real target
echo "https://s3.amazonaws.com/my.bucket.name/path/to/a/targetfile" > file
aws s3 cp file s3://my.bucket.name/file
# Resolve the pointer: curl prints the stored URL, wget then downloads the target
wget "$(curl -s https://s3.amazonaws.com/my.bucket.name/file)"
What this actually does is fetch the contents of the pointer file, which is just the URL of the target, and pass that to wget (curl, redirected to a file, can be used in place of wget).
This is really just a workaround, though: it's not a true symlink, but a creative way to simulate one.
Symlinks no, but same object to multiple keys, maybe.
Please refer to Rodrigo's answer at Amazon S3 - Multiple keys to one object
If you're using S3's website serving, you can do it via the x-amz-website-redirect-location header.
If you're not using website serving, you can create a custom metadata header (x-amz-meta-KeyAlias) and resolve it manually in your client.
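A sketch of that custom-metadata variant, with placeholder bucket and key names (S3 stores user metadata keys lowercased, and resolving the alias is entirely up to your client):
# Create an empty alias object that records the real key in its metadata.
touch empty
aws s3 cp empty s3://example-bucket/alias-key --metadata KeyAlias=path/to/real-object
# A client reads the alias, then issues a second GET for the real key:
aws s3api head-object --bucket example-bucket --key alias-key \
    --query "Metadata.keyalias" --output text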