Ruby on Rails - serving assets ourselves vs S3

We are trying to improve our site performance. As part of that, we are planning to do two things:
1. All static images are served via S3. This way, the images are served cookie-less.
2. We have a bunch of other static content - JavaScript, CSS, images such as our logo, etc. We are wondering what the best way is to serve these.
Currently, they are simply stored in the assets folder. This is nice and easy, and since Rails fingerprints each asset for cache busting, all our current needs are met. However, going forward, we realize that this is not the right way to serve these images (our logo, etc.).
So what's the best way to serve this sort of content?
Thanks!
Ringo

If you are already using S3, then I would put all of these files on S3 too. Then use AWS CloudFront (Content Delivery Network) so that they get served up fast. The cost of CloudFront is really negligible.
You can use a gem like https://github.com/rumblelabs/asset_sync to make it easier to manage.
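On the Rails side this is mostly configuration. A minimal sketch, assuming a CloudFront distribution in front of an S3 bucket (the hostname and bucket name below are placeholders), using options from the asset_sync README:

    # config/environments/production.rb
    # Serve precompiled assets from CloudFront instead of your own app servers.
    config.action_controller.asset_host = "https://d1234abcd.cloudfront.net"  # placeholder

    # config/initializers/asset_sync.rb
    AssetSync.configure do |config|
      config.fog_provider          = "AWS"
      config.aws_access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
      config.aws_secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
      config.fog_directory         = "my-assets-bucket"  # placeholder bucket name
      config.gzip_compression      = true                # upload gzipped css/js where possible
    end

With that in place, rake assets:precompile pushes the fingerprinted files up to S3, and the image_tag/javascript_include_tag helpers emit CloudFront URLs automatically.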

Related

How to go fully static in Nuxt.js including download links, images, background images?

I cannot figure out how to make Nuxt generate a fully static website. It makes the API calls static, and that is awesome. But all images and download links are still making requests to a remote server.
Is it possible to generate a fully static website where all links to external files (<img src="remote.jpg">, <a href="remote.pdf">, background-image: url('remote.jpg')) are downloaded and placed in a local folder, with every URL then replaced to point at the local files? Or does Nuxt do SSG only for APIs?
You could indeed optimize by putting all of your assets into the /static directory.
It will require some CI, or any kind of build step, to keep them properly updated and organized, but nothing impossible (and this will keep everything in the same place). Meanwhile, having resources outside of your server is not bad in principle either.
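Nuxt won't fetch those remote files into /static for you, but the build step can be tiny. A hedged sketch of such a step (shown here as a Ruby script; the URLs and paths are hypothetical, and any scripting language would do):

    #!/usr/bin/env ruby
    # Hypothetical pre-build step: mirror remote assets into Nuxt's static/
    # folder so the generated site has no external file dependencies.
    require "open-uri"
    require "fileutils"

    REMOTE_ASSETS = [
      "https://example.com/remote.jpg",  # placeholder URLs - list yours here
      "https://example.com/remote.pdf",
    ]

    REMOTE_ASSETS.each do |url|
      dest = File.join("static", File.basename(url))
      FileUtils.mkdir_p(File.dirname(dest))
      File.binwrite(dest, URI.open(url).read)  # download and save locally
      puts "fetched #{url} -> #{dest}"
    end

After that, templates can reference /remote.jpg and friends instead of the remote host; rewriting those references in your templates is the part you still have to wire up yourself.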

Will avoiding "bundling and minification" in favor of serving .js and .css from Azure blob storage be beneficial?

We have an MVC web site deployed in a Cloud Service on Microsoft Azure. For boosting performance, some of my colleagues suggested that we avoid the bundling and minification provided by ASP.NET MVC4 and instead store the .js and .css files on an Azure blob. Please note that the solution does not use a CDN, it merely serves the files from a blob.
My take on this is that just serving the files this way will not bring any major performance benefits. Since we are not using a CDN, the files will always be served from the region in which our storage is deployed. Every time a user requests a page, at least for the first time, the data will flow across the data center boundary, which in turn incurs cost. Also, since the files are not bundled but kept as individual files, there will be more server requests, so we are forfeiting the benefits of bundling and minification. The only benefit I see to this approach is that we can change the .js and .css files and upload them without needing to re-deploy.
Can anyone please tell me which of the two options is preferable in terms of performance?
I can't see how this would be better than bundling and minification, unless the intent is to blob-store your minified and bundled files. The whole idea is to reduce requests to the server, because JavaScript processes one thread at a time and on top of that there's the added download time per file; I'd do everything I can to reduce that request count.
As a separate note on the image side, I'd also combine images into a single image and use CSS sprites, à la: http://vswebessentials.com/features/bundling

ImageResizer, Amazon S3 and caching

I am building a photo sharing site, and using Amazon S3 for my storage. Everything is working great, except that the pages render slowly.
When I have over 100 images on the page, with requests that look like mysite/s3/bucket/image.jpg?w=200, does this mean that every image is first downloaded and then resized? If so, how do I configure caching of thumbnails? I can't seem to find that info in the documentation.
You need the DiskCache (and possibly SourceDiskCache) plugins installed. DiskCache will cache the resized images to disk, while SourceDiskCache will cache the S3 images to disk.
If you only have a couple of versions of each S3 image, output caching alone is sufficient - but it is definitely needed.
It's also important to think about the bandwidth requirements between the ImageResizer server and S3. If you're using EC2, make sure you're in the same region as the S3 bucket. If you're using a VM, make sure that you have a big pipe.
The bottleneck is always I/O.

Capistrano deployment with lots of images

So we have this basic Rails 3 website with Capistrano 2.5.19 plus the multi-stage extension.
The site is simple, but it has 40,000+ images out there, so deployments take a long time to both our QA server and production. The issue is not usually network load, because Capistrano only downloads what changed in svn. The issue is the time it takes for our servers to back up the old release (40k worth of images) and copy the new release (another 40k of images).
Does anyone know of a best-practice approach to this? Is the only way to split this into two SVN folders and two deployment scripts, combined with some symlink magic? Or can I tell Capistrano to exclude the images on deployments where I know the images have not changed?
Well, we have this issue too. One solution is a library called fast_remote_cache, if you're on Linux.
https://github.com/37signals/fast_remote_cache
The idea is that it hard-links against the cache, so the copy is much faster. Once the site gets large enough that even this takes too long, it's time to consider asset servers.
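For reference, once the gem is installed, switching Capistrano 2 over to this strategy is (per the project's README) a one-line change in your deploy file:

    # config/deploy.rb
    # Build each release by hard-linking against the remote cached copy
    # instead of doing a full recursive copy - much faster with 40k images.
    set :deploy_via, :fast_remote_cache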
It's probably better not to have all those images in your repository at all, or at least to keep them in a separate repository.
You'll want to look into setting up an asset server. They're easy to hook into Rails, as long as you use the *_tag helpers (image_tag and friends). And you could just have the asset server run plain old Apache - no need for anything dynamic on it...
You might also be able to hook a "cloud" file store (I'm thinking Amazon S3, but there are plenty of others) in to serve the same purpose - they'll provide file backup (and version control, in some cases), and you won't even have to worry about running the asset server yourself.
Hope this helps!
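If you'd rather keep the images in the repository for now, the "symlink magic" from the question is simple in Capistrano 2: keep the images in the shared directory and link them into each release, so they are never copied or backed up on deploy. A sketch, assuming the images live under public/images:

    # config/deploy.rb
    namespace :deploy do
      desc "Link the shared images directory into the new release"
      task :symlink_images, :roles => :app do
        # Drop any copy that came with the checkout, then point at shared/.
        run "rm -rf #{release_path}/public/images && " \
            "ln -nfs #{shared_path}/images #{release_path}/public/images"
      end
    end

    # Run it after the code is updated, before `current` flips to the new release.
    after "deploy:update_code", "deploy:symlink_images"

You seed shared/images once by hand (or with a setup task), and subsequent deploys only touch the symlink.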

Anyone actually using Mosso Files (Amazon S3 competitor)?

We have a bunch of data on S3 (images) but just started reading about Mosso Files (Rackspace). Sometime this month they are going to add CDN capabilities, so any file you upload becomes part of the Limelight CDN.
Is anyone using this service? It's not as well documented or publicized as S3.
Yes, it's not as well documented or publicized as S3. But dude, it has CDN support, which S3 lacks (unless you're willing to pay extra, of course). The bad thing is you can't FTP into Mosso CloudFiles; you either have to upload through the web-based control panel or through the API. Yet it's still cheap and worth it, especially with the CDN.
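To give a feel for the API route, here is a hedged sketch using the old Rackspace cloudfiles Ruby gem (the credentials, container, and file names are placeholders, and the exact constructor has varied between gem versions - check the gem's README):

    require "cloudfiles"

    # Placeholder credentials - use your Mosso/Rackspace account details.
    cf = CloudFiles::Connection.new(:username => "myuser", :api_key => "myapikey")

    container = cf.container("assets")               # an existing container
    object    = container.create_object("logo.png")  # create the remote object
    object.load_from_filename("logo.png")            # upload the local file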
I am using the service and it's pretty good and cost-effective compared to S3.
We use it for all our client sites, from images to podcasts, and it's hands down the best way to distribute content and make it highly available - especially at this price!
cheers