Maintain use of non-digested asset URLs with Rails 5

In upgrading to Rails 5, some legacy asset URLs needed to be converted from an explicit form like /assets/pretty_image.png to image_url('pretty_image.png'). This is a straightforward change and allows proper caching. But there are emails out there with the old-form URL. Is there any way I can let URLs like /assets/pretty_image.png continue to work in production?

It turns out this is a problem dating back to the upgrade from Rails 4, which stopped allowing both digested and non-digested assets to be used. For some reason, the problem did not appear in production until I upgraded to Rails 5.
There is a good discussion of some possible solutions here. I used the solution entitled 'Use the Manifest, Rake'. I needed to change from precompiling on Heroku to precompiling in development to get the solution to work. I am still interested in a solution that allows precompiling on Heroku.
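The gist of that approach is to copy each digested file in public/assets to its non-digested name after precompiling. A minimal self-contained sketch of the idea (the digest regex and directory layout are assumptions; adapt to your pipeline):

```ruby
require 'fileutils'

# Copy every digested asset (e.g. pretty_image-<hex digest>.png) to its
# non-digested name (pretty_image.png) so legacy URLs keep resolving.
def copy_non_digested(assets_dir)
  digest_re = /-[0-9a-f]{32,64}(\.[^.\/]+)\z/
  Dir.glob(File.join(assets_dir, '**', '*')).each do |file|
    next if File.directory?(file) || file !~ digest_re
    FileUtils.cp(file, file.sub(digest_re, '\1'))
  end
end
```

Wrapped in a rake task and enhanced onto assets:precompile, something like this should also run during a Heroku build, which may remove the need to precompile locally (untested there).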

Related

Error deploying vue app using any type of deployment

I have only deployed a couple small apps before and I am still newer to deploying apps in general.
I created this app by following a course and have recently finished the project. The course did not provide instructions on how to deploy the app. I have used Firebase Hosting a couple of times and am also somewhat familiar with Heroku. Regardless, it all seems pretty straightforward after following the documentation.
I first tried Firebase Hosting since that is what I am most familiar with. I spent some time with that with no luck, then tried Heroku, then Netlify, then NOW. Every single one of them had issues without any real information about them.
NOW says the deployment failed, with no logs.
Firebase Hosting doesn't seem to be logging any errors; it serves a blank page.
Netlify says "page not found" after deployment, and Heroku was something similar.
I am 100% open to getting this simple app deployed using any approach at all (preferably the easiest one).
Since I am following documentation and there doesn't seem to be any errors being logged, I'm completely stumped and am not sure what to do.
I realize I might not be providing the most helpful information to solve this issue, although I do have my full repo here:
https://github.com/SIeep/austin-pizza
Would anyone be kind enough to look over my repo and see what the issue might be? Or even point me in the right direction?
Please let me know if I need to provide any additional information.
Thanks!
A missing entry file or a file-path problem?
Try to find out which stage the problem occurs at first:
Compare this Firebase configuration with the last successful one (dependency paths),
Compare this build's dist folder with the last one (not the detailed code, just the file structure),
Compare webpack.config.js.
(The app runs well locally, so I don't think the problem is in the app's own code.)
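One concrete thing worth comparing: with Vue CLI projects, a blank page after deploy is often just an asset-path mismatch between where the bundles are built and where the host serves them from. A hedged example (assumes Vue CLI 3+; I haven't seen this repo's build setup):

```javascript
// vue.config.js -- hypothetical example, adjust to your build setup
module.exports = {
  // A relative publicPath lets index.html find the JS/CSS bundles
  // regardless of which host/path the site is mounted on.
  publicPath: './'
}
```

Also double-check that the host's publish directory points at the built output (dist for Vue CLI), not the project root.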

Restarting only a portion of a rack/Sinatra app

The great thing about PHP is that if you have something like
clothes.com, clothes.com/men.php, clothes.com/women.php
Then if you only edit the men's page, only that particular "app" will be restarted.
But on rack/Sinatra I have to touch the restart.txt file to restart the ENTIRE website.
Is there a way around this problem, so that users browsing other parts of the site won't have any problems while another part of the site is edited?
(I'm using mod_passenger on Apache, not that it's important.)
This would be true in all cases anyway for editing (non-inline) views (not layouts).
Aside from that, if you're really worried about this then I'd suggest using versioned folders to hold the application code. When you do a deployment, change the proxy to point at the newer version. Those who had already made requests will remain on an instance of Apache and the application that is already running, for as long as their request remains alive, and will seamlessly (unless you've broken something with the code) move to the new code on the next request.
It's also a convenient way to rollback to the/a previous version quickly and easily.
Check out the Sinatra reloader from sinatra-contrib.
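In development it re-requires changed files on each request, so edits show up without touching restart.txt at all. A minimal sketch (requires the sinatra-contrib gem; development-only by design):

```ruby
# app.rb -- classic-style Sinatra with the contrib reloader
require 'sinatra'
require 'sinatra/reloader' if development?

get '/men' do
  'Edits to this file are picked up on the next request in development.'
end
```

Note this is a development convenience, not a production deployment strategy.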

Integration of Awstats into Ruby on Rails

I'm currently working on a RoR project, and stumbled upon a problem.
What I have:
A RoR 3.1 project running on Nginx on a debian server.
What I want:
I want to be able to see the web statistics of my website. Statistics like the number of hits and their sources, ideally broken down per some predefined time interval. Preferably with a few cool graphs, charts, tables, whatever.
What I did:
I looked on the internet for some RoR extensions which support this, but wasn't very excited by the results. Therefore I looked at tools like 'Webalizer' and 'Awstats', and finally decided to go with Awstats.
What seems to be the problem:
Now I can access the main page of Awstats, but once I want to look at other months, it sends a request to awstats.pl. This request is (I think) passed from Nginx to RoR first. Then RoR looks in its routes.rb for a matching route, can't find one, and returns a 404 error.
I would like to know if there is somebody out there who has some experience with this kind of thing. Maybe someone knows how to configure Awstats correctly alongside RoR, or knows another good statistics tool for RoR.
Any help will be greatly appreciated.
-Ron
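One way to avoid that 404 is to have Nginx answer the Awstats requests itself instead of proxying them to the Rails app. A hedged sketch using fcgiwrap to run the Perl CGI (the paths and socket location are assumptions from a typical Debian install; adjust to yours):

```nginx
# Inside the server { } block that fronts the Rails app
location /awstats/ {
    alias /usr/lib/cgi-bin/;   # where awstats.pl lives (assumption)
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /usr/lib/cgi-bin/awstats.pl;
}
```

Because this location is matched before the app's proxy_pass, awstats.pl never reaches routes.rb.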

Are there any performance/functionality differences between installing the New Relic RPM as a gem vs. as a Heroku add-on?

I am hosting my Rails 3 application on Heroku and would like to add New Relic monitoring.
I notice that Heroku has an add-on that I suppose sets everything up for you, but I also notice that it doesn't create a "real" New Relic account - instead, it creates a Heroku-specific New Relic account that you can only access through Heroku.
What I am curious about is: are there any differences in ...
Functionality
Mainly, does the Heroku-specific add-on offer any additional Heroku-specific features other than configuring the service for you? It seems to me that, if not, it might be better to use the gem so as to avoid any mysterious Heroku monkey-patching?
Configurability
Does only being able to access the New Relic account via Heroku have any downsides (other than the annoying Heroku header frame taking up the top of every screen)?
Performance
Does the Heroku set-up afford any performance benefits over self-installation of the gem?
Cost
It looks like the Heroku New Relic add-on charges by the dyno. Does this make it more or less expensive overall than a comparable plan directly through New Relic? If it's more expensive, does it have any features that justify the extra cost other than the simplified configuration?
Thanks all!
Functionality
AFAIK it does not add any functionality besides adjusting and viewing through Heroku.
Configurability
This is the biggest upside. Heroku is all about making things easier, and with the add-on all you have to do is install it and you are ready to go.
Performance
Heroku plugins are essentially Heroku gems that get pulled in somewhere along the line. We would need to research how the plugin was made; if it used something faster than Ruby, it would probably be faster than the New Relic gem, which is pure Ruby.
Cost
I do not think you are missing anything here. You will end up paying two parties, so whatever Heroku charges ($0.06 per dyno) is your extra cost.
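For comparison, the manual gem route is only a couple of steps (license key and app name below are placeholders for values from your own New Relic account):

```ruby
# Gemfile -- manual install of the agent
gem 'newrelic_rpm'

# Then download the newrelic.yml from your New Relic account and place it
# at config/newrelic.yml, filling in license_key and app_name there.
```

This gives you a "real" New Relic account you log into directly, rather than one scoped to Heroku.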

Capistrano deployment with lots of images

So we have this basic Rails 3 website with capistrano 2.5.19 plus multi-stage extension.
The site is simple, but it has 40,000+ images out there. So deployments take a long time, going both to our QA server and to production. The issue is not usually network load, because Capistrano only downloads what changed in SVN. The issue is the time it takes for our servers to back up the old release (40k worth of images) and copy the new release (another 40k of images).
Does anyone know of a best practice approach to this? Is the only way to split this into two SVN folders and two deployment scripts combined with some symlink magic? Or can i tell capistrano to exclude the images on certain deployments where I know images have not changed?
Well, we have this issue too. A solution is a library called fast_remote_cache, if you're on Linux.
https://github.com/37signals/fast_remote_cache
The idea is that it hard-links to the cache, so the copy is much faster. Once the site gets large enough that even this takes too long, it is time to consider asset servers.
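Wiring it into Capistrano 2 is a small change once the gem above is installed (a sketch, following that project's README):

```ruby
# config/deploy.rb
require 'capistrano/fast_remote_cache'

# Hard-link files from the cached checkout instead of copying them,
# so the per-deploy copy step is nearly instant.
set :deploy_via, :fast_remote_cache
```

Hard links share the underlying file data, which is why this is cheap even with 40k images.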
It's probably better not to have all those images in your repository, or at least in a different repository.
You'll want to see about setting up an asset server. They're easy to hook into Rails, as long as you use the XXX_tag helpers. And you could just have the asset server run plain old Apache; no need for anything dynamic on it...
You might also be able to hook a "cloud" file store (I'm thinking Amazon S3, but there are plenty of others) in to serve the same purpose - they'll provide file backup (and version control, in some cases), and you won't even have to worry about running the asset server yourself.
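Pointing Rails at an asset server is a one-line config change, as long as the views use the *_tag helpers (the hostname is a placeholder):

```ruby
# config/environments/production.rb
config.action_controller.asset_host = 'http://assets.example.com'
```

With that set, image_tag and friends emit absolute URLs against the asset host, so the images never touch your app servers during a deploy.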
Hope this helps!