I am using Memcached in my Ruby on Rails 3 app. It works great with action and fragment caching, but when I try to use page caching, the page is stored in the filesystem instead of in Memcached. How can I tell Rails to use Memcached for page caching too?
In my development.rb file:
config.action_controller.perform_caching = true
config.cache_store = :mem_cache_store
You can't. The equivalent of page caching in memcached is action caching, because the request must be served through Rails. Page caching is meant to bypass Rails entirely, so the page must be stored as a file that the web server (e.g. Nginx or Apache) can serve directly; that bypass is exactly what makes page caching so fast. Here is what the Rails documentation says:
Page caching is a Rails mechanism which allows the request for a generated page to be fulfilled by the webserver (i.e. Apache or Nginx), without ever having to go through the Rails stack at all. Obviously, this is super-fast. Unfortunately, it can’t be applied to every situation (such as pages that need authentication) and since the webserver is literally just serving a file from the filesystem, cache expiration is an issue that needs to be dealt with.
You can find more information here.
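In practice, the closest equivalent is action caching, which stores the rendered response in whatever cache store you configured (memcached, with your settings above). A minimal sketch, with a hypothetical controller:

class ProductsController < ApplicationController
  # The rendered response is stored in the configured cache store
  # (memcached here), but the request still goes through Rails, so
  # before_filters such as authentication still run.
  caches_action :index

  def index
    @products = Product.all
  end
end

When the underlying data changes, expire the entry with expire_action :action => :index.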
Check this article:
http://globaldev.co.uk/2012/06/serving_memcached_pages_from_nginx/
In short: install the "memcaches_page" gem (add it to your Gemfile, then bundle), change the caches_page directive to memcaches_page, and configure Nginx to check the memcached server for the page before the request hits the application (as described in the article).
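A rough sketch of the Rails side, assuming memcaches_page mirrors the caches_page API (the controller is a hypothetical example; the Nginx half is covered in the article):

# Gemfile
gem 'memcaches_page'

# app/controllers/pages_controller.rb
class PagesController < ApplicationController
  # Stores the rendered page in memcached instead of writing it to
  # public/, so Nginx can serve it straight out of memcached.
  memcaches_page :show

  def show
    @page = Page.find(params[:id])
  end
end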
I have some React code (written by someone else) that needs to be served. The preferred method is via a Google Storage Bucket fronted by Google's Cloud CDN, and this works. However, due to some quirks in the code, there is a requirement to override 404s with 200s and serve content from the homepage instead (i.e. if a request would 404, don't serve a 404; serve the content of the homepage and return a 200 instead).
(If anyone is interested, this override is currently implemented in CloudFront on AWS. Google Cloud CDN does not provide this functionality yet.)
So, if the code is served at "www.mysite.com/app/" and someone hits "www.mysite.com/app/not-here" (which would return a 404), what should happen is that the response should NOT be 404, but a 200 with the content being served from index.html instead.
I was able to get this working by bundling all the code inside a Docker container and then using the solution here. However, this setup means that if we have a code change, all the running containers need to be restarted, and the customer expects zero downtime; hence the bucket solution.
So I now need to do the same thing but with the files being proxied in (with the upstream being the CDN).
I cannot use the original solution, since the files are no longer local and httpd can't check for the existence of files that are not local.
I've tried things like ProxyErrorOverride and ErrorDocument, and managed to get it to redirect, but that is not what is needed.
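For reference, a sketch of the kind of configuration I have been trying (the upstream URL is illustrative):

# Reverse proxy to the CDN-fronted bucket
ProxyPass        "/app/" "https://cdn.example.com/app/"
ProxyPassReverse "/app/" "https://cdn.example.com/app/"

# Intercept error responses coming back from the upstream
ProxyErrorOverride On
# A local path serves the index content but keeps the 404 status;
# a full URL produces the redirect mentioned above.
ErrorDocument 404 /app/index.html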
Does anyone know how/if this can be done?
If the question is how to catch, with httpd/Apache, the 404 error returned by Cloud Storage when a file is missing: I don't know.
However, I don't think that's the best solution anyway. Serving files directly from Cloud Storage is convenient, but not production-grade.
Imagine you deploy several broken files in succession: how do you roll back to a stable state?
It's better to package each code release as an atomic unit, a container for instance. Each version lives in its own container image, which makes rollbacks easier and more consistent.
Now, your "container restart" issue. I don't know which platform your containers run on. If you run them on a Compute Engine VM, that's probably the worst option. Today, container orchestration systems let you deploy, scale containers up and down, and perform progressive rollouts that replace the running containers with a newer version without downtime.
Cloud Run is a wonderful serverless solution for that; you can also use Kubernetes (GKE on Google Cloud) with Knative for a better developer experience.
I took over a clumsily-installed Rails app. Its assets are broken. Chrome Audit returns:
> Leverage browser caching
The following resources are missing a cache expiration.
Resources that do not specify an expiration may not be cached by browsers:
jquery-1.8.3.js
jquery-ui-1.8.17.custom.min.js
rails.js
application.js
jquery.ui.base.css
jquery.ui.theme.css
...etc.
This obviously churns our network. What to do? Where, in Rails-land or in Plesk's vhost.conf file, does one add a line of configuration so the correct HTTP headers go out?
Please don't tell me "just rebuild the assets" - the rebuild is slightly broken.
Have a look at Andre Spannig's page if you'd like to tweak the web server configuration for your current domain only: Plesk 10 and vhost.conf.
This helped me once on a different issue.
There may be a better way through Rails but I am not aware of one right now.
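For the vhost.conf route, here is a sketch of the kind of directives that should work, assuming mod_expires and mod_headers are enabled (the paths match the Rails 3 layout implied by the asset list in the question):

<LocationMatch "^/(javascripts|stylesheets|images)/">
    ExpiresActive On
    # One week is an arbitrary example; since these files are not
    # fingerprinted, pick a TTL you can tolerate after a deploy.
    ExpiresDefault "access plus 1 week"
    Header append Cache-Control "public"
</LocationMatch>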
I have installed Varnish Cache with my Apache web server and configured them correctly. It works OK and I can now access my web pages through Varnish Cache.
The default behavior of varnish is to store copies of the pages served by the web server. The next time the same page is requested, Varnish will serve the copy instead of requesting the page from the Apache server.
And now comes my question: is it possible to cache my entire website up front, right after setting up Varnish, without each page having to be accessed before it is stored in the cache? After setup the cache is initially empty, and a page only becomes available in the cache once it has been requested. Can this be done without having to access each page manually?
What you are looking for is a way of warming up the cache. You could use varnishreplay or a web crawler, such as Wget or HTTrack, to go through your site. Alternatively, if you have a sitemap of your pages, you could use that as a starting point: warm up the cache by looping over it and issuing requests for the pages using e.g. curl or wget.
Using varnishreplay requires you to first run varnishlog and record a log of traffic; you can then play that traffic back to warm up the cache.
Wget, HTTrack, etc. can be pointed at your home page and they will crawl their way through your site. Depending on the size and nature of your site this might not be practical, though (for example, if you use Ajax extensively).
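If you do have a sitemap, the warm-up loop can be a few lines of script. A minimal sketch in Ruby, assuming the sitemap lives at /sitemap.xml (the <loc> extraction is deliberately naive):

#!/usr/bin/env ruby
require 'net/http'

# Fetch the sitemap and request each listed URL once, so that
# Varnish stores a copy of every page.
sitemap = Net::HTTP.get(URI('http://www.example.com/sitemap.xml'))

sitemap.scan(%r{<loc>(.*?)</loc>}).flatten.each do |url|
  response = Net::HTTP.get_response(URI(url))
  puts "#{response.code} #{url}"
end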
Unless your pages take a very long time to load from the backend server (i.e. Apache), I wouldn't worry too much about warming up the cache. If the TTL for the cached content is high enough most of the visitors will only ever receive cached content anyway.
There is a much better way to do this that uses req.hash_always_miss and works with Varnish 3 and 4 (it uses a sitemap too). It warms up your cache and refreshes stale pages without having to purge the cache. A full diagram, an outline of how to configure it, and three scripts for various use cases are available at http://www.htpcguides.com/smart-warm-up-your-wordpress-varnish-cache-scripts/ and are easily adapted for non-WordPress sites.
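The heart of that approach is a few lines of VCL. A sketch for Varnish 3, assuming the warm-up script marks its requests with a custom header (the header name is hypothetical):

sub vcl_recv {
    if (req.http.X-Cache-Warmup == "true") {
        # Force a cache miss: the object is re-fetched from the backend
        # and the stored copy is replaced, so no purge is needed.
        set req.hash_always_miss = true;
    }
}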
I have several custom routes in my Rails 3.0 app, the simplest of which is:
match "*user" => "profile#index", :via => :get
Because of that route, physical locations on the server get shadowed. As an example,
/images/rails.png
tries to route to the images user.
I also have to be able to set things up so that people can access
/<username>/archive.zip
So, for example:
/buddy/archive.zip
where archive.zip is a physical file on the server that has been generated and put there. How can I achieve this in my routing?
For the latter I have an actual folder structure in a root folder matching /<username>/archive.zip, so I was thinking some sort of symlink would be easy; but with no way to hit physical locations on the server, I am kind of stuck/confused.
Any help is appreciated.
You probably want any static assets to be handled by your web server before requests hit your Rails stack. This is generally done by pointing the document root in your web server at the public/ directory within your Rails app, so that it serves your static images/CSS/JS.
This is greatly preferred over letting Rails serve static assets, because web servers are much faster at handling these sorts of requests, and you're not tying up your Rails processes, which are often limited to less than a handful, on them.
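For example, with Nginx in front of the app, something like the following serves /buddy/archive.zip straight off disk while everything else falls through to your Rails routes (paths and the upstream port are assumptions):

server {
    listen 80;
    server_name example.com;
    root /var/www/myapp/public;  # the Rails app's public/ directory

    location / {
        # Serve the request from disk if a matching file exists;
        # otherwise hand it to the Rails app.
        try_files $uri @rails;
    }

    location @rails {
        proxy_pass http://127.0.0.1:3000;  # assumed Rails upstream
        proxy_set_header Host $host;
    }
}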
I'm trying to understand how dynamic caching is done in Apache.
I read Apache's Caching Guide and an article about dynamic caching, but I still don't understand exactly how dynamic caching works internally.
Say, for example, I have a PHP page that serves content by reading from a database according to parameters in the user's URL query string (or parameters specified via POST).
e.g. www.mySite.com?articleID=31
How is that cached then?
Does mod_cache keep the content retrieved from the database for this specific article?
Any sources or suggestions are welcome.
It caches the output of your script, i.e. the HTML. What I'm not sure about is whether CacheStorePrivate On will cache responses from PHP scripts that send no cache headers. I'm using Apache 2.4.17 and it looks like it doesn't.
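For what it's worth, a minimal mod_cache sketch for Apache 2.4 (assumes mod_cache and mod_cache_disk are loaded). Note that responses to URLs with query strings, such as ?articleID=31, are normally cached only when the response carries explicit freshness information (an Expires or Cache-Control: max-age header):

CacheEnable disk "/"
CacheRoot   "/var/cache/apache2/mod_cache_disk"

# Entries are keyed on the full URL, so ?articleID=31 and
# ?articleID=32 are stored as separate cache entities.
CacheDefaultExpire 300
CacheIgnoreNoLastMod On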