I have several custom routes in my Rails 3.0 app, the simplest of which is:
match "*user" => "profile#index", :via => :get
Because of that route, physical locations on the server become unreachable. For example,
/images/rails.png
tries to route to the images user.
I also need to be able to set things up so that people can access
/<username>/archive.zip
So
/buddy/archive.zip
where archive.zip is a physical file on the server that has been generated and put there. How can I achieve this in my routing?
For the latter, I have an actual folder structure under a root folder matching /<username>/archive.zip, so I was thinking some sort of symlink would be easy. But without being able to hit physical locations on the server, I am kind of stuck/confused.
Any help is appreciated.
You probably want any static assets to be handled by your web server before the request ever hits your Rails stack. This is generally done by pointing the document root in your web server at the public/ directory within your Rails app, so that it serves your static images/CSS/JS directly.
This is greatly preferred over letting Rails serve static assets, because web servers are much faster at handling these sorts of requests, and you're not tying up your Rails processes, which are often limited to less than a handful, to serve them.
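As a complement to the web-server setup, you can also stop the catch-all from swallowing real file paths by tightening the route itself. A sketch, swapping the *user glob for a plain :user segment (the constraint regex is illustrative, not from the original app):

    # config/routes.rb -- a sketch, not the asker's actual routes.
    # A plain :user segment will not match across slashes, and the
    # constraint rules out dots, so requests for real files such as
    # /images/rails.png or /buddy/archive.zip never hit this route
    # and are left to the static middleware / web server instead.
    match ":user" => "profile#index", :via => :get,
          :constraints => { :user => /[^\/.]+/ }

With that in place, a generated public/buddy/archive.zip is served as a plain static file, with no symlinks or extra routing needed.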
Related
I've created a small Express app that essentially serves as a file browser for our department's work. Users can drag their files and folders onto a network drive, and the app presents this folder structure as a browsable web directory for my colleagues to view various simple static files such as HTML files, images, CSS and JavaScript.
This is extremely business critical, and has worked flawlessly for over a year now, but there is one feature that I'd like to add. Occasionally the work contained in a subdirectory is a slightly more complex project, and there would be a huge architecture/complexity benefit from it being able to reference files from its own root path. I'll try and explain with a small example:
/app
  /projects
    /project1
    /project2
      /index.html
      /styles.css
  /finished
    /project3
It would be great if there were a simple way I could declare the base URL of project2 to be /app/projects/project2, so that I could reference the CSS file from the HTML with href="/styles.css".
I've read that I could do this by creating a second Express app for project2 and routing requests to /app/projects/project2 to that app, but this requirement crops up quite regularly, and the thought of configuring/managing a multitude of sub-apps without breaking the main viewer doesn't seem like fun!
Is there a simpler way? I'm thinking of a special designation in the subdirectory name e.g. "wwwproject2" that could get the app to adjust where it maps root requests to.
I'm sorry if this all sounds insane to those with more knowledge than me!
I don't think there is a way to do that.
But you could simply reference it using a relative path instead: href="./styles.css"
I've been trying to deploy kandan to my home server under a subdirectory; let's say the domain is example.com.
I have a few constraints I'd like to see met:
must not be at website root (I'm hosting other stuff and apps)
must be secured by SSL (already implemented on example.com but not for subdomains, which I would have to pay for)
kandan does not support being hosted in a subdirectory (you can't tell it that it's hosted at example.com/kandan and have it automatically update its links)
I have listed my attempts here, but here's the gist of it:
tweaks to the nginx config
adding --prefix=/kandan to thin start
RAILS_RELATIVE_URL_ROOT="/kandan"
map Kandan::Application.config.relative_url_root before run Rack::URLMap... in config.ru
tweaking the content of run Rack::URLMap... in config.ru
scope "/kandan" in routes.rb
config.relative_url_root = "/kandan" in production.rb
many combinations of all the above
None of it did the job.
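For concreteness, the config.ru variant from that list looked roughly like this (sketched from memory, using the Kandan::Application class named above; it mounts the app under the prefix but does nothing for kandan's own hard-coded links, which is the crux of the problem):

    # config.ru -- mount the whole app under /kandan.
    require ::File.expand_path('../config/environment', __FILE__)

    map '/kandan' do
      run Kandan::Application
    end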
Currently I can show the main page, with some missing elements (JS and API calls failing...) and some working elements (the CSS is there...).
Is there a way to fully achieve what I want?
(LAMP server configuration)
As a workaround for another problem, I need PHP to be able to access local files, but prevent these files from being served over HTTP by Apache.
Normally I would just use .htaccess to accomplish this, but due to institutional restrictions I cannot. I also can't touch php.ini, although I can use ini_set() within PHP.
As a creative solution, I thought that if PHP executes as its own Linux user (not as Apache), I could use normal chowns and chmods to accomplish this.
Again, the goal is simply to have a directory of files that Apache will not serve, but PHP can access.
I'm open to any suggestions.
Put the files outside of your web accessible root (DocumentRoot), but keep them accessible via PHP.
Suggestion:
/sites
/sites/my.site.com
/sites/my.site.com/data // <-- data goes here
/sites/my.site.com/web // <-- web root is here
Here's a thought. Set the permissions on the files to be inaccessible to even the owner; then when PHP needs them, chmod() them, read them, then chmod() them back to inaccessible.
I am using Memcached in my Ruby on Rails 3 app. It works great with action and fragment caching, but when I try to use page caching, the page is stored in the filesystem instead of in Memcached. How can I tell Rails to use Memcached for page caching too?
In my development.rb file:
config.action_controller.perform_caching = true
config.cache_store = :mem_cache_store
You can't. The equivalent of page caching in memcached is action caching, because the request must be served through Rails. Page caching is meant to bypass Rails, so the data must be stored in a file that the web server (Nginx or Apache) can serve directly. The reason page caching is so fast is that it bypasses Rails entirely. Here is what the Rails documentation says:
Page caching is a Rails mechanism which allows the request for a generated page to be fulfilled by the webserver (i.e. Apache or Nginx), without ever having to go through the Rails stack at all. Obviously, this is super-fast. Unfortunately, it can't be applied to every situation (such as pages that need authentication) and since the webserver is literally just serving a file from the filesystem, cache expiration is an issue that needs to be dealt with.
You can find more information here.
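If action caching fits your pages, the change is small. A sketch (the controller and action names are made up; :expires_in is handed through to the configured cache store):

    class PagesController < ApplicationController
      # Cached through the Rails stack, so config.cache_store
      # (memcached here) is what actually holds the result.
      caches_action :show, :expires_in => 1.hour
    end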
Check this:
http://globaldev.co.uk/2012/06/serving_memcached_pages_from_nginx/
In short: install the memcaches_page gem (add it to your Gemfile, then bundle), change the caches_page directive to memcaches_page, then configure Nginx to serve pages from the memcached server before the request hits the application (described in the article).
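The Nginx half of that setup looks roughly like this (a sketch based on the linked article's approach; the exact $memcached_key format depends on how memcaches_page stores pages, and the backend port is an assumption):

    # Try memcached first; fall back to the Rails app on a miss.
    location / {
        set $memcached_key "$request_uri";
        memcached_pass 127.0.0.1:11211;
        default_type text/html;
        error_page 404 502 504 = @app;
    }

    location @app {
        proxy_pass http://127.0.0.1:3000;
    }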
One of the responsibilities of my Rails application is to create and serve signed XMLs. Any signed XML, once created, never changes, so I store every XML in the public folder and redirect the client appropriately to avoid unnecessary processing in the controller.
Now I want a new feature: every XML is associated with a date, and I'd like to implement the ability to serve a compressed file containing every XML whose date lies in a period specified by the client. Nevertheless, the period cannot be limited to less than one month for the feature to be useful, which implies some of the zip files served will be as big as 50M.
My application is deployed as a Passenger module of Apache. Thus, it's totally unacceptable to serve the file with send_data, since the client would have to wait for the entire compressed file to be generated before the actual download begins. Although I have an idea of how to implement the feature in Rails so the compressed file is produced while being served, I fear my server will run short on resources once a few long-running Ruby/Passenger processes are tied up serving big zip files.
I've read about a better solution to serve static files through Apache, but not dynamic ones.
So, what's the solution to the problem? Do I need something like a custom Apache handler? How do I inform Apache, from my application, how to handle the request, compressing the files and streaming the result simultaneously?
Check out my mod_zip module for Nginx:
http://wiki.nginx.org/NgxZip
You can have a backend script tell Nginx which URL locations to include in the archive, and Nginx will dynamically stream a ZIP file to the client containing those files. The module leverages Nginx's single-threaded proxy code and is extremely lightweight.
The module was first released in 2008 and is fairly mature at this point. From your description I think it will suit your needs.
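The backend's side of the contract is small: send an X-Archive-Files: zip header plus one manifest line per file, and Nginx streams the ZIP itself. A sketch (the XmlRecord scope and /xmls/... locations are made up; "-" skips the per-file CRC-32):

    def archive
      # One manifest line per file: "<crc-32> <size> <location> <name>".
      manifest = XmlRecord.in_period(params[:from], params[:to]).map do |xml|
        "- #{xml.byte_size} /xmls/#{xml.filename} #{xml.filename}"
      end.join("\n")

      response.headers["X-Archive-Files"] = "zip"
      render :text => manifest, :content_type => "text/plain"
    end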
You simply need to use whatever API you have available to create a zip file and write it to the response, flushing the output periodically. If this will serve large zip files or be requested frequently, consider running it in a separate process with a high nice/ionice value (low priority).
Worst case, you could run a command-line zip in a low-priority process and pass the output along periodically.
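A sketch of that worst case as a Rails action (xml_dir is a placeholder, and it assumes an app server that doesn't buffer the response body; as the next answer notes, Passenger may):

    def archive
      headers["Content-Type"] = "application/zip"
      headers["Content-Disposition"] = 'attachment; filename="archive.zip"'
      # "zip -r -" writes the archive to stdout; nice keeps the
      # priority low, and the enumerator hands chunks to the app
      # server as they are produced.
      self.response_body = Enumerator.new do |out|
        IO.popen("nice -n 19 zip -r - #{xml_dir}") do |io|
          while chunk = io.read(16 * 1024)
            out << chunk
          end
        end
      end
    end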
It's tricky to do, but I've made a gem called zipline (http://github.com/fringd/zipline) that gets things working for me. I want to update it so that it can support plain file handles or paths; right now it assumes you're using CarrierWave...
Also, you probably can't stream the response with Passenger... I had to use Unicorn to make streaming work properly... and certain Rack middleware can even screw that up (calling response.to_s breaks it).
If anybody still needs this, bother me on the GitHub page.