What does changing config.assets.version number do?
I understand that the assets expire (as it is written in the comments) but what does it do in the background?
Would it delete all the compiled assets, or does it take that version number and use it somewhere else?
It will precompile the assets with different fingerprints (the code appended to the file name), making all clients' browsers download the files again.
In other words, as you said, it expires the caches in the clients' browsers.
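For reference, the version string is just part of the asset pipeline configuration; a minimal sketch, assuming a standard Rails app (the '1.1' value is only an example):

# config/application.rb (or an environment-specific config file)
# Bumping this string changes the fingerprint embedded in every compiled
# asset name, e.g. application-<hash>.css gets a new <hash>.
config.assets.version = '1.1'

As far as I know it does not delete the previously compiled files; they simply stop being referenced (rake assets:clean exists for cleaning them up).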
I use Responsive Filemanager for several websites that I host. I have the latest version (9.6.6) installed, and I also use the TinyMCE plugin for the jQuery version of TinyMCE 4, but my problem occurs with both the standalone filemanager and the plugin, so I doubt this is important.
Anyhow, my problem is the following: everything seems to work just fine when I upload files smaller than exactly 2 megabytes. Using a dummy file generator, I was able to generate a PDF file of exactly 2097152 bytes, which uploads fine, and a PDF file of 2097153 bytes, which doesn't.
Responsive Filemanager always says the upload went fine (with both the Standard Uploader and the JAVA uploader), but any file bigger than 2097152 bytes doesn't actually get uploaded.
Here's a video demonstrating precisely what the problem is: https://youtu.be/NDtZHS6FYvg
Since my RF config allows files up to 100MB (see the entire config here: http://pastebin.com/H9fvh1Pg), I'm guessing it might be something to do with my server settings? I'm using XAMPP for Windows. Could it be that there are some settings in my Apache config, or something like that, which block HTTP uploads bigger than 2MB?
Thank you for your help!
EDIT: fixed typos and added links + a video showing the problem.
I managed to find the solution to my own problem. I couldn't believe some sort of bug would cause any file larger than exactly 2 MB to fail, so after a while I finally figured out it had to be something with the server itself, and indeed, in php.ini I found the following line:
upload_max_filesize = 2M
Changing this to a bigger number fixed the problem for me. It would be nice if Responsive Filemanager had a way of informing the user that the upload did in fact not complete successfully because of a php.ini server setting, but ah well...
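For anyone hitting the same wall, the relevant php.ini lines look roughly like this (100M is just an example value; note that post_max_size generally has to be at least as large as upload_max_filesize, and Apache needs a restart for the change to take effect):

; php.ini
upload_max_filesize = 100M
post_max_size = 100M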
You just need to change the config file of Responsive Filemanager, i.e. config.php:
'MaxSizeUpload' => 10,
Just change the MaxSizeUpload value and check again.
I'm deploying a site across multiple servers (to be load balanced), and it seems that on one of the servers the minified assets aren't in assets/cache. They're there on the other servers, but not on this one. Is it a database thing that PyroCMS uses to check which asset file to serve?
PyroCMS will not check the database for cached assets; that would be weird. It just caches them whenever they are requested. Maybe traffic isn't getting to your other machine?
I am trying to use it to cache all the static files for my application (images, JS, etc.), but I am running into a problem. My cache manifest file looks like this:
CACHE MANIFEST
CACHE:
templates/v2/css/somecss.css
templates/v2/js/somejs.js
templates/v2/images/someimages.jpg
NETWORK:
*
This does cache the files that I have added to it (a few hundred, so I omitted most of them here), but it also caches pages that I don't want (e.g. index.php). It dramatically lowers the load time of the whole application, but I need it not to cache any PHP files. I am using MultiViews, if that makes any difference.
I have also tried adding a list of the files that I don't want cached under NETWORK, but it still caches them. The full file can be found at https://app.emailsmsmarketing.com/cache.manifest
The problem might not be with the manifest itself.
Are you adding the manifest attribute to all your PHP pages? That could be the issue.
The manifest attribute should be included on every page of your web application that you want cached. The browser does not cache a page if it does not contain the manifest attribute (unless it is explicitly listed in the manifest file itself). This means that any page the user navigates to that includes a manifest will be implicitly added to the application cache.
http://www.html5rocks.com/en/tutorials/appcache/beginner/#toc-manifest-file-reference
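In other words, a page only ends up in the application cache as a master entry when its <html> tag points at the manifest, roughly:

<!-- this page itself gets cached as a master entry -->
<html manifest="cache.manifest">

<!-- this page is not cached (unless listed under CACHE:) -->
<html>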
You can also specify the .php files which you do not want cached in the NETWORK section; whichever files you list there will always be fetched from the server. I believe you can use a wildcard for all PHP files, as sketched below.
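A minimal sketch of that approach (I'm not certain patterns like *.php are honored by every browser, so the example lists index.php explicitly and relies on the bare * online whitelist you already have):

CACHE MANIFEST
CACHE:
templates/v2/css/somecss.css
templates/v2/js/somejs.js
NETWORK:
index.php
*

Keep in mind that any page that itself carries the manifest attribute is still stored as a master entry, so the NETWORK section alone won't keep those pages out of the cache.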
I am working on a project that processes images, saves the processed images in a cache, and outputs the processed image to the client. Let's say that the project is located in /project/, the cache is located in /project/cache/, and the source images are located wherever else on the server (like in /images/ or /otherproject/images/). I can set up the cache to mirror the path to the source image (e.g. if the source image is /images/image.jpg, the cache for that image could be /project/cache/images/image.jpg), and the requests to the project are roughly /project/path/to/image (e.g. /project/images/image.jpg).
I would like to serve the images from the cache, if they exist, as efficiently as possible. However, I also want to be able to check to see if the source image has changed since the cached image was created. Ideally, this would all be done with mod_rewrite so PHP wouldn't need to be used to do any of the work.
Is this possible? What would the mod_rewrite rules need to be for this to work?
Alternatively, it seems like it would be a fine compromise to have mod_rewrite serve the cached file most of the time but send 1 out of X requests to the PHP script for files that are cached. Is this possible?
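For the "serve it from the cache if it exists" part, I imagine an .htaccess along these lines could work (process.php is just a placeholder for the processing script), but it still leaves the freshness check open:

# /project/.htaccess (sketch)
RewriteEngine On
# requests already pointing into the cache are served as-is
RewriteRule ^cache/ - [L]
# if a mirrored copy exists under cache/, serve it directly
RewriteCond %{DOCUMENT_ROOT}/project/cache/$1 -f
RewriteRule ^(.+)$ cache/$1 [L]
# otherwise hand the request to the processing script
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)$ process.php?path=$1 [L,QSA]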
You cannot access the file modification timestamp from a RewriteRule, so there is no way around using PHP or another programming language for that task.
On the other hand, this is really simple in PHP, so you should first check whether the PHP solution is good enough in your case. Only if it isn't should you look for alternatives.
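A minimal sketch of that check, assuming the cache mirrors the source path as described in the question (process.php, regenerate_image() and the parameter name are all illustrative):

<?php
// process.php (sketch): serve the cached copy unless the source is newer
$path   = $_GET['path'];  // e.g. "images/image.jpg"; real code must validate this against directory traversal
$source = $_SERVER['DOCUMENT_ROOT'] . '/' . $path;  // source image outside /project/
$cache  = __DIR__ . '/cache/' . $path;

if (!is_file($cache) || filemtime($source) > filemtime($cache)) {
    regenerate_image($source, $cache);  // placeholder for the project's processing step
}

header('Content-Type: image/jpeg');
readfile($cache);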
What if you used the client to do some of the work? Say you display an image in the web browser and always use src="/cache/images/foobar.jpg" and add an onerror="this.src='/images/foobar.jpg'". In mod_rewrite, send anything that goes to the /images/ dir to a script that will generate the image in the cache and return it.
One of the responsibilities of my Rails application is to create and serve signed XMLs. A signed XML, once created, never changes, so I store every XML in the public folder and redirect the client appropriately to avoid unnecessary processing in the controller.
Now I want a new feature: every XML is associated with a date, and I'd like to be able to serve a compressed file containing every XML whose date lies in a period specified by the client. However, the period cannot be limited to less than one month for the feature to be useful, which implies some of the zip files served will be as big as 50 MB.
My application is deployed as a Passenger module of Apache. Thus, it's totally unacceptable to serve the file with send_data, since the client would have to wait for the entire compressed file to be generated before the actual download begins. Although I have an idea of how to implement the feature in Rails so that the compressed file is produced while being served, I'm afraid my server will run short on resources once several long-running Ruby/Passenger processes are tied up serving big zip files.
I've read about a better solution to serve static files through Apache, but not dynamic ones.
So, what's the solution to the problem? Do I need something like a custom Apache handler? How do I inform Apache, from my application, how to handle the request, compressing the files and streaming the result simultaneously?
Check out my mod_zip module for Nginx:
http://wiki.nginx.org/NgxZip
You can have a backend script tell Nginx which URL locations to include in the archive, and Nginx will dynamically stream a ZIP file to the client containing those files. The module leverages Nginx's single-threaded proxy code and is extremely lightweight.
The module was first released in 2008 and is fairly mature at this point. From your description I think it will suit your needs.
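For context on how it's driven: the backend (your Rails action, in this case) replies with an X-Archive-Files: zip header and, instead of a normal body, a plain-text list of the files to include; roughly like this (sizes, paths and the filename are illustrative):

X-Archive-Files: zip
Content-Disposition: attachment; filename=xmls-2012-01.zip

- 1024 /signed_xmls/doc1.xml doc1.xml
- 2048 /signed_xmls/doc2.xml doc2.xml

Each line is the CRC-32 (or - to skip it), the size in bytes, the URL Nginx should fetch the file from, and the name it gets inside the archive. Nginx assembles and streams the zip itself, so no Ruby process stays busy for the duration of the download.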
You simply need to use whatever API you have available to create a zip file and write it to the response, flushing the output periodically. If this serves large zip files or will be requested frequently, consider running it in a separate process with a high nice/ionice value (i.e. low priority).
Worst case, you could run a command-line zip in a low-priority process and pass its output along periodically.
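A rough Ruby sketch of that command-line approach under Rack (the zip invocation, chunk size and file list are assumptions, and with Passenger you would still need to verify that the response isn't buffered):

# Streams `zip` output to the client chunk by chunk instead of building
# the whole archive first; the object only needs to respond to #each.
class ZipStream
  def initialize(paths)
    @paths = paths
  end

  def each
    # "zip - <files>" writes the archive to stdout; nice lowers its priority
    IO.popen(['nice', '-n', '19', 'zip', '-q', '-j', '-', *@paths]) do |io|
      while (chunk = io.read(16 * 1024))
        yield chunk
      end
    end
  end
end

# Controller usage sketch:
#   response.headers['Content-Type'] = 'application/zip'
#   response.headers['Content-Disposition'] = 'attachment; filename="xmls.zip"'
#   self.response_body = ZipStream.new(xml_paths)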
It's tricky to do, but I've made a gem called zipline ( http://github.com/fringd/zipline ) that gets things working for me. I want to update it so that it can support plain file handles or paths; right now it assumes you're using CarrierWave...
Also, you probably can't stream the response with Passenger... I had to use Unicorn to make streaming work properly, and certain Rack middleware can even screw that up (calling response.to_s breaks it).
If anybody still needs this, bother me on the GitHub page.