I'm currently developing a website that gets fairly frequent JavaScript updates, and I've just started using mod_pagespeed in an effort to ensure that customers will always have the latest code.
The docs tell me doing this will clear my pagespeed cache and force clients to get my new JavaScript/CSS:
sudo touch /var/cache/pagespeed/cache.flush
I ran a test: I changed some JavaScript code, hit refresh in my browser to verify that I was still seeing the old code (my cache expiration is set to one day), then restarted Apache, and I can indeed see my new changes.
Can I trust that a restart will always be sufficient and that a cache.flush is not needed, or do I need to run the flush command as well? I'm reading that a restart of Apache is required to clear the memory cache, but not how the file cache and/or cache.flush fit in with that.
Update:
I pulled the pagespeed code, and if I'm understanding correctly, the cache.flush process updates a timestamp.
It looks like that's happening in RewriteOptions::UpdateCacheInvalidationTimestampMs here:
http://modpagespeed.googlecode.com/svn/trunk/src/net/instaweb/rewriter/rewrite_options.cc
If I could figure out which timestamp this was updating, it seems like I could either check it, restart Apache, and check it again (to see if the timestamp changed), or deduce from the filename/location/owner whether or not that's likely to happen.
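If the timestamp in question is simply the mtime of the cache.flush file (which is an assumption on my part), the "check it, restart, check it again" idea could look something like this:
stat -c '%y %n' /var/cache/pagespeed/cache.flush 2>/dev/null   # record the current mtime, if the file exists
sudo apachectl restart
stat -c '%y %n' /var/cache/pagespeed/cache.flush 2>/dev/null   # did the restart change it?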
Any more thoughts on this? Advice on how to figure out which timestamp is being updated? Other reasoning to make me feel better about either manually doing the extra flush command every time I update (when I'm already restarting apache for other reasons) or leaving it out?
touch the cache.flush file:
sudo touch /var/cache/mod_pagespeed/cache.flush
Reference: https://developers.google.com/speed/pagespeed/module/system#flush_cache
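If you're not sure which directory the flush file should go in, the docs linked above say it belongs in the directory set by your file cache path directive, so you can check your own config, e.g.:
grep -ri "FileCachePath" /etc/apache2/ /etc/httpd/ 2>/dev/null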
No, a restart of Apache doesn't clear the pagespeed cache. You have to do it manually by touching cache.flush.
What I like to do to make sure the whole cache for the web portion of the server gets cleared:
For Apache2 (this is a dry run; remove "-D" if you are sure you want to go through with it; -l sets the cache size limit and -p the path):
htcacheclean -D -p/var/cache/apache2 -l100M
mod_pagespeed:
sudo touch /var/cache/mod_pagespeed/cache.flush
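Putting both steps together (same paths as above; add -D to htcacheclean first if you want a dry run, and the graceful reload at the end is optional):
htcacheclean -p/var/cache/apache2 -l100M
sudo touch /var/cache/mod_pagespeed/cache.flush
sudo apachectl -k graceful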
A restart of Apache should flush the cache.
Related
I've been trying to configure my Apache httpd 2.4 server to use mod_cache and mod_cache_disk to do caching for a WordPress site. It should give a big performance improvement. I have followed all the guides and done everything right, and yet... no files get saved in the cache.
I enabled logging of cache errors and I see this error coming up: "cache miss: cache unwilling to store response". OK... I looked in the source code of mod_cache.c, and it looks like that happens when cache_create_entity() returns anything other than OK. So I looked at the source of mod_cache_disk.c, and I can see that in create_entity(), it silently fails if conf->cache_root == NULL. In all other cases, it would log an error or return OK.
I can only assume that it's failing because conf->cache_root is null. Where does that come from? It would come from ap_get_module_config to get the config of the cache_disk_module.
How can that possibly be returning null? And more annoyingly, why doesn't the Apache server give some log message when there's a major error condition, such as loading a module but not having a config object for it?
Really stumped on this one... Thank you.
Problem solved: SELinux was blocking files from being created in the CacheRoot I specified. I had to put it in the right directory that had permissions, or I could have added permission to the directory.
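For anyone hitting the same thing, here is a rough sketch of how you could confirm and fix it; the CacheRoot path below is only an example, so adjust it to whatever your config uses:
sudo ausearch -m avc -ts recent | grep httpd                # confirm the SELinux denials
ls -Zd /var/cache/httpd/mod_cache_disk                      # inspect the current label
sudo semanage fcontext -a -t httpd_cache_t "/var/cache/httpd/mod_cache_disk(/.*)?"
sudo restorecon -Rv /var/cache/httpd/mod_cache_disk         # apply the label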
I have a server running Ubuntu 16.04, and apparently Easy Hosting Control Panel keeps creating multiple backups, something like 50 times a day, which fills the 50 GB disk space and causes the server to crash.
The backup process creates multiple directories named Apache2.backupbyehcp inside the /etc directory.
I've tried deleting the backups one by one, but after a day they're back again.
I want to disable or limit the backups created.
Any help is greatly appreciated.
Here's a screen shot of the backup directories that are being created:
This is caused by:
EHCP trying to recover the webserver config each time it detects that the webserver config is broken or the webserver is not responding.
This may result in such unexpected/unwanted behaviour.
What to do:
First, check for the problem in the webserver configs, for example with tail -f /var/log/ehcp.log,
so that you can understand what is going wrong.
This is sometimes caused by incorrect custom webserver configurations added by an admin or reseller. You may disable custom webserver configs via the ehcp GUI -> Options.
(I strongly suggest finding the cause of this.)
If everything regarding the webserver is okay, but you just need to disable this backup,
open install_lib.php in the ehcp directory, search for backupbyehcp, and disable that line.
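If it helps, a quick way to locate that line (the ehcp install path below is a guess; adjust it to wherever ehcp lives on your server):
grep -n "backupbyehcp" /var/www/new/ehcp/install_lib.php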
Hope this helps.
I am using Apache httpd on localhost to serve the project I'm working on, and I have been clearing the web cache when the website shows the old project instead of the new one. However, recently the project has been stuck on an outdated version, and restarting/clearing cookies doesn't work with the user I'm assigned.
I tried using another user with my project and it works just fine. I did recently use svn revert, which may have caused my user to get stuck with the old project, but I'm not sure that's the problem.
Commands I use to start and stop:
sudo service httpd start
sudo service httpd stop
Any advice or tips are greatly appreciated.
Update: I managed to get the page to update the HTML; however, my JavaScript won't load, which I suspect is due to another user being stored and thus not accessing the database I use, which leads to nothing being generated.
(Posted on behalf of the OP).
I got everything to "work" again by accident: I made some edits to the PHP and it crashed because of a syntax error. When I fixed it back to its original state, it was working again. It is most likely related to a user the previous owner created that caused the problem.
Just clear the temporary files.
In Windows, press Win+R, type %temp%, and delete all the files there; that should help.
I have an application with a caching backend, and I want to clear the cache whenever the webserver is restarted.
Is there an Apache configuration directive or any other way to execute a shell script upon webserver (re)start?
Thanks,
Phil
Adding some more information, as requested by some of the answers:
The base system is of course Linux-based; in this exact situation, CentOS.
Modifying the startup script is unfortunately not an option, as pointed out by one of the comments already, because it is not a configuration file within the respective RPM packages and would therefore be replaced by updates. Also, I think modifying the startup script would be a bad thing in general.
I see that "restarting the webserver" and "clearing my app cache" are not exactly two things that should be tied together. I will consider other alternatives.
My situation is as follows: I can define what the virtual host config looks like, but I cannot define what the rest of the server's configuration looks like.
The application is actually PHP-based (and runs on the Symfony framework). Symfony pre-compiles a lot of stuff into dynamic PHP files from what it finds in the static configuration files. We deploy our apps via RPM, and after deployment a webserver restart is already initiated, so I thought it might make sense to tie the cache cleanup to it. But after getting all your feedback, it looks like it is better to put the cache cleanup into the installation process itself.
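For what it's worth, a rough sketch of what that could look like in the RPM's %post scriptlet (the cache path below is only a placeholder, not our real layout):
%post
# remove the Symfony cache compiled from the previous release
rm -rf /var/www/myapp/app/cache/*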
You haven't provided a lot of detail here, so it's hard to give a concrete answer, but I would suggest that your best option is to write a script which handles restarting Apache and clearing your cache. It would look something like this:
#!/bin/sh
# restart apache
/etc/init.d/httpd graceful
# whatever needs to be done to clear cache
rm -rf /my/cache/dir
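Save it somewhere like /usr/local/sbin, make it executable with chmod +x, and use it in place of the plain restart command.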
Ramy suggests modifying the system startup script for Apache -- this is a bad idea! If and when you update Apache on your server, there is a good chance that your change will be lost.
Dirk suggests that what you are trying to do is probably misguided, and I think he's right. You haven't told us what platform you are running, but I can think of few situations where restarting your webserver and clearing a cache actually need to happen together.
You can modify the startup script for the Apache web server in /etc/init.d/httpd and add your own commands inside it. To keep updates from overwriting your changes, you can mark the file immutable:
chattr +i /etc/init.d/httpd
If you have (root) access to the server, you could do this with shell scripts, but I would consider whether relying on Apache restarts is the best way to manage your cache.
We are looking for a way to point our Apache DocumentRoot to a symlink.
E.g. DocumentRoot /var/www/html/finalbuild
finalbuild should point to a folder somewhere like /home/user/build3
When we move a new build to /home/user/build4, we want to use a shell script that changes the symbolic link "finalbuild" to this new directory /home/user/build4 and does an Apache graceful restart, so a new web application version is up and running with little risk.
What's the best way to create this symlink and to change this link afterwards using the shell script?
We're using Capistrano with a similar setup. However, we've run into a few problems:
After switching to the setup, things appeared to be going fine, but then we started noticing that after running cap deploy, even though the symlink had been changed to point toward the head revision, the browser would still show the old pages, even after multiple refreshes and appending different GET parameters.
At first, we thought it was browser caching, so for development we disabled browser caching via HTTP headers, but this didn't change anything. I then checked to make sure we weren't doing full-page caching server-side, and we weren't. But I then noticed that if I deleted a file in the revision the symlink used to point to, we would get a 404, so Apache was serving up new pages, but it was still following the "old symlink" and serving the pages up from the wrong directory.
This is on shared hosting, so I wasn't able to restart Apache. So I tried deleting the symlink and creating a new one each time. This seemed to work sometimes, but not reliably. It worked probably 25~50% of the time.
Eventually, I found that if I:
removed the existing symlink (deleting it or renaming it);
made a page request, causing Apache to attempt to resolve the symlink but find it missing (resulting in a 404); and
then created a new symlink to the new directory,
it would cause the docroot to be updated properly most of the time. However, even this isn't perfect, and about 2-5% of the time, when the deploy script ran wget to fetch a page right after renaming the old symlink, it would return the old page rather than a 404.
It seems like Apache is either caching the filesystem, or perhaps the mv command only changed the filesystem in memory while Apache was reading from the filesystem on disk (doesn't really make any sense). In either case, I've taken up someone's recommendation to run sync after the symlink changes, which should get the filesystem on disk in sync with memory, and perhaps the slight delay will also help the wget to return a 404.
I've used symlinks as the apache DocumentRoot in production with no graceful restart necessary. In general, the idea should work. A 403 error probably indicates a permissions error unrelated to the symlink changing. An added wrinkle that you would want to add is making the symlink switch atomic so the symlink always exists. That is to say, at no time is the symlink nonexistent, even for a moment.
The solution to this problem is to effect the change by creating a new symlink and then renaming it over the old symlink. On Unix-like systems, renaming is an atomic operation, and thus the symlink “change” will be atomic too. By hand, the process looks like this:
$ ln -s new current_tmp && mv -Tf current_tmp current
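For example, a small deploy script built around that pattern might look like this (the paths follow the original question, and the graceful restart at the end is optional, as noted above):
#!/bin/sh
# switch the docroot symlink atomically, then (optionally) reload Apache
NEW_BUILD=/home/user/build4              # the new release directory
LINK=/var/www/html/finalbuild            # the symlink DocumentRoot points at
ln -sfn "$NEW_BUILD" "${LINK}_tmp"       # create the new link under a temporary name
mv -Tf "${LINK}_tmp" "$LINK"             # rename() is atomic, so the link never disappears
apachectl -k graceful                    # optional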