Is it possible to configure the Apache server in such a way that when a file is downloaded from a given folder, e.g. https://images.anywebsite.com/mpSTj8xbeu/1.jpg (but also 2.jpg, 3.jpg, etc.), the entire folder is added to an in-memory cache for a short period of time, e.g. 60 seconds, so that subsequent downloads of https://images.anywebsite.com/mpSTj8xbeu/4.jpg, https://images.anywebsite.com/mpSTj8xbeu/5.jpg, etc. are served from RAM instead of the HDD?
Each photo on the website is downloaded with a delay of a few seconds due to HTML lazy-loading.
So a single request to www.anywebsite.com results in around 10 requests to:
https://images.anywebsite.com/mpSTj8xbeu/*.jpg
each one a few seconds after the previous.
I was trying to find the right Apache module for this, without any luck.
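For reference, no stock Apache module prefetches a whole folder into memory on first access (and the kernel page cache already keeps recently read files in RAM). The closest stock option is mod_cache with mod_cache_socache on Apache 2.4, which keeps each file in shared memory after its first hit; a minimal, untested sketch, where the size limit and 60-second expiry are assumptions:

LoadModule cache_module modules/mod_cache.so
LoadModule cache_socache_module modules/mod_cache_socache.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so

# store cached objects in a shared-memory cyclic buffer
CacheSocache shmcb
# largest single object, in bytes, that will be cached
CacheSocacheMaxSize 102400
# cache responses and expire them after 60 seconds
CacheEnable socache /
CacheDefaultExpire 60
CacheIgnoreNoLastMod On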
Related
My website is consuming much more bandwidth than it is supposed to. From Webalizer or AWStats in WHM/cPanel I can monitor the bandwidth usage and which types of files (jpg, png, php, css, etc.) are consuming the bandwidth, but I cannot get any specific file name. My assumption is that the bandwidth usage comes from referral spamming, but on the "Visitors" page of cPanel I can see only the last 1000 hits. Is there any way to see which image or CSS file is consuming the bandwidth?
If there is a particular file which you think is consuming the most bandwidth, you can use the apachetop tool:
yum install apachetop
Then run:
apachetop -f /var/log/apache2/domlogs/website_name-ssl.log
Replace website_name with whichever domain you wish to watch.
apachetop basically picks up entries from the domlogs (which record the requests served for each website; you may read more about domlogs here).
It shows which files are being requested the most, in real time, and can give you an idea whether a particular image, PHP script, etc. is getting the maximum number of requests.
The domlogs are also the way to find out which file is being requested by which bot, and so on; your initial investigation may start from this point.
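To attribute bandwidth rather than hit counts, you can also sum the bytes-sent field of the same log directly; a hedged one-liner, assuming the common/combined log format (request path in field 7, response size in field 10):

awk '{ bytes[$7] += $10 } END { for (url in bytes) print bytes[url], url }' /var/log/apache2/domlogs/website_name-ssl.log | sort -rn | head -20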
On my site I upload media files. My uploader chunks these uploads into smaller files, and once the upload is completed, the original is recreated by merging the chunks.
The issue I run into is that when Cloudflare is enabled, the chunking requests take an awfully long time. An example is displayed here: http://testnow.ga
The file is chunked every 5 MB uploaded. The process saves the received chunk on the server, then answers the client's AJAX request, and another 5 MB upload request starts. The waiting time (TTFB) in this particular case ranges anywhere from 2 to 10 seconds. When the chunk size is 50 MB, for example, the waiting can be up to two minutes.
How can I speed up this process with Cloudflare? How can I make that specific /upload URL bypass Cloudflare entirely?
P.S. The reason I'm not asking Cloudflare is that I did so a week ago, and again a few days ago, and haven't gotten a response yet. Thanks!
One option is to use a subdomain to submit the data to. At Cloudflare, grey-cloud that DNS entry; the data is then sent directly to your server, bypassing Cloudflare.
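The effect is simply a plain A record for the upload host; a hypothetical zone sketch (names and address are placeholders, and the grey-cloud toggle itself lives in the Cloudflare dashboard):

; www stays proxied through Cloudflare (orange cloud)
www     IN  A   203.0.113.10
; upload is "DNS only" (grey cloud): clients connect straight to the origin
upload  IN  A   203.0.113.10

The client-side uploader then posts its chunks to upload.example.com instead of www.example.com.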
Currently I have a bunch of local copies of dev/production websites. Each copy contains a "files" directory, which holds files uploaded by site users. At the moment I use rsync to synchronize the directories' contents from the remote servers (via SSH).
There are some annoyances:
I have to run rsync manually each time I want fresh files (this could be automated, of course, but as I have a lot of website copies, it's not a good idea).
The rsync execution takes some time.
Disk space on my laptop is running out.
I think all of this could be solved by some kind of software that works like a proxy:
When I list files, it requests the file list from the remote server and caches the results for some (configurable) time.
When I request a file's contents for the first time, it retrieves the remote file and saves it locally.
When I update a file, it only gets updated locally.
When I save a new file in the "files" directory, it does not go to the remote server.
Of course, the logic of such software would have to be much more complex, but I hope my idea is clear: don't waste disk space, download files on demand, and make no remote changes.
Is there any software that works like that?
Map a network drive with NFS or sshfs. Make local copies if you really need a file.
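A minimal sshfs sketch (host and paths are placeholders), mounted read-only so local experiments can't touch the server:

# mount the remote files directory over SSH, read-only
sshfs -o ro,follow_symlinks user@remote.example.com:/var/www/site/files ~/sites/site/files
# unmount when finished
fusermount -u ~/sites/site/files

Note that sshfs only saves disk space, not latency: every uncached read still goes over the network.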
I did not mention it in the question, but I needed this for working with Drupal. And now I have found a Drupal-only solution: the Stage File Proxy module.
It does exactly what I need: downloads files from a remote server only when they are requested.
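For Drupal 7 the setup amounts to a single line in settings.php pointing the module at the live site (the origin URL here is a placeholder, and the variable name differs in other Drupal versions):

$conf['stage_file_proxy_origin'] = 'https://www.example.com';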
If a log file like access.log or error.log gets very large, will the size impact the performance of Apache or of users accessing the site? From my understanding, Apache doesn't read entire logs into memory, but just uses a file handle to write. Right? If so, I don't have to remove the logs manually every time they get large, except for filesystem issues. Please help and correct me if I'm wrong. Or is there any Apache log I/O issue I'm supposed to take care of when running it?
Thanks very much.
Well, I totally agree with you. To my understanding, Apache accesses the log files using handles and just appends new messages at the end of the file. That's why a huge log file makes no difference when it comes to writing to it. But if you want to open the file, or read it with some kind of log-monitoring tool, the huge size will slow down the reading.
So I would suggest you use log rotation for an overall better end result.
This suggestion comes directly from the Apache website:
Log Rotation
On even a moderately busy server, the quantity of information stored in the log files is very large. The access log file typically grows 1 MB or more per 10,000 requests. It will consequently be necessary to periodically rotate the log files by moving or deleting the existing logs. This cannot be done while the server is running, because Apache will continue writing to the old log file as long as it holds the file open. Instead, the server must be restarted after the log files are moved or deleted so that it will open new log files.
From the Apache Software Foundation site
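In practice you can avoid the restart by piping the log through Apache's bundled rotatelogs program; a minimal sketch (the path and the daily interval are assumptions):

# start a new access log every 86400 seconds (daily), named by date
CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/access.%Y-%m-%d.log 86400" combined

Alternatively, external tools such as logrotate can move the files and then trigger a graceful restart so Apache reopens its logs.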
I am working on a project that processes images, saves the processed images in a cache, and outputs the processed image to the client. Let's say that the project is located in /project/, the cache is located in /project/cache/, and the source images are located elsewhere on the server (like in /images/ or /otherproject/images/). I can set up the cache to mirror the path to the source image (e.g. if the source image is /images/image.jpg, the cache for that image could be /project/cache/images/image.jpg), and the requests to the project are roughly /project/path/to/image (e.g. /project/images/image.jpg).
I would like to serve the images from the cache, if they exist, as efficiently as possible. However, I also want to be able to check to see if the source image has changed since the cached image was created. Ideally, this would all be done with mod_rewrite so PHP wouldn't need to be used to do any of the work.
Is this possible? What would the mod_rewrite rules need to be for this to work?
Alternatively, it seems like it would be a fine compromise to have mod_rewrite serve the cached file most of the time but send 1 out of X requests to the PHP script for files that are cached. Is this possible?
You cannot access the file modification timestamp from a RewriteRule, so there is no way around using PHP or another programming language for that task.
On the other hand, this is really simple in PHP, so you should first check whether the PHP solution is good enough in your case. Only if it isn't should you look for alternatives.
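A hedged sketch of that PHP check (the file names and path handling are placeholders; a real script must validate $path against directory traversal):

<?php
$path   = $_GET['path'];                 // e.g. "images/image.jpg", assumed validated
$source = '/' . $path;                   // original image
$cache  = '/project/cache/' . $path;     // mirrored cache copy

if (is_file($cache) && filemtime($cache) >= filemtime($source)) {
    // cache exists and is at least as new as the source: stream it
    header('Content-Type: ' . mime_content_type($cache));
    readfile($cache);
} else {
    // (re)process $source, write the result to $cache, then output it
}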
What if you used the client to do some of the work? Say you display an image in the web browser and always use src="/cache/images/foobar.jpg", adding onerror="this.src='/images/foobar.jpg'". In mod_rewrite, send anything that goes to the /images/ dir to a script that generates the image in the cache and returns it.
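The server-side half of that idea could look like the following sketch (the script name is hypothetical): requests under /cache/ are served as plain files, and the /images/ fallback is routed to the generator:

RewriteEngine On
# hand every /images/ request to a script that creates the cached copy and returns it
RewriteRule ^/images/(.+)$ /project/generate.php?path=$1 [L,QSA]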