Can I set up an IIS server so that it will cache the most frequently used static files (binary) from disk into RAM, and serve from RAM on request?
Update: mod_mem_cache in the Apache Caching Guide seems to be what I'm looking for. Is there an equivalent in IIS?
Thanks.
Even if IIS isn't actually set up to perform caching on its own, for true static files that are only loaded from disk and sent over the wire (e.g. images, .css, .js), you'll likely end up using the in-memory file cache built into Windows itself. In Task Manager, you'll notice a "System Cache" metric in the Physical Memory section; that shows you how much space the OS is using for the cache. So, as long as you're talking about true static files, adding explicit caching is unnecessary.
Edit:
For more details, here are a couple of links about the Windows cache (you could probably find more with Google):
http://msdn.microsoft.com/en-us/library/aa364218(VS.85).aspx
http://support.microsoft.com/kb/895932
Here's a bit on IIS 6.0's file cache: http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/a0483502-c6da-486a-917a-586c463b7ed6.mspx?mfr=true. As David mentioned, IIS is likely doing this for you already.
IIS 7.0 Output Caching
IIS 6.0's file cache behavior is included in IIS 7.0's output caching. You can define your own rules if the default timeout seems too short, and kernel caching takes advantage of OS caching.
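For example, a minimal sketch of a per-site rule using the IIS 7+ caching element in web.config (the extensions and the ten-minute duration are illustrative values, not recommendations):
<configuration>
  <system.webServer>
    <caching enabled="true" enableKernelCache="true">
      <profiles>
        <!-- Cache .png responses in user mode and in the kernel cache until the file changes -->
        <add extension=".png" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" />
        <!-- Or cache for a fixed period instead of relying on the default timeout -->
        <add extension=".js" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:10:00" />
      </profiles>
    </caching>
  </system.webServer>
</configuration>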
IIS should be doing this already. In .NET, this is what output caching would do for you.
Set up a RAM Disk if you have lots of RAM
http://www.tweakxp.com/article37232.aspx links to a free one. Have your application copy the relevant files to that drive and point your wwwroot at that location (see the sketch below).
Note that the data is not preserved across reboots, though.
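A minimal sketch of the copy step, assuming the RAM disk is mounted as R:\ and the real content lives under D:\SiteContent (both paths are placeholders):
using System.IO;

static class RamDiskSeeder
{
    // Copies the static content onto the RAM disk. Run this at boot or at
    // application start, since the RAM disk is empty again after a reboot.
    public static void Seed(string sourceRoot = @"D:\SiteContent", string ramDiskRoot = @"R:\wwwroot")
    {
        Directory.CreateDirectory(ramDiskRoot);
        foreach (var dir in Directory.EnumerateDirectories(sourceRoot, "*", SearchOption.AllDirectories))
            Directory.CreateDirectory(dir.Replace(sourceRoot, ramDiskRoot));
        foreach (var file in Directory.EnumerateFiles(sourceRoot, "*", SearchOption.AllDirectories))
            File.Copy(file, file.Replace(sourceRoot, ramDiskRoot), overwrite: true);
    }
}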
Also, I run a big IIS site and serve tons of static files. The Windows file cache is fine for me; my problems are more about network latency, time to first byte, etc. My disks are never the bottleneck. A RAM disk will help, though, if you have a known disk-bound problem.
What Nate Bross said is probably the most reliable way to keep them in RAM, assuming the RAM disk is dynamically created from a real disk somewhere at boot.
Additionally, you could set up an ASP.NET handler (*.ashx) for the files that uses the cache built into ASP.NET. It would try to serve from the cache first and only load the file from disk when needed. This has the advantage of letting you easily expire the cache from time to time if a file might occasionally change, and it allows IIS to reclaim that memory if it decides it needs it more for something else at the moment.
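A minimal sketch of such a handler, assuming a ~/static/ folder and a ?file= query parameter (the names, paths, and hard-coded content type are illustrative only):
using System.IO;
using System.Web;
using System.Web.Caching;

// ServeCached.ashx.cs - serves a static file from the ASP.NET cache, reading it from disk on a miss.
public class ServeCached : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Example request: ServeCached.ashx?file=logo.png
        string fileName = Path.GetFileName(context.Request.QueryString["file"]);
        string path = context.Server.MapPath("~/static/" + fileName);
        string cacheKey = "staticfile:" + path;

        byte[] bytes = context.Cache[cacheKey] as byte[];
        if (bytes == null)
        {
            bytes = File.ReadAllBytes(path);
            // The CacheDependency evicts the entry when the file changes on disk, and
            // ASP.NET can also drop cache entries on its own under memory pressure.
            context.Cache.Insert(cacheKey, bytes, new CacheDependency(path));
        }

        context.Response.ContentType = "image/png"; // assumption: map the type per extension in real code
        context.Response.OutputStream.Write(bytes, 0, bytes.Length);
    }
}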
In ASP.NET:
// Tell clients (and IIS) that this response may be cached publicly for one day.
Response.Cache.SetExpires(DateTime.Now.AddDays(1));
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetValidUntilExpires(true);
This is not to say that an ASP.NET solution is the best option, but rather that IIS obeys these caching directives and may opt to cache the content in RAM.
However, if ASP/ASP.NET is not an option, I believe you can signal IIS to cache a single file through the IIS management snap-in by setting its content expiration one or more days in the future.
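In IIS 7 and later the same setting can also be expressed in web.config; a minimal sketch using the staticContent/clientCache element (the one-day max-age is just the example value from above):
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Sends Cache-Control: max-age=86400 (one day) for static files in this scope -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>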
You may be able to set up a RAM drive, move your files there, and point an IIS virtual directory at that location.
I utilize the Apache VFS library to access files on a remote server. Some files are symbolic links and when we get the file size of these files, it comes back as 80 bytes. I need to get the actual file size. Any ideas on how to accomplish this?
Using commons-vfs2 version 2.1.
OS is Linux/Unix.
You did not say which protocol/provider you are using. However, it most likely does not matter: as far as I know, none of them implements symlink chasing (besides the local provider). You only get the size the server reports for the directory entry itself.
VFS is a rather high-level abstraction; if you want to drive a protocol client more specifically, using commons-net, httpclient, or whichever library fits your protocol gives you many more options.
I have an Apache server running on Ubuntu hosting some files available for download. The hosted files are on a mounted NAS drive.
I am finding that when I download large zip files (.zip, .7z) of 100 MB+ via the web server, the transferred file is corrupted. The method I am using to check the files is an MD5 calculation. I am also finding that the file size correlates with the chance of corruption: the bigger the file, the higher the chance of corruption. The mount seems to be fine, because I transferred files from the NAS to the machine without any issues.
I also have IIS running on Windows hosting the same files. When I download the files via that web server, there is never any corruption. This makes me think that the network itself is fine.
I am downloading the files via Chrome.
I'm not sure what is wrong, but I am led to believe it has to do with some Apache configuration. How can I increase my file transfer reliability on Apache? Or is there another possible cause of the issue?
It was an Apache configuration issue.
Found the solution in this article
Adding EnableSendfile On to the apache2.conf file fixed the corruption issue with large zip files. Apache 2.4 has this setting off by default, while Apache 2.2 defaults it to on.
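For reference, the change is just this directive in the server config (on Ubuntu that is typically /etc/apache2/apache2.conf; it can also be scoped to a virtual host or directory block):
# Re-enable zero-copy sendfile delivery of static files (off by default in Apache 2.4)
EnableSendfile On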
I have a microservice, hosted in Service Fabric, that handles uploading files to blob storage. The microservice is implemented with Nancy and OWIN. When the request is over a certain size, something like a couple hundred KB maybe, the request gets written to disk in a temp directory. Occasionally these .tmp files fail to get cleaned up, and eat up the limited disk space on the SF Cluster VM.
I have not been able to find anything about requests automatically getting written to disk. And nothing in the code creates .tmp files. What could be generating these files: Service Fabric, Nancy, OWIN?
Nancy is doing this; it has something called "request stream switching" which, as you say, switches from a memory stream to a file-based stream over a certain size, so that a client can't fill up all the memory by uploading a large (or never-ending) file.
They should get cleaned up after every request; I haven't seen any reports of them not being cleaned up for a long time (we've fixed bugs around this in the past), but if you want to disable the switching completely (and accept the potential issue above), you can set "StaticConfiguration.DisableRequestStreamSwitching" in your bootstrapper's application startup to turn it off.
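A minimal sketch of what that looks like in a custom bootstrapper (the class name is arbitrary; the override follows the usual Nancy bootstrapper pattern):
using Nancy;
using Nancy.Bootstrapper;
using Nancy.TinyIoc;

public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);
        // Keep request bodies in memory instead of spilling them to .tmp files on disk.
        // Trade-off: a very large or never-ending upload can now exhaust process memory.
        StaticConfiguration.DisableRequestStreamSwitching = true;
    }
}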
Is it possible that setting the IIS root to the same directory to the project root will cause a slow performance?
I have an ASP.NET web application that handles some SQL commands to GET/POST records on the local SQL database. Recently I came up with the idea that I no longer have to start debugging each time I test the code, by changing the IIS root from the default (C:\inetpub\wwwroot) to the root of the web application project folder.
However, after that, I encountered a problem where some operations in the web GUI, especially those that include POST requests, have become extremely slow. For example, adding a new document or rewriting an existing one in the database now takes about a minute, whereas it used to take less than 20 seconds. Also, repeating POST commands seems to make them even slower (restarting the computer resets the situation). So I guess some read/write process may leave garbage behind that conflicts with other processes.
Could anyone suggest a core issue behind this phenomenon? Also, please let me know if my explanation isn't clear enough to show the problem.
I have encountered a problem where some operations in the web GUI, especially those that include POST requests, have become extremely slow
Changing the root directory is very unlikely to cause this issue. Your application was already performing very slowly (20 seconds is also slow).
So there is no strange phenomenon here, in my opinion. You have to debug your application to find out where the delay is. To find the root cause, you can use a profiler like PerfView or a tool like DebugDiag.
In the case of DebugDiag, choose the second option in the above link to capture a memory dump. Once you have a memory dump, simply double-click the dump file and DebugDiag will run an automated analysis and tell you where the problem is in your application code. For example, it can tell you that a DB call is taking too long. If you are still unable to find the cause, please update the question with the analysis result.
When using XAMPP (1.7.5 Beta) under Windows 7 (Ultimate, version 6.1, build 7600), it takes several seconds before pages actually show up. During these seconds, the browser shows "Waiting for site.localhost.com..." and Apache (httpd.exe, version 2.2.17) has 99% CPU load.
I have already tried to speed things up in several ways:
Uncommented "Win32DisableAcceptEx" in xampp\apache\conf\extra\httpd-mpm.conf
Uncommented "EnableMMAP Off" and "EnableSendfile Off" in xampp\apache\conf\httpd.conf
Disabled all firewall and antivirus software (Windows Defender/Windows Firewall, Norton AntiVirus).
In the hosts file, commented out "::1 localhost" and uncommented "127.0.0.1 localhost".
Executed (via cmd): netsh; interface; portproxy; add v6tov4 listenport=80 connectport=80.
Even disabled IPv6 completely, by following these instructions.
The only place where "HostnameLookups" is set, is in xampp\apache\conf\httpd-default.conf, to: Off.
Tried PHP in CGI mode by commenting out (in httpd-xampp.conf): LoadFile "C:/xampp/php/php5ts.dll" and LoadModule php5_module modules/php5apache2_2.dll.
None of these possible solutions had any noticeable effect on the speed. Does Apache have difficulty trying to find the destination host ('gethostbyname')? What else could I try to speed things up?
Read over Magento's Optimization White Paper; although it mentions Enterprise, the same methodologies will and should be applied. Magento is by no means simplistic and can be very resource-intensive. Like some others mentioned, I normally run within a virtual machine on a LAMP stack and have all my optimizations (both at the server/application level and at the Magento level) preset on a base install of Magento. Running an opcode cache like eAccelerator or APC can help improve load times. Keeping Magento's caching layers enabled can help as well, but it can cripple development if you forget it is enabled while developing; however, there are lots of tools available that can clear the cache for you, from a single command line or with a tool like Alan Storm's eCommerce Bug.
EDIT
Optimization Whitepaper link:
https://info2.magento.com/Optimizing_Magento_for_Peak_Performance.html
Also, with PHP 7 now including OPcache, enabling it with default settings (keeping the date/time checks on) along with AOE_ClassPathCache can help disk I/O performance.
If you are using an IDE with class lookups, keeping a local copy of the code base you are working on can greatly speed up indexing in IDEs such as PhpStorm, NetBeans, etc. Atwix has a good article on Docker with Magento:
https://www.atwix.com/magento/docker-development-environment/
Some good tools for local Magento 1.x development:
https://github.com/magespecialist/mage-chrome-toolbar
https://github.com/EcomDev/EcomDev_LayoutCompiler.git
https://github.com/SchumacherFM/Magento-OpCache.git
https://github.com/netz98/n98-magerun
Use a connection profiler like Chrome's to see whether this is actually a lookup issue, or whether you are waiting for the site to return content. Since you tagged this question Magento, which is known for slowness before you optimize it, I'm guessing the latter.
Apache runs some very major sites on the internet, and they don't have several-second delays, so the answer to your question about Apache is most likely no. Furthermore, DNS lookup happens between your browser and a DNS server, not the target host. Once the request is sent to the target host, you wait for it to return a rendered response.
Take a look at the several questions about optimizing Magento sites on SO and you should get some ideas on how to speed your site up.