Apache - Large zip file transfer corruption

I have an Apache server running on Ubuntu hosting some files for download. The hosted files live on a mounted NAS drive.
I am finding that when I download large zip files (.zip, .7z) of 100MB+ via the web server, the transferred file is corrupted. I check the files by comparing MD5 checksums. I am also finding that file size correlates with the chance of corruption: the bigger the file, the higher the chance of corruption. The mount itself seems fine, because I can copy files from the NAS to the machine without any issues.
I also have IIS running on Windows hosting the same files. When I download the files via that web server there is never any corruption, which makes me think the network itself is fine.
I am downloading the files via Chrome.
I'm not sure what is wrong, but I am led to believe it has to do with some Apache configuration. How can I make large file transfers from Apache reliable, or is there another possible cause?

It was an Apache configuration issue.
Found the solution in this article
Adding EnableSendfile On to the apache2.conf file fixed the corruption issue with large zip files. Apache 2.4 defaults this setting to Off, while Apache 2.2 defaulted it to On.
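For reference, this is roughly the change described above (the directive name is from the Apache docs; placing it in /etc/apache2/apache2.conf follows the answer, your layout may differ):

```apacheconf
# /etc/apache2/apache2.conf
# Let Apache hand static file delivery to the kernel's sendfile() call.
# Apache 2.4 ships with this Off; the fix above turns it back On.
EnableSendfile On
```

Reload Apache afterwards (for example with sudo systemctl reload apache2) for the directive to take effect.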

Related

Guacamole SFTP not working for larger files

I am using Guacamole to connect to remote devices, over RDP for Windows machines and SSH for Linux. Now I would like to enable SFTP support for the connections, so I enabled the 'Enable SFTP' option in the Guacamole connection settings.
The problem is that SFTP works for smaller files (<3KB), creates 0KB files for slightly larger files (3KB-150KB), and raises an internal error for larger files (>150KB). I determined the sizes at which SFTP fails by trial, transferring files of different sizes to the remote machine.
In the screenshot, it can be seen that 'attendance.py', a smaller file of 548 bytes, is successfully transferred to the tmp folder on the Linux machine, but the other two files are created as empty files. The PDF file I tried to move is close to 180KB, which raises an Internal Error. I checked whether the error depends on the file type, but the problem occurs for all file formats. I have the same problem when transferring files to a Windows machine configured with the RDP protocol on the same Guacamole server.
Can someone help me with this? Thanks in advance
Are you using a reverse proxy?
I had the same problem when using nginx. By default it does not allow request bodies larger than 1MB.
I changed that limit in nginx to a larger size and now it works.
For nginx, look for: client_max_body_size
If you are not using nginx, I would still take a look at the web server config. Remember, you are going through some sort of web server, and a file size limit is usually in place there.
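If nginx is the reverse proxy in front of Guacamole, the relevant directive looks roughly like this (the 1g value is only an example; 0 disables the check entirely):

```nginx
# In the http, server, or location block that proxies Guacamole.
# The default of 1m is what blocks larger uploads through the proxy.
client_max_body_size 1g;
```

Reload nginx after changing it (for example nginx -s reload).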

(MacOS Server) Apache File Extension Questions

I am running into some sort of issue when trying to access my local website:
Forbidden
You don't have permission to access /index.html on this server.
Apache Server at ffghost.local Port 34580
I'm using macOS X Server 5.2 with Apache 2.4.18. OS X Server automatically creates two default websites (one on port 80 and one on port 443). I created a new website. It was my understanding that Apache would redirect from the default site to the created site automatically once it was created. This didn't happen. So, in an attempt to begin de-conflicting, I replaced the files where the default site was located with the new website files and all of a sudden am getting the above 404 message.
I have read a lot of possibilities as to why this may be happening. I've run Apache's syntax checker in Terminal and it says the syntax is OK. So from there I was going to look into the config files, but there are several, and I just want to know the gist behind them.
There seem to be about 4 file extension types. I don't know what they all mean or if they are active.
.config (I'm assuming this is the active file)
.config.prev (I'm assuming this is a previous version or copy of an active config file and is no longer active)
.config.orig (original file? and is no longer active)
.config.default (???)
Also, OS X Server and Apache seem to have the same files in two different places, and I'm a little confused about which one to change. If I change one of them, will it be reflected in the other? Do I need to change both of them? Additionally, I don't have DNS set up and am unsure whether that is the original reason the new website is not being served instead of the default site.
You are mixing several aspects in your question, which makes it complicated to give a helpful answer. For example, you say you get Forbidden when accessing your site, but later you mention a status 404. The former might be due to a configuration that only allows a certain user group to access the site, while the latter simply means Not Found.
As to your actual question about the config files:
The file just ending in .conf is the one that is being used.
However, the Server app uses a lot of different config files which might be relevant:
Path /Library/Server/Web/Config/apache2 contains the general config files
httpd.conf - general Apache configuration
httpd_server_app.conf - more general configuration
the other files contain configurations for specific applications or webapps (the latter being defined in plist files in /Library/Server/Web/Config/apache2/webapps)
Path /Library/Server/Web/Config/apache2/sites contains config files specific to your websites. They are named something like 0000_127.0.0.1_34543_your.domain.name.conf, where 34543 is the configuration for the https (SSL) port, while 34580 would indicate the http port. There is also a file like 0000_127.0.0.1_34543_.conf (no domain name in the file name) which defines the default site.
In addition to these, there are two more configuration files in /Library/Server/Web/Config/proxy which configure the proxy services.
It is not recommended to manually adjust the config files, except for those in the sites subdirectory, because they may get overwritten by the Server app or when updating the Server app.
Important: If you change the files manually, you must re-start the Apache server in order to make the changes effective. Use sudo serveradmin stop/start web to do so.
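Spelled out, the restart looks like this:

```bash
# Stop and start the Server.app web service so manual config edits are picked up.
sudo serveradmin stop web
sudo serveradmin start web
```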
However, I do not know of detailed documentation of all these files, so I try to stay on the safe side and avoid editing the general config files (only those in sites). I also recommend writing down any manual changes so they can be reapplied if necessary.
Without knowing exactly what you configured in the Server app and which files you changed in what way, I'm afraid it is impossible to say what might have gone wrong. I recommend starting all over by removing and re-adding the websites.

Apache static data file caching

I have a VPS with Centos 7 and Apache 2.4. This server acts as a backend data source for a mobile app. Periodically new data files with unique file names are generated, after which they are never changed. I am looking for the best way to get Apache to cache these data files in memory without restarting the server each time a datafile is generated. Thank you in advance for your help.
In Apache 2.4 you could use the mod_cache module. More info about it is on the official Apache website:
https://httpd.apache.org/docs/2.4/caching.html
Not sure I understand the "without restarting the server each time a data file is generated" part!
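As a rough sketch of what a mod_cache setup could look like for never-changing files: mod_cache is only the framework, so it needs a storage provider. For an in-memory cache on Apache 2.4 one option is mod_cache_socache with the shmcb shared-memory backend (that combination is a suggestion beyond what the answer above names; the URL path and sizes are placeholders):

```apacheconf
# Load the caching framework and an in-memory (shared memory) storage provider.
LoadModule cache_module modules/mod_cache.so
LoadModule cache_socache_module modules/mod_cache_socache.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so

# Answer cache hits early in request processing.
CacheQuickHandler on

# Cache everything under /data ("/data" stands in for the real path of the data files).
CacheEnable socache "/data"
CacheSocache shmcb
CacheSocacheMaxSize 1048576   # maximum size of a single cached entry, in bytes
```

Entries are populated the first time a file is requested, so nothing needs restarting when a new data file appears.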

Nginx is Slower than Apache downloading main.bundle.js

I have an Angular2 app that I've been developing for a bit now. Locally I run an Nginx server but the deployment server is using Apache. To unify things I worked to move the deployment server to Nginx but I am getting extremely slow results with Nginx.
Apache loads in ~5 seconds (1.1MB transferred)
Nginx loads in 16-20 seconds (5MB transferred)
These are both on the same server pointing to the exact same directory. The actual size of main.bundle.js is 4470365 main.bundle.js so it seems Nginx is loading the entire file.
How is Apache able to download only 737K?
You can check which features are enabled on each server by clicking on the exact file in the Network tab of the browser's developer tools, then going to Headers and then Response Headers, as illustrated in the attached image.
Check whether gzip compression is enabled on one of the servers; that is the most likely reason for the smaller transfer size.
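A quick way to compare the two servers from the command line instead of DevTools (the URL is a placeholder for wherever the bundle is served):

```bash
# Request the bundle with compression allowed, discard the body, and print the headers.
# If one server responds with "Content-Encoding: gzip" and the other does not,
# that accounts for the 1.1MB vs 5MB difference.
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' \
  https://example.com/main.bundle.js | grep -iE 'content-(encoding|length)'
```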

Smart local copy of a remote directory

Currently I have a bunch of local copies of dev/production websites. Each copy contains the "files" directory, which contains files uploaded by site users. I use rsync to synchronize the directory contents from the remote servers (via ssh).
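Roughly, each sync looks like this (host and paths are placeholders):

```bash
# One manual sync per site copy: pull down the user-uploaded files over ssh.
rsync -az --delete user@example.com:/var/www/site/files/ ./files/
```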
There are some annoyances:
I have to run rsync manually each time I want fresh files (this could be automated of course, but as I have a lot of website copies, it's not a good idea).
The rsync execution takes some time.
Disc space on my laptop is running out.
I think all of this could be solved if there were some kind of software that could work like a proxy:
When I list files, it requests the file list from the remote server and caches the results for some (configurable) time.
When I first time request file contents, it retrieves the remote file and saves it locally.
When I update a file, it only gets updated locally.
When I save a new file in the "files" directory, it does not go to the remote server.
Of course, the logic of such software would be more complex, but I hope my idea is clear: don't waste disk space, download files on demand, make no remote changes.
Is there any software that works like that?
Map a network drive with NFS or sshfs. Make local copies if you really need a file.
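A minimal sketch of the sshfs option, assuming the uploads live under /var/www/site/files on the remote host (host and paths are placeholders):

```bash
# Mount the remote "files" directory read-only so local work never changes the server.
mkdir -p ~/mounts/site-files
sshfs -o ro,reconnect user@example.com:/var/www/site/files ~/mounts/site-files

# File data is fetched over ssh only when a file is actually read.
# Copy out anything you need to keep permanently, then unmount (Linux):
fusermount -u ~/mounts/site-files
```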
I did not mention it in the question, but I needed this for working with Drupal. I have now found a Drupal-specific solution: the Stage File Proxy module.
It does exactly what I need: downloads files from a remote server only when they are requested.
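A typical setup with Drush, assuming a Drupal 8+ site (the commands and URL below are illustrative, not from the original post):

```bash
# Enable the module; its machine name is stage_file_proxy.
drush en stage_file_proxy -y

# Point it at the production site so any file missing from the local "files"
# directory is fetched from there the first time it is requested.
drush config:set stage_file_proxy.settings origin 'https://www.example.com' -y
```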