Apache2: upload restarts over and over

We have been using various upload scripts built on the Perl CGI module for our CMS and had not encountered a problem like this for years.
Our customer's employees are not able to complete a successful upload.
No matter what kind or size of file, no matter which browser they use, no matter whether they try at work or log in from home,
if they use one of our system's upload pages the following happens:
The upload seems to work until approximately 94% is transferred. Then the upload suddenly restarts, and the same procedure happens over and over again.
A look in the error log shows this:
Apache2::RequestIO::read: (70007) The timeout specified has expired at (eval 207) line 5
The weird thing is that when I log in to our customer's system over our VPN tunnel I can never reproduce the error (I can from home, though).
I have googled without much success.
I checked the Apache timeout setting, which was at 300 seconds - more than generous.
I even checked the Content-Length field for a value of 0, because I found a forum entry referring to a CGI bug related to a Content-Length of 0.
Now I am really stuck and running out of ideas.
Can you give me some new ones, please?
The Apache server is version 2.2.16 and the Perl CGI module is version 3.43.
We are using mod_perl.

As far as we knew, our customer didn't use any kind of load balancing.
Without letting anyone else know, our customer's infrastructure department had activated a load balancer. Requests therefore went to different servers and timed out.
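
If the balancer has to stay, the usual remedy is to pin each client to one backend so a long-running upload is not split across servers. A sketch of sticky sessions with Apache's own mod_proxy_balancer (the backend addresses and route names are hypothetical, and the customer's balancer may of course be a different product entirely):

```apache
# Tag each client with a route cookie, then route it back to the same member.
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://cms-cluster>
    BalancerMember http://192.168.1.50:80 route=node1
    BalancerMember http://192.168.1.51:80 route=node2
    ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass /upload balancer://cms-cluster/upload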

Related

How to log unique requests and check their status in Lucee

I am trying to log specific requests by users to determine whether their Lucee request has completed, is still running, etc. The purpose of this is to fire off automated processes on demand and to assure the end users that a process has already started so they do not fire off a second one. I have found HTTP_X_REQUEST_ID in other searches, but when dumping the CGI variables it is not listed. I have set the CGI variables to Writable rather than Read Only, but it is still not showing up. Is it something I must add in IIS, or a setting in Lucee Admin that I am overlooking? Is there a different way to go about this rather than HTTP_X_REQUEST_ID? Any help is appreciated.
Have you considered using <cfflush>? When the Lucee request starts, you can send partial output to the client informing it that the process has started on the server.
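
A minimal sketch of that idea (the request ID here is generated in the template rather than taken from HTTP_X_REQUEST_ID, which is an assumption about how you want to track requests):

```cfml
<!--- Generate our own request ID, confirm to the client immediately,
      then continue the long-running work after the flush. --->
<cfset requestId = createUUID()>
<cfoutput>Process #requestId# started...</cfoutput>
<cfflush>
<!--- Long-running processing continues here; the browser has already
      received the confirmation above, so the user won't re-submit. --->
```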

Apache is Adding Javascript in HTML File

I have a strange situation where my Apache server is adding a piece of JavaScript code just before the closing tag of the served HTML content.
I tried to find out what is going on on the server, but without success. I restarted the server and the code went away, but after some time I am facing the same issue again.
I am sure my server is compromised and someone is doing this deliberately. Kindly help me with where to look to check how Apache can add such code on the fly on CentOS 7.
If you have not set up anything like this yourself, it's likely that your server got compromised.
As a first step, I suggest you check whether anything like this has been configured.
Beware though, if your server has been compromised, it's very likely that the attacker still maintains access to your server. If you can, nuke it, rotate credentials and look into hardening your servers.
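
As a concrete starting point for that check, here is a sketch that scans an Apache configuration tree for output-filter directives (e.g. mod_substitute, mod_ext_filter) that can rewrite HTML responses on the fly; the /etc/httpd path is the usual CentOS 7 location and is an assumption here:

```python
import os
import re

# Directives/modules that can modify response bodies on the fly.
SUSPICIOUS = re.compile(r'substitute|ext_filter|sed_module', re.I)

def scan_apache_conf(conf_root):
    """Walk an Apache config tree and report (path, line number, line)
    for every line mentioning an output-filter directive."""
    hits = []
    for dirpath, _dirs, files in os.walk(conf_root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="replace") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if SUSPICIOUS.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                pass  # unreadable file; skip
    return hits

# On CentOS 7 the config root is typically /etc/httpd:
# for path, lineno, line in scan_apache_conf("/etc/httpd"):
#     print(f"{path}:{lineno}: {line}")
```

A clean scan does not prove the server is safe (an attacker can also patch binaries or inject at the application layer), but a hit here points you straight at the injection mechanism.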

Apache server "Error" message

I'm using an Apache webserver to develop a project using joomla.
Some days ago I did a backup restoration in Joomla (using Akeeba Backup), and now, whenever I try to access either the site's backend or its frontend, I get a blank page with nothing but the word "Error".
I don't know whether the backup is in any way related to this error, as I'm not very experienced, but I haven't touched the server since then. Any help? How can I find the error's origin?
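
One common first step is to raise Joomla's error reporting so the blank "Error" page shows the underlying message. A sketch of the relevant settings in the site's configuration.php (these are standard Joomla options; raise them only temporarily, and revert on a live site):

```php
// In Joomla's configuration.php:
public $error_reporting = 'maximum';  // show the real PHP/Joomla error
public $debug = '1';                  // enable Joomla's debug output
```

With these on, reloading the page usually replaces the bare "Error" with a message naming the failing component, which after an Akeeba restore is often a database credential or path mismatch in the same file.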

Apache Log issue on Webmin

I'm not sure when this issue first started, but for at least 3 months no errors have been logged to the Apache log, despite my knowing there are errors that should be going in there.
I'm using Webmin, and I was going to set up a new log to store these errors, but I'm not sure what I need to do to make it log Apache (PHP) errors.
I'm looking at the following page for instructions, but it doesn't explicitly say anything about Apache errors, and the last thing I want to do is mess up the server:
http://doxfer.webmin.com/Webmin/System_Logs
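
If the missing entries are PHP errors specifically, PHP itself may simply not be logging them. A minimal php.ini sketch (the log path is an assumption - match it to your Apache log directory; with mod_php, leaving error_log unset sends errors to Apache's own ErrorLog instead):

```ini
; php.ini (or a conf.d override): make PHP record errors to a file
log_errors = On
error_log = /var/log/apache2/php_errors.log
display_errors = Off
```

After changing php.ini, restart Apache via Webmin for the setting to take effect.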

Crawler/bot activity triggering deletion of critical website files - what in Odoo v9 could be causing this deletion?

Context:
Odoo v9 Docker image installed behind an Nginx reverse proxy, on a publicly facing bare domain (e.g. mydomain.com), website builder installed, no other configuration or apps.
Problem:
Periodically a critical file will go missing:
2015-10-30 15:28:28,266 1 INFO db-test werkzeug: 172.17.0.25 - - [30/Oct/2015 15:28:28] "GET /web/content/407-17599c5/website.assets_frontend.js HTTP/1.0" 200 -
2015-10-30 15:28:28,281 1 INFO db-test openerp.addons.base.ir.ir_attachment: _read_file reading /var/lib/odoo/filestore/db-test/e6/e69e06808b908fc0d85ebfea58fbc7df3788e72e
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/openerp/addons/base/ir/ir_attachment.py", line 151, in _file_read
r = open(full_path,'rb').read().encode('base64')
IOError: [Errno 2] No such file or directory: u'/var/lib/odoo/filestore/db-test/e6/e69e06808b908fc0d85ebfea58fbc7df3788e72e'
This file is an auto-generated, compressed JavaScript file containing all the common JS assets the website needs to function, so the site and app become unusable. Restoring the file fixes the problem. It is unclear whether other files are going missing as well.
So far:
- It only happens when the domain is publicly facing and accessible (when firewalled off to serve only me, it does not happen; on a different domain not indexed by e.g. Google, it does not happen either).
- So far it does not happen with robots.txt set to "Disallow: /" - it may take a bit longer to prove this is actually preventing the issue, but the problem has now not occurred for a long while.
- An initial manual crawl using wget does not trigger the issue, though this was tested as a fresh recursive get of the current content on the domain. I haven't done a recrawl or requested out-of-date URLs, so this may not paint the full picture.
For more lengthy background on the investigation of this, see:
https://www.odoo.com/forum/help-1/question/updated-how-do-i-prevent-website-common-asset-files-from-constantly-not-being-found-ioerror-errno-2-no-such-file-or-directory-92982
- Is this oddly due to the domain name being domain.tld rather than www.domain.tld?
- Or is it a quirk of a bot/crawler that is triggering something it shouldn't?
- Or is it a bug that doesn't handle requests for old/expired or unknown URLs well?
- Or a combination of the above?
- Or maybe even malicious activity?
At this point it looks like it could be a very concerning security issue: an external, anonymous (not logged in) user can apparently trigger a catastrophic file deletion inside the Odoo software. Given all the variables tested so far, this very much looks like the source of the issue, and if it is, it would be a significant security flaw. Has anyone else who upgraded to v9 experienced this problem? It is likely only going to affect sites that are already established and indexed by Google etc.
Any help to properly identify and solve this issue would be appreciated.
In case anyone still needs to solve this: there was a bug in the Odoo code that was fixed in a later release. Info here:
https://github.com/odoo/odoo/issues/9495
For some reason this record's file is deleted (currently not present) from /var/lib/odoo/filestore/db-test/e6/ - or, more generally, from your data_dir's /filestore/database_name path.
I had the same error, so I just deleted that record from the **ir_attachment** table. This is not the proper way, but it solved the problem.
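
Before deleting anything, it may help to list every attachment whose backing file is actually gone. A minimal sketch (the rows would come from SQL such as SELECT id, store_fname FROM ir_attachment; the filestore path follows the question's setup):

```python
import os

def find_orphan_attachments(rows, filestore_root):
    """Given (id, store_fname) pairs from ir_attachment and the filestore
    root (e.g. /var/lib/odoo/filestore/db-test), return the pairs whose
    backing file no longer exists on disk. Rows stored in the database
    itself have store_fname = NULL and are skipped."""
    return [
        (att_id, fname)
        for att_id, fname in rows
        if fname and not os.path.exists(os.path.join(filestore_root, fname))
    ]
```

Only the rows this returns are candidates for deletion (or for regeneration, in the case of auto-generated asset bundles), so you avoid removing records whose files are intact.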
There are two types of backups: one of them includes the filestore alongside the database dump in the .zip file.
This happens to me when I do a backup & restore using the dump-only variant.
That variant excludes the filestore, so you can end up with ir.attachment records referring to nonexistent files in the filestore.