Why does Apache return 403?

Why can't I see why Apache returns 403?!
If I look in the access log, the only information I get is:
193.162.142.166 - - [29/Jan/2014:18:34:26 +0100] "POST /api_test/callback.php HTTP/1.1" 403 2293
How can I get more information about why the request is forbidden/rejected?
The call is made from a payment gateway...
If the callback URL is plain HTTP there are no problems and it returns 200 OK.
If the callback URL is HTTPS, my server returns 403, and I need to know why.
The server has SSL (OpenSSL) installed and it works!
I have tried making the HTTPS request from http://web-sniffer.net/ and then there are no problems.
I don't get it. There must be something in the request headers from the payment gateway which results in the 403.
Update
Error log:
[Wed Jan 29 20:45:55 2014] [error] No hostname was provided via SNI for a name based virtual host
Solution
OK, it looks like the client doesn't support SNI:
http://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI
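For reference, whether non-SNI clients get a 403 or are simply served by the first-listed vhost is controlled by mod_ssl's SSLStrictSNIVHostCheck directive. A minimal sketch (the hostname is a placeholder):
# With name-based SSL vhosts, clients that send no SNI hostname are served
# by the first vhost listed for the address unless strict checking is on.
<VirtualHost *:443>
    ServerName default.example.com
    # "on" makes Apache answer non-SNI clients with 403;
    # "off" (the default) lets them fall through to this first vhost.
    SSLStrictSNIVHostCheck off
</VirtualHost>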

Use the LogLevel directive to adjust how verbose the error logs are, and increase the level until you can see what you want.
httpd 2.4 has better messages in many respects, and a more extensive list of LogLevel settings, than 2.2. So if you're using 2.2 it may be a bit harder to figure this out.
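For example, something along these lines (the per-module syntax is 2.4-only):
# Raise verbosity globally until the cause of the 403 shows up.
LogLevel debug
# httpd 2.4 only: raise verbosity for mod_ssl alone, keeping the rest quieter.
LogLevel warn ssl:debug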

Use apache http server reverse-proxy to send a request expected not to return a response

Need somebody to push me in the right direction.
We're using Apache HTTP Server (http1) as a reverse proxy to send requests to another HTTP server (http2). The challenge is that http2 is not expected to send an HTML page in the response back to http1.
The http2 log does show the request coming in. However, the http1 log shows an HTTP 502 error:
Internal error (specific information not available): [client ] AH01102: error reading status line from remote server localhost:9001
[proxy:error] [client ] AH00898: Error reading from remote server returned by /app/myContext/LogMessage
Here's http2 log which returns HTTP status 200:
"GET /app/myContext/LogMessage HTTP/1.1" 200 -
Please note that those requests that result in an HTML page work fine.
What would you think should be an approach here? Maybe using reverse proxy is not a good choice for this type of request?
We have httpd.conf on http1 set up this way:
ProxyPass "/app/myContext/"
http://localhost:9001/app/myContext/"
ProxyPassReverse "/app/myContext/"
http://localhost:9001/app/myContext/"
Disable ErrorLog on http1 altogether:
ErrorLog /dev/null
Have you tried to have http1 ignore these requests using mod_log_config? Going by the example there, the conditional might be something like:
CustomLog ... "expr=!(%{REQUEST_STATUS} -eq 502 && %{REQUEST_URI} =~ /app\/myContext/)"
Or a conditional LogFormat string might work too (note that %!502 suppresses an individual format item when the status is 502, not the whole line):
LogFormat %!502...
(h/t to Avoid logging of certain missing files into the Apache2 error log)
Is your problem that http1 is emitting 502 to the requestor? In that case, maybe use an <If> and a custom ErrorDocument?
<If "%{REQUEST_URI} =~ /app\/myContext/">
    ErrorDocument 502 "OK"
</If>
Went with the following solution: In http2 re-route the LogMessage call to fetch a blank html page:
1. Create blankfile.html in the /htdocs directory.
2. In httpd.conf add this line:
RewriteRule ^.*(app/myContext/LogMessage).* /blankfile.html [L]
This works for us since the whole purpose of LogMessage is to log the request in http2 access_log.
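Note that the rule only fires if mod_rewrite is active; a minimal sketch of the http2 side, with the same rule as above:
# mod_rewrite must be loaded and enabled for the LogMessage rewrite to fire;
# the request is still recorded in access_log before it is rewritten.
RewriteEngine On
RewriteRule ^.*(app/myContext/LogMessage).* /blankfile.html [L]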
Just wanted to thank @cowbert for working so diligently with me on this!

Apache with Kerberos (mod_auth_kerb) - dealing with unauthorized access & 401 log clutter

I have set up an Apache server to use mod_auth_kerb. It authenticates users via Kerberos and the Negotiate protocol, allowing them entry to the site if they hold a valid Kerberos ticket. It works in that it properly authenticates users. There is a problem however: HTTP 401 response codes clutter the Apache log file. They're from the same IP address each time, so I know that a client attempts to access the page, receives a 401, then tries again and gets an HTTP 200 OK back on the second try. It looks like the user is unidentified in the first attempt, but identified properly in the second attempt.
1.2.3.4 - - [07/Jan/2014:12:29:16 -0500] "GET /my_url/ HTTP/1.1" 401 1005
1.2.3.4 - user#REALM.EXAMPLE.COM [07/Jan/2014:12:29:16 -0500] "GET /my_url/ HTTP/1.1" 200 1724
How can I find out what is causing these 401 unauthorized responses? I can't record it over Wireshark because the connection is encrypted with HTTPS and TLS. Chrome's Developer Tools is only showing HTTP 200 OK responses, but I know that 401s are being generated from the Apache server logs. Any thoughts?
This is how HTTP authentication works: the server answers the first, unauthenticated request with a 401 challenge carrying a WWW-Authenticate: Negotiate header, and the client then repeats the request with its Kerberos token attached.
The initial 401 is part of the protocol, so there is nothing you can do about it.
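If the goal is just to keep the handshake 401s out of the access log, one option (assuming httpd 2.4, where mod_log_config accepts an expr= condition) is something like:
# Skip access-log entries for responses that are 401 challenges.
CustomLog "logs/access_log" combined "expr=!(%{REQUEST_STATUS} -eq 401)"
The trade-off is that genuine authorization failures disappear from the log as well.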

Bots throw 500 errors in the Apache access log

In my Apache error log I can see the following error occurring in enormous numbers every day.
[Tue Jan 15 13:37:39 2013] [error] [client 66.249.78.53] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
When I check the corresponding IP, date and time against the access log I can see the following:
66.249.78.53 - - [15/Jan/2013:13:37:39 +0000] "GET /robots.txt HTTP/1.1" 500 821 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
I've tested my robots.txt file in Google Webmaster Tools -> Health -> Blocked URLs and it's fine.
Also, when some images are accessed by bots, it throws the following error:
Error_LOG
[Tue Jan 15 12:14:16 2013] [error] [client 66.249.78.15] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
Accessed_URL
66.249.78.15 - - [15/Jan/2013:12:14:16 +0000] "GET /userfiles_generic_imagebank/1335441506.jpg?1 HTTP/1.1" 500 821 "-" "Googlebot-Image/1.0"
Actually the above image URL (and several other images in our access log) is not available on our site (it was available before a website revamp that we did in August 2012), and we throw 404 errors when we go to those invalid resources.
However, once in a while, it seems that bots (and even human visitors) generate this type of error in our access/error log, only for static resources like images that don't exist and for our robots.txt file. The server throws a 500 error for them, but when I actually try it from my browser, the images are 404 and the robots.txt is 200 (success).
We are not sure why this is happening and how a valid robots.txt and an invalid image can throw a 500 error. We do have a .htaccess file, and we are sure that our (Zend Framework) application is not being reached, because we have a separate log for that. Therefore, the server itself (or .htaccess) is throwing the 500 error "once in a while" and I can't imagine why. Could it be due to too many requests to the server, or how can I debug this further?
Note that we only noticed these errors after our design revamp, but the web server itself stayed the same.
It might be useful to log the domain that the client is accessing. Your server might be accessible via multiple domains, including the raw IP address. When you're testing, you're doing so via the primary domain and everything works as expected. What if you try to access the same files via your IP (http://1.2.3.4/robots.txt) vs. the domain (http://example.com/robots.txt)? Also example.com vs. www.example.com or any other variation that points to the server.
Bots can sometimes hold on to IP/domain information long after an address has changed, and may be attempting to access something whose rules were changed months ago.
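One way to capture that, sketched against the stock combined format (the nickname combined_host is made up here):
# Append the Host header each client sent, so traffic arriving via a stale
# domain or the raw IP stands out in the access log.
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" host=\"%{Host}i\"" combined_host
CustomLog "logs/access_log" combined_host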

Apache multipart POST "pass request body failed"

We are having problems with our web server (which is configured ssl -> apache -> jetty) randomly rejecting multipart upload POST requests with a 400 Bad Request error code. The apache error log (on info level) shows the following two errors:
[info] [client x1.y1.z1.w1] (70007)The timeout specified has expired: SSL input filter read failed.
[error] proxy: pass request body failed to x.y.z.w:8087 from x1.y1.z1.w1
[info] [client x1.y1.z1.w1] Connection closed to child 74 with standard shutdown
or
[info] [client x2.y2.z2.w2] (70014)End of file found: SSL input filter read failed.
[error] proxy: pass request body failed to x.y.z.w:8087 from x2.y2.z2.w2
[info] [client x2.y2.z2.w2] Connection closed to child 209 with standard shutdown
Both cases result in a 400 Bad Request on the client side. Sometimes our Jetty server doesn't even see the request, meaning that it gets rejected on Apache's side; sometimes it starts processing it only to be rejected (this manifests itself as a MultipartException in our UploadFilter).
We have mod_proxy set up to use a fallback load-balancing scheme, but the logs show that a fallback has not been triggered, leading me to believe this is not the cause of the problem.
I tried setting SetEnv proxy-sendcl 1 but that didn't change anything.
The upload requests are around 1 MB. Only these multipart file POST requests fail; we have multiple GET requests coming in at the same time and they always work as expected.
If anyone can share any advice or suggestions I would be very grateful! Thank you
If you are using an AJP-enabled backend server, like Tomcat, you may try using mod_proxy_ajp instead of mod_proxy_http. I had a similar problem on a heavy-upload app and I fixed it by changing
ProxyPass /myapp http://localhost:8080/myapp
ProxyPassReverse /myapp http://localhost:8080/myapp
to
ProxyPass /myapp ajp://localhost:8009/myapp
ProxyPassReverse /myapp ajp://localhost:8009/myapp
It's also required to enable the AJP connector on the Tomcat side, of course.
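For reference, that usually means enabling the AJP connector in Tomcat's server.xml, roughly like this (the port matches the ProxyPass above):
<!-- Enable the AJP connector that mod_proxy_ajp talks to. -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />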
Hope it helps!
Please check this one: https://issues.apache.org/bugzilla/show_bug.cgi?id=44592
The problem could be caused by the HTTP keepalive (KeepAliveTimeout directive) that is killing an established connection with a slow client (tcp latency, slow request body creation, etc).
Try raising the KeepAliveTimeout in your Apache conf, but don't set it too high, to avoid opening your server up to DoS.
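For instance (the value is purely illustrative):
# Seconds Apache waits for the next request on a persistent connection
# before closing it; higher values hold workers for longer (DoS risk).
KeepAliveTimeout 15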
You may be seeing this due to timeouts resulting from Apache trying to buffer the entire POST body before passing it through.
Enabling proxy-sendcl may exacerbate this, since this can force Apache to spool a large POST to disk just to calculate the Content-Length when "the original body was sent with chunked encoding (and is large)".
To avoid this, set the environment variable proxy-sendchunked.
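A minimal sketch, with the upload path as a placeholder:
# mod_proxy_http honors this variable: forward chunked request bodies
# as-is instead of spooling them to compute a Content-Length first.
<Location "/upload">
    SetEnv proxy-sendchunked 1
</Location>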
After fixing the main problem with the upload, I was still getting some weird logs like:
[reqtimeout:info] [pid 18164:tid 140462752990976] [client 201.76.162.37:41473] AH01382: Request header read timeout
I was able to drastically reduce the frequency at which it occurs by increasing the limits of mod_reqtimeout, changing the values of the RequestReadTimeout directive. You can change the default values or redeclare this directive in your VirtualHost.
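Something along these lines (the values are illustrative and should be tuned for your clients):
# Allow 20 seconds for the request headers, extended up to 40 seconds as
# long as data keeps arriving at 500 bytes/s; same rate rule for the body.
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500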

Apache: multiple ../ in query string = internal server error (error 500)

Here's the problem: when requesting a URL like http://server/path/to/file.html?param=../../something/something I get the response:
500 Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
...
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
log says:
xxx.xxx.xxx.xxx - - [05/Mar/2010:13:43:29 -0500] "GET /path/to/file.html?param=../../something/something HTTP/1.1" 404 - "-" ...
If I remove one instance of '../' in the query string (request http://server/path/to/file.html?param=../something/something ), I get the requested page. It gives the error only on two or more '../'s.
This is on some hosting server, and the same thing gives no error on my local servers (LAMP, WAMP). I suppose it's about the Apache configuration, but I don't know what options to check.
Apache 2.2.14 (Unix) is in question; PHP is installed (but this clearly doesn't have anything to do with PHP when I'm requesting a plain ol' HTML file), and mod_rewrite rules are disabled (no .htaccess files in the requested file's path).
Any ideas on how to successfully pass multiple '../'s in the query string?
Turned out to be a security precaution enabled by default by the hosting provider (not allowing 'backpaths'), but I'm not sure which one, or where it's set.
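The provider's exact mechanism was never identified; purely as an illustration, a mod_security rule along these lines (the id and status here are made up) could produce exactly this pattern, a 500 plus a failing ErrorDocument, for repeated '../' in the query string:
# Hypothetical rule: deny any request whose URI or query arguments contain
# two consecutive "../" path-traversal sequences.
SecRule REQUEST_URI|ARGS "\.\./\.\./" "id:100001,phase:2,deny,status:500"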