I have two servers. They are both running the same code and are connected to the same database, but when I make a certain Ajax call, one server works fine and the other throws an Internal Server Error. The Apache log on the failing server says 'Premature end of script headers'. This makes me think there is some Apache problem on one machine, but the Apache folder looks identical on both machines.
What sort of differences between servers would cause one to throw this error?
The error tells you that no headers were output where they should have been. Typically this happens when a CGI script fails for some reason and hence outputs nothing, or outputs other content before the headers. A CGI script must print a complete header block (at minimum a Content-Type line followed by a blank line) before any body output; if it dies before doing so, Apache reports 'premature end of script headers'.
To debug it: look in the Apache error log. It will most likely tell you what went wrong.
To find the error log: look for the ErrorLog directive in the Apache configuration file.
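For example, assuming a typical layout (paths vary by distribution), something like grep -iR ErrorLog /etc/httpd/conf/ will show where the log is written, and running tail -f on that file while you repeat the failing request should show the script's actual failure.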
Swa66 and others have provided good answers, but I thought I would add a few more things to look into after you have reviewed the Apache error logs, which should show you the text that was returned where Apache was expecting a valid HTTP header.
You mentioned that the servers are responding to Ajax queries, so it is possible that you are running into some kind of CORS issue, or an issue with how Apache is configured to pass HTTP headers through to the CGI process.
It is not uncommon to include JWT or other auth headers as part of an Ajax service, and you may find that you need to add a .htaccess file (or equivalent server configuration) permitting these headers to flow through.
Check your working server to see if the headers module is enabled (something like this in your httpd.conf: LoadModule headers_module libexec/apache2/mod_headers.so).
Also look for .htaccess or domain-specific configuration of the header-related options. If need be, dig around Apache CORS configuration and header-related issues.
You may need some configuration similar to the following.
.htaccess
SetEnvIf Authorization .+ HTTP_AUTHORIZATION=$0
Header set Access-Control-Allow-Origin "*"
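Note that the Header directive requires mod_headers (the LoadModule line above) and SetEnvIf requires mod_setenvif, so both modules need to be enabled for this snippet to take effect.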
It is a little difficult to be certain of your issue without more specific diagnostics. Perhaps you could provide debugging details from your browser describing the Ajax/XHR requests (including the headers) and the relevant error log entries from Apache. Assuming that you are using Perl/CGI, snippets of the CGI configuration and any .htaccess files from both the working and the errant server would also help.
You may also want to do a quick check of the permissions of the executables and try to run them from the command line.
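For example (the path is illustrative): compare ls -l /var/www/cgi-bin/yourscript.cgi on both machines to check ownership and the execute bit, and try running the script as the Apache user, e.g. sudo -u apache /var/www/cgi-bin/yourscript.cgi, to confirm it prints a complete header block followed by a blank line before any other output.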
It could be a very basic configuration option such as the one described in 'apache centos fails to serve images - premature end of script headers'. We really need a little more diagnostic detail to narrow down the possible causes.
Related
I am doing some reverse engineering on a website.
We are using a LAMP stack under CentOS 5, without any commercial/open-source framework (Symfony, Laravel, etc.), just plain PHP with an in-house framework.
I wonder if there is any way to know which files on the server were used to serve a request.
For example, let's say I am requesting http://myserver.com/index.php.
Let's assume that 'index.php' calls other PHP scripts (e.g. to connect to the database and retrieve some info), includes a couple of other HTML files, and so on.
How can I get the list of those accessed files?
I already tried enabling the server-status handler in Apache, and although it works I can't get what I want (I also passed the 'refresh' parameter).
I also used lsof -c httpd, as suggested in other forums, but it produces a very big output and I can't find what I'm looking for.
I also read the apache logs, but I am only getting the requests that the server handled.
Some other users suggested adding PHP directives like 'self', but that means I would need to know beforehand which files to modify to include that directive, and finding those files is precisely what I am trying to do.
Is it actually possible to trace the internal activity of the server and get those file names and locations?
Regards.
Not that I have tried it yet, but it looks like mod_log_config is the answer to my own question.
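For what it's worth, a sketch of how that could look (untested, and the log name is a placeholder): %f is the mod_log_config format string for the filename on disk, so something like this in httpd.conf would record which file Apache mapped each request to:

LogFormat "%h %t \"%r\" %f" servedfiles
CustomLog logs/servedfiles.log servedfiles

Note that this only captures the entry script (e.g. index.php), not the files it includes; for the full list, PHP's get_included_files(), called from an auto_append_file script, is closer to what you are after.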
I am trying to grab JSON data from monit and display it on a status page for management to see the current status of a handful of processes. This info would be displayed in Confluence running on the same machine, but since Confluence (Apache) and monit run on different ports, it is considered cross-domain.
I know I could write a server-side process to serve this data, but that seems like overkill and would actually take longer than it took to set monit up in the first place :)
The simplest solution is to configure monit's headers (Access-Control-Allow-Origin) to allow the other server. Does anyone know how to do this? I suspect there is a way, since M/Monit would run into the same issue. I have made some blind attempts on the "httpd... allow" lines, but monit complains about the syntax with x.x.x.x:port or when the keyword "port" appears in that location.
OK... going to answer my own question (sort of).
First, I think I may have asked the question wrong. I don't deal with a lot of cross-domain issues. Sorry about that.
But here is what I did to get to the monit info from the other servers; it is pretty simple using proxies in Apache on the main server:
ProxyPass /monit http://localhost:2812
ProxyPassReverse /monit http://localhost:2812
ProxyPass /monit2 http://server2:2812
ProxyPassReverse /monit2 http://server2:2812
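For this to work, mod_proxy and mod_proxy_http must be loaded. Assuming a stock httpd layout (module paths vary by distribution), that means lines like these in httpd.conf:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so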
I did this for each of the servers and tested that I can reach both the monit web interface and the _status?format=json sub-pages. I can now call them using Ajax on our main web page.
This also has the benefit that I can lock down monit's access control to just the main server but still have the info show on a more visible page. :)
I don't think you need a proxy just to display monit's API or HTTP info; it depends on how you have your network and DNS configured. If you'd like monit to listen only on localhost, then a proxy might be necessary. But monit does have a facility for host/IP-based access control, using allow directives in its own monitrc config file.
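For example, a minimal monitrc sketch (the subnet is a placeholder):

set httpd port 2812
    allow localhost
    allow 192.168.1.0/255.255.255.0

Any host matching an allow line can then fetch the status pages directly, without a proxy.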
I want to use Apache 2.2 httpd to SSI-include URLs using
<!--#include virtual="/content/foo.html" -->
My problem is that if the SSI-included page doesn't exist on my app server, the server responds with a 404 and a default error page in HTML, which is then stitched into my page via the include.
For failing (4xx, 5xx) SSI includes I simply want the include to contribute an empty string to my page.
It doesn't appear that Apache 2.2 supports the 'onerror' attribute (which I think would solve this), and I don't see any other options.
http://httpd.apache.org/docs/2.2/mod/mod_include.html
You could potentially add a rewrite to handle those portions of your application's URI space, but I'd advise against it. The approach you are investigating doesn't fix the real problem: the concept of SSI hinges on the included files being consistently available. If the included files are returning 4xx or 5xx class errors, the onus is on you to fix those errors.
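If you do experiment with the rewrite route anyway, a hedged .htaccess sketch (paths are placeholders, and I have not verified it against SSI subrequests) might look like:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^content/ /empty.html [PT,L]

where /empty.html is a zero-byte file, so a missing include contributes an empty string rather than an error page.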
Within httpd.conf or vhosts file it’s possible to:
log all errors to a log file using ErrorLog: http://httpd.apache.org/docs/1.3/mod/core.html#errorlog
forward a user on to a custom error page depending on the error code using ErrorDocument: http://httpd.apache.org/docs/2.0/mod/core.html#errordocument
Are there any other alternatives for dealing with Apache errors? What I'd really like to do is somehow get hold of an Apache error, when it occurs, from PHP.
(We're looking to build a tool which monitors errors occurring globally on our servers, in a web-based interface using PHP. Currently I'm thinking of logging errors to a log file, then using PHP to monitor that file for changes and parse it into something usable.)
You can use the ErrorDocument directive to redirect errors to your own PHP application that:
Displays an error message to the user, and
Performs any other work necessary to handle the error.
That is, the target of your ErrorDocument does not need to be simply a static error page; it can be any URL you want and you can take whatever action you would like in response to the request.
This will probably be more effective than scraping the log.
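As a minimal sketch of that idea (file and log paths are placeholders, and it relies on the REDIRECT_* variables Apache sets on ErrorDocument redirects), the Apache side could be:

ErrorDocument 404 /error-handler.php
ErrorDocument 500 /error-handler.php

and error-handler.php could then both log and respond:

<?php
// Apache sets REDIRECT_STATUS and REDIRECT_URL when it internally
// redirects a failed request to an ErrorDocument target.
$status = isset($_SERVER['REDIRECT_STATUS']) ? $_SERVER['REDIRECT_STATUS'] : 'unknown';
$url    = isset($_SERVER['REDIRECT_URL'])    ? $_SERVER['REDIRECT_URL']    : 'unknown';

// Append a line for the monitoring tool to pick up (path is a placeholder).
error_log(date('c') . " status=$status url=$url\n", 3, '/var/log/app/apache-errors.log');

// Still show the user something sensible.
header('Content-Type: text/html; charset=utf-8');
echo '<h1>Sorry, an error occurred (' . htmlspecialchars($status) . ').</h1>';

This keeps the user-facing error page and the error capture in one place.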
I have the following system configured:
Tomcat running behind Apache
Now, I have some URLs on which Cache-Control max-age, Last-Modified, and ETag headers are set.
My expectation is that when client 1 makes a call, the page should be served from Tomcat but cached by Apache's mod_cache module, so that when the next client makes a call, the page is served from Apache without hitting Tomcat, provided the page is still fresh. If the page isn't fresh, Apache should make a conditional GET to revalidate the content it holds.
Can someone tell me if there is any fundamental mistake in this thinking? It doesn't work that way in practice: in my case, when client 2 makes a call, it goes straight to Tomcat (not even a conditional GET).
Is my thinking incorrect, or is my Apache configuration incorrect? Thanks.
The "What can be cached" section of the docs has a good summary of factors - such as response codes, GET request, presence of Authorization header and so on - which permit caching.
Also, set Apache's LogLevel to debug in httpd.conf, and the error logs will give you a clear view of whether or not each request was cached.
You should be able to trace through what is happening based on these two.
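If it turns out caching simply was not enabled, a minimal httpd 2.2 sketch (module paths and the cache root are assumptions) would be:

LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so

<IfModule mod_disk_cache.c>
    CacheRoot /var/cache/apache2
    CacheEnable disk /
</IfModule>

Without a CacheEnable covering the proxied path, mod_cache never stores anything, which would match the behaviour you describe.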