Configure Access-Control-Allow-Origin for monit

I am trying to grab JSON data from monit and display it on a status page so management can see the current status of a handful of processes. This info would be displayed in Confluence running on the same machine, but since Confluence (Apache) and monit are running on different ports, it is considered cross-domain.
I know I can write a server-side process to serve this data, but that seems like overkill and would actually take longer than it took to set monit up in the first place :)
The simplest solution is to configure monit's headers (Access-Control-Allow-Origin) to allow the other server. Does anyone know how to do this? I suspect there is a way, since M/Monit would run into the same issue. I have made some blind attempts on the "httpd... allow" lines, but it complains about the syntax with x.x.x.x:port or about using the keyword "port" in that location.

OK... going to answer my own question (sort of).
First, I think I may have asked the question wrong. I don't deal with a lot of cross-domain issues. Sorry about that.
But here is what I did to get at the monit info from the other servers. It is pretty simple using proxies in Apache on the main server:
ProxyPass /monit http://localhost:2812
ProxyPassReverse /monit http://mainserver/monit
ProxyPass /monit2 http://server2:2812
ProxyPassReverse /monit2 http://mainserver/monit2
I did this for each of the servers and tested that I can get to both the monit web interface and the _status?format=json sub-pages. I can now call them using ajax on our main web page.
This also has the benefit that I can lock down monit's access control to just the main server but still have the info show on a more visible page. :)

I don't think you would need a proxy just to display monit's API or HTTP info. It depends on how you have your network and DNS configured. If you'd like to use only localhost, then a proxy might be necessary. But monit does have a facility for global host/IP access control, using allow directives in its own config file (monitrc).
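For reference, a minimal sketch of what that could look like in monitrc, following monit's standard httpd syntax (the hostname "mainserver" is illustrative):
set httpd port 2812 and
    use address 0.0.0.0    # listen on all interfaces, not just localhost
    allow localhost        # keep local access working
    allow mainserver       # only the Apache host that proxies the requests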


Virtual server on virtualmin keeps redirecting to wrong website

I have created a virtual server, say aaa.com, but when I access the site (via editing my hosts file on Windows 7, because I have a live aaa.com running on the Internet), it brings me to another virtual server's site I have, like bbb.com.
Why is that? I don't have any redirection running. Not in my script files (like HTML or PHP), no redirection set under "Server Configurations" -> "Website Redirects", and none at "Services" -> "Click Configure Website" -> "Aliases and Redirects." The only script files I have are fresh new WordPress installation files (under home/aaa/public_html).
How do I fix this?
Mullazman is right (thanks!). I had this problem right after enabling SSL on domain A; then all the domains in the same installation were pointing to A.
I fixed it by editing the file located in /etc/apache2/sites-enabled/A.conf and changing the first line:
Wrong line -> <VirtualHost A.B.C.D:80>
Correct line -> <VirtualHost *:80>
Had the same issue. For anyone interested, it's because I had <VirtualHost A.B.C.D:80> at the header of my sites-enabled/aaa.com.conf, which was picking up all requests and sending them to the first host.
Change it to <VirtualHost *:80> and it starts directing traffic to the correct virtual hosts.
It was triggered when I enabled SSL on aaa.com; for some reason it rewrote the config file to use IP-based filtering instead of the domain name.
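To illustrate, name-based virtual hosting needs each site defined against the wildcard address so Apache can route by ServerName; a sketch with illustrative paths:
<VirtualHost *:80>
    ServerName aaa.com
    DocumentRoot /home/aaa/public_html
</VirtualHost>
<VirtualHost *:80>
    ServerName bbb.com
    DocumentRoot /home/bbb/public_html
</VirtualHost>
With an IP-based <VirtualHost A.B.C.D:80> at the top, that entry matches every request arriving on that IP regardless of the Host header, which is exactly the symptom described above.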
Try deleting the browser cache with Ctrl+F5, then try again. If that doesn't help, check the virtual host configuration files; maybe the problem lies there.
The solution I found
I had the same problem and ended up with a lot of doubts, so I searched for a solution for this case. I hope this helps.
1 - Should BIND have the external or the internal IP for the domain? I use only one IP for all servers, and in BIND all domains are set to the external IP.
A: You should configure the internal IP in Virtualmin (preferably by editing the file directly). Only localhost should be 127.0.0.1.
2 - Does NGINX need any configuration, such as removing the IP and just putting (listen *:80) instead of (listen 288.218.198.981:80)?
A: I changed this configuration, but then I had problems with DNS and went back to using the internal IP (not localhost). Normally this IP starts with something like 10.1xx.xx.xx.
But which configuration works in general? Repeat these steps. If you still get the error, make a backup, and then in the Virtualmin settings:
Edit Virtual Server >> Enabled Features >> uncheck NGINX, BIND, and NGINX SSL.
It will ask for confirmation; click to confirm. After the process completes, return to the same option and re-enable them. This deletes the old entries and creates new ones. (This works great for anyone who changed hosting and still has old settings.)
If you are importing a backup, do not select the DNS and NGINX options. One tip is to create the virtual server for your domain/site first of all, and only then import directories and databases. That way you will not have problems with DNS and wrong redirects.
Update
This also occurs when the SSL certificate is not issued correctly, or when folder permissions are incorrect: chmod folders to 0755 and files to 0644.
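For the permissions fix, a shell sketch (the document root path is illustrative):
find /home/aaa/public_html -type d -exec chmod 0755 {} +
find /home/aaa/public_html -type f -exec chmod 0644 {} +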
SOLUTION!!!
Cheap workaround. Let us say our domain is domain.xyz.
Under the BIND DNS master zone for domain.xyz, create a CNAME record (I believe it is listed in Webmin as "Name Alias") and name it 000.domain.xyz.
Under Apache, create a virtual server with the name 000.domain.xyz and make sure it has the same directory as domain.xyz (see the sketch below).
After this is done you are golden; all your websites will come up as they should.
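A sketch of that 000 virtual host; Apache falls back to the first vhost it loads, and Debian-style sites-enabled includes are read alphabetically, so the 000 prefix makes this one the default (the path is illustrative):
<VirtualHost *:80>
    ServerName 000.domain.xyz
    DocumentRoot /home/domain/public_html   # same directory as domain.xyz
</VirtualHost>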
Is it proper? Well, maybe not. Does it work? Like a charm, of course; otherwise I wouldn't be sharing it. For some reason, given the way the servers are listed, Apache defaults to the first one on the list, and this fixes that. There should be a way of pinning the servers, or something to prevent this from happening. What a pain in the rear; I spent a full day dumbfounded, thinking "what in the world is going on, I am losing my touch."
If this helps, give a like; if it's wrong, apologies. All I know is that it works.
Read the thread.
Many folks claim this is an SSL thing.
Zero people have alluded to the true method of fixing it or the proper directions to do so, or if they did, I'm too blind to see it.
To the guy below me commenting about caches: hmm. Browser caches for my website didn't exist on the devices I tried, so I verified that was not the problem. But yes, this is a typical problem with a lot of things. It is the only reason I have several browsers on my PC; for a while there were pages that Chrome would handle that IE wouldn't, or Firefox would best them both. The cache is always a pain, and clearing it is usually one of my first steps in troubleshooting any issue with web pages. I'll even try OpenDNS or other DNS servers.
But holy cats, I can't believe how fast DNS updates once you have things set. It makes me wonder if there is a lot of fudge in the "24-48 hours" propagation estimate you get when you purchase hosting. After my experience trying to figure out what was causing the issue here, I think there is. Some servers struggle, yes, but for the most part it was pretty much instant for me.
In my case it happened after creating an SSL certificate; I had forgotten to do:
Edit Virtual Server -> Enable Apache SSL Website

Bad Request: Your browser sent a request that this server could not understand

Although I am by no means an expert on Ubuntu, I have had two servers running for a couple of years with no problems.
Last night, when attempting to access a local website on one of them I got the error:
**Bad Request**
Your browser sent a request that this server could not understand.
Additionally, a 400 Bad Request error was encountered while trying to use an ErrorDocument to handle the request.
After several hours of frustration and no success, I rebuilt the server. While Ubuntu was installing I went to the other server and got the exact same error.
The first server has now been rebuilt and it displays the same error.
I have shut down every computer on the network. Powered down the router and started over.
In addition to the two servers, the network consists of three windows machines and a Ubuntu desktop.
I have tried isolating the machines from the Internet, I have tried both wired and wireless clients.
Going to localhost on the servers displays the Ubuntu Apache Default page.
The only thing that happened around the time the problem started was that Windows decided to shut down this machine for an update. I don't see how that could have caused a problem, and I have isolated this machine from the network and the problem persists.
I cleared cookies and used five different browsers, and they all report the same error. I'm about out of ideas and am looking for any suggestions.
In my case, it was actually an underscore (_) in DocumentRoot that caused the problem and hours of debugging. Everything worked fine once I removed it from my DocumentRoot path.
This is due to the RFC 3986 update, which treats underscores as unsafe in virtual-host server names and other elements. In my case I could not change the URL, so I allowed the underscores by enabling unsafe URLs: just add HttpProtocolOptions unsafe to the httpd.conf file.
https://httpd.apache.org/docs/2.4/mod/core.html#httpprotocoloptions
Newer httpd versions do not allow underscores in the hostname; in that case you have to add HttpProtocolOptions unsafe to the httpd.conf file, and it works fine after that.
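In context, the directive is a single line in httpd.conf (available in Apache 2.4.25 and later):
# relax strict request parsing so underscores in hostnames are accepted
HttpProtocolOptions Unsafe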
Okay, I'm 99.99% sure I have found the problem, with the help of Capsule. The key to success was changing Apache's error level to debug. This gave me a starting point, and from there a bit of trial and error and all was well.
I had two servers, B777 and B767, which I used for local development. For development websites I used names like www.something.767 and www.something.777. All of the sites were listed in a hosts file.
For several years this worked just fine. For some reason I may never understand, last evening I began to get the aforementioned error on both servers, and on another I built this evening.
It seems the problem was using numbers in the domain name. As soon as I changed a domain name from www.something.767 to www.something.local (or apparently anything else non-numeric), everything was back to normal.
I had to remove the underscore (_) from the ServerName directive, as well as the hostname in /etc/hosts.
However, the underscore in DocumentRoot is just fine.
Thus, the relevant line in /etc/hosts looks like this:
127.0.0.1 mycoolsite.localhost
And the relevant block in /etc/apache2/extra/httpd-vhosts.conf looks like this:
<VirtualHost *:80>
    DocumentRoot "/Users/satoshi/Sites/my_cool_site"
    ServerName mycoolsite.localhost
    ErrorLog "/private/var/log/apache2/my_cool_site.localhost-error_log"
    CustomLog "/private/var/log/apache2/my_cool_site.localhost-access_log" common
</VirtualHost>
Don't forget to run apachectl restart after making your changes.
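Before restarting, it can also help to validate the syntax; apachectl's configtest is the standard check:
apachectl configtest   # reports "Syntax OK" or points at the bad line
apachectl restart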
In Docker Compose, if you use the networking that lets you use container names as URLs/hosts to communicate back and forth, make sure you don't use underscores in the names (per Nero's answer).
This took me forever to debug, so I'm including the initial error text below (again) so that hopefully Google serves this question a little higher to the next person trying to fix PHP curl on a Docker network.
400 Bad Request Bad Request
Your browser sent a request that this server could not understand.
Additionally, a 400 Bad Request error was encountered while trying to use an ErrorDocument to handle the request.
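A sketch of the naming rule in a docker-compose.yml (service names and images are illustrative):
services:
  web-app:             # reachable as http://web-app from other containers
    image: php:8.2-apache
  api-backend:         # use "api-backend", not "api_backend": an underscore
    image: httpd:2.4   # in the hostname trips strict httpd request parsing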
This error haunted me for days, until I finally got it.
In my case, I was making a curl request to a URL that had a space at the end. Can you believe it? Removing the space was enough to make it work fine.
For me, the issue was that the upstream backend definition of the Nginx server contained an underscore. It was hard to detect because centos1 and centos2 were remote hosts running Nginx, whereas the last server was a localhost httpd server. The two remote servers were load-balancing fine, but not the last one, which made it difficult to figure out that the underscore was the issue.
Before
location / {
    proxy_pass http://server_group;
}
upstream server_group {
    server centos1;
    server centos2;
    server localhost:8085;
}
After
location / {
    proxy_pass http://servergroup;
}
upstream servergroup {
    server centos1;
    server centos2;
    server localhost:8085;
}
I am using Microsoft Identity. In my case, the number of user claims stored in the claims table in the db was very high; since the claims end up serialized into the authentication cookie, the request headers presumably grew too large for the server. The problem was solved when I deleted the user's claims in the table and cleared the cookie in the user's browser.
Try checking your URL again. It could be a missing slash. :-)

Is it possible to find out the version of Apache HTTPD when ServerSignature is off?

I have a question: can I find out the version of Apache when the full signature is disabled? Is it even possible? If it is, how? I think it must be possible, because blackhats hack big corporate servers, and knowledge of the versions of the victim's services is essential for that. What do you think? Thanks.
Well, for a start there are two (or even three) things to hide:
The Server response header, which shows the version. It cannot be turned off entirely in the Apache config, but it can be reduced to just "Apache" (see the sketch below).
ServerSignature, which displays the server version in the footer of error pages.
X-Powered-By, which is not used by Apache itself but by back-end servers and services it might send requests to (e.g. PHP, J2EE servers, etc.).
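A sketch of the stock httpd.conf directives for the first two items (the php.ini line applies only if PHP is in play):
ServerTokens Prod        # Server header becomes just "Apache"
ServerSignature Off      # no version string in error-page footers
# and in php.ini, to drop X-Powered-By:
#   expose_php = Off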
Now, servers do leak some information through differences in how they operate or how they interpret the spec. For example, the order of response headers, capitalisation, and how they respond to certain requests all give clues to what server software might be answering HTTP requests. However, using this to fingerprint a specific version of that software is trickier, unless there was an obvious change observable from the client side.
Other options include looking at the server-status page, though you would hope any administrator clever enough to reduce the default Server header would also restrict access to the server-status page. Or going in through another security hole (e.g. being able to upload executable scripts or the like).
I would guess most hackers would more be aware of bugs and exploits in some versions of Apache or other web servers and try to see if any of those can be exploited rather than trying to guess the specific version first.
In fact, as an interesting aside, Apache themselves have long been of the opinion that hiding server header information is pointless "security through obscurity" (a point I, and many others, disagree with them on), even putting this in their documentation:
Setting ServerTokens to less than minimal is not recommended because it makes it more difficult to debug interoperational problems. Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of "security through obscurity" is a myth and leads to a false sense of safety.
And even allowing open access to their server-status page.

Premature end of script headers on one server but not another

I have two servers. They are both running the same code and are connected to the same database. But when I make a certain ajax call, one server works fine and the other throws an Internal Server Error. In the Apache log on the failing server it says 'Premature end of script headers'. This makes me think there is some Apache misconfiguration on one machine, but the Apache folder looks identical on both machines.
What sort of differences between servers would cause one to throw this error?
The error tells you that no headers were output where they should have been. Typically it happens with a CGI script that fails for some reason and hence does not output anything, or that outputs content before outputting the headers.
To debug it: look in the Apache error log. It will most likely tell you what went wrong.
To find the error log: look for the ErrorLog directive in the Apache configuration file.
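For reference, a minimal well-formed CGI response looks like this (a shell CGI for illustration); the blank line after the headers is mandatory, and any output before the headers triggers exactly this error:
#!/bin/sh
# headers first...
echo "Content-Type: text/plain"
# ...then a mandatory blank line...
echo ""
# ...then the body
echo "hello from cgi"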
Swa66 etc. have provided good answers, but I thought I would add a few more things to look into after you have reviewed the Apache error logs, which should give you the text that was returned where Apache was expecting a valid HTTP header.
You mentioned that the servers are responding to ajax queries, so it is possible that you are having some kind of CORS issue, or an issue relating to how Apache is configured to allow HTTP headers to be passed through to the CGI process.
It is not uncommon to include JWT or other auth headers as part of an ajax service, and you may find that you need to add a .htaccess file permitting these to flow through.
Check your working server to see if the headers module is enabled (something like this in your httpd.conf: LoadModule headers_module libexec/apache2/mod_headers.so).
Also look for the .htaccess or domain-specific configuration of the header-related options. If need be, have a dig around Apache CORS configuration and header-related issues.
You may need some configuration similar to the following.
.htaccess
SetEnvIf Authorization .+ HTTP_AUTHORIZATION=$0
Header set Access-Control-Allow-Origin "*"
It is a little difficult to be certain of your issue without more specific diagnostics. Perhaps you could provide debugger details from your browser describing the ajax/XHR requests, including the headers etc., and the relevant error log entries from Apache. Assuming that you are using Perl/CGI, perhaps add snippets describing the CGI configuration from the config and any .htaccess files on the working and errant servers.
You may also want to do a quick check of the permissions of the executables and try to run them from the command line (see the sketch below).
It could be a very basic configuration option such as described at "apache centos fails to serve images - premature end of script headers". We really need a little more diagnostic detail to focus on the possible causes.
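A quick shell sketch of those checks (the script path is illustrative):
# the web-server user must be able to execute the script
ls -l /var/www/cgi-bin/myscript.cgi
chmod 755 /var/www/cgi-bin/myscript.cgi
# run it by hand; the first lines printed must be valid headers
# followed by a blank line
./myscript.cgi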

How can I test a comet ajax site on a single host and work around browser simultaneous connection limit?

I am using the comet long-polling technique with apache, php, jquery.
I've got a basic comet update running and it works great. I'm now attempting to build a more complex comet script, and I want a better way to debug.
My comet scripts use $.ajax() with a long timeout, and the server side just sleeps until it either runs up to the timeout or has an event to send to the client. The comet requests go to a different subdomain than the main ajax requests.
For normal pages I edit and test on a Linux laptop. I've got Apache, MySQL, and PHP with a test database and a mirror image of the site, so I can edit, save, and see the changes with no upload step. For the comet stuff I've been having to upload to a server to test. This requires me to set up a few fake servers, but mostly it requires me to upload the changed files for each test. I've got a mostly automatic upload script, but it's still too slow.
The problem with testing locally is the long timeout. The browser won't open another connection to the same server while the comet request is still open. I don't have a subdomain locally, so all the requests go to the same host and basically block each other.
I've tried a number of things to make this work and none really do it. First I tried changing my browser's setting for the number of simultaneous connections. This didn't work in Firefox on Linux, and I didn't find anything about changing this limit in other browsers.
I tried setting my hosts file to give me two names that map to my IP address. Then I tried configuring VirtualHost conf directives in Apache, but that didn't work. I think it's because Apache is looking for an actual DNS server to tell it the hostname, not just my /etc/hosts file. Maybe I could run a local DNS server to fool Apache into thinking my box has two names, but that seems like a real long way around this problem.
So, does anyone have an idea of how to make this work on one ip address/host?
I'm new to the comet thing, so maybe I've just got the wrong idea about something. Maybe this isn't even possible. Either way, it's time to just ask if this is already a solved problem.
It really should be possible to use /etc/hosts to fool Apache. It certainly works on Ubuntu Hardy with Apache 2.2.
Try giving different hostnames to your local address. Simply add a line like this to /etc/hosts:
127.0.0.1 a.example.com b.example.com c.example.com d.example.com
(Note: use a tab after the IP.)
Validate this with a ping:
ping a.example.com
In your Apache configuration you can then use a wildcard alias together with a named virtual host:
<VirtualHost *:80>
    ServerName example.com
    ServerAlias *.example.com
    ## snip ##
</VirtualHost>
Instead of using example.com, you might want to use something that's under your control. I use a local subdomain of our company's domain (e.g. something.local.molindo.at).
Now you can use different subdomains for your test, each with its own limitation on concurrent connections.
You may need to restart your browser to get this working.
I have made something similar, and my hosting reports that my max queries limit has been reached, which actually should not happen. But I have read that if my PHP code is in an infinite loop (i.e. the sleep used for long-polling), the hosting provider detects it and treats the db connection's user as using more queries than allowed. That is a lot to presume, but I have found a solution based on the same speculation.