So far I have developed completely locally, having everything (Apache, Openfire, JSJaC application) on my laptop, and it runs quite fine. Now I want to use a remote server for Apache/Openfire. I did basically the same steps, including the whole http-bind stuff. I test the setup with the simpleclient.html provided by JSJaC.
Now here's the deal: if I use the simpleclient directly on the remote server - e.g., http://here.domain.org/simpleclient.html - it works. If I use it locally - e.g., http://[local_machine]/simpleclient.html - with the same settings, I get a 503 (Service Unavailable). It seems to be more a network/Apache issue than an Openfire/JSJaC one, but I'm not an expert.
My parameters for the simpleclient:
HTTP Base: http://here.domain.org/http-bind/
JabberServer: here.domain.org
So in my Apache virtual host conf file I have the lines:
AddDefaultCharset UTF-8
ProxyRequests On
ProxyPass /http-bind/ http://127.0.0.1:7070/http-bind/
So basically the http-bind proxying works, since I can connect when simpleclient.html resides on the server. What I tried so far:
checked if port 7070 is open from outside: yes
checked /etc/hosts - here are the relevant lines:
127.0.0.1 localhost
123.123.123.123 here.domain.org here
checked the Apache conf for restrictions: can't find any; basically I have an "Allow from all" everywhere (but I'm not completely sure where to look)
By the way, with, e.g., Pidgin I can connect from my laptop to the remote server. Just the JSJaC simpleclient won't. So I assume it's the http-bind that causes the trouble. I would understand it if port 7070 weren't open, but it is.
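One way to test that assumption is to poke the BOSH endpoint directly with curl from the laptop; the body attributes below are illustrative, not an exact session request:
curl -i -H "Content-Type: text/xml; charset=utf-8" -d '<body rid="1" to="here.domain.org" wait="60" hold="1" ver="1.6" xmlns="http://jabber.org/protocol/httpbind"/>' http://here.domain.org/http-bind/
If that gets a <body/> reply back from the laptop as well, the http-bind proxying itself works and the failure is specific to the browser.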
Any hints or help are much appreciated!
Christian
OK, I got it. It was a cross-domain scripting issue. I started looking into the JSJaC library and noticed that it makes XMLHttpRequests, which by default won't work across different domains. I therefore had to allow this with Apache on the Openfire server. I added the following entries to the VirtualHost conf file:
Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"
Header always set Access-Control-Allow-Credentials true
Header always set Access-Control-Allow-Headers "Content-Type, *"
Of course the mod_headers module must be loaded for this.
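On Debian/Ubuntu that is typically (command names assumed; enable the module however your distribution does it):
a2enmod headers
service apache2 restart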
I'm not sure which entries are actually required; I didn't try every combination. I think the always is needed since the request to the http-bind address goes through the proxy.
Related
We have a server running Apache that provides services via a simple API. We now stumbled upon the problem that we cannot access the API using a third-party library, although the resulting HTTP requests are ALMOST the same. The only difference - as far as we can tell from Wireshark - is the presence or absence of the explicit information about port 80. For example:
curl -d "..." http://www.example.com/foo/bar/
curl -d "..." http://www.example.com:80/foo/bar/
Both work, and Wireshark shows Host: www.example.com, i.e., without the port 80. As far as I understand, cURL as well as browsers and most other clients strip the default port 80. So far, all fine.
Now, a third-party library we use to make requests requires a port to be set, and we need to set it to 80. When the library makes a request, Wireshark shows Host: www.example.com:80 - note the additional port information. This request fails, and as far as we can see in Wireshark, the failing request differs only in the Host field.
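For what it's worth, the library's request can probably be reproduced with curl by forcing the Host header explicitly:
curl -d "..." -H "Host: www.example.com:80" http://www.example.com/foo/bar/
If that fails the same way, the port in the Host field really is the trigger.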
Can this be a configuration issue of Apache? We currently have no direct access to the server to check the conf files. Or are we missing something completely different here?
From RFC 2616:
Host = "Host" ":" host [ ":" port ] ; Section 3.2.2
So "Host: www.example.com:80" is perfectly legitimate. But I have never seen port 80 (or 443 in the case of HTTPS) in the host field of a HTTP request. It is obviously required where the request is routed via a proxy to a non-standard port.
This would give me some concerns as to the quality of the third-party library. My first port of call in resolving this would be to speak to the providers of the component - they have presumably come across the problem before.
You did not mention what access you have to the library - did you check that this is not a configurable option? Do you have access to the source code, and permission to modify it? (If not, that would imply it is commercial, paid-for software - which should give you the right to some support.)
I don't know what the solution is, but some obvious things to try would be:
configure the URL as the default vhost for the webserver rather than explicitly for www.example.com
or use mod_headers to rewrite the Host field (see the sketch after this list)
or put a forward proxy in front of the webserver, e.g. Squid, and add a URL rewriter (if Squid does not automatically strip the port from the Host field)
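For the mod_headers option, an untested sketch: if Apache acts as a front end for the real webserver, the port can be stripped from the Host header before the request is forwarded (the backend address is a placeholder):
# remove a trailing ":80" from the client's Host header
RequestHeader edit Host "^(.*):80$" "$1"
# pass the (now cleaned) client Host header on to the backend
ProxyPreserveHost On
ProxyPass / http://backend.internal/
ProxyPassReverse / http://backend.internal/
Note that editing the Host header on the final server itself happens too late to influence virtual-host selection, so this only helps in a proxy setup.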
Apache performs string matching on the Host field. So when the :80 is attached, the string matching will fail, Apache will consider it a URL it does not handle, and it will reject the request. That is why curl stripped it.
You can read more about the ServerName directive here; that is the setting Apache matches against the Host field.
Update
So the :80 has no effect and the string matching still works.
On my production server, I did not change Apache's configuration. I wrote some quick PHP to send the GET request out on a raw socket, and Apache still responded correctly with the :80 attached to the Host: field.
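A roughly equivalent test can be run from the shell, bypassing any client-side normalization (netcat invocation assumed):
printf 'GET /foo/bar/ HTTP/1.1\r\nHost: www.example.com:80\r\nConnection: close\r\n\r\n' | nc www.example.com 80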
I also checked on the server itself: I see the request come in with the errant :80 attached, and Apache answers with a 200 status and serves the HTML.
There is something else wrong with the third-party software's request.
I run a local development web server for testing out code changes.
Often I have to test my local changes with remote services that can only connect securely to another domain.
e.g. https://external1.com will only talk to https://someOtherDomain.com, but I've got to test integration of my new code changes with https://external1.com
While I've got a setup configured that works, it seems complex and took a bit to get set up right. It seems to me that many developers would want to do this same thing, so my question is this:
Is there an easy way to proxy my local webserver as https://someOtherDomain.com ?
EDIT: So maybe this should be asked this way - does a command-line or GUI tool exist that you can pass a local port and a domain name, and it serves your local port securely over https://someOtherDomain.com - no config or SSL cert creation required? Of course it'd be nice if the SSL cert could be replaced through configuration if need be, but by default it'd work automatically, using a precanned SSL cert. And even though I'm using Apache, I'm looking for a solution that doesn't use Apache - it uses something else. Why? Because I want this solution to work well with any other webserver that's being used by people on our team - we all run different stacks, and I'd like any of us to be able to serve our sites securely without having to configure each webserver individually.
Here's my current setup for taking my local webserver and serving it up at https://www.someOtherDomain.com
To test this locally, I've been:
editing my hosts file, and adding an entry to make www.someOtherDomain.com point to my local machine, which of course is running my dev server. This makes it so my local site is now available at http://www.someOtherDomain.com
127.0.0.1 www.someOtherDomain.com
Running Apache with an SSL cert set up and mod_proxy to proxy all HTTPS requests to my local HTTP server, thus making my site available at https://www.someOtherDomain.com. Here's my Apache config for this:
ServerName www.someOtherDomain.com
<Location /balancer-manager>
SetHandler balancer-manager
</Location>
ProxyPass /balancer-manager !
<Proxy balancer://mycluster>
BalancerMember http://localhost route=1
</Proxy>
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
SSLHonorCipherOrder On
SSLProtocol -ALL +SSLv3 +TLSv1
SSLCipherSuite RC4-SHA:HIGH:!ADH
# Rewrite all http requests to https
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
I run this on a Mac, but am interested in solutions for Linux as well. I've seen various man-in-the-middle proxies that sound like they'd work with some configuration... but I'm looking for something really simple to install and run - not just for me, but something I can tell team members about too, as we may all have to do this a lot in the future.
IMPORTANT NOTE: My local webserver isn't running on port 80, though I've written it that way in the above example to keep it simple. I understand port 80 on a Mac is a bit special, but I'm perfectly happy with solutions that work fine on all ports but port 80.
I think mitmproxy can do this for you, at least on Linux and OS X. I haven't tried it myself, but this question seems to show how it is done. It's still not a trivial program, though.
There are however other approaches, which I have used:
The first one is pretty simple: create a DNS entry for develop.mydomain.com which points to 127.0.0.1, and a single certificate for a (sub)domain where you control the DNS. Distribute that certificate to all your developers. They'll still need to set up SSL themselves, but they don't need to generate certificates anymore. It has the added benefit that everybody is developing against https://develop.mydomain.com, which allows them to share the configuration.
For bonus points, create a DNS entry for *.develop.mydomain.com and a wildcard certificate, and your developers can have different sites (e.g. https://project1.develop.mydomain.com and https://project2.develop.mydomain.com) on their local machines. (Contrary to what the internet sometimes tells you, name-based virtual hosting works fine with SSL as long as the certificate is valid for all the named hosts.) Since the domain is the same for everybody, you can consider getting a valid wildcard certificate to get rid of the warnings.
Unlike the solutions below, this works even outside of the office network, which may be relevant when people are working from home or at a customer's site.
Building on this, you could also create DNS entries for the internal IPs of the developer machines (if those are fixed). This does add some work, but it means the ongoing work of a developer can be reached by others on the local network, which can be very convenient for demos, testing on mobile devices, etc.
Another option is to configure a single machine as a proxy for all your developers. Create a DNS entry pointing to the internal IP of this box on something like *.develop.mydomain.com, get a matching wildcard certificate, and configure that box once with the correct certificate. Now you can create a virtual host for each proxied server, and again all sites will be reachable throughout the local network, but it does require the developers to have fixed IP addresses (or hostnames added to DNS through DHCP).
Combined with Apache's ability to include all files in a directory in its configuration, this makes it trivial to create a script which adds a new site based on a template. All it has to do is write a new file based on the requested subdomain plus the destination, and reload the Apache config; something like a simple PHP script can do what you want (sketched below).
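As an illustration, a hypothetical helper script along those lines (all paths and names are made up; it assumes the main config contains Include /etc/apache2/dev-sites.d/*.conf):
#!/bin/sh
# add-dev-site.sh <subdomain> <destination-url> -- hypothetical helper
SUB="$1"; DEST="$2"
# write a vhost for SUB.develop.mydomain.com proxying to DEST
cat > "/etc/apache2/dev-sites.d/$SUB.conf" <<EOF
<VirtualHost *:443>
    ServerName $SUB.develop.mydomain.com
    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/wildcard.develop.pem
    SSLCertificateKeyFile /etc/ssl/private/wildcard.develop.key
    ProxyPass / $DEST
    ProxyPassReverse / $DEST
</VirtualHost>
EOF
# pick up the new vhost without dropping existing connections
apachectl graceful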
If you want to expose the web server on your local machine to the internet, try Runscope Passageway; it's easy to set up and "just works" (from experience).
Another alternative is ngrok, which I have also used, but it didn't always work for me.
I am able to run Neo4j fine through port 7474 on my server, including Cypher queries. But when I access Neo4j through the Apache proxy, the page loads just fine, while any Cypher request only returns an "Unknown error". I have other proxies, such as RStudio, running just fine.
I have tried the default values from the Neo4j website for the proxy configuration, with no success. I am at a loss for what to try. Please let me know what further information is needed, or how I can get additional information on the Cypher error.
I tried the sample query:
CREATE (n {name:"World"}) RETURN "hello", n.name
This returns "Unknown error" when done through the proxy, but when done through port 7474 it works fine.
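Perhaps useful for debugging: the same statement can be sent to Neo4j's transactional HTTP endpoint through both paths and the raw responses compared (the proxied hostname is a placeholder):
# direct, bypassing the proxy
curl -i -H "Content-Type: application/json" -d '{"statements":[{"statement":"CREATE (n {name:\"World\"}) RETURN n.name"}]}' http://localhost:7474/db/data/transaction/commit
# through the Apache proxy
curl -i -H "Content-Type: application/json" -d '{"statements":[{"statement":"CREATE (n {name:\"World\"}) RETURN n.name"}]}' http://yourdomain.example/db/data/transaction/commit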
This is a Linux Ubuntu 12.04.4 LTS machine.
Neo4j 2.1.1
Apache 2.2.22
Sorry if this is vague, but I have not found any help for this issue, nor do I know what additional information would be relevant.
Thank you.
Update:
It now works with the config provided by Stefan (thank you!). But I am unsure how to move it from the root of my domain to "/database/". In your example you change it to "/neo4j"; how would I have to change the other parts of this config file for that to work?
As it looks now (non-functional after changing the proxy path from "/"):
ProxyPass /database/ http://localhost:7474/
ProxyPassReverse /database/ http://localhost:7474/
RedirectMatch permanent ^/database /database/
<Location /db/manage>
AddOutputFilterByType SUBSTITUTE application/json
Substitute "s|http://localhost:7474|http://localhost:8080|n"
</Location>
I tried to change the substitute rule from "localhost:8080" to "localhost:8080/database" and to "/database", to no avail.
In closing, what worked was to make it a subdomain and still have it on the root. Not sure why this has to be the case, but it is functional. Thank you again, Stefan!
Some time ago I set up an example config for using mod_proxy and mod_substitute; see https://github.com/sarmbruster/vagrant_neo4j_modproxy, especially the Apache config file.
Be aware that mod_substitute will not work with huge responses > 1M.
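On newer Apache versions (2.4.11 and later) that limit can be raised, though this does not help on the 2.2 line used here:
# Apache 2.4.11+ only: raise mod_substitute's maximum line length
SubstituteMaxLineLength 10m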
In my Apache installation, I keep seeing the following line in my access logs:
"POST http://yourinfo.allrequestsallowed.net/ HTTP/1.1" 200
It's really freaking me out, because this site is not being hosted on my server (I checked the IP just to be 100% sure). I added a "Deny from all" line since the site is still in development, and now the HTTP 200 response changed to a 403, as if the domain were being hosted on my server.
I'm incredibly confused and scared. Does anybody know what's going on? Can I "Deny from all" to this domain that's apparently pointing to my server?
You may want to check that you don't have ProxyRequests On set anywhere it's not supposed to be. Typically a request like that is a forward-proxy request, and the troubling bit is that you returned a 200 response, which could indicate that the request was successfully proxied.
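A quick way to check (Debian-style config layout assumed):
grep -Ri "ProxyRequests" /etc/apache2/
Anything other than ProxyRequests Off deserves a close look; no match at all is fine, since Off is the default.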
Take a look at this wiki page about Proxy abuse.
My server is properly configured not to proxy, so why is Apache returning a 200 (Success) status code?
That status code indicates that Apache successfully sent a response to the client, but not necessarily that the response was retrieved from the foreign website.
RFC2616 section 5.1.2 mandates that Apache must accept requests with absolute URLs in the request-URI, even for non-proxy requests. This means that even when proxying is turned off, Apache will accept requests that look like proxy requests. But instead of retrieving the content from the foreign site, Apache will serve the content at the corresponding location on your website. Since the hostname probably doesn't match a name for your site, Apache will look for the content on your default host.
But it's probably worthwhile to check that you aren't proxying. Otherwise, it's not really that big of a deal.
After Jon Lin pointed me in the right direction, I figured it out.
After disabling mod_proxy and enabling mod_security, I added the following to my virtual host configuration:
SecRuleEngine On
SecRule REQUEST_LINE "://" drop,phase:1
And then restarted Apache. It drops the connection without returning any data, which uses fewer resources and less bandwidth during brute-force and DDoS attacks.
Also, it shows as an HTTP 404 Response in the access logs.
EDIT: I updated the rule to drop all types of proxies (http, https, ftp). I don't know how many protocols can be used this way, but I'd rather be safe than sorry.
I've been dealing with this the whole day and still can't figure it out.
I've set up Zabbix on one machine, and I want to monitor the Apache server on another machine.
I've completed all the steps described in the docs: http://www.zabbix.com/wiki/templates/apache
and still I get no data in the Apache Template. When checking the logs on the Apache server, I can see in access.log:
IPADDR - - [16/Jul/2012:13:29:08 +0000] "GET /server-status?auto HTTP/1.0" 404 13826 "-" "Python-urllib/1.17"
I think it might have something to do with the virtual servers and additional sites I have on that machine, but I can't figure it out, and nothing about it is mentioned in the docs...
The Apache checks are not as clever as you may think.
Can Zabbix communicate with your apache server? Link it to a template with something simple like "uptime" and verify that it indeed gets data.
Next, verify that there aren't any firewall rules preventing the Zabbix server from communicating with your web server. Can you curl your homepage from the Zabbix host without problems?
Are the Apache checks active checks? If so, you'll need to make sure active checks are enabled in /etc/zabbix/zabbix_agentd.conf and that the "Hostname" within the conf is unique and matches what you have configured on the Zabbix server.
If that fails, change the DebugLevel to 4 in /etc/zabbix/zabbix_agentd.conf and tail the Zabbix log. Look and see if it is having trouble with any checks.
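For reference, the agent-side parameters involved look something like this (file location and values assumed):
# /etc/zabbix/zabbix_agentd.conf
# server allowed to poll passive checks:
Server=192.0.2.10
# server the agent sends active-check results to:
ServerActive=192.0.2.10
# must match the host name configured in the Zabbix frontend:
Hostname=web01
# temporarily, for troubleshooting:
DebugLevel=4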
This is an Apache configuration problem; Zabbix can't do anything if /server-status yields a 404 error.
Maybe the <Location /server-status>...</Location> directive is not in the right place in Apache's configuration.
Try to move it inside the <VirtualHost> section of the specific virtual server to which the GET /server-status is routed.
Also make sure that mod_status is enabled.
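For reference, a typical mod_status block in Apache 2.2 syntax (the allowed address is a placeholder for the Zabbix server) looks like:
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 192.0.2.10
</Location>
Placed inside the right <VirtualHost>, this makes GET /server-status?auto return a 200 instead of the 404 seen in the log.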