I need to test the GeoIP Apache module locally before sending it to production. For that, I need to send a fake IP when visiting an address, not 127.0.0.1.
Question: Is there a way to fake the localhost IP, for example by configuring Apache?
I ended up using cURL to do it.
GeoIP has an option for when a proxy is used that makes it take the X-Forwarded-For header into account. So by sending curl --header "X-Forwarded-For: 1.2.3.4" "http://your.site/path" the job is done.
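On the Apache side, if you are using MaxMind's mod_geoip, the proxy option is enabled with directives along these lines (a minimal sketch; check the documentation of the GeoIP module you actually use, since directive names vary between modules):

GeoIPEnable On
GeoIPScanProxyHeaders On

With that set, the module takes the client IP from X-Forwarded-For instead of the socket address, so the curl trick above works.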
More info on https://serverfault.com/questions/747154/how-to-test-conditions-in-apache-configuration/747405#747405
I understand you can expose a virtual host URL such as local.dev using ngrok http -host-header=local.dev.
However, I can't seem to expose a virtual host of the format sub.local.dev. If I enter the ngrok command ngrok http -host-header=sub.local.dev, it just forwards to the following URL, which does not exist: http://.local.dev:80.
Please tell me there is some way to do this.
Figured this out. It was a simple fix, but it's not very clear in the documentation. I should have been using the following format: ngrok http -host-header=rewrite local.globalnews.ca
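Applied to the subdomain from the question, that would presumably look like this (a sketch; sub.local.dev and the port are assumptions taken from the question):

ngrok http -host-header=rewrite sub.local.dev:80

The rewrite keyword tells ngrok to rewrite the Host header to match the address being forwarded to, rather than treating the value itself as the literal header.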
Ultimately, my goal is to be able to load my PMS admin interface via Organizr. I had already tried simply using the URL https://app.plex.tv/desktop through Organizr, but that URL disallows loading the page in iframes, so now I'm trying to use Caddy server to reverse proxy it to my local LAN IP instead ...
I have this code in my Caddyfile (note that my PMS is hosted on a different pc on my LAN):
proxy /pms https://192.168.234.234:32400 {
    websocket
    keepalive 12
    header_upstream Host {host}
    header_upstream X-Real-IP {remote}
    header_upstream X-Forwarded-For {remote}
    header_upstream X-Forwarded-Proto {scheme}
    transparent
}
Then when I try to visit the URL, it gives me a 502 Bad Gateway, and the Caddy log file says [ERROR 502 /pms] x509: cannot validate certificate for 192.168.234.234 because it doesn't contain any IP SANs
If I add the insecure_skip_verify directive, I get the error: 401 Unauthorized instead.
I'm still pretty new to using Caddy, anyone know what's going on here?
Since you use Caddy, which will deal with the SSL, point the proxy at http instead of https.
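A minimal sketch of what that change could look like in the Caddyfile above (assuming Plex accepts plain HTTP on port 32400 inside the LAN; note that transparent already sets Host, X-Real-IP, X-Forwarded-For, and X-Forwarded-Proto, so the explicit header_upstream lines are redundant):

proxy /pms http://192.168.234.234:32400 {
    websocket
    keepalive 12
    transparent
}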
To solve my particular problem, in Organizr I used the Plex web URL instead.
https://192.168.234.234:32400/web
Note the /web at the end.
Another option was to have Organizr open it using the PopOut option, which just acts like a regular bookmark and loads any URL in a new tab, and/or to add a line to the Caddyfile like this:
redir /pms https://app.plex.tv/desktop 301
Then in Organizr you could use either the /pms URL or the direct Plex URL https://app.plex.tv/desktop, and it'd just load Plex in a new tab.
We have a server running Apache providing services via a simple API. We have now stumbled upon the problem that we cannot access the API using a third-party library, although the resulting HTTP requests are ALMOST the same. The only difference, as far as we can tell from Wireshark, is the presence or absence of the explicit information about port 80. For example:
curl -d "..." http://www.example.com/foo/bar/
curl -d "..." http://www.example.com:80/foo/bar/
Both work, and Wireshark shows Host: www.example.com, i.e., without the port 80. As far as I understand, cURL as well as browsers and most other clients omit the default port 80. So far, all fine.
Now, a third-party library we use to make requests requires a port to be set, and we need to set it to 80. If the library makes a request, Wireshark now shows Host: www.example.com:80 (note the additional port information). This request fails, and as far as we can see in Wireshark, this failing request only differs with respect to the Host field.
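For comparison, you can reproduce the library's request with curl by overriding the Host header it would normally send (a sketch using the example URL from above):

curl -d "..." -H "Host: www.example.com:80" http://www.example.com/foo/bar/

If this fails in the same way, the server really is rejecting the port-qualified Host header and the library itself is off the hook.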
Can this be a configuration issue of Apache? We currently have no direct access to the server to check the conf files. Or are we missing something completely different here?
From RFC 2616:
Host = "Host" ":" host [ ":" port ] ; Section 3.2.2
So "Host: www.example.com:80" is perfectly legitimate. But I have never seen port 80 (or 443 in the case of HTTPS) in the host field of a HTTP request. It is obviously required where the request is routed via a proxy to a non-standard port.
This would give me some concerns as to the quality of the "third-party library". My first port of call in resolving this would be to speak to the providers of the component; they have presumably come across the problem before.
You did not mention what access you have to the library - did you check that this is not a configurable option? Do you have access to the source code, and the permission to modify it? (If not, that would imply it is commercial, paid-for software, which should give you the right to some support.)
I don't know what the solution is, but some obvious things to try would be:
configure the URL at the default vhost for the webserver rather than explicitly for www.example.com (see the sketch after this list)
or use mod_headers to rewrite the Host field
or put a forward proxy in front of the webserver, e.g. Squid, and add a URL rewriter (if Squid does not automatically strip the port from the Host field)
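A minimal sketch of the first suggestion (names and paths are illustrative): in Apache, the first vhost defined for an address acts as the default, so a request whose Host header matches no ServerName/ServerAlias, with or without :80, still lands somewhere sensible:

<VirtualHost *:80>
    # The first vhost listed becomes the default for unmatched Host values
    ServerName catchall.example.com
    DocumentRoot /var/www/html
</VirtualHost>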
Apache performs string matching with the Host field. So when the :80 is attached, the string matching will fail and Apache will consider it a URL it does not handle and reject it. That is why curl stripped it.
You can read more about the ServerName directive in the Apache documentation; that is the setting Apache matches against the Host header.
Update
On my production server, I did not change Apache's configuration. I wrote some quick PHP to send the GET request over a socket, and Apache still responded correctly with the :80 attached to the Host: field.
I also checked on the server itself and saw the request come in with the errant :80 attached, and Apache answered with a 200 status and served the HTML.
So the :80 has no effect and the string matching still works.
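The same raw-socket test can be done without PHP (a sketch; substitute the real host for www.example.com):

printf 'GET / HTTP/1.1\r\nHost: www.example.com:80\r\nConnection: close\r\n\r\n' | nc www.example.com 80

This sends the request byte-for-byte with the port-qualified Host header, so nothing can silently strip the :80 on the way out.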
There is something else wrong with the third-party software's request.
I have a server with OVH and I'm having some problems setting up a subdomain for it.
My server configuration is something like this:
An Apache service on port 80 serving the website; it works only with HTTPS (the Apache config redirects HTTP requests to HTTPS).
A PostgreSQL service on the default port 5432.
A GitLab installation working over nginx on port 81.
I'm trying to set the external_url for GitLab to http://git.example.com:81, but when I try to access it, I'm being redirected to an OVH default page.
I can access GitLab if I set the external URL to something like http://example.com:81, or even if I set a relative path like http://example.com:81/gitlab, but I can't make it work with the subdomain http://git.example.com:81.
How do you think I can get it working? Maybe I have to change the DNS zone or something related to the redirections in the OVH web manager panel?
Thanks in advance! This is a really great community!
(Posted on behalf of the OP).
I just set up this redirection in the web hosting panel: git.example.com => example.com, and that does the trick.
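For anyone managing the DNS zone directly instead of through the panel, the equivalent would be a CNAME record along these lines (zone-file syntax; the TTL is illustrative and this assumes example.com already points at the server):

git 300 IN CNAME example.com.

After that, GitLab's external_url of http://git.example.com:81 resolves to the same machine, and nginx answers on port 81.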
I have recently created a Rackspace cloud server instance using CentOS 5.5. I have used yum to install the "Web Server" group (it includes Apache, etc.), added www.booztrakr.com as the ServerName in httpd.conf, and made sure iptables allows traffic on port 80. I had registered this domain with GoDaddy and changed their name servers to the Rackspace name servers on their site. I added "A" and CNAME records to the Rackspace name servers. httpd has been started. When I use curl on the server, I can get the Apache landing page. When I dig www.booztrakr.com from a remote machine (over the internet), the answer section returns:
www.booztrakr.com. 300 IN CNAME booztrakr.com.
booztrakr.com. 300 IN A 184.106.216.156
When I try a browser or curl, it can't connect:
curl -G www.booztrakr.com
curl: (7) couldn't connect to host
I know this has got to be pretty basic and config-related, but I'll be damned if I can see it. Any help would be appreciated. Thanks.
If dig resolves, this just means the DNS server returns the right values. It would even work if the IP didn't exist.
If an HTTP connection to the server fails, this is a configuration problem.
The server responds to ICMP requests, so it's not a routing problem.
When I use curl on the server I can get the Apache landing page
Your webserver is running, but you just can't reach it from outside. This is the problem. What does iptables --list output?
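A quick sketch of how to check and fix this on CentOS 5 (assuming the default iptables firewall is what is blocking the traffic):

# show the current rules with line numbers
iptables --list -n --line-numbers
# insert an ACCEPT rule for inbound HTTP ahead of any REJECT rule
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# persist the change across reboots (CentOS 5 style)
service iptables save

If curl from outside works after the insert, the REJECT/DROP rule ordering was the culprit.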