I am trying to figure out which of the hdr fetches to use in this situation. The documentation (http://www.haproxy.org/download/1.5/doc/configuration.txt) states the following:
hdr(<name>) The HTTP header <name> will be looked up in each HTTP
request. Just as with the equivalent ACL 'hdr()' function,
the header name in parenthesis is not case sensitive. If the
header is absent or if it does not contain any value, the
roundrobin algorithm is applied instead.
An optional 'use_domain_only' parameter is available, for
reducing the hash algorithm to the main domain part with some
specific headers such as 'Host'. For instance, in the Host
value "haproxy.1wt.eu", only "1wt" will be considered.
This algorithm is static by default, which means that
changing a server's weight on the fly will have no effect,
but this can be changed using "hash-type".
1) Where is the list of different <name>s?
2) Which one do I use when running HAProxy as a reverse proxy in this case (subdomains)? Would I use hdr() or hdr_dom()? For example:
acl host_deusexmachina hdr(<name>) -i deus.ex.machina.mydomain.com
acl host_fela hdr(<name>) -i fela.mydomain.com
acl host_mydomain hdr(<name>) -i mydomain.com
There is no separate list: <name> is simply an HTTP header name, so the possible names are the headers available in the HTTP protocol (plus any custom headers your clients send).
For routing by subdomain you should probably use Host, as sketched below.
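For instance, a minimal sketch (the frontend and backend names here are hypothetical). With -i, hdr(host) does an exact, case-insensitive match of the whole Host value, while hdr_dom(host) matches domain-name parts, so a hdr_dom check against mydomain.com would also match fela.mydomain.com:

frontend http-in
    bind *:80
    # hdr(host) fetches the Host request header (the header name is case-insensitive)
    acl host_deusexmachina hdr(host) -i deus.ex.machina.mydomain.com
    acl host_fela hdr(host) -i fela.mydomain.com
    acl host_mydomain hdr(host) -i mydomain.com
    use_backend bk_deusexmachina if host_deusexmachina
    use_backend bk_fela if host_fela
    use_backend bk_mydomain if host_mydomain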
I'm trying to set up a reverse proxy to my S3 bucket (I'm using DigitalOcean Spaces) using HAProxy (specifically HAProxy Ingress).
After some trial and error I got somewhere with the proxy, but it doesn't quite work yet.
A GET request works fine; however, a PUT request (like putObject) fails with the error "403 - SignatureDoesNotMatch". Unfortunately I can't seem to find out why, and I've searched far and wide.
My backend at the moment is as follows:
backend s3-reverse-proxy_443
mode http
balance roundrobin
acl https-request ssl_fc
http-request redirect scheme https if !https-request
http-request set-header Host <bucket>.ams3.digitaloceanspaces.com
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
http-response set-header Strict-Transport-Security "max-age=15768000"
server srv001 5.101.110.225:443 weight 1 proto h2 alpn h2 ssl no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets verify none check inter 2s
I tried overriding the server address by just using ".ams3.digitaloceanspaces.com", but that didn't work.
I think it has something to do with the headers; I've tried adding the "Authorization" and "Connection" headers, but neither seems to help.
I'm also using backend-protocol "h2-ssl", because without it, it didn't proxy.
Thanks in advance!
Made some progress: signature version v4 doesn't work, but v2 does.
However, if I'm correct, the docker registry uses v4, and I want it to be compatible with the newest standards.
I don't know much about S3, I'm currently reading the docs about the difference in authentication, but any help would be welcome!
So, after some more investigation: signature version v4 includes the request URI (and Host header) in the signature calculation. When the bucket endpoint recomputes that signature, it gets a different result, because the request reaches it under a different URI than the one the client originally signed.
I've seen some people that are using nginx to recalculate the signature when the request is handled by nginx, but haven't found a way to do that in Haproxy.
The best way forward right now is to use signature version v2; however, v2 may be deprecated by most S3 providers.
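For what it's worth, if the client is the AWS CLI (an assumption; other SDKs have equivalent settings), the legacy v2-style S3 signer can be forced like this:

# 's3' selects the legacy (v2-style) signer; 's3v4' would force v4
aws configure set default.s3.signature_version s3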
I am trying to access a service over HTTPS but due to restrictive network settings I am trying to make the request through an ssh tunnel.
I create the tunnel with a command like:
ssh -L 9443:my-service.com:443 sdt-jump-server
The service is only available via HTTPS, it's hosted with a self-signed certificate, and it is behind a load-balancer that uses either the hostname or an explicit Host header to route incoming requests to the appropriate backend service.
I am able to invoke the endpoint from my local system using curl like
curl -k -H 'Host: my-service.com' https://localhost:9443/path
However, when I try to use the CXF 3.1.4 implementation of JAX-RS to make the very same request, I can't seem to make it work. I configured a hostnameVerifier to allow the connection, downloaded the server's certificate, and added it to my truststore. Now I can connect, but the load-balancer did not seem to honor the Host header I was trying to set.
I was lost for a bit until I set -Djavax.net.debug and saw that the Host header being passed was actually localhost and not the value I had set. How do I make CXF honor the Host header I'm setting instead of using the value from the WebTarget's URL?!
CXF uses HttpURLConnection under the hood, which silently drops restricted headers such as Host, so you need to allow them via a system property, either programmatically
System.setProperty("sun.net.http.allowRestrictedHeaders", "true")
or at startup:
-Dsun.net.http.allowRestrictedHeaders=true
See also How to overwrite http-header "Host" in a HttpURLConnection?
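Putting it together, a minimal sketch with a plain JAX-RS 2.x client (the URL and header value are taken from the question; the truststore/hostnameVerifier setup mentioned there is omitted for brevity):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class HostHeaderExample {
    public static void main(String[] args) {
        // Must be set before the first HTTP connection is made,
        // otherwise the Host header below is silently dropped
        System.setProperty("sun.net.http.allowRestrictedHeaders", "true");

        Client client = ClientBuilder.newClient();
        Response response = client
                .target("https://localhost:9443/path") // the ssh tunnel endpoint
                .request()
                .header("Host", "my-service.com")      // now actually sent on the wire
                .get();
        System.out.println(response.getStatus());
    }
}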
I have a server, x.example, which serves traffic for both a.example and b.example.
x.example has certificates for both a.example and b.example. The DNS for a.example and b.example is not yet set up.
If I add an /etc/hosts entry for a.example pointing to x.example's ip and run curl -XGET https://a.example, I get a 200.
However if I run curl --header 'Host: a.example' https://x.example, I get:
curl: (51) SSL: no alternative certificate subject name matches target host name x.example
I would think it would use a.example as the host. Maybe I'm not understanding how SNI/TLS works.
Is it because a.example is only an HTTP header, so the TLS handshake doesn't have access to it yet, while it does have access to the URL itself?
Indeed, SNI in TLS does not work like that. SNI, like everything related to TLS, happens before any kind of HTTP traffic, hence the Host header is not taken into account at that step (but it will be useful later on for the webserver to know which host you are connecting to).
So to enable SNI you need a specific switch in your HTTP client to tell it to send the appropriate TLS extension during the handshake with the hostname value you need.
In the case of curl, you need at least version 7.18.1 (based on https://curl.haxx.se/changes.html); from then on it seems to automatically use the value provided in the Host header. It also depends on which OpenSSL (or equivalent library on your platform) version it is linked against.
See point 1.10 of https://curl.haxx.se/docs/knownbugs.html that speaks about a bug but explains what happens:
When given a URL with a trailing dot for the host name part: "https://example.com./", libcurl will strip off the dot and use the name without a dot internally and send it dot-less in HTTP Host: headers and in the TLS SNI field.
The --connect-to option could also be useful in your case, or --resolve as a substitute for /etc/hosts; see https://curl.haxx.se/mail/archive-2015-01/0042.html for an example, or https://makandracards.com/makandra/1613-make-an-http-request-to-a-machine-but-fake-the-hostname
You can add --verbose in all cases to see in more detail what is happening. See this example: https://www.claudiokuenzler.com/blog/693/curious-case-of-curl-ssl-tls-sni-http-host-header; there you will also see how to test directly with openssl.
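For reference, the direct openssl test looks like this (-servername sets the SNI value independently of anything HTTP):

openssl s_client -connect x.example:443 -servername a.example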
If you have a.example in your /etc/hosts, you should just run curl with https://a.example/ and it will take care of the Host header and hence SNI (or use --resolve instead).
So to answer your question directly, replace
curl --header 'Host: a.example' https://x.example
with
curl --connect-to a.example:443:x.example:443 https://a.example
and it should work perfectly.
The selected answer helped me find the solution, even though it does not contain it outright. The answer in the mail/archive link Patrick Mevzek provided has the wrong port number, so following it verbatim will still fail.
I used this container to run a debugging server to inspect the requests. I highly suggest anyone debugging this kind of issue do the same.
Here is how to address the OP's question.
# Instead of this:
# curl --header 'Host: a.example' https://x.example
# Do:
host=a.example
target=x.example
ip=$(dig +short $target | head -n1)
curl -sv --resolve $host:443:$ip https://$host
If you want to ignore bad certificate matches, use -svk instead of -sv:
curl -svk --resolve $host:443:$ip https://$host
Note: since you are using https, you must use 443 in the --resolve argument instead of the 80 stated in the mail/archive post.
I had a similar need, but didn't have sudo access to update the hosts file.
I used the --resolve parameter and also added the DNS host name as a Host header:
--resolve <dns name>:<port>:<ip addr>
curl --request POST --resolve dns_name:443:a.b.c.d 'https://dns_name/x/y' --header 'Host: dns_name' ....
Cheers.
The default installation instructions show how to set up a server on port 80 using HTTP and WS (i.e. unencrypted).
The agent installation docs show that TLS-enabled servers are possible (I'd link here, but I'm not allowed).
The server configuration options show that DRONE_SERVER_CERT and DRONE_SERVER_KEY are available: http://readme.drone.io/0.5/install/server-configuration/
Are there any fuller instructions to set this up? e.g. have port 80 forward to port 443 and have all agents talking to the server over encrypted channels.
If you were using certificates with drone 0.4, the configuration is the same, although the names may have changed slightly. You will need to pass the following variables to your container:
DRONE_SERVER_CERT=/path/to/drone.cert
DRONE_SERVER_KEY=/path/to/drone.key
These certificates will exist on your host machine, which means their paths need to be mounted into your drone server:
--volume=/path/to/drone.cert:/path/to/drone.cert
--volume=/path/to/drone.key:/path/to/drone.key
You can also instruct Docker to expose port 443 and forward it to drone's default port 8000:
-p 443:8000
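Putting those pieces together, a server launch could look something like this (a sketch; the image tag and file paths are assumptions):

docker run -d \
  -e DRONE_SERVER_CERT=/path/to/drone.cert \
  -e DRONE_SERVER_KEY=/path/to/drone.key \
  --volume=/path/to/drone.cert:/path/to/drone.cert \
  --volume=/path/to/drone.key:/path/to/drone.key \
  -p 443:8000 \
  drone/drone:0.5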
When you configure the agent, you will of course need to update the configuration to use wss. You can read more in the agent docs, but essentially something like this:
DRONE_SERVER=wss://drone.server.com/ws/broker
And finally, if you get cert errors, I recommend including the cert chain in your bundle. Bottom line: drone does not parse certs. Drone uses http.ListenAndServeTLS(cert, key), so any cert issues come from the Go standard library directly, and questions should therefore be directed to the Go support channels.
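Since ListenAndServeTLS expects the certificate file to contain the full chain, concatenating the files is usually enough (file names here are illustrative):

# server certificate first, then the intermediate chain
cat server.crt intermediate.crt > drone.cert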
So I have a whole bunch of machines on my 10.10.10.x subnet, all of which are configured in essentially the same way. I differentiate these from machines on my 10.10.11.x subnet, which serves a different purpose.
I'd like to be able to type 'ssh 10.x' to connect to machines on the 10. network and 'ssh 11.x' to connect to machines on the 11 network.
I know I can set up individual entries allowing both the full IP and a shorthand version like this in my ~/.ssh/config:
Host 10.10.10.11 10.11
HostName 10.10.10.11
User root
This can get pretty repetitive for lots of hosts on my network, so my question is: is there a way to specify this as a pattern for the entire subnet, something like:
Host 10.10.10.x
User root
Host 10.x
HostName 10.10.10.x
User root
Thanks
This line will provide the desired functionality:
Host 192.168.1.*
IdentityFile KeyFile
If you attempt to connect to a server whose IP is in this subnet, this entry will apply to the SSH connection.
From the ssh_config(5) Manpage:
A pattern consists of zero or more non-whitespace characters, ‘*’ (a wildcard that matches zero or more characters), or ‘?’ (a wildcard that matches exactly one character). For example, to specify a set of declarations for any host in the “.co.uk” set of domains, the following pattern could be used:

Host *.co.uk

The following pattern would match any host in the 192.168.0.[0-9] network range:

Host 192.168.0.?

A pattern-list is a comma-separated list of patterns. Patterns within pattern-lists may be negated by preceding them with an exclamation mark (‘!’). For example, to allow a key to be used from anywhere within an organisation except from the “dialup” pool, the following entry (in authorized_keys) could be used:

from="!*.dialup.example.com,*.example.com"
So you can just use Host 10.*; a fuller sketch covering both subnets follows.
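To get the exact shorthand the question asks for, a sketch using the %h token (which expands to the name as typed on the command line) should work: 'ssh 10.5' then connects to 10.10.10.5, and 'ssh 11.7' to 10.10.11.7. The negated !10.10.* pattern (see the manpage excerpt above) keeps full addresses like 10.10.10.11 from matching and expanding incorrectly:

Host 10.* 11.* !10.10.*
    # %h is the host name exactly as typed, e.g. "10.5",
    # so HostName expands to 10.10.10.5
    HostName 10.10.%h
    User root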