I'm trying to set up a reverse proxy to my S3 bucket (I'm using DigitalOcean Spaces) using HAProxy (specifically HAProxy Ingress).
After some trial and error I got somewhere with the proxy, but it doesn't quite work yet.
A GET request works fine; however, a PUT request (like putObject) fails with the error "403 - SignatureDoesNotMatch". I can't seem to find out why, unfortunately, and I've searched far and wide.
My backend at the moment is as follows:
backend s3-reverse-proxy_443
    mode http
    balance roundrobin
    # redirect plain-HTTP requests to HTTPS
    acl https-request ssl_fc
    http-request redirect scheme https if !https-request
    # rewrite the Host header so Spaces resolves the right bucket
    http-request set-header Host <bucket>.ams3.digitaloceanspaces.com
    # stash the original client chain, then let forwardfor re-add the client IP
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 5.101.110.225:443 weight 1 proto h2 alpn h2 ssl no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets verify none check inter 2s
I tried overruling the server address by just using ".ams3.digitaloceanspaces.com", but that didn't work.
I think it has something to do with the headers; I've tried adding the "Authorization" and "Connection" headers, but neither seems to help.
I'm also using backend-protocol "h2-ssl", because without it nothing was proxied at all.
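For reference, the backend protocol is selected with an annotation on the Ingress resource. Assuming the jcmoraisjr/haproxy-ingress annotation name (not confirmed in the original thread), it looks roughly like:

metadata:
  annotations:
    # h2-ssl: speak HTTP/2 over TLS to the backend
    ingress.kubernetes.io/backend-protocol: "h2-ssl"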
Thanks in advance!
Made some progress: signature version v4 doesn't work, but v2 does.
However, if I'm correct, the Docker registry uses v4, and I want it to be compatible with the newest standards.
I don't know much about S3; I'm currently reading the docs about the differences in authentication, but any help would be welcome!
So, after some more investigation: signature version v4 includes the request URI in the signature calculation. When the bucket provider recalculates that same signature, the request URI is different, because the bucket listens on another URI.
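For context, this is roughly what a v4 canonical request looks like according to the AWS SigV4 documentation (values in angle brackets are placeholders). Both the client and the provider must derive the identical string, so any rewrite of the method, path, query string, or signed headers by the proxy breaks the signature:

PUT
/<object-key>
<canonical-query-string, empty here>
host:<bucket>.ams3.digitaloceanspaces.com
x-amz-content-sha256:<payload-hash>
x-amz-date:<timestamp>

host;x-amz-content-sha256;x-amz-date
<payload-hash>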
I've seen some people that are using nginx to recalculate the signature when the request is handled by nginx, but haven't found a way to do that in Haproxy.
The best way to go for now is to use signature version v2; however, that may be deprecated by most S3 providers.
Related
A pod is accessible via nginx-ingress and https://FQDN. That works well with the configured public certificates. But if someone uses https://IP_ADDRESS, they will get a certificate error because of the default "Kubernetes Fake Certificate". Is it possible to block access via the IP_ADDRESS URL completely?
I think you would first need the TLS handshake to complete before Nginx could deny access.
On the other hand, HAProxy may be able to close the connection while checking the ServerName, say by setting some ACL in your https frontend that routes applications to their backends. Though I'm not sure this would be doable without mounting a custom HAProxy configuration template into your ingress controller.
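A minimal sketch of the idea, assuming HAProxy sees the raw TLS stream and sub1.domain.com stands in for the real FQDN:

frontend https-in
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    # wait for the ClientHello; connections without a matching server name
    # (e.g. requests made to the bare IP) are closed before any handshake reply
    tcp-request content reject unless { req_ssl_sni -i sub1.domain.com }
    default_backend app-backend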
I am trying to access a service over HTTPS, but due to restrictive network settings I have to make the request through an SSH tunnel.
I create the tunnel with a command like:
ssh -L 9443:my-service.com:443 sdt-jump-server
The service is only available via HTTPS, it's hosted with a self-signed certificate, and it is behind a load-balancer that uses either the hostname or an explicit Host header to route incoming requests to the appropriate backend service.
I am able to invoke the endpoint from my local system using curl like
curl -k -H 'Host: my-service.com' https://localhost:9443/path
However, when I try to use the CXF 3.1.4 implementation of JAX-RS to make the very same request, I can't seem to make it work. I configured a hostnameVerifier to allow the connection, downloaded the server's certificate, and added it to my truststore. Now I can connect, but it seemed like the load-balancer was not honoring the Host header I was trying to set.
I was lost for a bit until I set -Djavax.net.debug and saw that the Host header being passed was actually localhost and not the value I set. How can I make CXF honor the Host header I'm setting instead of using the value from the URL of the WebTarget?
CXF uses HttpURLConnection under the hood, so you need to allow restricted headers via a system property, either programmatically:
System.setProperty("sun.net.http.allowRestrictedHeaders", "true");
or at startup:
-Dsun.net.http.allowRestrictedHeaders=true
See also How to overwrite http-header "Host" in a HttpURLConnection?
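For illustration, a minimal sketch using the standard JAX-RS client API that CXF implements; the URL, path, and Host value are the ones from the question, and the blanket hostnameVerifier is only acceptable because the tunnel endpoint is trusted:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class TunneledClient {
    public static void main(String[] args) {
        // Must be set before the first HTTP connection is made,
        // otherwise HttpURLConnection silently drops the Host header.
        System.setProperty("sun.net.http.allowRestrictedHeaders", "true");

        Client client = ClientBuilder.newBuilder()
                .hostnameVerifier((host, session) -> true) // self-signed cert behind the tunnel
                .build();

        Response response = client.target("https://localhost:9443/path")
                .request()
                .header("Host", "my-service.com") // now sent instead of localhost
                .get();

        System.out.println(response.getStatus());
        response.close();
        client.close();
    }
}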
I'm having some trouble finding any way to make my situation workable. I have 2 applications:
1: External service web application running on sub1.domain.com. If I run this application behind traefik with ACME (Let's Encrypt), it works fine. I have a few more backend services (api/auth) that all run with a valid Let's Encrypt certificate and get their HTTP traffic redirected to HTTPS by traefik:
[entryPoints.http.redirect]
entryPoint = "https"
I have to have some form of http to https forwarding for this service.
2: Internal service web application running on sub2.domain.com. I have a self-signed trusted certificate (internal CA) which works fine behind traefik if I set it as the default certificate, or if I use it in the application itself (inside Tomcat). However, since it is an internal service, I can live without SSL for it if that solves my problem. That does not work with traefik's HTTP-to-HTTPS forwarding, though.
I have been trying to get these two services to run behind the same traefik instance, but every scenario I could think of fails, either because the feature is still work in progress or because it is just plain not working.
Scenarios
1: No HTTP-to-HTTPS redirect; don't bother with HTTPS for the internal service and just use HTTP, then redirect to HTTPS inside the backend of the external web service.
Problems:
- Unable to have two traefik ports which traefik forwards to
- Unable to forward a single port to another proto (since the backend is always either the http or the https port)
2: Use ACME over the default certificate.
Someone else thought this was a good idea. It's just not working yet.
3: Re-use the backend SSL certificate, i.e. have traefik just forward without SSL termination. I'm not sure if this is the same thing, but there is an option called "passTLSCert". However, it seems that this is only possible with frontends defined in the .toml file, which do not work (probably because I use docker for backends); see the sketch below.
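For completeness, the kind of .toml frontend this seems to require (backend name and rule are placeholders; this is the part that did not work with docker backends):

[frontends]
  [frontends.internal]
  backend = "internal"
  passTLSCert = true
    [frontends.internal.routes.main]
    rule = "Host:sub2.domain.com"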
4: Use the DNS-01 challenge to create an SSL certificate for my internal service.
It sounds like this could work, so I'm now using CloudFlare and have an API key. However, it does not seem to work for subdomains, and there is no reply on my issue report: https://github.com/containous/traefik/issues/1953
EDIT: I might be able to fix the issue described in 4 to get this to work. It seems the internal DNS might be conflicting with traefik.
Someone decided that in our internal DNS, zones would be added per subdomain, meaning that the SOA request returned the subdomain as the zone name. This does not play nice with CloudFlare, since the internal DNS zone is not the same as the CloudFlare DNS zone.
Changing this to one main zone with A records for the subdomains fixed the issue (in combination with the delayDontCheckDNS option).
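For reference, a minimal traefik 1.x ACME section for the CloudFlare DNS-01 provider might look like this (email, storage path, domains, and delay are placeholders; CLOUDFLARE_EMAIL and CLOUDFLARE_API_KEY must be exported in traefik's environment):

[acme]
email = "admin@domain.com"
storage = "acme.json"
entryPoint = "https"
dnsProvider = "cloudflare"
# skip the pre-check of the challenge record against local (internal) DNS
delayDontCheckDNS = 300

[[acme.domains]]
  main = "domain.com"
  sans = ["sub1.domain.com", "sub2.domain.com"]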
I ran into some inconvenience in HAProxy 1.5 when trying to configure SSL SNI.
Here is a fragment of my HAProxy configuration: pastebin
I would like to pass the client IP to the backend. No matter how I configure reqadd / set-header X-Forwarded-For / Real-IP, I always get the HAProxy IP address in X-Forwarded-For.
Has anyone managed to pass the real IP with SSL SNI on HAProxy? :/
From this configuration, you seem to be doing SNI-sniffing, yet all of the backends are looping back to HAProxy itself... which is not a case where SNI-sniffing is required. Perhaps I'm overlooking something else that would require this.
It should be apparent why you are getting the proxy's IP in X-Forwarded-For -- HAProxy is talking to itself. Only the second pass speaks HTTP, so as far as the proxy can determine on that pass, the incoming connection is the client connection; all it sees is that a TCP connection has arrived... from itself.
The solution is for the first-pass backend to pass the original client information using the Proxy Protocol and the second-pass frontend to decode it.
Add accept-proxy to the bind lines for the second-pass frontends, and add send-proxy to the server lines on the first-pass backends. This way, on the connection where HAProxy is talking to itself, the first-pass backend will send the Proxy protocol preamble and the second-pass frontend will decode the incoming value and place it in X-Forwarded-For.
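A minimal sketch of the two passes (frontend/backend names, ports, and the certificate path are placeholders, not taken from the pastebin):

frontend https-sni
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend loop-app if { req_ssl_sni -i app.example.com }

backend loop-app
    mode tcp
    # send-proxy prepends the Proxy Protocol preamble carrying the real client address
    server local 127.0.0.1:8443 send-proxy

frontend app-https
    # accept-proxy decodes the preamble, so the client IP survives the loopback hop
    bind 127.0.0.1:8443 accept-proxy ssl crt /etc/haproxy/app.pem
    mode http
    option forwardfor
    default_backend app-servers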
I need to setup a reverse proxy which intercepts HTTPS requests, decrypts them, performs body adaptation and finally forwards the re-encrypted request.
I'm now using Squid, which provides support for eCAP plugins and SSL bumping: http://wiki.squid-cache.org/Features/SslBump
If I understood correctly, by configuring SSL bumping I can do exactly what I said above. However, SSL bumping is not working so far.
Here is my Squid configuration:
https_port 8080 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
http_port 3128 ssl-bump cert=/etc/squid/cert.pem key=/etc/squid/key.pem dynamic_cert_mem_cache_size=4MB generate-host-certificates=on
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS
#always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
Client-side, when trying to send a request to https://127.0.0.1:8080, I'm getting the following error:
Connection reset by peer
This happens if the destination server is running HTTPS. It looks like Squid is trying to establish a plain HTTP connection instead of an HTTPS one. Indeed, server-side I'm getting an SSL23_GET_CLIENT_HELLO error.
Is there anything wrong in my configuration? Is there anything I missed in how SSL bump works?
I dug into the problem and here is what I found:
1) the ssl-bump option is not needed
2) the problem was that the ssl option was missing on the following line:
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS ssl
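Putting the fix in place, the relevant lines end up looking like this (sslflags=DONT_VERIFY_PEER is my assumption for the self-signed origin; drop it if the peer certificate should be validated):

https_port 8080 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
# ssl makes Squid speak TLS to the origin server instead of plain HTTP
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER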