I need to set up a reverse proxy that intercepts HTTPS requests, decrypts them, performs body adaptation and finally forwards the re-encrypted request.
I'm currently using Squid, which provides support for eCAP plugins and SSL bumping: http://wiki.squid-cache.org/Features/SslBump
If I understand correctly, configuring SSL bumping should let me do exactly what I described above. However, SSL bumping is not working so far.
Here is my Squid configuration:
https_port 8080 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
http_port 3128 ssl-bump cert=/etc/squid/cert.pem key=/etc/squid/key.pem dynamic_cert_mem_cache_size=4MB generate-host-certificates=on
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS
#always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
Client-side, when trying to send a request to https://127.0.0.1:8080, I'm getting the following error:
Connection reset by peer
This happens if the destination server is running HTTPS. It looks like Squid is trying to establish a plain HTTP connection instead of an HTTPS one. Indeed, server-side I'm getting an SSL23_GET_CLIENT_HELLO error.
Is there anything wrong in my configuration? Is there anything I missed in how SSL bump works?
I dug into the problem and here is what I found:
1) the ssl-bump option is not needed
2) the problem was that the ssl option was missing from the following line
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS ssl
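Putting those two findings together, the relevant configuration ends up looking roughly like this (a sketch only: it assumes the bump-related options are simply dropped, and reuses the IP, ports and certificate paths from the question):

https_port 8080 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
http_port 3128
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS ssl
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER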
I have an app (the backend part) running, e.g., on https://bla.com:8443. I created a certificate for it via Let's Encrypt for the domain bla.com.
When I tried to receive payments (a webhook) from www.stripe.com I ended up with a TLS error. After some investigation I figured out that the problem is an invalid certificate chain for https://bla.com:8443, and that if I ran it on https://bla.com:443 everything would be fine.
I can't change it to port 443 because the frontend part of the app is running on https://bla.com:443.
I thought about 2 solutions, but my technical knowledge is quite limited so I am not sure if they are possible:
create a certificate for the domain + port
run the frontend & backend parts on the same port, https://bla.com:443, and configure apache2 to forward all /backend-api/* to https://bla.com:8443/backend-api/*
My question is: is either of the proposals possible and, more importantly, is there a better solution that I am missing?
Thanks for any suggestions!
A certificate is not bound to a port. It is perfectly fine to use the same certificate on port 443 and 8443. But the servers on ports 443 and 8443 have different configurations. If it works on 443 but not on 8443, this is likely due to some error in the configuration for port 8443. The fix is thus to correct that configuration, not to work around it with a different certificate or by somehow reverse proxying it from port 443.
Unfortunately details on how to exactly fix it cannot be given since the current configuration is not known.
Configuring the program to use fullchain.pem instead of cert.pem fixed it for me.
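For reference, if the service on port 8443 were, say, an Apache vhost, the fix is just a matter of which file the SSL directives point at. A minimal sketch, assuming the usual Let's Encrypt file layout under /etc/letsencrypt/live/:

<VirtualHost *:8443>
    ServerName bla.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/bla.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/bla.com/privkey.pem
</VirtualHost>

The point is serving fullchain.pem (leaf certificate plus intermediates) rather than cert.pem (leaf only); a missing intermediate is exactly the kind of invalid-chain error that strict clients such as webhook senders reject even when browsers appear to work.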
I ran into a problem with HAProxy 1.5 when trying to configure SSL SNI.
Here is a fragment of my HAProxy configuration: pastebin
I would like to pass the client IP to the backend. No matter how I configure reqadd / set-header X-Forwarded-For / Real-IP, I always get the HAProxy IP address in X-Forwarded-For.
Has anyone managed to pass the real IP with SSL SNI on HAProxy? :/
From this configuration, you seem to be doing SNI-sniffing, yet all of the backends are looping back to HAProxy itself... which is not a case where SNI-sniffing is required. Perhaps I'm overlooking something else that would require this.
It should be apparent why you are getting the proxy's IP in X-Forwarded-For -- HAProxy is talking to itself. As far as the proxy can determine on the second pass (which is the only pass speaking HTTP), the client connection is the first pass: all it sees is that an incoming TCP connection has arrived... from itself.
The solution is for the first-pass backend to pass the original client information using the Proxy Protocol and the second-pass frontend to decode it.
Add accept-proxy to the bind lines for the second-pass frontends, and add send-proxy to the server lines on the first-pass backends. This way, on the connection where HAProxy is talking to itself, the first-pass backend will send the Proxy protocol preamble and the second-pass frontend will decode the incoming value and place it in X-Forwarded-For.
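A minimal sketch of that arrangement (the frontend/backend names, addresses, ports and certificate path below are assumptions, not taken from the pastebin):

frontend sni-in
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend loop-example if { req_ssl_sni -i example.com }

backend loop-example
    mode tcp
    server local 127.0.0.1:8443 send-proxy

frontend https-example
    mode http
    bind 127.0.0.1:8443 accept-proxy ssl crt /etc/haproxy/example.com.pem
    option forwardfor
    default_backend app-example

backend app-example
    mode http
    server app1 10.0.0.10:8080

With send-proxy on the first-pass server line and accept-proxy on the second-pass bind line, the second-pass frontend learns the original client address from the Proxy protocol preamble, and option forwardfor (or your existing set-header rules) then puts that address into X-Forwarded-For.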
I installed Wowza and it is streaming via these links:
HTTP:
http://[my-ip]:1935/myapp/definst/mp4:00.Intro.mp4/manifest.mpd
and also on
http://[my-subdomain]:1935/myapp/definst/mp4:00.Intro.mp4/manifest.mpd
When I configure Wowza to also stream on port 80, it works again on these links:
http://[my-ip]/myapp/definst/mp4:00.Intro.mp4/manifest.mpd
http://[my-subdomain]/myapp/definst/mp4:00.Intro.mp4/manifest.mpd
but we must stream over SSL.
That is, over HTTPS:
https://[my-subdomain]/myapp/definst/mp4:00.Intro.mp4/manifest.mpd
We installed a wildcard SSL certificate on our server and everything else is working great. However, port 1935 does not work over HTTPS at all, and even when we add port 80 to Wowza, the HTTPS connection is refused, so we can't stream over HTTPS.
How can we stream over SSL on Wowza, with or without port 1935?
Thanks
Yes, Wowza server supports streaming with SSL using StreamLock or your own SSL certificate.
You will need to set up a different port number for HTTPS. It could be that another process is using port 80. Port 443 is typically used.
From the Server tab, click Edit.
Click Add Host Port and fill in fields.
Check Enable SSL/StreamLock.
Save and re-start Wowza server.
Look in [install-dir]/logs/wowzastreamingengine_access.log for errors. It will give a clue as to whether there is a problem with the certificate, password or other.
I was advised to place a load balancer in front of my Wowza for SSL offloading, so that the m3u8 can be loaded over SSL. I was also told you can do that quite easily using HAProxy, for example. How to accomplish this for RTMP is explained here, but the same can obviously be done with HTTP:
https://github.com/arut/nginx-rtmp-module/issues/457#issuecomment-250783255
Note, I have not tried this yet and I am unclear on the exact proper use scenario. Nor have I successfully enabled StreamLock, either with my own cert or with the cert provided through Wowza. If I manage to do so I will update this thread. Hope this is helpful.
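For what it's worth, the SSL-offloading setup described above could look roughly like this in HAProxy (a sketch only; the certificate path and backend address are assumptions, and Wowza is assumed to keep serving plain HTTP on port 1935):

frontend wowza-https
    mode http
    bind :443 ssl crt /etc/haproxy/wildcard.example.com.pem
    default_backend wowza-http

backend wowza-http
    mode http
    server wowza1 127.0.0.1:1935

The player then requests https://[my-subdomain]/myapp/definst/mp4:00.Intro.mp4/manifest.mpd on port 443, HAProxy terminates the TLS, and Wowza itself never has to deal with the certificate.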
I have the following situation: 2 hosts, one is a client and the other an HTTPS server.
Client (:<brwsr-port>) <=============> Web server (:443)
I installed Fiddler on the server so that I now have Fiddler running on my server on port 8888.
The situation I would like to reach is the following:
|Client (:<brwsr-port>)| <===> |Fiddler (:8888) <===> Web server (:443)|
|-Me-------------------| |-Server--------------------------------|
From my computer I want to contact Fiddler which will redirect traffic to the web server. The web server however uses HTTPS.
On the server I set up Fiddler to handle HTTPS sessions and decrypt them. I was asked to install Fiddler's fake CA certificate on the server, and I did! I also inserted the script suggested by the Fiddler wiki page to redirect HTTPS traffic:
// HTTPS redirect -----------------------
FiddlerObject.log("Connect received...");
if (oSession.HTTPMethodIs("CONNECT") && (oSession.PathAndQuery == "<server-addr>:8888")) {
oSession.PathAndQuery = "<server-addr>:443";
}
// --------------------------------------
However, when I try https://myserver:8888/index.html, it fails!
Failure details
When using Fiddler on the client, I can see that the CONNECT request starts but the session fails because the response is HTTP error 502. It looks like no one is listening on port 8888. In fact, if I stop Fiddler on the server I get the same situation: 502 Bad Gateway.
Please note that when I try https://myserver/index.html and https://myserver:443/index.html everything works!
Question
What am I doing wrong?
Is it possible that...?
I thought that, since TLS/SSL works on port 443, maybe I should have Fiddler listen there and move my web server to another port, like 444 (I should probably then set an HTTPS binding on port 444 in IIS). Is that correct?
If Fiddler isn't configured as the client's proxy and is instead running as a reverse proxy on the Server, then things get a bit more complicated.
Running Fiddler as a Reverse Proxy for HTTPS
Move your existing HTTPS server to a new port (e.g. 444)
Inside Tools > Fiddler Options > Connections, tick Allow Remote Clients to Connect. Restart Fiddler.
Inside Fiddler's QuickExec box, type !listen 443 ServerName where ServerName is whatever the server's hostname is; for instance, for https://Fuzzle/ you would use fuzzle for the server name.
Inside your OnBeforeRequest method, add:
if (oSession.HostnameIs("fuzzle") &&
    (oSession.oRequest.pipeClient.LocalPort == 443))
{
    oSession.host = "fuzzle:444";
}
Why do you need to do it this way?
The !listen command instructs Fiddler to create a new endpoint that will perform an HTTPS handshake with the client upon connection; the default proxy endpoint doesn't do that, because when a proxy receives a connection for HTTPS traffic it gets an HTTP CONNECT request instead of a handshake.
I just ran into a similar situation where I have VS2013 (IISExpress) running a web application on HTTPS (port 44300) and I wanted to browse the application from a mobile device.
I configured Fiddler to "act as a reverse proxy" and "allow remote clients to connect" but it would only work on port 80 (HTTP).
Following on from EricLaw's suggestion, I changed the listening port from 8888 to 8889 and ran the command "!listen 8889 [host_machine_name]" and bingo I was able to browse my application on HTTPS on port 8889.
Note: I had previously entered the forwarding port number into the registry (as described here) so Fiddler already knew what port to forward the requests on to.
I am writing an HTTP proxy and I am having trouble understanding some details of making a CONNECT request over TLS. To get a better picture, I am experimenting with Apache to observe how it interacts with clients. This is from my default virtual host.
NameVirtualHost *:443
<VirtualHost *:443>
ServerName example.com
DocumentRoot htdocs/example.com
ProxyRequests On
AllowConnect 22
SSLEngine on
SSLCertificateFile /root/ssl/example.com-startssl.pem
SSLCertificateKeyFile /root/ssl/example.com-startssl.key
SSLCertificateChainFile /root/ssl/sub.class1.server.ca.pem
SSLStrictSNIVHostCheck off
</VirtualHost>
The conversation between Apache and my client goes like this.
a. client connects to example.com:443 and sends example.com in the TLS handshake.
b. client sends HTTP request.
CONNECT 192.168.1.1:22 HTTP/1.1
Host: example.com
Proxy-Connection: Keep-Alive
c. Apache says HTTP/1.1 400 Bad Request. The Apache error log says
Hostname example.com provided via SNI and hostname 192.168.1.1
provided via HTTP are different.
It appears that Apache does not look at the Host header other than to see that it is there since HTTP/1.1 requires it. I get identical failed behavior if the client sends Host: foo. If I make the HTTP request to example.com:80 without TLS, then Apache will connect me to 192.168.1.1:22.
I don't completely understand this behavior. Is there something wrong with the CONNECT request? I can't seem to locate the relevant parts of the RFCs that explain all this.
It's not clear whether you're trying to use Apache Httpd as a proxy server; that would explain the 400 status code you're getting.
CONNECT is used by the client, and sent to the proxy server (possibly Apache Httpd, but usually not), not to the destination web server.
CONNECT is used between the client and the proxy server before establishing the TLS connection between the client and the end server. The client (C) connects to the proxy (P) proxy.example.com and sends this request (including blank line):
C->P: CONNECT www.example.com:443 HTTP/1.1
C->P: Host: www.example.com:443
C->P:
The proxy opens a TCP connection to www.example.com:443 (P-S) and responds to the client with a 200 status code, accepting the request:
P->C: 200 OK
P->C:
After this, the connection between the client and the proxy (C-P) is kept open. The proxy server relays everything on the C-P connection to and from P-S. The client then upgrades its active connection (the C-P one) to SSL/TLS by initiating a TLS handshake on that channel. Since everything is now relayed to the server, it's as if the TLS exchange was done directly with www.example.com:443.
The proxy doesn't play any role in the handshake (and thus with SNI). The TLS handshake effectively happens directly between the client and the end server.
If you're writing a proxy server, all you need to do for allowing your clients to connect to HTTPS servers is read in the CONNECT request, make a connection from the proxy to the end server (given in the CONNECT request), send the client with a 200 OK reply and then forward everything that you read from the client to the server, and vice versa.
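As a concrete illustration of that loop, here is a minimal sketch in Python (my own sketch, not from the answer): blocking, one client at a time, no authentication and no error handling. It reads the CONNECT request, opens the upstream connection, replies 200, and relays bytes in both directions.

import socket
import select

def handle_client(client):
    # Read until the end of the CONNECT request headers.
    request = b""
    while b"\r\n\r\n" not in request:
        chunk = client.recv(4096)
        if not chunk:
            return
        request += chunk
    # Request line looks like: CONNECT www.example.com:443 HTTP/1.1
    target = request.split(b" ")[1].decode()
    host, _, port = target.partition(":")
    server = socket.create_connection((host, int(port) if port else 443))
    client.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n")
    # Blindly relay bytes in both directions until one side closes.
    while True:
        readable, _, _ = select.select([client, server], [], [])
        for s in readable:
            data = s.recv(4096)
            if not data:
                server.close()
                return
            (server if s is client else client).sendall(data)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", 3128))
listener.listen(5)
while True:
    conn, _ = listener.accept()
    try:
        handle_client(conn)
    finally:
        conn.close()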
RFC 2616 treats CONNECT as a way to establish a simple tunnel (which it is). There is more about it in RFC 2817, although the rest of RFC 2817 (upgrades to TLS within a non-proxy HTTP connection) is rarely used.
It looks like what you're trying to do is to have the connection between the client (C) and the proxy (P) over TLS. That's fine, but the client won't use CONNECT to connect to external web servers (unless it's a connection to an HTTPS server too).
You're doing everything right. It's Apache that got things wrong. Support for CONNECT over TLS was only added recently (https://issues.apache.org/bugzilla/show_bug.cgi?id=29744) and there are still some things to be ironed out. The issue you're hitting is one of them.
From RFC 2616 (section 14.23):
The Host request-header field specifies the Internet host and port
number of the resource being requested, as obtained from the original
URI given by the user or referring resource (generally an HTTP URL,
as described in section 3.2.2). The Host field value MUST represent
the naming authority of the origin server or gateway given by the
original URL.
My understanding is that you need to copy the address from the CONNECT line to the Host line. All in all, the address of the resource is 192.168.1.1, and the fact that you are connecting via example.com doesn't change anything from the RFC's point of view.
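Under that reading, the request from the question would be written as:

CONNECT 192.168.1.1:22 HTTP/1.1
Host: 192.168.1.1:22
Proxy-Connection: Keep-Alive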
It is quite rare to see the CONNECT method inside TLS (HTTPS). I actually don't know of any client that does that (and I would be interested to hear of one, because I think it is actually a good feature).
Normally the client connects to the proxy over plain TCP (HTTP) and sends the CONNECT request (with a Host header) for host:443. The proxy then makes a transparent connection to the endpoint and the client sends the SSL handshake through it.
In this scenario the data is SSL-protected "end to end".
The CONNECT method is not really specified; it is only reserved in the HTTP RFC. But it is typically quite simple, so it is interoperable. The method specifies host[:port]; the Host: header can simply be ignored. Some additional proxy authentication headers might be needed. Once the body of the connection begins, the proxy no longer has to do any parsing (some do, because they check for a valid SSL handshake).