How to handle HTTPS traffic using libmproxy?

I want to implement a proxy server that intercepts both HTTP and HTTPS requests. I came across libmproxy (http://mitmproxy.org/doc/scripting/libmproxy.html), which is SSL-capable. I'm starting with the simplest possible proxy, which just prints the headers of all requests and responses and forwards them to clients and servers normally.
#!/usr/bin/env python
from libmproxy import controller, proxy
import os

class Master(controller.Master):
    def __init__(self, server):
        controller.Master.__init__(self, server)
        self.stickyhosts = {}

    def run(self):
        try:
            return controller.Master.run(self)
        except KeyboardInterrupt:
            self.shutdown()

    def handle_request(self, msg):
        print "handle request.................................................."
        print msg.headers
        msg.reply()

    def handle_response(self, msg):
        print "handle response................................................."
        print msg.headers
        msg.reply()

config = proxy.ProxyConfig(
    cacert = os.path.expanduser("~/.mitmproxy/mitmproxy-ca.pem")
)
server = proxy.ProxyServer(config, 1234)
m = Master(server)
m.run()
Then I configure the HTTP and SSL proxy in Firefox to 127.0.0.1, port 1234. HTTP seems to work fine, as I can see all the headers printed out. However, when the browser sends HTTPS requests, the proxy server does not print anything at all, and the browser displays a "the connection was interrupted" error.
Further investigation reveals that the HTTPS requests go through the proxy server, but not through controller.Master. I see that proxy.ProxyHandler.establish_ssl() is called when there is an HTTPS request, but the request does not go through controller.Master.handle_request(). And although establish_ssl() is called, the browser does not seem to get any response back. I tested this with https://www.google.com.
First, how can I make proxy.ProxyHandler work properly with HTTPS requests/responses? Second, how can I modify controller.Master so that it can intercept HTTPS requests? I'm also open to other tools on top of which I can build a custom HTTP/HTTPS proxy server.

You need to install the mitmproxy CA in the browser you are testing with.
Please see details here ("Installing the mitmproxy CA" section):
http://mitmproxy.org/doc/ssl.html
This solved the problem for me.
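For example (assuming a default mitmproxy setup, where the generated CA files live in ~/.mitmproxy), in Firefox this means importing ~/.mitmproxy/mitmproxy-ca-cert.pem under Preferences > Advanced > Certificates > View Certificates > Authorities > Import, and ticking "Trust this CA to identify websites".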

Related

Python's default SSL certificate context not working in requests method when behind proxy, works fine otherwise

I have the function below in my code, and it works perfectly fine when I'm not behind any proxy. In fact, it works fine even without mentioning certifi's default CA certificate, if I pass verify=True; I guess that's because the two work the same way.
import json

import certifi
import requests

def reverse_lookup(lat, long):
    cafile = certifi.where()
    params = {'lat': float(lat), 'lon': float(long), 'format': 'json',
              'accept-language': 'en', 'addressdetails': 1}
    response = requests.get("https://nominatim.openstreetmap.org/reverse", params=params, verify=cafile)
    #response = requests.get("https://nominatim.openstreetmap.org/reverse", params=params, verify=True) <-- this works as well
    result = json.loads(response.text)
    return result['address']['country'], result['address']['state'], result['address']['city']
When I run the same code from within my enterprise infrastructure (where I'm behind a proxy), I make some minor changes, passing the proxy as a parameter to the requests call:
def reverse_lookup(lat, long):
    cafile = certifi.where()
    proxies = {"https": "https://myproxy.com"}
    params = {'lat': float(lat), 'lon': float(long), 'format': 'json',
              'accept-language': 'en', 'addressdetails': 1}
    response = requests.get("https://nominatim.openstreetmap.org/reverse", params=params, verify=cafile, proxies=proxies)
    result = json.loads(response.text)
    return result['address']['country'], result['address']['state'], result['address']['city']
But it gives me one of these three SSL errors at different times, if I set verify=True or verify=certifi.where():
CERTIFICATE_VERIFY_FAILED
UNKNOWN_PROTOCOL
WRONG_VERSION_NUMBER
The only time it works is when I completely bypass SSL verification with verify=False.
My questions are:
Since I'm sending the HTTPS request via a proxy, is it OK if I bypass SSL verification?
How do I make the default SSL verification context work in this case, when I'm behind a proxy?
Any help is appreciated. The code was tested on both Python 2.7.15 and 3.9.
Since I'm sending the HTTPS request via a proxy, is it OK if I bypass SSL verification?
Do you need the protection offered by HTTPS, i.e. encryption of the application data (like passwords, but also the full URL) to protect against sniffing or modifications by a malicious man in the middle? If you don't need the protection, then you can bypass certificate validation.
How do I make the default SSL verification context work in this case, when I'm behind a proxy?
The proxy is doing SSL interception, and in doing so it issues a new certificate for the site, based on an internal CA. If this is expected (i.e. not an attack), then you need to import the proxy's CA as trusted, with verify='proxy-ca.pem'. Your IT department should be able to provide you with the proxy CA.
But it gives me one of these three SSL errors at different times, if I set verify=True or verify=certifi.where():
CERTIFICATE_VERIFY_FAILED
UNKNOWN_PROTOCOL
WRONG_VERSION_NUMBER
It should only give you CERTIFICATE_VERIFY_FAILED. The other two errors indicate wrong proxy settings, typically setting https_proxy to https://... instead of http://... (which can also be seen in your code).
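Putting both fixes together, here is a minimal sketch of the corrected function; the proxy host (myproxy.com) and the proxy-ca.pem bundle are placeholders for whatever your IT department provides:

import json

import requests

def reverse_lookup(lat, long):
    # Note the http:// scheme: the connection to the proxy itself is plain
    # HTTP, and the proxy then tunnels/intercepts the HTTPS traffic.
    proxies = {"https": "http://myproxy.com"}
    params = {'lat': float(lat), 'lon': float(long), 'format': 'json',
              'accept-language': 'en', 'addressdetails': 1}
    # Validate against the intercepting proxy's CA instead of certifi's bundle.
    response = requests.get("https://nominatim.openstreetmap.org/reverse",
                            params=params, verify="proxy-ca.pem", proxies=proxies)
    result = json.loads(response.text)
    return result['address']['country'], result['address']['state'], result['address']['city']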

Unable to access unsecured https with selenium using browsermob proxy

I'm trying to embed browsermob proxy in my selenium (chrome) framework for automated UI testing, in order to intercept responses and other networking.
Description:
Selenium webdriver uses browsermob proxy and it works just fine: HTTP and secured HTTPS URLs are OK. But when I try to navigate to an unsecured HTTPS URL, I get this Chrome error:
ERR_TUNNEL_CONNECTION_FAILED
Here's my Python code:
import os

from browsermobproxy import Server
from pyvirtualdisplay import Display  # assumed source of Display
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

class Browser(object):
    display = None
    browser = None

    def __init__(self, implicitly_wait_seconds=10, is_visible=True, display_size=None, browser_name='chrome'):
        if not is_visible:
            self.display = Display(display_size)
        self.server = Server('/home/erez/Downloads/browsermob-proxy-2.1.4/bin/browsermob-proxy')
        self.server.start()
        self.proxy = self.server.create_proxy()
        self.capabilities = DesiredCapabilities.CHROME
        self.proxy.add_to_capabilities(self.capabilities)
        self.proxy.new_har("test", options={'captureHeaders': True, 'captureContent': True})
        self.start_browser(display_size, implicitly_wait_seconds, browser_name)

    def __enter__(self):
        return self

    def __exit__(self, _type, value, trace):
        self.close()

    def start_browser(self, display_size, implicitly_wait_seconds=10, browser_name='chrome'):
        if browser_name == 'chrome':
            chrome_options = Options()
            # chrome_options.add_argument("--disable-extensions")
            chrome_options.add_experimental_option("excludeSwitches", ["ignore-certificate-errors"])
            chrome_options.add_argument("--ssl-version-max")
            chrome_options.add_argument("--start-maximized")
            chrome_options.add_argument('--proxy-server=%s' % self.proxy.proxy)
            chrome_options.add_argument('--ignore-certificate-errors')
            chrome_options.add_argument('--allow-insecure-localhost')
            chrome_options.add_argument('--ignore-urlfetcher-cert-requests')
            self.browser = webdriver.Chrome(os.path.dirname(os.path.realpath(__file__)) + "/chromedriver",
                                            chrome_options=chrome_options, desired_capabilities=self.capabilities)
            self.browser.implicitly_wait(implicitly_wait_seconds)
You can also call create_proxy with trustAllServers as an argument:
self.proxy = self.server.create_proxy(params={'trustAllServers':'true'})
I faced the same problem with SSL proxying using BrowserMob Proxy. To fix it, you have to install in your browser the certificate described in this link.
Go to the bottom of the document, and you will see a section called "SSL support". Install ca-certificate-rsa.cer in your browser and there will be no more issues with SSL proxying.
If installing BrowserMob's/test servers' certificates doesn't do the job, as in my case, here is a way that's not the most elegant but gets the job done:
You can bypass the ERR_TUNNEL_CONNECTION_FAILED error by passing a trustAllServers parameter to the proxy instance, which disables certificate verification for upstream servers. Unfortunately, as far as I know, this functionality has not been implemented in BrowserMob's Python wrapper.
However, you can start the proxy with that parameter via BrowserMob's REST API (see the REST API section at https://github.com/lightbody/browsermob-proxy/blob/master/README.md). In the case of Python, Requests might be the way to go.
Here's a snippet:
import json
import requests
from browsermobproxy import Server
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
# Start the proxy via python
server = Server('/path_to/bin/browsermob-proxy')
server.start()
# Start proxy instance with trustAllServers set to true, store returned port number
r = requests.post('http://localhost:8080/proxy', data = {'trustAllServers':'true'})
port = json.loads(r.text)['port']
# Start Chromedriver, pass on the proxy
chrome_options = Options()
chrome_options.add_argument('--proxy-server=localhost:%s' % port)
driver = webdriver.Chrome('/path_to/selenium/chromedriver', chrome_options=chrome_options)
# Get a site with untrusted cert
driver.get('https://site_with_untrusted_cert')
Also, if you need to access the HAR data, you'll need to do that through the REST API as well:
requests.put('http://localhost:8080/proxy/%s/har' % port, data = {'captureContent':'true'})
r = requests.get('http://localhost:8080/proxy/%s/har' % port)
Of course, since you're disabling safety features, this parameter should be used for limited testing purposes only.
Try using this:
self.capabilities['acceptSslCerts'] = True

HaProxy Transparent Proxy To AWS S3 Static Website Page

I am using haproxy to balance a cluster of servers. I am attempting to add a maintenance page to the haproxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. My question is: how can I use a maintenance page hosted remotely in an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e. without the haproxy 'redir' server definition)?
If I have servers a, b, and c, and all of them go down for maintenance, then I want all requests to be resolved by server definition d (labeled with 'backup'), which points to a static address on S3. Note that I don't want paths to carry over and be evaluated on S3; it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 } -- if the number of servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up; "server name" is the arbitrary name we gave the server in the config file
# (which would mean it's the "1" server that is up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients use HTTPS and the S3 connection is HTTP, you definitely want to strip them.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
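Assembled in one place, the whole backend might look like the sketch below; the three primary servers (a, b, c) and their addresses are placeholders, and the directives are exactly the ones discussed above:

backend web
    server a 10.0.0.1:80 check
    server b 10.0.0.2:80 check
    server c 10.0.0.3:80 check
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup

    # All of these take effect only when we've fallen back to s3-fallback
    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }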
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in global configuration, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that despite the fact that you want to "transparently proxy a request," I don't think "transparent proxy" is the correct phrase for what you're trying to do. A "transparent proxy" implies that the client, the server, or both see each other's IP addresses on the connection and think they are communicating directly, with no proxy in between, because of some skullduggery done by the proxy and/or the network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for.

PingAccess issues with proxying target sites with HTTP/HTTPS mix

I'm trying to get PingAccess set up as a proxy (let's call the PA host pagateway) for a couple of applications that share a Web Session. I want all access to come via the PA pagateway and use HTTPS, but the back-end systems are not HTTPS.
I have two sites defined, app1:8080 and app2:8080. Both are set to "secure" = no and "use target host header" = yes.
I have listeners defined on ports 5000 and 5001 that are both set to "secure" = yes.
The first problem I found is that when I access either app this way (e.g. going to https://pagateway:5000), after successfully authenticating with PingFederate I end up getting redirected to the actual underlying host name (e.g. http://app1:8080), meaning any subsequent interactions with the app are not via PingAccess. Users outside the network couldn't even do that, because the app1 host wouldn't be visible or accessible to them.
I thought maybe I needed to set "Use target host header" to false, but then Chrome prompts me to download a file containing NAK, ETX, ETX, NUL, STX, STX control codes, and in the PA logs I get an SSL error:
2015-11-20 11:13:33,718 DEBUG [6a5KYac2dnnY0ZpIl-3GNA] com.pingidentity.pa.core.transport.http.HttpServerHandler:180 - IOException reading sourceSocket
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)
...
I'm unsure exactly which part of the process the SSL error is coming from (between browser and pagateway, or pagateway and app1). I'm guessing maybe app1 is having trouble with the unexpected host header...
In another variation, I turned off SSL on the PA listener (I also had to change the PingAccess callback URL in the PingFederate client settings to http). But when I accessed it via http://pagateway:5000, I got a generic PingFederate error message in the browser and a different error in the PA logs:
2015-11-20 11:37:25,764 DEBUG [DBxHnFjViCgLYgYb-IrfqQ] com.pingidentity.pa.core.interceptor.flow.InterceptorFlowController:148 - Invoking request handler: Scheme Validation for Request to [pagateway:5000] [/]
2015-11-20 11:37:25,764 DEBUG [DBxHnFjViCgLYgYb-IrfqQ] com.pingidentity.pa.core.interceptor.flow.InterceptorFlowController:200 - Exception caught. Invoking abort handlers
com.pingidentity.pa.sdk.policy.AccessException: Invalid request protocol.
at com.pingidentity.pa.core.interceptor.SchemeValidationInterceptor.handleRequest(SchemeValidationInterceptor.java:61)
Does anyone have any idea what I'm doing wrong? I'm kind of surprised about the redirection to the actual server name, to be honest, but after that I'm stumped about where to go from here.
Any help would be appreciated.
Have you contacted our support about this? It sounds like something that will need to be dug into a bit deeper - but here are some high-level suggestions I can make:
Take a look at a browser trace to determine when the redirect to the backend site happens. Usually it's because the backend web server sends a redirect with a Location header that (by nature) is an absolute URL pointing to itself instead of to the externally facing hostname.
A common solution to this is setting Target Host Header to False - so it will receive the request unmodified from the browser, and the backend server should know to represent itself as that (if it behaves nicely behind a proxy).
If the backend server can't do that (which it sounds like it can't) - you should look at assigning rewriting rules to that application. More details on them are available here: https://support.pingidentity.com/s/document-item?bundleId=pingaccess-52&topicId=reference%2Fui%2Fpa_c_Rewrite_Rules_Overview.html. The "Rewrite Response Header Rule" in particular will rewrite Location headers in HTTP redirects.
FYI - The "Invalid request protocol." error you're seeing at bottom of your description could be due to a "Require HTTPS" flag on your defined Application.
Do you have the same issue if you add a trailing slash at the end (https://pagateway:5000/webapp/)? Your application server will rewrite the URL based on what it thinks is the true host. This is to get around some security related issues around directory listing.
Which application server are you using? All app servers are unique, but I'll provide instructions on how to resolve this with Tomcat.
Add a global rule that forces the application server to use the external facing host name. Here is a sample Groovy script:
def header = exc?.request?.header;
header?.setHost("pf.pingdemo.com:443");
anything();
In Tomcat's server.xml, add scheme="https" to the connector:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="443" scheme="https" />
Cheers,
Tam

2-way SSL with CherryPy

From CherryPy 3.0 onwards, one-way SSL can be turned on simply by pointing to the server certificate and private key, like this:
import cherrypy

class HelloWorld(object):
    def index(self):
        return "Hello SSL World!"
    index.exposed = True

cherrypy.server.ssl_certificate = "keys/server.crt"
cherrypy.server.ssl_private_key = "keys/server.crtkey"

cherrypy.quickstart(HelloWorld())
This enables clients to validate the server's authenticity. Does anyone know whether CherryPy supports two-way SSL, i.e. where the server can also check client authenticity by validating a client certificate?
If yes, could anyone give an example of how that is done, or post a reference to one?
It doesn't out of the box. You'd have to patch the wsgiserver to provide that feature. There is a ticket (and patches) in progress at http://www.cherrypy.org/ticket/1001.
I have been looking for the same thing. I know there are some patches on the CherryPy site.
I also found the following at CherryPy SSL Client Authentication. I haven't compared this with the CherryPy patches, but maybe the info will be helpful.
We recently needed to develop a quick but resilient REST application and found that CherryPy suited our needs better than other Python networking frameworks, like Twisted. Unfortunately, its simplicity lacked a key feature we needed: server/client SSL certificate validation. Therefore we spent a few hours writing a few quick modifications to the current release, 3.1.2. The following code snippets are the modifications we made:
cherrypy/_cpserver.py

@@ -55,7 +55,6 @@
     instance = None
     ssl_certificate = None
     ssl_private_key = None
+    ssl_ca_certificate = None
     nodelay = True

     def __init__(self):

cherrypy/wsgiserver/__init__.py

@@ -1480,6 +1480,7 @@
     # Paths to certificate and private key files
     ssl_certificate = None
     ssl_private_key = None
+    ssl_ca_certificate = None

     def __init__(self, bind_addr, wsgi_app, numthreads=10, server_name=None,
                  max=-1, request_queue_size=5, timeout=10, shutdown_timeout=5):

@@ -1619,7 +1620,9 @@
         self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
         if self.nodelay:
             self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
-        if self.ssl_certificate and self.ssl_private_key:
+        if self.ssl_certificate and self.ssl_private_key and \
+                self.ssl_ca_certificate:
             if SSL is None:
                 raise ImportError("You must install pyOpenSSL to use HTTPS.")

@@ -1627,6 +1630,11 @@
             ctx = SSL.Context(SSL.SSLv23_METHOD)
             ctx.use_privatekey_file(self.ssl_private_key)
             ctx.use_certificate_file(self.ssl_certificate)
+            x509 = crypto.load_certificate(crypto.FILETYPE_PEM,
+                                           open(self.ssl_ca_certificate).read())
+            store = ctx.get_cert_store()
+            store.add_cert(x509)
+            ctx.set_verify(SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
+                           lambda *x: True)
             self.socket = SSLConnection(ctx, self.socket)
             self.populate_ssl_environ()
The above patches require the inclusion of a new configuration option inside the CherryPy server configuration, server.ssl_ca_certificate. This option identifies the certificate authority file that connecting clients will be validated against; if the client does not present a valid client certificate, the connection is closed immediately.
Our solution has advantages and disadvantages. The primary advantage is that if the connecting client doesn't present a valid certificate, its connection is immediately closed. This is good for security, as it does not permit the client any access into the CherryPy application stack. However, since the restriction is done at the socket level, the CherryPy application can never see the client connecting, and hence the solution is somewhat inflexible.
An optimal solution would allow the client to connect to the CherryPy socket and send its client certificate up into the application stack. A custom CherryPy Tool would then validate the certificate inside the application stack and close the connection if necessary; unfortunately, because of the structure of CherryPy's pyOpenSSL implementation, it is difficult to retrieve the client certificate inside the application stack.
Of course, the patches above should only be used at your own risk. If you come up with a better solution, please let us know.
If the current version of CherryPy does not support client certificate verification, you can configure CherryPy to listen on 127.0.0.1:80, then install HAProxy to listen on port 443, verify client-side certificates there, and forward the traffic to 127.0.0.1:80.
HAProxy is simple, light, fast and reliable.
An example of HAProxy configuration
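Here is a minimal sketch, assuming HAProxy 1.5 or later built with OpenSSL; /etc/haproxy/server.pem (the proxy's certificate plus key) and /etc/haproxy/clients-ca.pem (the CA that issues your client certificates) are placeholder paths:

frontend https-in
    mode http
    # Terminate TLS and require a client certificate signed by our CA
    bind *:443 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/clients-ca.pem verify required
    default_backend cherrypy

backend cherrypy
    mode http
    # CherryPy listens on plain HTTP on the loopback interface
    server local 127.0.0.1:80

With verify required, the TLS handshake fails unless the client presents a certificate signed by the configured CA, so unauthenticated clients never reach the CherryPy application at all.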