Centralizing outgoing two-way SSL connections

We are currently using Apache to handle incoming SSL requests. These are two-way SSL connections. Apache accepts the HTTPS connection and passes the request on as a plain HTTP connection to the application server. This works well for us.
We would like to use the same kind of centralized mechanism for outgoing two-way SSL connections. Is there a way to do this with Apache or another product? To complicate things, the client certificate needed to identify our client can vary depending on the destination.
In short:
- Internal clients connect over plain HTTP to Apache or another product.
- Apache or another product knows, based on a rule (?), that a two-way SSL connection is required and sets this up with the destination.
- Depending on the destination, the correct certificate is sent to identify our client (see the sketch below).
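Something like this is what I have in mind, sketched as a hypothetical Apache configuration (mod_proxy and mod_ssl; one local port per destination, since SSLProxyMachineCertificateFile applies per virtual host; all host names and paths are placeholders):

# Internal clients call http://proxyhost:8081/...; Apache opens the
# two-way SSL connection to partner A and presents our client cert for A.
Listen 8081
<VirtualHost *:8081>
    SSLProxyEngine On
    # Concatenated client certificate + private key for this destination
    SSLProxyMachineCertificateFile /etc/apache2/certs/partner-a-client.pem
    ProxyPass        / https://partner-a.example.com/
    ProxyPassReverse / https://partner-a.example.com/
</VirtualHost>

# A second destination, identified with a different client certificate
Listen 8082
<VirtualHost *:8082>
    SSLProxyEngine On
    SSLProxyMachineCertificateFile /etc/apache2/certs/partner-b-client.pem
    ProxyPass        / https://partner-b.example.com/
    ProxyPassReverse / https://partner-b.example.com/
</VirtualHost>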
Regards,
Nidkil

What you're talking about is, of course, an HTTP proxy server. In the first scenario you are using it as a transparent proxy to provide SSL support for connections to a set of web pages. In the second scenario you want to use it to provide connections to secure-only pages on behalf of clients speaking HTTP.
You can do this with the Squid proxy, which is free and open-source, provided that your machine sits between the clients and the Internet. Look for "SSLBump". You do need a certificate which the clients would consider valid for all web pages to be accessed (otherwise they will notice what you are doing, which is basically a man-in-the-middle attack).
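A minimal sketch of the Squid side, assuming Squid 3.x (the exact ssl_bump syntax varies between Squid versions, and the CA certificate path is a placeholder):

# Intercepting proxy port that re-signs server certificates with our CA
http_port 3128 ssl-bump cert=/etc/squid/bump-ca.pem generate-host-certificates=on
# Bump (decrypt and re-encrypt) all CONNECT tunnels
ssl_bump server-first all

The bump-ca.pem certificate is the one that all internal clients would have to trust, as noted above.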
However, I would strongly recommend against this - if a site requires SSL, it is likely to do so for a reason. It is almost certainly not OK to have internal clients connecting to an online banking site and have you bumping down their encryption so that you can monitor their traffic or whatever...

How to prevent SSL Proxying for https site?

I'm serving my site through nginx. To secure it, I have added an SSL certificate and enabled HTTPS.
Now when I request data from the site through a browser while an SSL proxying tool is running, the whole request body and response body show up in the tool, so there is some loophole in my configuration. And if it's not a loophole, I want the site to behave like the giant companies' sites (Facebook, Apple, etc.), where these SSL proxy tools cannot parse the request and response.
If the client doesn't explicitly identify itself as a proxy (e.g. via X-Forwarded-* headers), it is very hard for a server to know whether a connection is proxied. Of course, there are sophisticated methods out there to find these connections, like blacklists of common proxy sites, traffic-analysis algorithms, etc., but you will need massive amounts of data (which the giant companies have) or a specialized traffic service like Cloudflare.
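To illustrate the first point: the only cheap signal is a proxy that announces itself through such headers. A toy check, sketched with the Python standard library (the header list is the common convention, not exhaustive):

# Toy example: flag requests whose headers admit they passed through a proxy.
# An SSL-bumping tool that re-encrypts traffic without adding headers is
# indistinguishable from a normal client, which is the point made above.
from http.server import BaseHTTPRequestHandler, HTTPServer

PROXY_HEADERS = {"x-forwarded-for", "via", "forwarded"}  # common, not exhaustive

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        seen = PROXY_HEADERS.intersection(k.lower() for k in self.headers.keys())
        body = ("proxied via: " + ", ".join(sorted(seen)) if seen
                else "no proxy headers seen").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), Handler).serve_forever()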

openshift ssl edge termination risk

I have been reading the OpenShift documentation for secured (SSL) routes.
Since I use a free plan, I can only have an "Edge Termination" route, meaning the SSL connection is terminated when external requests reach the router, with contents being transmitted from the router to the internal service via plain HTTP.
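For reference, an edge-terminated route looks roughly like this (a sketch using the OpenShift route fields; all names are placeholders):

# Sketch of an edge-terminated route
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
  tls:
    termination: edge    # TLS ends at the router; the hop to the pod is plain HTTP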
Is this secure? I mean, part of the information transmission is done via HTTP in the end.
The connection between where the secure connection is terminated and your application which accepts the proxied plain HTTP request is all internal to the OpenShift cluster. It doesn't travel through any public network in the clear. Further, the way the software defined networking in OpenShift works, it is not possible for any other normal user to see that traffic, nor can applications running in other projects see the traffic.
The only people who might be able to see the traffic are administrators of the OpenShift cluster, but those same people could access your application container as well. Any administrator of the system could access your application container even if you were using a passthrough secure connection terminated in your application. So it is the same situation as most managed hosting, where you rely on the administrators of the service to do the right thing.

Twisted IMAP proxy that collects mails

I was asked to write an IMAP proxy that would act as a 'real' IMAP server, except that it would forward all requests from clients to the backend IMAP server. In this setup, the client connects to the proxy directly and doesn't necessarily know about the backend. The idea is to have the proxy monitor all the mail the client fetches.
I have been looking into Twisted for accomplishing this task, because Twisted has a proxy module and it also has implementations of IMAP4 for client and server.
I would like to know if there are any difficulties with secure connections that one should be aware of. The program must monitor all traffic, thus it must maintain two secure connections with two different certificates. Is this feasible if the proxy has a certificate that the client trusts? Are there any pitfalls?
Also, is it possible to use the proxy module for this? I've seen a simple IMAP proxy written with this module, but the docs say it's for HTTP proxying.
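To make the question concrete, the skeleton I have in mind is a plain byte-level relay that terminates TLS on both legs, so the proxy sees all traffic in the clear (a sketch only, not the full IMAP4 machinery; the backend host, ports and certificate paths are placeholders):

from twisted.internet import reactor, protocol, ssl

BACKEND_HOST, BACKEND_PORT = "imap.backend.example", 993   # placeholder backend

class BackendClient(protocol.Protocol):
    """The proxy's own TLS connection to the real IMAP server."""
    def connectionMade(self):
        peer = self.factory.client_side
        peer.backend = self
        for data in peer.pending:          # flush anything buffered so far
            self.transport.write(data)
        peer.pending = []

    def dataReceived(self, data):
        print("S->C", data)                # monitoring hook: server responses
        self.factory.client_side.transport.write(data)

    def connectionLost(self, reason):
        self.factory.client_side.transport.loseConnection()

class ClientSide(protocol.Protocol):
    """Connection from the mail client; presents the proxy's own certificate."""
    def connectionMade(self):
        self.backend, self.pending = None, []
        f = protocol.ClientFactory()
        f.protocol = BackendClient
        f.client_side = self
        reactor.connectSSL(BACKEND_HOST, BACKEND_PORT, f, ssl.ClientContextFactory())

    def dataReceived(self, data):
        print("C->S", data)                # monitoring hook: client commands
        if self.backend is not None:
            self.backend.transport.write(data)
        else:
            self.pending.append(data)      # backend TLS handshake not done yet

    def connectionLost(self, reason):
        if self.backend is not None:
            self.backend.transport.loseConnection()

server = protocol.ServerFactory()
server.protocol = ClientSide
# Key/cert pair the clients must trust; distinct from the backend's certificate.
reactor.listenSSL(1993, server, ssl.DefaultOpenSSLContextFactory("proxy.key", "proxy.crt"))
reactor.run()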

WebSockets and HTTPS load balancers

I cannot find authoritative information about how WSS interacts with HTTPS proxies and load balancers.
I have a load balancer that handles the SSL (SSL offloading) and two web servers that contain my web applications and handle the requests in plain HTTP. The customers therefore issue HTTPS requests, but my web servers get HTTP requests, since the load balancer takes care of the SSL certificate handling.
I am now developing an application that will expose WebSockets, and SSL is required. But I have no clear idea of what will happen when the load balancer gets a secure HTTPS handshake for WSS.
Will it just relay the request as a normal handshake to the web server?
WebSockets use an "Upgrade: WebSocket" HTTP header that is only valid for the first hop (and there is also "Connection: Upgrade"). Will this be a problem?
Cheers.
Load balancers can normally deal with WebSockets, and SSL offloading shouldn't be an issue either. BUT you have to configure the LB to handle HTTP, not only to balance the traffic based on Layer 3 information; you also have to ensure that the LB takes care of the session state.
I don't know what LB you are using, but e.g. with F5 LBs you just have to assign an HTTP profile to load-balance WebSocket-based apps.
If you additionally want to do SSL offloading, just assign a client SSL profile to your virtual server.
http://support.f5.com/kb/en-us/solutions/public/14000/700/sol14754.html
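The same hop-by-hop issue exists with any HTTP-aware balancer. As an illustration with nginx doing the SSL offloading instead of F5 (host names and paths are placeholders), the Upgrade/Connection headers have to be re-added explicitly on the backend leg, because as hop-by-hop headers they are not forwarded by default:

upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/site.crt;
    ssl_certificate_key /etc/nginx/site.key;

    location /ws/ {
        proxy_pass http://app_servers;
        proxy_http_version 1.1;
        # Upgrade/Connection are hop-by-hop, so re-add them for the backend hop
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}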
I would have thought SSL-terminating LBs handle WebSockets as well, but once I tried, I realized I was mistaken. So the answer for F5 LBs, as of January 2013, is: it won't work. The gist of the answer I was given over at Server Fault:
As of December of 2012, BIG-IP doesn't support SSL offload of WebSocket traffic.

Do web servers need to verify browser client certificates?

I'm implementing an SSL layer for a web server project. I'm using PolarSSL, though I think this question is a general SSL question.
When I get a connection to my server from a client, I configure the SSL protocol like this:
ssl_set_endpoint( &mSsl, SSL_IS_SERVER );   /* act as the server side of the handshake */
ssl_set_authmode( &mSsl, SSL_VERIFY_NONE ); /* don't request a certificate from the client */
I.e. I'm not verifying the connection from the client. Do I need to do this?
Most browsers don't have client-side certificates, though some do (I think). Is there any need or advantage for the server to verify the client? This is for a service where I would happily serve the data to a client that had no client-side certificate at all.
Client-side authentication in SSL/TLS is used when it's required for the server to know its client. For example, it's widely used in banking, for access to custom corporate servers, etc.
By contrast, the common web server is intended to serve a wide audience and doesn't care who's coming in. So client-side authentication is not used unless you know that you need it.
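If you ever do need it, the switch on the server side would look roughly like this in PolarSSL 1.x (the ssl_set_ca_chain arguments are sketched from the 1.x API, and the cacert variable is assumed to be an x509_cert you have loaded yourself):

/* Sketch: require and verify a client certificate (PolarSSL 1.x).
   cacert must contain the CA(s) that issued the acceptable client certs. */
ssl_set_endpoint( &mSsl, SSL_IS_SERVER );
ssl_set_authmode( &mSsl, SSL_VERIFY_REQUIRED ); /* handshake fails without a valid client cert */
ssl_set_ca_chain( &mSsl, &cacert, NULL, NULL ); /* trust anchors used to verify the client */

With SSL_VERIFY_OPTIONAL instead, the handshake continues even when the client presents nothing, which fits a "serve everyone, but know the authenticated ones" middle ground.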