How do you enable SSL sessions for your HTTPS service?

Following on from the best answer here:
How much overhead does SSL impose?
Is there a way to optimise SSL beyond a simple apache SSL install?
From the best answer given on that page I infer that there is some way to set up persistent SSL sessions across multiple requests (so that there is less handshake overhead). Is that correct?
If so, what's the best resource to learn about configuring the server to work that way?

SSL session caching is one such optimization, which you can configure for Apache by looking at the discussion here; see the SSLSessionCache directive and related settings.
This will boost your performance for usage patterns that have the same client hitting the server multiple times within the session timeout period. However, when the pattern tends more toward one server hit per client for numerous clients, you won't see any speedups.
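For reference, a minimal Apache 2.4 (mod_ssl) sketch of the relevant directives might look like the following; the cache path and sizes are illustrative:

```apache
# Requires mod_ssl and mod_socache_shmcb to be enabled.
# Inter-process SSL session cache in shared memory (~512 KB here).
SSLSessionCache        "shmcb:/var/run/apache2/ssl_scache(512000)"
# How long a cached session stays resumable, in seconds.
SSLSessionCacheTimeout 300
```

You can check resumption from a client with `openssl s_client -connect host:443 -reconnect` and look for "Reused" in the output.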

Related

Which proxy mode to use if host company terminates TLS on reverse proxy

Friendly Disclaimer: I am new to working with Keycloak and IdP in general. So it's likely that I use incorrect terminology and/or am more confused than I think I am. Corrections are gratefully accepted.
My question is conceptual.
I have a TLS certificate that is terminated on my host machine by my host company. My reverse proxy (Traefik) is picking up that certificate.
Which of the following proxy modes should I use now to be able to deploy Keycloak to production: edge, reencrypt or passthrough? (see here for relevant documentation)
I can pretty much rule out passthrough, because as I wrote, the TLS certificate is terminated on the server. But I am unsure if I have to bring my own certificate and reencrypt or if it is considered safe to go along with edge?
I have done my best to keep this question short and general. However, I am happy to share configurations or further details if needed.
As far as I know, most organizations consider a request safe once the proxy has validated and terminated TLS. Terminating at the proxy also removes the re-encryption overhead (the impact depends on your load). Unless your organization is pursuing Zero Trust for its internal network, using edge should be perfectly acceptable.
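As an illustration of edge mode, a docker-compose sketch might look like this; the hostname is made up, and note that KC_PROXY applies to the Quarkus-based Keycloak releases (newer versions replace it with KC_PROXY_HEADERS):

```yaml
services:
  keycloak:
    image: quay.io/keycloak/keycloak
    command: start
    environment:
      KC_PROXY: edge                 # accept plain HTTP from the proxy, trust X-Forwarded-* headers
      KC_HOSTNAME: auth.example.com  # illustrative public hostname, terminated at the proxy
```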

Should I use nginx+uwsgi or apache+modwsgi?

Looking at the CKAN deployment documentation, there are several web server options:
Apache with the modwsgi Apache module proxied with Nginx for caching
Apache with the modwsgi Apache module
Apache with paster and reverse proxy
Nginx with paster and reverse proxy
Nginx with uwsgi
I'm wondering what the merits are of using Apache/modwsgi over Nginx/uwsgi, and how much value Nginx's proxying adds.
The core question, I guess, is if I wanted to avoid using two separate web servers in a single installation, what should I consider when choosing one or the other?
The CKAN Tech Team documents one particular config: apache/modwsgi + nginx reverse proxy. They encourage people to use it so that, when difficulties occur, they can be fixed as a community.
I'm not clued up enough to give technical arguments between uwsgi and modwsgi. I think there are some CKAN sites running on uwsgi, and since it is more modern there may be some technical advantages. However, the installs I've worked with have mostly been apache/modwsgi + nginx reverse proxy. That's probably more down to familiarity and the blessing of the CKAN tech team than anything else.
However, I believe nginx is better than apache2 for SSL/TLS termination. We found it far simpler to configure SSL/TLS there, which matters given the steady stream of new best practices to keep up with in the past few years. And, the last time I looked several years ago, there was an argument that its asynchronous design mitigates e.g. slowloris attacks. So I think having nginx on your front end makes a lot of sense.
You suggest having two HTTP servers is too much, but I think nginx is pretty low overhead and isn't usually a concern.
paster is a toy - no one uses it for production servers.
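To make the trade-off concrete, here is a minimal, hypothetical nginx front end terminating TLS and passing requests to a uwsgi backend (the hostname, paths and socket address are illustrative):

```nginx
server {
    listen 443 ssl;
    server_name ckan.example.com;

    ssl_certificate     /etc/ssl/certs/ckan.example.com.pem;
    ssl_certificate_key /etc/ssl/private/ckan.example.com.key;

    location / {
        include uwsgi_params;        # standard uwsgi CGI-style variables
        uwsgi_pass 127.0.0.1:8080;   # uwsgi started with --socket 127.0.0.1:8080
    }
}
```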

Nginx metrics, SSL negotiation time metrics

I am trying to find out if there is any module or other way to get metrics from NGINX, especially about SSL connections (negotiation time etc.).
I've searched the access log too, but with no luck.
I've also looked at doing this with tcpdump and packet analysis.
Is there any way to gather those metrics from the server side?
Not sure if this will answer your question but I hope that it can help...
Here are two web pages that I found that provide general information on Nginx metrics (including modules for metrics):
How to Monitor Nginx: The Essential Guide.
An In-Depth Guide to Nginx Metrics.
The first of these includes a section on "SSL Certificate expiration", but monitoring for certificate expiration dates and alerting on them seems of limited use for the SSL connection metrics part of your question.
On my searching for more SSL specific metric information for Nginx, I came across Module ngx_http_status_module. In the "SSL" subsection of the "Data" section of the page, I found mention of the following SSL statistics:
handshakes
The total number of successful SSL handshakes.
handshakes_failed
The total number of failed SSL handshakes.
session_reuses
The total number of session reuses during SSL handshake.
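Assuming that status endpoint is enabled and returns JSON with an "ssl" object shaped like the documented counters above, a small script could pull out reuse and failure rates; the payload here is a made-up sample for illustration:

```python
import json

# Sample payload shaped like the "ssl" object documented for
# ngx_http_status_module (the numbers are made up for illustration).
sample = '{"ssl": {"handshakes": 1000, "handshakes_failed": 25, "session_reuses": 600}}'

def ssl_stats(payload: str) -> dict:
    """Extract SSL handshake counters and derive failure/reuse rates."""
    ssl = json.loads(payload)["ssl"]
    attempts = ssl["handshakes"] + ssl["handshakes_failed"]
    return {
        "handshakes": ssl["handshakes"],
        # Share of handshake attempts that failed.
        "failure_rate": ssl["handshakes_failed"] / attempts if attempts else 0.0,
        # Share of successful handshakes that reused a cached session.
        "reuse_rate": ssl["session_reuses"] / ssl["handshakes"] if ssl["handshakes"] else 0.0,
    }

print(ssl_stats(sample))
```

In a real deployment you would fetch the payload from the status location with an HTTP client instead of the inline string.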
I also found a paper, Nginx SSL Performance, that looked interesting in regards to Nginx SSL metrics, albeit more as a baseline for what kind of performance one can expect. It does, however, point out using openssl speed to gather metrics. Of course, this presumes that Nginx was built with OpenSSL as the SSL library.
That leads me to the SSL library that your Nginx is built with. Whether it's OpenSSL or something else, it should have support for logging details. Here's a StackOverflow Q & A that's related to logging SSL details: nginx - log SSL handshake failures. While it doesn't sound like logs were the way you primarily wanted to get metrics from, it does seem like another resource even if you have to write some code to analyze them more conveniently.
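On the logging side, nginx also exposes per-connection SSL variables that can go into a custom access-log format; a sketch (the format name and log path are arbitrary):

```nginx
# $ssl_session_reused is "r" when the session was reused, "." otherwise.
log_format ssl_metrics '$remote_addr [$time_local] "$request" '
                       '$ssl_protocol $ssl_cipher reused=$ssl_session_reused';

access_log /var/log/nginx/ssl_metrics.log ssl_metrics;
```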
Along the lines of analysis, I found the web page Debugging SSL Problems. While it's written with Apache's web server in mind, it does look like it would have helpful suggestions for analyzing and debugging Nginx SSL operations too.
As for ways to get better answers, asking on the Server Fault Q & A site might help. You'll need to figure out how best to do that, however, as I recall cross-posting being discouraged. Perhaps this Q & A can just be migrated there, since that seems like a more appropriate site for it. I don't know though.
Incidentally, I've written Nginx modules before and this question got more interesting to me as I dug into answering it.

What are alternatives to secure a web-server other than firewall

I'm doing a network security course and trying to wrap my head around all the concepts. One of which is:
What technology other than firewall can be used to allow only a specific customers while block some other customers? Why is firewall not suitable?
During the course, I've been learning about all the security tools such as: firewall (static, dynamic, DPI), Proxy, VPN, Tunnel, all sorts of IDS (signature, anomaly, darknet/greynet and honeypot) then mod_security to secure apache but still puzzled by this question.
Any insights here will be greatly appreciated.
A firewall implies that you block based on the customer's IP address. This may work if the customer has their own range of addresses and all requests from that range are legitimate.
It gets complicated when they are with a large cloud provider, which provides a wide range of possible IPs, including IPs used by other people.
For an application, one good solution would be to use client-side certificates. In that case, during the TLS handshake (the process of putting a TLS (formerly SSL) tunnel in place), the server will request that the client provide a certificate it (the server) trusts. Failure to provide one will break the connection.
This way, you can distribute certificates to the clients you want to be able to reach your service, and others will be rejected. This solution is better because it uses technologies that were developed exactly to solve this problem. The drawback is that you have to maintain and distribute the certificates (and usually run a PKI).
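In nginx terms, a client-certificate requirement might be sketched like this (the hostname and paths are illustrative); Apache's mod_ssl offers the equivalent SSLVerifyClient directives:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate        /etc/ssl/certs/server.pem;
    ssl_certificate_key    /etc/ssl/private/server.key;

    # CA that issued the client certificates you distributed.
    ssl_client_certificate /etc/ssl/certs/client-ca.pem;
    ssl_verify_client      on;   # handshake fails without a trusted client cert
}
```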

Medium sized website: Transition to HTTPS, Apache and reverse proxy

I have a medium sized website called algebra.com. As of today, it is ranked 900th website in US in Quantcast ratings.
At the peak of its usage, during weekday evenings, it serves 120-150 requests for objects per second. Almost all objects, INCLUDING IMAGES, are dynamically generated.
It has 7.5 million page views per month.
It is served by Apache 2 on Ubuntu and is supplemented by a Perlbal reverse proxy, which helps reduce the number of Apache slots/child processes in use.
I spent an inordinate amount of time working on performance for HTTP and the result is a fairly well functioning website.
Now that the times call for transition to HTTPS (fully justified here, as I have logons and registered users), I want to make sure that I do not end up with a disaster.
I am afraid, however, that I may end up with a performance nightmare, as HTTPS sessions last longer and I am not sure whether a reverse proxy can help as much as it did with HTTP.
Secondly, I want to make sure that I will have enough CPU capacity to handle HTTPS traffic.
Again, this is not a small website with a few hits per second, we are talking 100+ hits per second.
Additionally, I run multiple sites on one server.
For example, can I have a reverse proxy, that supports several virtual domains on one IP (SNI), and translates HTTPS traffic into HTTP, so that I do not have to encrypt twice (once by apache for the proxy, and once by the proxy for the client browser)?
What is the "best practices approach" to have multiple websites, some large, served by a mix of HTTP and HTTPS?
Maybe I can continue running perlbal on port 80, and run nginx on port 443? Can nginx be configured as a reverse proxy for multiple HTTPS sites?
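Yes: nginx selects certificates per server_name via SNI and can forward decrypted traffic to plain-HTTP backends, so you only encrypt once. A minimal sketch for two sites (names, ports and paths are made up):

```nginx
server {
    listen 443 ssl;
    server_name algebra.example.com;
    ssl_certificate     /etc/ssl/certs/algebra.pem;
    ssl_certificate_key /etc/ssl/private/algebra.key;

    location / {
        proxy_pass http://127.0.0.1:8080;                 # plain HTTP to Perlbal/Apache
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;  # preserve the client IP for logs
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    listen 443 ssl;
    server_name othersite.example.com;
    ssl_certificate     /etc/ssl/certs/othersite.pem;
    ssl_certificate_key /etc/ssl/private/othersite.key;

    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}
```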
You really need to load test this, and no one can give a definitive answer other than that.
I would offer the following pieces of advice though:
First up, Stack Overflow is really for programming questions. This question probably belongs on the sister site www.serverfault.com.
HTTPS processing is, IMHO, not an issue for modern hardware unless you are encrypting large volumes of traffic (e.g. video streaming), especially with the proper caching and other performance tuning that I presume you've already done from what you say in your question. However, I've not dealt with a site at your traffic levels, so it could become an issue there.
There will be a small hit to clients as they negotiate the HTTPS session on initial connection. It is on the order of a few hundred milliseconds, only happens on the initial connection for each session, and is unlikely to be noticed by most people, but it is there.
There are several things you can do to optimise HTTPS, including choosing fast ciphers and implementing session resumption (there are two methods for this, and it can get complicated on load-balanced sites). SSL Labs runs an excellent HTTPS tester to check your setup, Mozilla has some great documentation and advice, or you could check out my own blog post on this.
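For nginx, the two resumption mechanisms mentioned map onto a couple of directives; a sketch with illustrative sizes:

```nginx
# Server-side session cache, shared across worker processes (~10 MB here).
ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 10m;

# TLS session tickets (stateless resumption). On a load-balanced farm
# all nodes must share the ticket key for resumption to work across servers.
ssl_session_tickets on;
```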
As to whether you terminate HTTPS at your end point (proxy/load balancer), that's very much up to you. Yes, there will be a performance hit if you re-encrypt to connect to your actual server. Most proxy servers also allow you to just pass the HTTPS traffic through to your main server so you only decrypt once, but then you lose the original IP address from your webserver logs, which can be useful. It also depends on whether you access your web server directly at all. For example, at my company we don't go through the load balancer for internal traffic, so we enable HTTPS on the web server as well and have the load balancer re-encrypt when connecting to it, so we can view the site over HTTPS either way.
Other things to be aware of:
You could see an SEO hit during migration. Make sure you redirect all traffic, tell Google Search Console your preferred site (http or https), update your sitemap and all links (or make them relative).
You need to be aware of insecure-content issues. All resources (e.g. CSS, JavaScript and images) need to be served over HTTPS, or browsers will show warnings and refuse to use those resources. HSTS can help with links on your own domain for browsers that support it, and CSP can also help (either to report on them or to automatically upgrade them, for browsers that support upgrade-insecure-requests).
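The redirect and header parts of that checklist can be sketched in nginx as follows (the domain and paths are illustrative; be cautious with long HSTS max-age values until you are confident in the setup):

```nginx
server {
    listen 80;
    server_name www.example.com;
    return 301 https://$host$request_uri;   # permanent redirect to HTTPS
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/certs/www.example.com.pem;
    ssl_certificate_key /etc/ssl/private/www.example.com.key;

    add_header Strict-Transport-Security "max-age=31536000" always;
    # Ask supporting browsers to upgrade http:// subresources automatically.
    add_header Content-Security-Policy "upgrade-insecure-requests" always;
}
```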
Moving to HTTPS-only does take a bit of effort, but it's a one-off, and after that it makes your site much easier to manage than trying to maintain two versions of the same site. The web is moving to HTTPS more and more, and if you have (or plan to have) logged-in areas then you have no choice, as you should absolutely not use HTTP for those. Google gives a slight ranking boost to HTTPS sites (though it's apparently quite small, so it shouldn't be your main reason to move), and has even talked about actively marking HTTP sites as insecure. Better to be ahead of the curve, IMHO, and make the move now.
Hope that's useful.