Improving NGINX Throughput on a Single SSL Thread

We are configuring a file delivery server using nginx. The server will be serving large files over HTTPS.
We have run into an issue where we can only achieve around 25 MB/s on a single HTTPS thread, while a single non-HTTPS download thread (http://) reaches full line speed (1 Gb/s), around 120 MB/s.
The CPU is nowhere near its limit while encrypting the transfers; we have plenty of processing power to spare.
We are using aio threads and directio for file delivery, with large output buffers.
Here is an example of our config:
server {
    sendfile off;
    directio 512;
    aio threads;
    output_buffers 1 2m;

    server_name downloads.oursite.com;
    listen 1.1.1.1:443 ssl;

    ssl_certificate /volume1/Backups/nginxserver/ourdownloads.cer;
    ssl_certificate_key /volume1/Backups/nginxserver/ourdownloads.key;
    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 10s;

    location = / {
        rewrite ^ https://oursite.com/downloads.html permanent;
    }

    error_page 404 /404.html;
    location = /404.html {
        root /volume1/Backups/nginxserver/pages/;
        internal;
    }

    location / {
        root /volume1/downloads.oursite.com;
        limit_conn_status 429;
        limit_conn alpha 50;
    }
}
Does anybody know how we can achieve faster transfer speeds for a single thread over an SSL connection? What is causing this? Thanks in advance for any tips, suggestions, and advice.

It seems our CPU is to blame: it has no built-in AES acceleration (no AES-NI instruction set), so all TLS encryption runs in plain software.
admin#RackStation:/$ openssl speed -evp aes-128-cbc
Doing aes-128-cbc for 3s on 16 size blocks: 5462473 aes-128-cbc's in 2.97s
Doing aes-128-cbc for 3s on 64 size blocks: 1516211 aes-128-cbc's in 2.97s
Doing aes-128-cbc for 3s on 256 size blocks: 392944 aes-128-cbc's in 2.97s
Doing aes-128-cbc for 3s on 1024 size blocks: 98875 aes-128-cbc's in 2.98s
Doing aes-128-cbc for 3s on 8192 size blocks: 12479 aes-128-cbc's in 2.97s
OpenSSL 1.0.2o-fips 27 Mar 2018
built on: reproducible build, date unspecified
options:bn(64,64) rc4(16x,int) des(idx,cisc,16,int) aes(partial) blowfish(idx)
compiler: information not available
The 'numbers' are in 1000s of bytes per second processed.
type            16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc    29427.46k    32672.56k    33869.92k    33975.84k    34420.19k
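The ~34 MB/s ceiling on large blocks lines up with the ~25 MB/s we see per HTTPS stream once TLS record and TCP overhead are accounted for. Two checks worth running, sketched here under the assumption of a Linux host and an nginx built against a modern OpenSSL: confirm the CPU really lacks AES-NI, and test ChaCha20-Poly1305, a cipher designed to be fast in pure software.

# Empty output means the CPU exposes no AES-NI flag, matching the aes(partial) line above:
grep -m1 -o aes /proc/cpuinfo

# Benchmark the AEAD cipher a TLS connection would actually use, not just CBC:
openssl speed -evp aes-128-gcm

# Hypothetical nginx cipher preference for ChaCha20-Poly1305; requires nginx built
# against OpenSSL 1.1.0+ (the stock 1.0.2o shown above has no ChaCha20 support):
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256';
ssl_prefer_server_ciphers on;

On CPUs without AES-NI, ChaCha20-Poly1305 is typically several times faster than AES-GCM, which may raise the per-connection ceiling considerably.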

Related

Nginx HTTPS has very high connect time and is much slower (32 times) than Nginx HTTP and 12 times slower than Apache HTTPS

I have an Angular website with static assets of around 1.5 MB (about 400 KB gzipped). I use nginx as my web server and reverse proxy to the API server. When I benchmark nginx with the Apache benchmark tool (ab), I see a huge drop in performance on the HTTPS site compared to HTTP (HTTPS is about 10 times slower), yet CPU and memory utilization are not high at all (CPU around 30%, memory only 1 MB!).
I have been searching for hours and tried every enhancement I could find, but none worked. As far as I have read, HTTPS should not be this much slower on modern web servers (HTTP is around 1500 req/sec while HTTPS is 46 req/sec for nginx). Most of the loss comes from nginx's very high HTTPS connect time, but I have no clue how to solve it.
Can someone advise how to improve this?
(Also, to my surprise, Apache performs much better in both cases, but doesn't respond if I set concurrent connections to more than 200. This is not nginx vs. Apache; I am just stating my situation.)
Important note:
I am not comparing the two web servers; that is not the point here. But they generally have comparable performance, so if HTTPS in nginx is 10 times slower than in Apache, I feel something is wrong in my nginx configuration and I want to fix it.
All tests are on my Windows machine (i7, 16 GB RAM).
Nginx http only:
C:\Apache24\bin>ab -n 5000 -c 200 http://localhost:8100/abc/index.html?param=abc
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Server Software: nginx/1.15.4
Server Hostname: localhost
Server Port: 8100
Document Path: /abc/index.html?param=abc
Document Length: 1099 bytes
Concurrency Level: 200
Time taken for tests: 3.246 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 6665000 bytes
HTML transferred: 5495000 bytes
Requests per second: 1540.32 [#/sec] (mean)
Time per request: 129.843 [ms] (mean)
Time per request: 0.649 [ms] (mean, across all concurrent requests)
Transfer rate: 2005.12 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.3 0 16
Processing: 31 87 12.8 94 124
Waiting: 0 87 13.7 94 124
Total: 31 87 12.8 94 124
Percentage of the requests served within a certain time (ms)
50% 94
66% 94
75% 94
80% 94
90% 99
95% 109
98% 109
99% 113
100% 124 (longest request)
Nginx https (with http2 enabled)
C:\Apache24\bin>abs -n 5000 -c 200 https://localhost:8200/abc/index.html?param=abc
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Server Software: nginx/1.15.4
Server Hostname: localhost
Server Port: 8200
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
TLS Server Name: localhost
Document Path: /abc/index.html?param=abc
Document Length: 1099 bytes
Concurrency Level: 200
Time taken for tests: 108.985 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 6780000 bytes
HTML transferred: 5495000 bytes
Requests per second: 45.88 [#/sec] (mean)
Time per request: 4359.386 [ms] (mean)
Time per request: 21.797 [ms] (mean, across all concurrent requests)
Transfer rate: 60.75 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 16 4201 506.8 4251 4755
Processing: 0 32 12.6 31 88
Waiting: 0 32 12.6 31 88
Total: 62 4232 506.9 4283 4800
Percentage of the requests served within a certain time (ms)
50% 4283
66% 4342
75% 4413
80% 4439
90% 4484
95% 4547
98% 4694
99% 4727
100% 4800 (longest request)
Compared to Apache HTTP (here the CPU is around 90 to 100% utilized):
C:\Apache24\bin>ab -n 5000 -c 200 http://localhost:6200/abc/index.html?param=abc
Server Software: Apache/2.4.33
Server Hostname: localhost
Server Port: 6200
Document Path: /abc/index.html?param=abc
Document Length: 1099 bytes
Concurrency Level: 200
Time taken for tests: 1.781 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 6810000 bytes
HTML transferred: 5495000 bytes
Requests per second: 2806.99 [#/sec] (mean)
Time per request: 71.251 [ms] (mean)
Time per request: 0.356 [ms] (mean, across all concurrent requests)
Transfer rate: 3733.51 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.6 0 16
Processing: 16 69 16.0 63 125
Waiting: 0 57 16.0 63 125
Total: 16 69 16.0 63 125
Percentage of the requests served within a certain time (ms)
50% 63
66% 78
75% 78
80% 78
90% 94
95% 94
98% 94
99% 109
100% 125 (longest request)
And Apache HTTPS is as follows (HTTP/1.1); note that HTTP/1.1 in nginx didn't improve its performance:
C:\Apache24\bin>abs -n 5000 -c 200 https://localhost:7200/abc/index.html?param=abc
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Server Software: Apache/2.4.33
Server Hostname: localhost
Server Port: 7200
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
TLS Server Name: localhost
Document Path: /abc/index.html?param=abc
Document Length: 1099 bytes
Concurrency Level: 200
Time taken for tests: 8.747 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 6810000 bytes
HTML transferred: 5495000 bytes
Requests per second: 571.60 [#/sec] (mean)
Time per request: 349.894 [ms] (mean)
Time per request: 1.749 [ms] (mean, across all concurrent requests)
Transfer rate: 760.27 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 198 42.7 188 391
Processing: 62 145 39.1 140 385
Waiting: 0 76 28.3 78 250
Total: 62 343 63.0 331 615
Percentage of the requests served within a certain time (ms)
50% 331
66% 369
75% 380
80% 389
90% 422
95% 465
98% 500
99% 536
100% 615 (longest request)
My nginx configuration:
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8100;
        server_name localhost;
        location / {
            root html;
            index index.html index.htm;
        }
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    server {
        listen 8200 ssl http2;
        server_name localhost;
        ssl_certificate C:/nginx-1.13.12/conf/server.crt;
        ssl_certificate_key C:/nginx-1.13.12/conf/server.key;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        gzip on;
        gzip_comp_level 1;
        gzip_vary on;
        gzip_types
            text/css
            text/javascript
            text/xml
            text/plain
            text/x-component
            application/javascript
            application/json
            application/xml
            application/rss+xml
            font/truetype
            font/opentype
            application/vnd.ms-fontobject
            image/svg+xml;
        gzip_static on;

        location /ipo_reits/ {
            root html;
            index index.html index.htm;
            ## here we redirect to the homepage in case of nginx 404
            try_files $uri $uri/ /ipo_reits/index.html;
            # error_page 404 =301 /;
        }
        location /api/ {
            proxy_pass https://localhost:7001/;
        }
    }
}
I hope this will help someone else. It turned out to be an nginx-on-Windows issue: I wrongly assumed that the performance of nginx on Windows and Linux would be similar, but clearly it is not.
I have tried the benchmark again with nginx on Linux on the same machine and got excellent performance, as shown below:
ab -n 5000 -c 200 https://localhost:8200/abc/index?param=abc
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Finished 5000 requests
Server Software: nginx/1.10.3
Server Hostname: localhost
Server Port: 8200
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Document Path: /abc/index?param=abc
Document Length: 1099 bytes
Concurrency Level: 200
Time taken for tests: 4.179 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 6825000 bytes
HTML transferred: 5495000 bytes
Requests per second: 1196.37 [#/sec] (mean)
Time per request: 167.173 [ms] (mean)
Time per request: 0.836 [ms] (mean, across all concurrent requests)
Transfer rate: 1594.77 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 15 141 185.3 106 1322
Processing: 1 22 13.1 20 82
Waiting: 1 14 9.5 13 81
Total: 24 163 185.7 128 1351
Percentage of the requests served within a certain time (ms)
50% 128
66% 142
75% 148
80% 155
90% 208
95% 260
98% 1100
99% 1164
100% 1351 (longest request)
Also, with sustained higher load and concurrency, performance stayed the same:
ab -n 25000 -c 1000 https://localhost:8200/abc/index?param=abc
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Benchmarking localhost (be patient)
Completed 2500 requests
....
Completed 25000 requests
Finished 25000 requests
Server Software: nginx/1.10.3
Server Hostname: localhost
Server Port: 8200
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Document Path: /abc/index?param=abc
Document Length: 1099 bytes
Concurrency Level: 1000
Time taken for tests: 20.149 seconds
Complete requests: 25000
Failed requests: 0
Total transferred: 34125000 bytes
HTML transferred: 27475000 bytes
Requests per second: 1240.76 [#/sec] (mean)
Time per request: 805.960 [ms] (mean)
Time per request: 0.806 [ms] (mean, across all concurrent requests)
Transfer rate: 1653.94 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 7 687 711.8 492 7694
Processing: 2 89 50.1 81 516
Waiting: 0 57 48.9 41 509
Total: 15 776 723.4 600 7756
Percentage of the requests served within a certain time (ms)
50% 600
66% 812
75% 1095
80% 1186
90% 1397
95% 1631
98% 3183
99% 3442
100% 7756 (longest request)
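Since the regression shows up almost entirely in connect time (the TLS handshake) rather than processing, one way to isolate the handshake on either platform is to compare full handshakes against resumed sessions. A small sketch, assuming the openssl CLI is available and using the localhost:8200 listener from the config above:

# Perform full TLS handshakes for 10 seconds:
openssl s_time -connect localhost:8200 -new -time 10

# Same test, but resuming a single session (the path that ssl_session_cache enables):
openssl s_time -connect localhost:8200 -reuse -time 10

A large gap between the two connection counts confirms raw handshake cost as the bottleneck, which on the Windows build appears to be pathologically high.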
Avoid Old Cipher Suites
HTTP/2 has a huge blacklist of old and insecure ciphers, so we must avoid them. A cipher suite is a set of cryptographic algorithms that describes how transferred data should be encrypted.
We will use a popular cipher set whose security has been endorsed by Internet giants like CloudFlare. It does not allow the use of MD5 (known to be insecure since 1996, yet still widespread to this day).
Open the following configuration file:
sudo nano /etc/nginx/nginx.conf
Add this line after ssl_prefer_server_ciphers on;.
/etc/nginx/nginx.conf
ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
Save the file, and exit the text editor.
Once again, check the configuration for syntax errors:
sudo nginx -t
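If the syntax check passes, reload nginx so the new cipher list takes effect (assuming a systemd-based install; on older systems use sudo service nginx reload):
sudo systemctl reload nginx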

How does one specify a particular cipher suite for a nginx docker instance?

I am running a newly built Discourse Docker image on Google Compute Engine. I converted it to HTTPS using Let's Encrypt, following the walkthrough, and I get an A+ rating from SSL Labs. However, the scripting agent I'm using doesn't support either of the two enabled TLS 1.0 cipher suites [TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA], so I'd like to add TLS-DHE-RSA-WITH-AES-256-CBC-SHA, which is supported by the open source rebol3 fork ren-c.
I've modified my web.ssl.template.yml file from
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:\
ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:\
ECDHE-RSA-AES256-SHA;
to
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:\
ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:\
ECDHE-RSA-AES256-SHA:TLS-DHE-RSA-WITH-AES-256-CBC-SHA;
and rebuilt the app using
sudo ./launcher rebuild app
but this doesn't alter the available cipher suites.
I'm now wondering if I have to alter nginx.conf directly, wherever that is, instead of asking the Discourse build script to do it...
Changing the line in /var/discourse/templates/web.ssl.template.yml
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:\
ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:\
ECDHE-RSA-AES256-SHA;
to
ssl_ciphers 'HIGH:!aNULL:!MD5';
changes the supported TLS 1.0 suites to
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)   ECDH secp384r1 (eq. 7680 bits RSA)   FS   256
TLS_RSA_WITH_AES_256_CBC_SHA (0x35)                                                    256
TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (0x84)                                               256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)   ECDH secp384r1 (eq. 7680 bits RSA)   FS   128
TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)                                                    128
TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (0x41)                                               128
and still gives an A+ rating from ssllabs.
1. mkdir -p containers/templates
2. cp templates/web.ssl.template.yml containers/templates
3. fuss with the file
4. add containers/templates/web.ssl.template.yml to your app.yml file in the templates section
5. profit
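A likely explanation for why the original edit changed nothing, offered here as an editor's assumption rather than something confirmed in the thread: nginx hands ssl_ciphers straight to OpenSSL, which only understands its own cipher names. TLS-DHE-RSA-WITH-AES-256-CBC-SHA is the dashed RFC-style name; OpenSSL calls that suite DHE-RSA-AES256-SHA, and it silently skips list tokens it doesn't recognize. You can preview what a cipher expression expands to before rebuilding the container:

# Show which suites OpenSSL resolves for a given expression:
openssl ciphers -v 'HIGH:!aNULL:!MD5' | grep DHE-RSA-AES256-SHA

# Hypothetical template line using the OpenSSL name instead of the RFC-style one:
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:\
ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:\
ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA;

Note that depending on the nginx version, DHE suites may also require an ssl_dhparam file before they are offered at all.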

How can I increase timeouts of AWS worker tier instances?

I am trying to run background processes on an Elastic Beanstalk single worker instance within a Docker container, and I have not been able to execute a request/job for longer than 60 seconds without getting a 504 timeout.
Looking at the log files provided by AWS, the issue begins with the following error:
[error] 2567#0: *37 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /queue/work HTTP/1.1", upstream: "http://172.17.0.3:80/queue/", host: "localhost"
Does anyone know if it is possible to increase the limit from 60 seconds to a longer period? I would like to generate some reports which take 3 to 4 minutes to process.
I have increased the NGINX timeout settings within .ebextensions/nginx-timeout.config without any results.
files:
  "/etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_connect_timeout 600;
      proxy_send_timeout 600;
      proxy_read_timeout 600;
      send_timeout 600;
commands:
  "00nginx-create-proxy-timeout":
    command: "if [[ ! -h /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ]] ; then ln -s /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ; fi"
I have also increased the PHP max_execution_time within a custom php.ini
max_execution_time = 600
Any help will be greatly appreciated.
Maybe check the Load Balancer's Idle timeout setting. The default is 60 seconds. https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html?icmpid=docs_elb_console
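If the environment does sit behind a classic load balancer, its idle timeout can also be raised from the AWS CLI. A sketch, with my-worker-elb standing in for your load balancer's actual name:

# Raise the classic ELB idle timeout from the default 60s to 600s:
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-worker-elb \
    --load-balancer-attributes "ConnectionSettings={IdleTimeout=600}"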

nginx and uwsgi: very large file uploads (>3 GB)

Maybe someone knows what to do. I'm trying to upload files greater than 3 GB. There are no problems if I upload files up to 2 GB with the following configs:
Nginx:
client_max_body_size 5g;
client_body_in_file_only clean;
client_body_buffer_size 256K;
proxy_read_timeout 1200;
keepalive_timeout 30;
uwsgi_read_timeout 30m;
UWSGI options:
harakiri 60
harakiri 1800
socket-timeout 1800
chunked-input-timeout 1800
http-timeout 1800
When I upload a big (almost 4 GB) file, it uploads ~2-2.2 GB and stops with this error:
[uwsgi-body-read] Timeout reading 4096 bytes. Content-Length: 3763798089 consumed: 2147479552 left: 1616318537
Which params should I use?
What ended up solving my issue was setting:
uwsgi.ini
http-timeout = 1200
socket-timeout = 1200
nginx_site.conf
proxy_read_timeout 1200;
proxy_send_timeout 1200;
client_header_timeout 1200;
client_body_timeout 1200;
uwsgi_read_timeout 20m;
After stumbling upon a similar issue with large files (>1 GB), I collected further info from a GitHub issue, a Stack Overflow thread, and several other sources. What ended up happening was that python/uwsgi took too long to process the large file, so nginx stopped listening to uwsgi, leading to a 504 error. Increasing the timeouts for HTTP and socket communication resolved it.
I had similar problems with nginx and uWSGI, with the same limit at about 2-2.2 GB file size. nginx properly accepts the POST request, but when it forwards the request to uWSGI, uWSGI just stops processing the upload after about 18 seconds (zero CPU; lsof shows that the file in the uWSGI temp dir stops growing). Increasing any timeout values does not help.
What solved the issue for me was disabling request buffering in nginx (proxy_request_buffering off;) and setting up post-buffering in uWSGI with a 2 MB buffer size:
post-buffering = 2097152
post-buffering-bufsize = 2097152
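A minimal combined sketch of that second fix, assuming nginx talks to uWSGI over a proxied HTTP socket on 127.0.0.1:8080 (the upstream address is hypothetical):

# nginx_site.conf -- stream the request body to uWSGI instead of spooling it to disk first
location / {
    proxy_request_buffering off;
    proxy_pass http://127.0.0.1:8080;
}

# uwsgi.ini -- let uWSGI buffer the incoming body itself, 2 MB at a time
post-buffering = 2097152
post-buffering-bufsize = 2097152

With nginx no longer buffering the whole multi-gigabyte body, the ~2 GB cutoff (the "consumed: 2147479552" in the error sits just under the 2^31-byte mark) no longer applies.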

Why does Icecast2 not serve the stream over HTTPS?

On a server running Ubuntu 14.04 LTS, I installed Icecast2 2.4.1 with SSL support. An HTTPS website also runs on this server.
I want to insert an HTML5 player on the page that takes the stream over SSL as well (otherwise there is a mixed-content error).
The site has a commercial SSL certificate; Icecast has a self-signed one.
Icecast config file:
Icecast config file:
<icecast>
    <location>****</location>
    <admin>admin#*************</admin>
    <limits>
        <clients>1000</clients>
        <sources>2</sources>
        <threadpool>5</threadpool>
        <queue-size>524288</queue-size>
        <source-timeout>10</source-timeout>
        <burst-on-connect>0</burst-on-connect>
        <burst-size>65535</burst-size>
    </limits>
    <authentication>
        <source-password>*****</source-password>
        <relay-password>*****</relay-password>
        <admin-user>*****</admin-user>
        <admin-password>*****</admin-password>
    </authentication>
    <hostname>************</hostname>
    <listen-socket>
        <port>8000</port>
        <ssl>1</ssl>
    </listen-socket>
    <mount>
        <mount-name>/stream</mount-name>
        <charset>utf-8</charset>
    </mount>
    <mount>
        <mount-name>/ogg</mount-name>
        <charset>utf-8</charset>
    </mount>
    <fileserve>1</fileserve>
    <paths>
        <basedir>/usr/share/icecast2</basedir>
        <logdir>/var/log/icecast2</logdir>
        <webroot>/usr/share/icecast2/web</webroot>
        <adminroot>/usr/share/icecast2/admin</adminroot>
        <alias source="/" dest="/status.xsl"/>
        <ssl-certificate>/etc/icecast2/icecast2.pem</ssl-certificate>
    </paths>
    <logging>
        <accesslog>access.log</accesslog>
        <errorlog>error.log</errorlog>
        <loglevel>4</loglevel>
    </logging>
    <security>
        <chroot>0</chroot>
        <changeowner>
            <user>icecast2</user>
            <group>icecast</group>
        </changeowner>
    </security>
</icecast>
The certificate for Icecast (/etc/icecast2/icecast2.pem) was generated with:
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout icecast2.pem -out icecast2.pem
I expect to get the output stream at https://domain.name:8000/stream and https://domain.name:8000/ogg for insertion into the player via the audio tag, but the response is silence. With plain http, everything works fine.
I cannot figure out what the mistake is...
Thanks in advance for your help!
I ran into this issue recently and didn't have a lot of time to solve it, nor did I see much documentation for doing so. I assume it's not the most widely used Icecast config, so I just proxied mine with nginx and it works fine.
Here's an example nginx vhost. Be sure to change the domain, check your paths, and think about the location you want the mount proxied to and how you want to handle ports.
Please note this will make your stream available on port 443 instead of 8000. Certain clients (such as facebookexternalhit/1.1) may try to hang onto the stream as though it's an https URL waiting to connect. This may not be the behavior you expect or desire.
Also, if you want no http available at all, be sure to change bind-address back to the local host, e.g.:
<bind-address>127.0.0.1</bind-address>
www.example.com.nginx.conf
server {
    listen 80;
    server_name www.example.com;
    location /listen {
        if ($ssl_protocol = "") {
            rewrite ^ https://$server_name$request_uri? permanent;
        }
    }
}

#### SSL
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl on;
    ssl_certificate_key /etc/sslmate/www.example.com.key;
    ssl_certificate /etc/sslmate/www.example.com.chained.crt;

    # Recommended security settings from https://wiki.mozilla.org/Security/Server_Side_TLS
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /usr/share/sslmate/dhparams/dh2048-group14.pem;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:5m;

    # Enable this if you want HSTS (recommended)
    add_header Strict-Transport-Security max-age=15768000;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The icecast2 package provided by Debian-based distributions doesn't include SSL support (so it has no https:// support), because that support depends on the OpenSSL libraries, which have licensing difficulties with the GNU GPL.
To know whether icecast2 was compiled with OpenSSL support, run this:
ldd /usr/bin/icecast2 | grep ssl
If it was compiled with it, a line like this one should be displayed:
libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (0x00007ff5248a4000)
If instead you see nothing, you have no support for it.
To get a correctly built version, you may want to obtain it from xiph.org directly:
https://wiki.xiph.org/Icecast_Server/Installing_latest_version_(official_Xiph_repositories)
The issue is related to the certificate file.
First of all, you need to have, for example:
<paths>
    <ssl-certificate>/usr/share/icecast2/icecast.pem</ssl-certificate>
</paths>
and
<listen-socket>
    <port>8443</port>
    <ssl>1</ssl>
</listen-socket>
in your configuration. But that is not everything you need!
If you get your certificate from, for example, Let's Encrypt or SSL For Free, you will have a certificate file and a private key file. But Icecast needs both files combined into one.
What you should do:
1. Open the private key and copy the content of the file.
2. Open the certificate file, paste the content of the private key you copied at the end of the file, and save it as icecast.pem.
Then use this file and you should be fine.
Thanks to the person who introduced it here:
Icecast 2 and SSL
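The same concatenation can be done in one shell command. A sketch assuming standard Let's Encrypt paths for a hypothetical example.com (adjust the domain and target path to your setup):

cat /etc/letsencrypt/live/example.com/fullchain.pem \
    /etc/letsencrypt/live/example.com/privkey.pem \
    > /usr/share/icecast2/icecast.pem

Remember to repeat this after each renewal, since the combined file is not updated automatically.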
In your icecast2.xml file: if <ssl> (spelled <tls> in newer releases) is set to 1, HTTPS will be enabled on that listen-socket. Icecast must have been compiled against OpenSSL to be able to do so.
<paths>
    <basedir>./</basedir>
    <logdir>./logs</logdir>
    <pidfile>./icecast.pid</pidfile>
    <webroot>./web</webroot>
    <adminroot>./admin</adminroot>
    <allow-ip>/path/to/ip_allowlist</allow-ip>
    <deny-ip>/path_to_ip_denylist</deny-ip>
    <tls-certificate>/path/to/certificate.pem</tls-certificate>
    <ssl-allowed-ciphers>ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS</ssl-allowed-ciphers>
    <alias source="/foo" dest="/bar"/>
</paths>

<listen-socket>
    <port>8000</port>
    <bind-address>127.0.0.1</bind-address>
</listen-socket>
<listen-socket>
    <port>8443</port>
    <tls>1</tls>
</listen-socket>
<listen-socket>
    <port>8004</port>
    <shoutcast-mount>/live.mp3</shoutcast-mount>
</listen-socket>