I've added ssl_early_data on; to my nginx.conf (inside http { }), and according to these commands,
echo -e "HEAD / HTTP/1.1\r\nHost: $host\r\nConnection: close\r\n\r\n" > request.txt
openssl s_client -connect example.tld:443 -tls1_3 -sess_out session.pem -ign_eof < request.txt
openssl s_client -connect example.tld:443 -tls1_3 -sess_in session.pem -early_data request.txt
it does work properly.
According to the nginx documentation (https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_early_data), it is recommended to set proxy_set_header Early-Data $ssl_early_data;.
My question is: Where do I set this? Right after ssl_early_data on;, still inside http { }?
You should pass the Early-Data header on to your application, so you need something like:
http {
    ...
    # Enabling 0-RTT
    ssl_early_data on;
    ...
    server {
        ...
        # Passing it to the upstream
        proxy_set_header Early-Data $ssl_early_data;
    }
}
Otherwise, you may leave your application vulnerable to replay attacks: https://blog.trailofbits.com/2019/03/25/what-application-developers-need-to-know-about-tls-early-data-0rtt/
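If you would rather stop replayable requests at the edge instead of in the application, RFC 8470's 425 (Too Early) status can be returned while the request is still in 0-RTT. A minimal sketch, assuming nginx 1.15.3+ (where $ssl_early_data exists); the /api/ location and the backend upstream name are illustrative only:

server {
    listen 443 ssl;
    ssl_early_data on;

    location /api/ {
        # $ssl_early_data is "1" only when the request arrived in
        # early data and the handshake is not yet complete; clients
        # are expected to retry after the full handshake.
        if ($ssl_early_data) {
            return 425;
        }
        proxy_set_header Early-Data $ssl_early_data;
        proxy_pass http://backend;  # illustrative upstream name
    }
}

Rejecting at the edge is coarser than letting the application decide per endpoint, but it requires no application changes.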
I want to send a POST request to an HTTPS server and get the response. Here is what I am doing in curl, and it works well.
curl --key ./client.key --cert ./client.crt https://test-as.sgx.trustedservices.intel.com:443/attestation/sgx/v2/report -H 'Content-Type: application/json' --data '{"key": "value"}'
This is the code snippet I tried in Go.
url := "https://test-as.sgx.trustedservices.intel.com:443/attestation/sgx/v2/report"
pair, e := tls.LoadX509KeyPair("client.crt", "client.key")
if e != nil {
    log.Fatal("LoadX509KeyPair:", e)
}
client := &http.Client{
    Transport: &http.Transport{
        TLSClientConfig: &tls.Config{
            InsecureSkipVerify: true,
            Certificates:       []tls.Certificate{pair},
        },
    },
}
resp, e := client.Post(url, "application/json", bytes.NewBufferString(payload))
The program hangs at the last line; the error message is
Post: dial tcp connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
I suspect the problem is in my connection-establishment code rather than on the server's side, since the server works perfectly with curl.
Firstly, never ever ever use InsecureSkipVerify: true no matter how convenient it may seem. Instead set something like:
&tls.Config{
    ServerName:   "test-as.sgx.trustedservices.intel.com",
    Certificates: []tls.Certificate{pair},
}
Second, initializing your own http.Transport - to pass your custom tls.Config - also zeroes out all the other default http.Transport settings that come with the default http.Client.
Some of those zero-value defaults may force behavior you might not expect.
See here on how to restore some of those original defaults.
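In that spirit, here is a minimal sketch (assuming Go 1.13+, where http.Transport.Clone is available) that keeps the default transport's settings and only swaps in the client certificate; the timeout value is an arbitrary illustration:

package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
	"time"
)

func main() {
	pair, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		log.Fatal("LoadX509KeyPair: ", err)
	}

	// Clone the default transport so its defaults (proxy handling,
	// connection pooling, TLS handshake timeout) are preserved.
	tr := http.DefaultTransport.(*http.Transport).Clone()
	tr.TLSClientConfig = &tls.Config{
		Certificates: []tls.Certificate{pair},
	}

	client := &http.Client{Transport: tr, Timeout: 60 * time.Second}
	resp, err := client.Post(
		"https://test-as.sgx.trustedservices.intel.com:443/attestation/sgx/v2/report",
		"application/json",
		strings.NewReader(`{"key": "value"}`),
	)
	if err != nil {
		log.Fatal("Post: ", err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // print the response body
}

Because the URL's host sets the SNI/verification name automatically, ServerName only needs to be set explicitly when it differs from the host you dial.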
There is a ton of information available about cURL and SSL, but not so much out there about writing a server. I have a small local server written in PHP that I would like to have TLS/SSL enabled. I am having issues with my server crashing on secure connections. I am only receiving the error "PHP Warning: stream_socket_accept(): Failed to enable crypto". I have an identical server running without TLS, and it is working fine. I suspect it is the certificates, or the connection to/reading of the certificates. However, I am not sure whether the error is in how I generated the certificates, how I joined them into a PEM, or something else. Also, for our domains, I've used *.domain.tld both in the code below and as the local name in the cert creation.
Furthermore, the certificates shown in the web browser show the 127.0.0.1 cert and not the localhost (or other domains) certificates, regardless of the domain requested. Is that because 127.0.0.1 is set as the local cert? About the certificates: this is my current code for creating the .pem file to use on the server:
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout apache.key -out apache.crt
cat apache.crt apache.key > joined.pem
A basic rendition of the server code is:
<?php
ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);

$flags = STREAM_SERVER_BIND | STREAM_SERVER_LISTEN;
$ctx = stream_context_create(['ssl' => [
    'local_cert' => "{path}/Websites/127.0.0.1/certs/joined.pem",
    'SNI_server_certs' => [
        "127.0.0.1" => "{path}/Websites/127.0.0.1/certs/joined.pem",
        "localhost" => "{path}/Websites/localhost/certs/joined.pem",
    ]
]]);
stream_context_set_option($ctx, 'ssl', 'ssl_method', 'STREAM_CRYPTO_METHOD_TLSv23_SERVER');
stream_context_set_option($ctx, 'ssl', 'allow_self_signed', true);
stream_context_set_option($ctx, 'ssl', 'verify_peer', false);
stream_context_set_option($ctx, 'ssl', 'ciphers', "HIGH");

$socket = stream_socket_server("tls://127.0.0.1:8443", $errno, $errstr, $flags, $ctx);
while ($client = stream_socket_accept($socket, -1, $clientIP)):
    $msg = fread($client, 8192);
    $resp = "HTTP/1.1 200 OK\r\nContent-type: text/html\r\n\r\n<h1>Hi, you are secured.</h1><br>{$msg}";
    fwrite($client, $resp);
    fclose($client);
endwhile;
One more thing: what is the proper cipher list to set to appease all of the major browsers out there? Chrome seems to play by its own rules.
Any ideas what I am missing here?
My issue was not setting SANs (subject alternative names) during certificate creation. The link https://serverfault.com/questions/880804/can-not-get-rid-of-neterr-cert-common-name-invalid-error-in-chrome-with-self corrected my issue.
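For reference, a one-shot way to generate a self-signed certificate with SANs (the -addext flag assumes OpenSSL 1.1.1+; the names are examples to adapt to your domains):

sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
    -keyout apache.key -out apache.crt \
    -subj "/CN=localhost" \
    -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
cat apache.crt apache.key > joined.pem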
On a server running Ubuntu 14.04 LTS, I installed Icecast2 2.4.1 with SSL support. An HTTPS website also runs on this server.
I want to embed an HTML5 player in the page that will also take the stream over SSL (otherwise there is a mixed-content error).
The site has a commercial SSL certificate; Icecast has a self-signed one.
Icecast config file:
<icecast>
    <location>****</location>
    <admin>admin#*************</admin>
    <limits>
        <clients>1000</clients>
        <sources>2</sources>
        <threadpool>5</threadpool>
        <queue-size>524288</queue-size>
        <source-timeout>10</source-timeout>
        <burst-on-connect>0</burst-on-connect>
        <burst-size>65535</burst-size>
    </limits>
    <authentication>
        <source-password>*****</source-password>
        <relay-password>*****</relay-password>
        <admin-user>*****</admin-user>
        <admin-password>*****</admin-password>
    </authentication>
    <hostname>************</hostname>
    <listen-socket>
        <port>8000</port>
        <ssl>1</ssl>
    </listen-socket>
    <mount>
        <mount-name>/stream</mount-name>
        <charset>utf-8</charset>
    </mount>
    <mount>
        <mount-name>/ogg</mount-name>
        <charset>utf-8</charset>
    </mount>
    <fileserve>1</fileserve>
    <paths>
        <basedir>/usr/share/icecast2</basedir>
        <logdir>/var/log/icecast2</logdir>
        <webroot>/usr/share/icecast2/web</webroot>
        <adminroot>/usr/share/icecast2/admin</adminroot>
        <alias source="/" dest="/status.xsl"/>
        <ssl-certificate>/etc/icecast2/icecast2.pem</ssl-certificate>
    </paths>
    <logging>
        <accesslog>access.log</accesslog>
        <errorlog>error.log</errorlog>
        <loglevel>4</loglevel>
    </logging>
    <security>
        <chroot>0</chroot>
        <changeowner>
            <user>icecast2</user>
            <group>icecast</group>
        </changeowner>
    </security>
</icecast>
Certificate for Icecast (/etc/icecast2/icecast2.pem) generated by:
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout icecast2.pem -out icecast2.pem
I expect to get the output stream at https://domain.name:8000/stream and https://domain.name:8000/ogg for insertion into the player via the audio tag, but in response there is only silence. With plain http, everything works fine.
I cannot figure out what I am doing wrong...
Thanks in advance for your help!
I ran into this issue recently and didn't have a lot of time to solve it, nor did I see much documentation for doing so. I assume it's not the most widely used Icecast config, so I just proxied mine with nginx and it works fine.
Here's an example nginx vhost. Be sure to change domain, check your paths and think about the location you want the mount proxied to and how you want to handle ports.
Please note this will make your stream available on port 443 instead of 8000. Certain clients (such as facebookexternalhit/1.1) may try to hang onto the stream as though it's an https URL waiting to connect. This may not be the behavior you expect or desire.
Also, if you want no plain-http access at all, be sure to change bind-address back to the local host, e.g.:
<bind-address>127.0.0.1</bind-address>
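In context, the listen-socket block would look something like this (port 8000 as in the question's config):

<listen-socket>
    <port>8000</port>
    <bind-address>127.0.0.1</bind-address>
</listen-socket>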
www.example.com.nginx.conf
server {
    listen 80;
    server_name www.example.com;

    location /listen {
        if ($ssl_protocol = "") {
            rewrite ^ https://$server_name$request_uri? permanent;
        }
    }
}

#### SSL
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate_key /etc/sslmate/www.example.com.key;
    ssl_certificate /etc/sslmate/www.example.com.chained.crt;

    # Recommended security settings from https://wiki.mozilla.org/Security/Server_Side_TLS
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /usr/share/sslmate/dhparams/dh2048-group14.pem;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:5m;

    # Enable this if you want HSTS (recommended)
    add_header Strict-Transport-Security max-age=15768000;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The icecast2 package provided for Debian-based distributions doesn't include SSL support (so no https:// support), since that support comes from OpenSSL libraries that have licensing difficulties with the GNU GPL.
To know whether icecast2 was compiled with OpenSSL support, run this:
ldd /usr/bin/icecast2 | grep ssl
If it was compiled with it, a line like this one should be displayed:
libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (0x00007ff5248a4000)
If instead you see nothing, you have no support for it.
To get the correct version you may want to obtain it from xiph.org directly:
https://wiki.xiph.org/Icecast_Server/Installing_latest_version_(official_Xiph_repositories)
The issue is related to the certificate file.
First of all, you need to have something like
<paths>
    <ssl-certificate>/usr/share/icecast2/icecast.pem</ssl-certificate>
</paths>
and
<listen-socket>
    <port>8443</port>
    <ssl>1</ssl>
</listen-socket>
in your configuration. But that is not everything you need!
If you get your certificate from, for example, Let's Encrypt or sslforfree, you will have a certificate file and a private key file.
But Icecast needs both files combined into one.
What you should do:
1- Open the private key file and copy its content.
2- Open the certificate file, paste the content of the private key you copied at the end of it, and save the result as icecast.pem.
Then use this file and you should be fine.
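For example, from a shell (assuming the usual certbot file layout for Let's Encrypt; adjust both paths to your setup):

cat /etc/letsencrypt/live/example.com/fullchain.pem \
    /etc/letsencrypt/live/example.com/privkey.pem \
    > /usr/share/icecast2/icecast.pem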
Thanks to the person who described it here:
Icecast 2 and SSL
In your icecast2.xml file, enable TLS on a listen-socket. From the documentation: if set to 1, this will enable HTTPS on that listen-socket; Icecast must have been compiled against OpenSSL to be able to do so.
<paths>
    <basedir>./</basedir>
    <logdir>./logs</logdir>
    <pidfile>./icecast.pid</pidfile>
    <webroot>./web</webroot>
    <adminroot>./admin</adminroot>
    <allow-ip>/path/to/ip_allowlist</allow-ip>
    <deny-ip>/path_to_ip_denylist</deny-ip>
    <tls-certificate>/path/to/certificate.pem</tls-certificate>
    <ssl-allowed-ciphers>ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS</ssl-allowed-ciphers>
    <alias source="/foo" dest="/bar"/>
</paths>

<listen-socket>
    <port>8000</port>
    <bind-address>127.0.0.1</bind-address>
</listen-socket>

<listen-socket>
    <port>8443</port>
    <tls>1</tls>
</listen-socket>

<listen-socket>
    <port>8004</port>
    <shoutcast-mount>/live.mp3</shoutcast-mount>
</listen-socket>
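Once it is listening, the TLS socket can be checked the same way the early-data question above did, e.g. (domain and port are placeholders):

openssl s_client -connect domain.name:8443 </dev/null

If a certificate chain and a completed handshake are printed, the listen-socket is serving TLS.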
How is tcllib's autoproxy supposed to work with TLS support? I've read the documentation and taken the following minimal example from it, but I just can't get it to make any https connections whatsoever:
#!/usr/bin/tclsh
package require autoproxy
package require http
package require tls
::autoproxy::init
::http::register https 443 [list ::autoproxy::tls_socket -tls1 1]
#::http::register https 443 [list ::tls::socket -tls1 1]
set token [::http::geturl "https://example.com/" -validate 1]
puts [::http::meta $token]
::http::cleanup $token
which results in:
handshake failed: resource temporarily unavailable
while executing
"::http::geturl "https://example.com/" -validate 1"
invoked from within
"set token [::http::geturl "https://example.com/" -validate 1]"
(file "./https.tcl" line 9)
I have no proxy servers defined via the http_proxy envvar, and when using ::tls::socket directly it works fine. I'm using Tcl 8.6.1, tcllib 1.15, and tls 1.6.
I have set up a server (well... two servers, but I don't think that is too relevant for this question) running Tornado (version 2.4.1) behind an Nginx proxy (version 1.4.4).
I need to periodically upload JSON (basically text) files to one of them through a POST request. These files would greatly benefit from gzip compression (I get compression ratios of 90% when I compress the files manually), but I don't know how to inflate them in a nice way.
Ideally, Nginx would inflate the body and pass it clean and neat to Tornado... but that's not what's happening now, as you'll probably have guessed, otherwise I wouldn't be asking this question :-)
These are the relevant parts of my nginx.conf file (or the parts that I think are relevant, because I'm pretty new to Nginx and Tornado):
user borrajax;
worker_processes 1;
pid /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /tmp/access.log main;
    error_log /tmp/error.log;

    # Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    gzip on;
    gzip_disable "msie6";
    gzip_types application/json text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon image/bmp;
    gzip_http_version 1.1;
    gzip_proxied expired no-cache no-store private auth;

    upstream web {
        server 127.0.0.1:8000;
    }

    upstream input {
        server 127.0.0.1:8200;
    }

    server {
        listen 80 default_server;
        server_name localhost;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://web;
        }
    }

    server {
        listen 81 default_server;
        server_name input.localhost;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://input;
        }
    }
}
As I mentioned before, there are two Tornado servers. The main one is running on localhost:8000 for the web pages and that kind of stuff. The one running on localhost:8200 is the one intended to receive those JSON files. This setup is working fine, except for the gzip part.
I'd like Nginx to inflate the gzipped requests that come to localhost:81 and forward them, inflated, to the Tornado instance running on localhost:8200.
With the configuration like this, the data reaches Tornado, but the body is still compressed, and Tornado throws an exception:
[E 140108 15:33:42 input:1085] Uncaught exception POST
/input/log?ts=1389213222 (127.0.0.1)
HTTPRequest(
protocol='http', host='192.168.0.140:81',
method='POST', uri='/input/log?&ts=1389213222',
version='HTTP/1.0', remote_ip='127.0.0.1', body='\x1f\x8b\x08\x00\x00',
headers={'Content-Length': '1325', 'Accept-Encoding': 'deflate, gzip',
'Content-Encoding': 'gzip', 'Host': '192.168.0.140:81', 'Accept': '*/*',
'User-Agent': 'curl/7.23.1 libcurl/7.23.1 OpenSSL/1.0.1c zlib/1.2.7',
'Connection': 'close', 'X-Real-Ip': '192.168.0.94',
'Content-Type': 'application/json'}
)
I understand I can always get the request's body within the post() Tornado handler and inflate it manually, but that just sounds... dirty.
Finally, this is the curl call I use to upload the gzipped file:
curl --max-time 60 --silent --location --insecure \
    --write-out "%{http_code}" --request POST \
    --compressed \
    --header "Content-Encoding:gzip" \
    --header "Content-Type:application/json" \
    --data-binary "@$log_file_path.gz" \
    "/input/log?ts=1389216192" \
    --output /dev/null \
    --trace-ascii "/tmp/curl_trace.log" \
    --connect-timeout 30
The file in $log_file_path.gz is generated using gzip $log_file_path (I mean... it is a regular gzip-compressed file).
Is this doable? It sounds like something that should be pretty straightforward, but nope...
If this is not doable through Nginx, an automated method in Tornado would work too (something more reliable and elegant than having to uncompress files in the middle of a POST request's handler). Something like Django middlewares, maybe?
Thank you in advance!!
You're already calling json.loads() somewhere (Tornado doesn't decode JSON for you, so the exception you're seeing, but did not quote, must be coming from your own code); why not just replace that with a method that examines the Content-Encoding and Content-Type headers and decodes appropriately?
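A minimal sketch of that idea (the helper name is made up; zlib with a 16+MAX_WBITS window understands the gzip wrapper, so no temporary files are needed):

import json
import zlib

def load_json_body(request):
    """Inflate the body if the client gzipped it, then decode JSON."""
    body = request.body
    if request.headers.get('Content-Encoding') == 'gzip':
        # wbits=16+MAX_WBITS makes zlib expect a gzip container.
        body = zlib.decompress(body, 16 + zlib.MAX_WBITS)
    return json.loads(body)

Inside the handler, self.request would be passed wherever the bare json.loads(self.request.body) used to be.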
I gave up trying to have Nginx or Tornado automatically expand the body of the POST request, so I went with what Ben Darnell mentioned in his answer: I compress the file using gzip and POST it as part of a form (pretty much as if I were uploading a file).
I'm gonna post the bits of code that take care of it, just in case this helps someone else:
In the client (a bash script using curl):
The path (absolute) to the file to send is in the variable f. The variable TMP_DIR points to /tmp/, and SCRIPT_NAME contains the name of the bash script trying to perform the upload (namely uploader.sh)
zip_f_path="$TMP_DIR/$(basename ${f}).gz"
[[ -f "${zip_f_path}" ]] && rm -f "${zip_f_path}" &>/dev/null
gzip -c "$f" 1> "${zip_f_path}"
if [ $? -eq 0 ] && [[ -s "${zip_f_path}" ]]
then
    response=$(curl --max-time 60 --silent --location --insecure \
        --write-out "%{http_code}" --request POST \
        "${url}" \
        --output /dev/null \
        --trace-ascii "${TMP_DIR}/${SCRIPT_NAME}_trace.log" \
        --connect-timeout 30 \
        --form "data=@${zip_f_path};type=application/x-gzip")
else
    echo "Attempt to compress $f into $zip_f_path failed"
fi
In the server (in the Tornado handler):
# GzipDecompressor lives in tornado.util (recent Tornado versions).
from tornado.util import GzipDecompressor

try:
    content_type = self.request.files['data'][0]['content_type']
    if content_type == 'application/x-gzip':
        gzip_decompressor = GzipDecompressor()
        file_body = gzip_decompressor.decompress(
            self.request.files['data'][0]['body'])
        file_body += gzip_decompressor.flush()
    else:
        file_body = self.request.files['data'][0]['body']
except Exception:
    self.send_error(400)
    logging.error('Failed to interpret data: %s',
                  self.request.files['data'])
    return