I have a server running Ubuntu 14.04 LTS with Icecast2 2.4.1 installed with SSL support. The same server also hosts an HTTPS website.
I want to embed an HTML5 player on the page that takes the stream over SSL as well (otherwise there is a mixed-content error).
The site has a commercial SSL certificate; Icecast uses a self-signed one.
Icecast config file:
<icecast>
<location>****</location>
<admin>admin@*************</admin>
<limits>
<clients>1000</clients>
<sources>2</sources>
<threadpool>5</threadpool>
<queue-size>524288</queue-size>
<source-timeout>10</source-timeout>
<burst-on-connect>0</burst-on-connect>
<burst-size>65535</burst-size>
</limits>
<authentication>
<source-password>*****</source-password>
<relay-password>*****</relay-password>
<admin-user>*****</admin-user>
<admin-password>*****</admin-password>
</authentication>
<hostname>************</hostname>
<listen-socket>
<port>8000</port>
<ssl>1</ssl>
</listen-socket>
<mount>
<mount-name>/stream</mount-name>
<charset>utf-8</charset>
</mount>
<mount>
<mount-name>/ogg</mount-name>
<charset>utf-8</charset>
</mount>
<fileserve>1</fileserve>
<paths>
<basedir>/usr/share/icecast2</basedir>
<logdir>/var/log/icecast2</logdir>
<webroot>/usr/share/icecast2/web</webroot>
<adminroot>/usr/share/icecast2/admin</adminroot>
<alias source="/" dest="/status.xsl"/>
<ssl-certificate>/etc/icecast2/icecast2.pem</ssl-certificate>
</paths>
<logging>
<accesslog>access.log</accesslog>
<errorlog>error.log</errorlog>
<loglevel>4</loglevel>
</logging>
<security>
<chroot>0</chroot>
<changeowner>
<user>icecast2</user>
<group>icecast</group>
</changeowner>
</security>
</icecast>
The certificate for Icecast (/etc/icecast2/icecast2.pem) was generated with:
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout icecast2.pem -out icecast2.pem
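To double-check that the resulting file contains both the private key and the certificate (using the path above), it can be inspected with, for example:
openssl x509 -in /etc/icecast2/icecast2.pem -noout -subject -dates
openssl rsa -in /etc/icecast2/icecast2.pem -check -noout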
I expect to get the output stream at https://domain.name:8000/stream and https://domain.name:8000/ogg for embedding into the player via the audio tag, but in response there is only silence. The same addresses over plain http work fine.
I can't figure out what the mistake is.
Thanks in advance for your help!
I ran into this issue recently and didn't have a lot of time to solve it, nor did I see much documentation for doing so. I assume it's not the most widely used Icecast configuration, so I just proxied mine with nginx and it works fine.
Here's an example nginx vhost. Be sure to change the domain, check your paths, and think about the location you want the mount proxied to and how you want to handle ports.
Please note this will make your stream available on port 443 instead of 8000. Certain clients (such as facebookexternalhit/1.1) may try to hang onto the stream as though it's an https URL waiting to connect. This may not be the behavior you expect or desire.
Also, if you want no plain-http access at all, be sure to set the Icecast bind-address back to the local host, e.g.:
<bind-address>127.0.0.1</bind-address>
www.example.com.nginx.conf
server {
listen 80;
server_name www.example.com;
location /listen {
if ($ssl_protocol = "") {
rewrite ^ https://$server_name$request_uri? permanent;
}
}
}
#### SSL
server {
ssl on;
ssl_certificate_key /etc/sslmate/www.example.com.key;
ssl_certificate /etc/sslmate/www.example.com.chained.crt;
# Recommended security settings from https://wiki.mozilla.org/Security/Server_Side_TLS
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;
ssl_dhparam /usr/share/sslmate/dhparams/dh2048-group14.pem;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:5m;
# Enable this if you want HSTS (recommended)
add_header Strict-Transport-Security max-age=15768000;
listen 443 ssl;
server_name www.example.com;
location / {
proxy_pass http://127.0.0.1:8000/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
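With the proxy in place, a quick sanity check from the command line (example.com and the /stream mount are placeholders here, and this assumes a source is currently connected to Icecast) could be:
curl -sv -o /dev/null --max-time 5 https://www.example.com/stream
If the TLS handshake and the stream's response headers show up in the verbose output, the proxy is working and the player can point at the https URL.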
The icecast2 package provided for Debian-based distributions doesn't come with SSL support (so it has no https:// support), because that support depends on the OpenSSL libraries, which have licensing difficulties with the GNU GPL.
To check whether icecast2 was compiled with OpenSSL support, run this:
ldd /usr/bin/icecast2 | grep ssl
If it was compiled with it, then a line like this one should be displayed:
libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (0x00007ff5248a4000)
If instead you see no output, you have no SSL support.
To get a build with SSL support, you may want to obtain it from xiph.org directly:
https://wiki.xiph.org/Icecast_Server/Installing_latest_version_(official_Xiph_repositories)
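Another quick check on a Debian/Ubuntu system (just a convenience, not something required by the Xiph instructions) is to see which repository the installed package came from:
apt-cache policy icecast2
If it comes from the stock distribution repository rather than the Xiph one, it was most likely built without OpenSSL.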
The issue is related to the certificate file.
First of all, you need to have, for example,
<paths>
<ssl-certificate>/usr/share/icecast2/icecast.pem</ssl-certificate>
</paths>
and
<listen-socket>
<port>8443</port>
<ssl>1</ssl>
</listen-socket>
in your configuration. But that is not everything you need!
If you get your certificate from, for example, Let's Encrypt or sslforfree, you will have a certificate file and a private key file.
But for Icecast, you need both files together.
What you should do:
1- Open the private key file and copy its contents.
2- Open the certificate file, paste the copied private key at the end of it, and save the result as icecast.pem.
Then use this file and you should be fine.
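The same two steps can be done with a single shell command; the paths below assume a Let's Encrypt layout and an example domain, so adjust them to your own files:
cat /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem > /usr/share/icecast2/icecast.pem
The order (certificate first, then the key appended) matches the manual copy-and-paste described above.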
Thanks to the person who described it here:
Icecast 2 and SSL
In your icecast2.xml file: if <tls> is set to 1, it enables HTTPS on that listen-socket. Icecast must have been compiled against OpenSSL to be able to do so.
<paths>
<basedir>./</basedir>
<logdir>./logs</logdir>
<pidfile>./icecast.pid</pidfile>
<webroot>./web</webroot>
<adminroot>./admin</adminroot>
<allow-ip>/path/to/ip_allowlist</allow-ip>
<deny-ip>/path_to_ip_denylist</deny-ip>
<tls-certificate>/path/to/certificate.pem</tls-certificate>
<ssl-allowed-ciphers>ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS</ssl-allowed-ciphers>
<alias source="/foo" dest="/bar"/>
</paths>
<listen-socket>
<port>8000</port>
<bind-address>127.0.0.1</bind-address>
</listen-socket>
<listen-socket>
<port>8443</port>
<tls>1</tls>
</listen-socket>
<listen-socket>
<port>8004</port>
<shoutcast-mount>/live.mp3</shoutcast-mount>
</listen-socket>
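After editing the configuration, restart Icecast and test the TLS port; 8443 and the /stream mount are just the example values used in this answer, and -k is only needed for a self-signed certificate:
sudo service icecast2 restart
curl -kv -o /dev/null --max-time 5 https://localhost:8443/stream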
I've added ssl_early_data on; to my nginx.conf (inside http { }), and according to these commands,
echo -e "HEAD / HTTP/1.1\r\nHost: $host\r\nConnection: close\r\n\r\n" > request.txt
openssl s_client -connect example.tld:443 -tls1_3 -sess_out session.pem -ign_eof < request.txt
openssl s_client -connect example.tld:443 -tls1_3 -sess_in session.pem -early_data request.txt
it does work properly.
According to the nginx documentation (https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_early_data), it is recommended to set proxy_set_header Early-Data $ssl_early_data;.
My question is: Where do I set this? Right after ssl_early_data on;, still inside http { }?
You should pass Early-Data to your application. So you must have something like:
http {
...
# Enabling 0-RTT
ssl_early_data on;
...
server {
...
# Passing it to the upstream
proxy_set_header Early-Data $ssl_early_data;
}
}
Otherwise, you can render your application vulnerable to replay attacks: https://blog.trailofbits.com/2019/03/25/what-application-developers-need-to-know-about-tls-early-data-0rtt/
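If you want to see the header actually arriving at the backend, one rough local experiment (the 8080 port and the nc flags are assumptions that depend on your setup and netcat flavor) is to point proxy_pass at a throwaway listener and replay the early-data test from the question:
nc -l 8080        # traditional netcat needs: nc -l -p 8080
The raw request printed by nc should then contain an Early-Data: 1 header whenever the request arrived as 0-RTT data.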
I would like to know if it is possible to use the OpenResty OIDC module as an authentication proxy within an NGINX stream configuration.
(Unfortunately, I don't have access to NGINX Plus.)
I have used NGINX with Stream configurations in the past to proxy access to upstream tcp resources and it works like a charm.
I am currently looking at implementing an OIDC proxy in front of various resources, both static HTML and dynamic apps, because we have an in-house OIDC IDAM provider. I came across OpenResty, and in particular the lua-resty-openidc module, and thanks to some wonderful guides (https://medium.com/@technospace/nginx-as-an-openid-connect-rp-with-wso2-identity-server-part-1-b9a63f9bef0a, https://developers.redhat.com/blog/2018/10/08/configuring-nginx-keycloak-oauth-oidc/), I got this working in no time for static pages, using an http server nginx config.
I can't get it working for stream configurations though. It looks like the stream module is enabled as standard for OpenResty, but from digging around I don't think the 'access_by_lua_block' function is allowed in the stream context.
This may simply not be supported, which is fair enough when building on other people's great work, but I wondered if there was any intention to include support within OpenResty / lua-resty-openidc in the future, or whether anyone knew of a good workaround.
Below is my naive attempt to get it working, but the server complains about the access_by_lua_block directive at run time:
2019/08/22 08:20:44 [emerg] 1#1: "access_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:49
nginx: [emerg] "access_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:49
events {
worker_connections 1024;
}
stream {
lua_package_path "/usr/local/openresty/?.lua;;";
resolver 168.63.129.16;
lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
lua_ssl_verify_depth 5;
# cache for discovery metadata documents
lua_shared_dict discovery 1m;
# cache for JWKs
lua_shared_dict jwks 1m;
upstream geyser {
server geyser-api.com:3838;
}
server {
listen 443 ssl;
ssl_certificate /usr/local/openresty/nginx/ssl/nginx.crt;
ssl_certificate_key /usr/local/openresty/nginx/ssl/nginx.key;
access_by_lua_block {
local opts = {
redirect_uri_path = "/redirect_uri",
discovery = "https://oidc.provider/discovery",
client_id = "XXXXXXXXXXX",
client_secret = "XXXXXXXXXXX",
ssl_verify = "no",
scope = "openid",
redirect_uri_scheme = "https",
}
local res, err = require("resty.openidc").authenticate(opts)
if err then
ngx.status = 500
ngx.say(err)
ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
ngx.req.set_header("X-USER", res.id_token.sub)
}
proxy_pass geyser;
}
}
Anyone have any advice?
I don't think that's possible.
However, to be sure, you could try creating an issue on the official GitHub:
https://github.com/zmartzone/lua-resty-openidc/issues
They helped me solve a similar issue before
I would like to know the correct way of configuring the SSL protocol on WildFly.
Looking at examples, I found two different ways of doing so, and I want to know which one is the proper way:
Adding it in the protocol section as below:
<security-realm name="sslRealm">
<server-identities>
<ssl protocol="TLSv1.2">
Or adding it in the https listener as below:
<https-listener name="https" socket-binding="https" security-realm="sslRealm" enabled-protocols="TLSv1.2"/>
I'm using wildfly-8.2.0.Final.
The configuration options shown here also apply to WildFly 9 and 10.
The correct way is to use both of them. They are closely related; see below how.
<https-listener ..>
The WildFly Undertow subsystem supports the enabled-protocols attribute, which is a comma-separated list of protocols to be supported. For example:
enabled-protocols="TLSv1.1,TLSv1.2"
With just TLSv1.2, many vulnerabilities are plugged. However, by default, WildFly supports all versions of TLS (v1.0, v1.1 and v1.2), even though versions below 1.2 are considered weak.
<server-identities />
Here, basically, you can choose one of the previously enabled protocols.
<security-realm name="sslRealm">
<server-identities>
<ssl protocol="TLSv1.2">
The protocol attribute by default is set to TLS and in general does not need to be set.
Note that without any change in the default configuration, you get a https server that supports TLSv1.0, TLSv1.1 and TLSv1.2.
To check the effect of those configurations, use this:
nmap --script ssl-enum-ciphers -p 8443 <your wildfly IP>
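You can also probe a single protocol version directly with openssl; for example, once only TLSv1.2 is enabled, a forced TLSv1.0 handshake should fail while a TLSv1.2 one succeeds (8443 as above):
openssl s_client -connect <your wildfly IP>:8443 -tls1 < /dev/null
openssl s_client -connect <your wildfly IP>:8443 -tls1_2 < /dev/null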
How is tcllib's autoproxy supposed to work with tls support? I've read the documentation and taken the following minimal example from it but I just can't get it to make any https connections whatsoever:
#!/usr/bin/tclsh
package require autoproxy
package require http
package require tls
::autoproxy::init
::http::register https 443 [list ::autoproxy::tls_socket -tls1 1]
#::http::register https 443 [list ::tls::socket -tls1 1]
set token [::http::geturl "https://example.com/" -validate 1]
puts [::http::meta $token]
::http::cleanup $token
which results in:
handshake failed: resource temporarily unavailable
while executing
"::http::geturl "https://example.com/" -validate 1"
invoked from within
"set token [::http::geturl "https://example.com/" -validate 1]"
(file "./https.tcl" line 9)
I have no proxy servers defined via the http_proxy environment variable, and when using ::tls::socket directly it works fine. I'm using Tcl 8.6.1, Tcllib 1.15, and tls 1.6.
I cannot use wss:// in my simple WebSocket app created with Play Framework 2.2. It echoes the message back. The endpoint looks like this:
def indexWS2 = WebSocket.using[String] {
request => {
println("got connection to indexWS2")
var channel: Option[Concurrent.Channel[String]] = None
val outEnumerator: Enumerator[String] = Concurrent.unicast(c => channel = Some(c))
// Log events to the console
val myIteratee: Iteratee[String, Unit] = Iteratee.foreach[String] {gotString => {
println("received: " + gotString)
// send string back
channel.foreach(_.push("echoing back \"" + gotString + "\""))
}}
(myIteratee, outEnumerator)
}
}
and the route is described as
GET /ws2 controllers.Application.indexWS2
I create a connection from a JS client like this
myWebSocket = new WebSocket("ws://localhost:9000/ws2");
and everything works fine. But if I change ws:// into wss:// in order to use TLS, it fails and I get the following Netty exception:
[error] p.nettyException - Exception caught in Netty
java.lang.IllegalArgumentException: empty text
How can I make this work? Thanks.
I really wanted to figure this out for you! But I didn't like the answer. It appears there's no Play support yet for SSL for websockets. I saw mention of it here and have seen no sign of progress since:
http://grokbase.com/t/gg/play-framework/12cd53wst9/2-1-https-and-wss-secure-websocket-clarifications-and-documentation
However, there's hope! You can use nginx as a secure websocket (wss) endpoint that forwards to an internal Play app with an insecure websocket endpoint:
The page http://siriux.net/2013/06/nginx-and-websockets/ provided this explanation and sample proxy config for nginx:
Goal: WSS SSL Endpoint: forwards wss|https://ws.example.com to ws|http://ws1.example.com:10080
"The proxy is also an SSL endpoint for WSS and HTTPS connections. So the clients can use wss:// connections (e.g. from pages served via HTTPS) which work better with broken proxy servers, etc."
server {
listen 443;
server_name ws.example.com;
ssl on;
ssl_certificate ws.example.com.bundle.crt;
ssl_certificate_key ws.example.com.key;
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
# like above
}
}
Nginx is so lightweight and fun; I would not hesitate to go with this option.
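To verify the handshake through nginx without a browser, you can fake a WebSocket upgrade with curl. Here ws.example.com is this answer's placeholder host, /ws2 is the route from the question, the key is a dummy value, and a successful result assumes the location block contains the proxy_pass and Upgrade/Connection header settings from the linked article:
curl -i -N -k https://ws.example.com/ws2 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="
An HTTP/1.1 101 Switching Protocols response means TLS was terminated by nginx and the upgrade reached the Play app.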
Did you try enabling https support on the Play server? It looks like you're trying to connect to the http port using wss; that can never work. You need to enable https, and then change the URL not just to wss but also to use the https port.
To start a Play server with ssl turned on:
activator run -Dhttps.port=9443
Then connect to wss://localhost:9443/ws2.
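To confirm the HTTPS side is actually up before testing wss (9443 as above), something like this should show the certificate and a completed TLS handshake:
openssl s_client -connect localhost:9443 < /dev/null
If that fails, the wss:// URL will fail too, regardless of the WebSocket code.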
wss works fine with Play 2.6.
Instead of hardcoding the websocket URL, you can get the URL via the routes:
@import play.api.mvc.RequestHeader
@import controllers.routes
@()(implicit request: RequestHeader)
<!DOCTYPE html>
<html lang="en">
<head>
<title>...</title>
<script>
var wsUri = "@routes.MyController.indexWS2().webSocketURL(secure = true)";
var webSocket = new WebSocket(wsUri);
//...
</script>
</head>
<body>
...
</body>
</html>
Another option is to use SockJS as the WebSocket layer; a SockJS implementation for Play 2 can be found at
https://github.com/fdimuccio/play2-sockjs
When HTTPS is enabled, a wss endpoint is created by SockJS over the HTTPS channel. play2-sockjs also supports the Actor pattern, as with native Play websockets.
If you don't want to use SockJS on the client side but would rather force the browser's native websocket implementation, you can use the explicit websocket endpoint wss:////websocket