I followed the steps in http://nil.uniza.sk/sip/installing-and-configuring-restund-stunturn-server to set up a restund server.
When I test it with the stun command-line client, the result is:
STUN client version 0.96
Primary: Blocked or could not reach STUN server
Return value is 0x00001c
Attached is the config file:
#
# restund.conf
#
# core
daemon yes
debug yes
realm myrealm
syncinterval 600
udp_listen 192.168.1.25:3478
#udp_listen 1.2.3.4:3478
udp_sockbuf_size 524288
tcp_listen 192.168.1.25:3478
#tcp_listen 1.2.3.4:3478
# modules
module_path /usr/local/lib/restund/modules
module stat.so
module binding.so
#module auth.so
module turn.so
#module mysql_ser.so
module syslog.so
module status.so
# auth
auth_nonce_expiry 3600
# turn
turn_max_allocations 512
turn_max_lifetime 600
turn_relay_addr 192.168.1.25
#turn_relay_addr6 ::1
# mysql
#mysql_host localhost
#mysql_user ser
#mysql_pass heslo
#mysql_db ser
#mysql_ser 0
# syslog
syslog_facility 24
# status
#status_udp_addr 127.0.0.1
#status_udp_port 33000
status_http_addr 192.168.1.25
status_http_port 8080
Any suggestions? Thanks in advance.
It looks like you are hosting your STUN/TURN server from behind a NAT; I'm guessing that because your conf file lists the private address 192.168.1.25 as the listening address. If your client is outside your NAT, make sure you have the proper port forwarding set up.
Otherwise, the most common cause of this problem is that the host server has a firewall rule that blocks incoming traffic by default. Check your firewall settings (iptables) on the host box as appropriate.
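To rule out the firewall quickly, here is a rough sketch of the checks I mean (assuming iptables on the restund host; adjust ports and policy to your environment):
# confirm restund is actually listening on UDP/TCP 3478
ss -lnup | grep 3478
ss -lntp | grep 3478
# look for rules or a default policy that drop incoming traffic
sudo iptables -L INPUT -n -v
# example rules to let STUN/TURN traffic in (adapt to your own policy);
# note that TURN relay allocations also use a high UDP port range that must be reachable
sudo iptables -A INPUT -p udp --dport 3478 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 3478 -j ACCEPT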
Related
I've set up my TURN server and tested it on the IceTricklePage.
The non-secure port 3478 works just fine; I can gather a candidate with type "relay".
But with the secure (TLS) port 5349, it always fails.
The server turn:xx.xx.xx.xx:5349?transport=tcp returns an error with code=701.
Below is my turnserver.conf file:
# /etc/turnserver.conf
# STUN server port is 3478 for UDP and TCP, and 5349 for TLS.
# Allow connection on the UDP port 3478
listening-port=3478
# and 5349 for TLS (secure)
tls-listening-port=5349
external-ip= xx.xx.xx.xx
listening-ip=0.0.0.0
allow-loopback-peers
no-multicast-peers
min-port = 49152
max-port = 49365
verbose
# Require authentication
fingerprint
lt-cred-mech
# We will use the longterm authentication mechanism, but if
# you want to use the auth-secret mechanism, comment lt-cred-mech and
# uncomment use-auth-secret
# Check: https://github.com/coturn/coturn/issues/180#issuecomment-364363272
#The static auth secret needs to be changed, in this tutorial
# we'll generate a token using OpenSSL
#use-auth-secret
# static-auth-secret=replace-this-secret
# ----
# If you decide to use use-auth-secret, After saving the changes, change the auth-secret using the following command:
# sed -i "s/replace-this-secret/$(openssl rand -hex 32)/" /etc/turnserver.conf
# This will replace the replace-this-secret text on the file with the generated token using openssl.
# Specify the server name and the realm that will be used
# if it is your first time configuring, just use the domain as the name
server-name=turn.mydomain.com
realm=turn.mydomain.com
#
# Important:
# Create a test user if you want
# You can remove this user after testing
user=user:password
total-quota=100
stale-nonce=600
# Path to the SSL certificate and private key. In this example we will use
# the letsencrypt generated certificate files.
cert=/etc/coturn/turn_cert/turn.mydomain.com/cert.pem
pkey=/etc/coturn/turn_cert/turn.mydomain.com/privkey.pem
# Specify the allowed OpenSSL cipher list for TLS/DTLS connections
cipher-list="ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384"
# Specify the process user and group
proc-user=turnserver
proc-group=turnserver
# Log file path
log-file=/var/log/turnserver.log
simple-log
#syslog
I also configured my router to port-forward anything arriving at public xx.xx.xx.xx:5349 to the internal server running the TURN server (the same way I did for 3478).
Does anyone have an idea how to fix this? Thanks.
Try telnet xx.xx.xx.xx 5349 to see whether the connection succeeds. If you get telnet: Unable to connect to remote host: Connection refused, then your network configuration is incorrect and needs to be changed. Here is something related:
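If the TCP connection succeeds but TLS still fails, the next thing I would look at is the handshake itself; a rough sketch, reusing the placeholder address and hostname from the question:
# inspect the TLS handshake and certificate chain served on 5349
openssl s_client -connect xx.xx.xx.xx:5349 -servername turn.mydomain.com </dev/null
# on the TURN host, confirm coturn is listening on 5349 at all
sudo ss -lntp | grep 5349
# and confirm the user coturn runs as (proc-user=turnserver above) can read the key,
# otherwise the TLS listener cannot start
sudo -u turnserver cat /etc/coturn/turn_cert/turn.mydomain.com/privkey.pem > /dev/null && echo readable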
I am quite new to HAProxy and want to achieve the following setup / packet flow:
Client -> HAProxy (as reverse proxy) -> Forwarding Proxy (HAProxy, IIS, Squid...) -> Internet -> server.example.com
I would like to have encrypted connections with TLS/SSL from the Client -> HAProxy and from HAProxy -> server.example.com
This means that the forwarding proxy needs to support the HTTP CONNECT method, to establish a TCP tunnel and transmit packets without trying to interpret them. Over this TCP tunnel I should be able to send bytes to server.example.com, TLS/SSL encrypted, i.e. HTTPS.
Furthermore, the forwarding proxy might require authentication, e.g. HTTP Basic authentication.
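For reference, curl can exercise exactly this kind of tunnel against a forwarding proxy; the proxy address and test credentials below are the example values used further down in this post:
# curl sends "CONNECT server.example.com:443" plus a Proxy-Authorization header to the
# proxy, then runs the TLS handshake end-to-end through the resulting tunnel
curl -v -x http://192.168.1.1:8080 --proxy-user user:password https://server.example.com/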
The software stack of my test setup is as follows:
Client Firefox 76.0.1
HA-Proxy version 1.6.3 2015/12/25
Squid Cache: Version 3.5.27
I have setup this HAProxy configuration:
global
# Standard settings
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
# Tuning
maxconn 2000
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
ssl-default-bind-options no-sslv3
# Ensure a secure enough DH paramset
tune.ssl.default-dh-param 2048
defaults
log global
mode http
option httplog
option dontlognull
# Redispatch 503 errors on backends, reducing the number of 503 reaching clients
option redispatch
# We want to stall clients for as long as possible, before giving
# up with 503:
timeout connect 5m
# Clients must be acceptably responsive
timeout client 1m
# Server not as much...
timeout server 5m
# HTTPS server
frontend https-in
bind :443 ssl crt-ignore-err all crt /etc/haproxy/ssl/certkey.pem
# Don't serve HTTP directly, but redirect to same URL in https
redirect scheme https code 301 if !{ ssl_fc }
default_backend backend-proxy
backend backend-proxy
# Create the Authorization / Proxy-Authorization header value
# echo -n "user:password" | base64
http-request add-header Proxy-Authorization "Basic dXNlcjpwYXNzd29yZA=="
# We need to use the CONNECT method
http-request set-method CONNECT
# The proxyserver needs to know the full server name
http-request set-path server.example.com:443
server proxy 192.168.1.1:8080
In my test setup I use a Squid server as a forwarding proxy with the following configuration:
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid/squid-passwd
acl basic_client proxy_auth REQUIRED
http_access deny !basic_client
http_access allow basic_client
http_access deny all
http_port 8080
coredump_dir /var/spool/squid
Using the Squid forwarding proxy from a regular browser on the same subnet, including authentication, works fine.
So when the first request comes in from the client, HAProxy forwards it via the backend backend-proxy to the forwarding proxy (Squid). The CONNECT succeeds, as I can see in the Squid log:
1591166403.966 60146 192.168.1.10 TCP_TUNNEL/200 39 CONNECT server.example.com:443 test HIER_DIRECT/6.7.8.9 -
(IP addresses were replaced with generic values)
The HAProxy log shows that the correct backend is used:
Jun 3 08:39:57 localhost haproxy[3547]: 192.168.1.20:39398 [03/Jun/2020:08:38:56.855] https-in~ backend-proxy/proxy 154/0/13/209/60375 200 39 - - ---- 0/0/0/0/0 0/0 "GET /someurl HTTP/1.1"
(IP addresses were replaced with generic values)
So far, so good. But I am unable to establish successful communication with server.example.com from my client. I think I have to use a second backend, one that no longer mangles the requests (replacing method and path) but instead reuses the TCP tunnel opened through the forwarding proxy to transmit the request.
How can I save the 'state' of the communication to my backend / proxy server in HAProxy, so the request can be resent to another backend?
How can I extract and use the TCP port from the response of the forwarding proxy?
Is there a way to check whether the TCP tunnel on the forwarding proxy is still open, or do I need to request it with CONNECT every time before I want to use it?
EDIT:
I solved the situation by using stunnel as an intermediary to handle the TCP tunnel creation (HTTP CONNECT) against the forwarding proxy.
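For anyone interested, a minimal sketch of the stunnel piece (the section name, local port and credentials are placeholders; the proxy address matches the Squid instance from my config above). HAProxy keeps terminating the client TLS and simply points its backend server line at 127.0.0.1:8443, while stunnel issues the CONNECT and re-encrypts towards server.example.com:
[connect-tunnel]
client = yes
; HAProxy's backend connects here with plain HTTP
accept = 127.0.0.1:8443
; the forwarding proxy (Squid)
connect = 192.168.1.1:8080
; issue an HTTP CONNECT to the proxy before starting TLS to the target
protocol = connect
protocolHost = server.example.com:443
protocolUsername = user
protocolPassword = password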
If you have an upstream HTTP proxy like Squid (not a SOCKS proxy) and you want HAProxy to accept connections and open the tunnel through the upstream proxy on behalf of the clients (because the clients cannot issue the HTTP CONNECT themselves), then this functionality does not exist in HAProxy today.
I created a branch that does this via the server keyword proxy-tunnel.
With the example config below, clients that connect to HAProxy on port 20025 cause HAProxy to establish an HTTP CONNECT tunnel through the upstream proxy 172.16.0.99:50443 to 172.16.0.2:2023:
listen SMTP-20025
bind 0.0.0.0:20025
server TEST_SERVERVIA_PROXY 172.16.0.2:2023 proxy-tunnel 172.16.0.99:50443
I need to setup a load balancer for all our applications.
At the moment all our applications are clustered (2-node app servers, with one Apache on each node as well) and we do not have an LB, so we just point our DNS alias to the first web server of each node, which makes the second node useless (we have to do a manual DNS switch if node1 fails, and HTTPS queries are not load balanced).
Each application uses SSL with a specific domain and SSL certificate. We cannot accept decrypting SSL and sending unencrypted traffic to the backends, as the LB might be located in another country, etc., so we need to use passthrough.
Before anything else, I just wanted to know whether this is actually possible in HAProxy or not.
I am talking about ~50 different applications. Our LB configuration would have to be HA, so I guess we'll use something like keepalived with a shared VIP for HAProxy itself.
The setup would look like this, I suppose:
domain-a.com --+             +--> backend_dom_a --> 1.1.1.1 (app node1 dom a)
               |             |                      1.1.1.2 (app node2 dom a)
domain-b.com --+             +--> backend_dom_b --> 2.1.1.1 (app node1 dom b)
               |             |                      2.1.1.2 (app node2 dom b)
domain-c.com --+--> haproxy -+--> backend_dom_c --> 3.1.1.1 (app node1 dom c)
               |             |                      3.1.1.2 (app node2 dom c)
domain-N.com --+             +--> backend_dom_N --> 4.1.1.1 (app node1 dom N)
                                                    4.1.1.2 (app node2 dom N)
Thanks for your support, best regards
FYI, I'm using this configuration, which works like a charm.
I have replaced the values in the files to hide our domains and hostnames, and limited the number of URLs/backends, but we have about 50 running now, with the load balancer forwarding requests to many Apache servers (and each Apache forwards requests to Tomcat servers behind it).
Feel free to ask if you have any questions.
We use balance source to ensure session stickiness.
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
daemon
user haproxy
group haproxy
log /dev/log local6 notice
log /dev/log local5 info
maxconn 50000
#chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode tcp
option tcplog
log global
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
#---------------------------------------------------------------------
# dedicated stats page
#---------------------------------------------------------------------
listen stats
mode http
bind :22222
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth <mylogin>:<mypass>
stats refresh 30s
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main_https_listen
bind <ip address>:443
mode tcp
option tcplog
log global
tcp-request inspect-delay 5s
tcp-request content accept if { req.ssl_hello_type 1 }
#---------------------------------------------------------------------
# Common HAProxy nodes configuration
#---------------------------------------------------------------------
# -------------------------------
# ACLs
# -------------------------------
acl acl_SIT_AT35073 req.ssl_sni -i <app_url1>.my.domain.net # SIT_AT35073 is just an internal code we use, but you can use any alias
acl acl_SIT_AT34305 req.ssl_sni -i <app_url2>.my.domain.net
acl acl_SIT_AT28548 req.ssl_sni -i <app_urlN>.my.domain.net
# -------------------------------
# Conditions
# -------------------------------
use_backend backend_SIT_AT35073 if acl_SIT_AT35073 # same here
use_backend backend_SIT_AT34305 if acl_SIT_AT34305
use_backend backend_SIT_AT28548 if acl_SIT_AT28548
#---------------------------------------------------------------------
# Backends
#---------------------------------------------------------------------
# APP 1
backend backend_SIT_AT35073
description APPNAME1
mode tcp
balance source
option ssl-hello-chk
server server_SIT_AT35073_1 <apache_server1>.my.domain.net:443 check
server server_SIT_AT35073_2 <apache_server2>.my.domain.net:443 check
# APP 2
backend backend_SIT_AT34305
description APPNAME2
mode tcp
balance source
option ssl-hello-chk
server server_SIT_AT34305_1 <apache_server3>.my.domain.net:443 check
server server_SIT_AT34305_2 <apache_server4>.my.domain.net:443 check
# APP N
backend backend_SIT_AT28548
description APPNAMEN
mode tcp
balance source
option ssl-hello-chk
server server_SIT_AT28548_1 <apache_server5>.my.domain.net:443 check
server server_SIT_AT28548_2 <apache_server6>.my.domain.net:443 check
I think you have two options:
Pass the traffic through to the backend by using TCP mode in the HAProxy frontend and backend. This has the benefit that your backend's SSL certificate is passed through, but you lose the possibility of having a single SSL termination point for your site. So I suggest the second option:
Have one (usual) SSL certificate acting as the termination for your site, and enable SSL between HAProxy and your backends. This gives you the advantage that you still have only one entry point, but different backends with unique certificates.
The second option might look like this:
frontend f_foo
bind :443 ssl crt /path/to/bundle
mode http
log global
use_backend be_foo
backend be_foo
mode http
timeout connect 5s
server FOO address:port ssl check crt /path/to/client/bundle force-tlsv10 verify none
The drawback is that you need a client certificate for each backend server, but that should be easy to automate.
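As a side note, the crt /path/to/client/bundle above is expected to be a single PEM file containing the client certificate followed by its private key; roughly (file names are placeholders):
# build the client bundle HAProxy presents to the backend server
cat client-cert.pem client-key.pem > /path/to/client/bundle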
As more of an updated answer for multi-domain configs, I use the below for routing different domains.
The frontend is where you bind the port and add the certs; multiple certs have to go on the same bind line, AFAIK.
frontend https_in
bind *:443 ssl crt /link/to/cert+key-file.pem crt /link/to/cert+key-file.pem
The acl host line is where you specify the domain name to match, and the use_backend rule picks which backend to use for that domain name.
acl host_example.com hdr(host) -i example.com
use_backend BACKEND_NAME if host_example.com
The backend is where you specify the server that the domain is running on.
backend BACKEND_NAME
mode http
option httpclose
option forwardfor
cookie JSESSIONID prefix
server server-name server-ip:443 check ssl verify none
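To check from the outside that SNI routing hands each domain to the intended backend, something like the following works (placeholders in angle brackets, in the same style as the configs above):
# request a specific SNI from the load balancer and print the certificate subject it serves
openssl s_client -connect <haproxy_ip>:443 -servername <app_url1>.my.domain.net </dev/null 2>/dev/null | openssl x509 -noout -subject
# or force curl to resolve the domain to the load balancer's address
curl -vk --resolve <app_url1>.my.domain.net:443:<haproxy_ip> https://<app_url1>.my.domain.net/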
How do I run the Redis server on an IPv6 interface?
If redis.conf is edited to bind an IPv6 address, the following error is thrown.
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind fe80::f816:3eff:fe09:7895
The error message is:
826:M 09 Jul 08:16:46.355 # Creating Server TCP listening socket fe80::f816:3eff:fe09:7895:6379: bind: Invalid argument
Redis Server version - 4.0.6
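One thing worth checking first: fe80:: addresses are link-local, and binding a link-local address without an interface scope usually fails at the socket level with exactly this kind of bind: Invalid argument error. A rough way to see what kind of IPv6 address the host has (interface names are placeholders):
# list the host's IPv6 addresses; "scope link" (fe80::/10) entries are tied to an
# interface, while "scope global" addresses and ::1 can normally be bound directly
ip -6 addr show
# redis.conf's own example above binds the IPv6 loopback: bind 127.0.0.1 ::1
# after restarting Redis, confirm the IPv6 listener
ss -lntp | grep 6379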
Ubuntu 16.04 / ROS v1.3.0
I am attempting to configure my ROS to use secure SSL connections.
If I do not make any changes to configuration.yml, ROS is fine. I can sync and use the dashboard as I would expect.
I have obtained an SSL cert from Let's Encrypt. I used Certbot in standalone mode so that I did not have to install or configure Nginx. (My preference is to not install yet another tech/layer - keep it clean!)
I have the following certificates/key stored in this folder:
/etc/letsencrypt/live/data.mydomain.net/cert.pem
/etc/letsencrypt/live/data.mydomain.net/chain.pem
/etc/letsencrypt/live/data.mydomain.net/fullchain.pem
/etc/letsencrypt/live/data.mydomain.net/privkey.pem
As soon as I enable HTTPS in configuration.yml, I am unable to launch ROS.
There are no error messages written to:
/var/log/realm-object-server.log
Here is a copy of the proxy section of configuration.yml:
http:
  ## Whether or not to enable the HTTP proxy module. It enables multiplexing requests
  ## by forwarding incoming requests on a single port to all services.
  # enable: true
  ## The address/interface on which the HTTP proxy module should listen. This defaults
  ## to 127.0.0.1. If you wish to listen on all available interfaces,
  ## uncomment the following line.
  # listen_address: '::'
  ## The port that the HTTP proxy module should bind to.
  # listen_port: 9080
https:
  ## Whether or not to enable the HTTPS proxy module. It enables multiplexing requests
  ## by forwarding incoming requests on a single port to all services.
  ## Note that even if it is enabled, the HTTPS proxy will only start if supplied
  ## with a valid pair of certificates through certificate_path and private_key_path below.
  enable: true
  ## The path to the certificate and private keys (in PEM format) that will be used
  ## to set up the HTTPS server accepting connections.
  ## These configuration options are MANDATORY to start the HTTPS proxy module.
  certificate_path: '/etc/letsencrypt/live/data.mydomain.net/fullchain.pem'
  private_key_path: '/etc/letsencrypt/live/data.mydomain.net/privkey.pem'
  ## The address/interface on which the HTTPS proxy module should listen. This defaults
  ## to 127.0.0.1. If you wish to listen on all available interfaces,
  ## uncomment the following line.
  # listen_address: '::'
  ## The port that the HTTPS proxy module should bind to.
  listen_port: 9443
As I mentioned, the issue appears to be that as soon as I configure HTTPS, the ROS server fails to start. If I disable HTTPS, the ROS server starts without issue.
The reason I believe ROS is failing to start is that if I attempt curl 127.0.0.1:9080 or curl 127.0.0.1:9443 from the terminal, I get the message curl: (7) Failed to connect to 127.0.0.1 port 9443: Connection refused.
I'd love to hear your ideas/thoughts/suggestions on how I can get this to work. Cheers. Ian
Thanks to user #Radu, the answer was permissions.
The realm user did not have permission to read the .pem files.
I picked up the solution from this answer:
Https Proxy for Realm Object Server not working
#Radu is the man!
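For anyone hitting the same wall, a rough way to confirm and fix it (the service account name realm is an assumption; substitute whatever user ROS runs as):
# show the permissions on every directory leading down to the key
namei -l /etc/letsencrypt/live/data.mydomain.net/privkey.pem
# test read access as the ROS service account
sudo -u realm cat /etc/letsencrypt/live/data.mydomain.net/privkey.pem > /dev/null && echo readable
# one option: grant that account read access via ACLs (live/ holds symlinks into archive/)
sudo setfacl -R -m u:realm:rX /etc/letsencrypt/live /etc/letsencrypt/archive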