HAProxy not forwarding requests properly - reverse-proxy

I have this HAProxy config file:
frontend main
    bind *:80
    use_backend drewgrosscom if { hdr(host) -i drewgross.com }
    use_backend drewgrosscom if { hdr(host) -i www.drewgross.com }

backend drewgrosscom
    server app1 127.0.0.1:8000 check inter 5000 rise 1 fall 1
But I'm getting "no data received" on both drewgross.com and www.drewgross.com. Accessing www.drewgross.com:8000 and drewgross.com:8000 both work fine though. Any ideas what is going on?

You need to set mode http. Without it, HAProxy runs in its default mode tcp, where it never parses the HTTP Host header, so your hdr(host) ACLs never match and no backend is ever selected. This should work:
defaults
    mode http

frontend main
    bind *:80
    use_backend drewgrosscom if { hdr(host) -i drewgross.com }
    use_backend drewgrosscom if { hdr(host) -i www.drewgross.com }

backend drewgrosscom
    server app1 127.0.0.1:8000 check inter 5000 rise 1 fall 1
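As a quick sanity check (a sketch; the config file path is an assumption), you can validate the file and exercise the Host-based routing from the box itself:

    # Validate the configuration before reloading:
    haproxy -c -f /etc/haproxy/haproxy.cfg

    # Exercise the Host-matching ACLs locally:
    curl -v -H "Host: drewgross.com" http://127.0.0.1/
    curl -v -H "Host: www.drewgross.com" http://127.0.0.1/

If the curl requests return your app's response, the mode and ACLs are working.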

Related

How to expose RSK node to an external network?

I am having problems exposing my RSK node to an external IP.
My startup command looks as follows:
java \
-cp $HOME/Downloads/rskj-core-3.0.1-IRIS-all.jar \
-Drsk.conf.file=/root/bitcoind-lnd/rsk/rsk.conf \
-Drpc.providers.web.cors=* \
-Drpc.providers.web.ws.enabled=true \
co.rsk.Start \
--regtest
This is my rsk.conf:
rpc {
    providers {
        web {
            cors: "*",
            http {
                enabled = true
                bind_address = "0.0.0.0"
                hosts = ["localhost", "0.0.0.0"]
                port: 4444
            }
        }
    }
}
The API is accessible from localhost, but from an external network I get error 400. How do I expose it to the external network?
You should add your external IP to hosts. Adding just 0.0.0.0 does not mean "accept any host"; it only allows requests whose Host header is literally 0.0.0.0. Port forwarding also needs to be enabled for the port configured in rsk.conf, which in this case is the default value of 4444.
rpc {
    providers {
        web {
            cors: "*",
            http {
                enabled = true
                bind_address = "0.0.0.0"
                hosts = ["localhost", "0.0.0.0", "216.58.208.100"]
                port: 4444
            }
        }
    }
}
where 216.58.208.100 is your external IP.
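To verify from an external machine, a standard Ethereum-style JSON-RPC call should then return a result instead of a 400 (a sketch; substitute your actual external IP):

    curl -X POST http://216.58.208.100:4444 \
        -H "Content-Type: application/json" \
        --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'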

HAProxy (v2.0.3) with Keepalived (v2.0.7) on CentOS 7.5 returns ERR_EMPTY_RESPONSE for selected apps

We are running HAProxy on two non-production servers, with Keepalived managing failover between them.
We recently upgraded from HAProxy 1.5 to 2.0.3. Our non-production environment never had an HA solution, so we decided to run Keepalived to detect HAProxy failure/stoppage and move the VIPs to the backup server.
When we applied these updates, everything worked pretty well... until we noticed something when adding new sites to the LB. When Keepalived is restarted (not reloaded), the new sites behind the LB work for an indeterminate amount of time, then start returning ERR_EMPTY_RESPONSE. Nothing fixes this until Keepalived is restarted again; then they work for another indeterminate stretch before failing the same way.
The sites are still marked as up on the stats page.
The painful part is that the calls stop making it into haproxy.log at all, which leads me to think that the problem is not (just) HAProxy.
What we have tried:
Splitting up each environment into its own virtual interface in keepalived.conf
Updating the binding of the api on the backend server to a working api (to eliminate api code as being an option)
Creating a new binding with a shortened url
Decreasing timeouts (client, server)
keepalived.conf:
! Configuration File for keepalived
global_defs {
    notification_email {
        test@blah.com
    }
    notification_email_from keepalived@blah.com
    smtp_server blah.mail.protection.outlook.com.
    smtp_connect_timeout 30
    router_id LVS_NONPROD
}

# Script used to check if HAProxy is running
vrrp_script check_haproxy {
    script "pidof haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_DEV {
    state MASTER
    interface ens160
    virtual_router_id 52
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
    }
    track_script {
        check_haproxy
    }
}

vrrp_instance VI_TEST {
    state MASTER
    interface ens160
    virtual_router_id 53
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
    }
    track_script {
        check_haproxy
    }
}

vrrp_instance VI_UAT {
    state MASTER
    interface ens160
    virtual_router_id 54
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
    }
    track_script {
        check_haproxy
    }
}

vrrp_instance VI_STAGING {
    state MASTER
    interface ens160
    virtual_router_id 55
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
    }
    track_script {
        check_haproxy
    }
}

vrrp_instance VI_SS {
    state MASTER
    interface ens160
    virtual_router_id 56
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
        xxx.xxx.xxx.xxx
    }
    track_script {
        check_haproxy
    }
}

vrrp_instance VI_NS {
    state MASTER
    interface ens160
    virtual_router_id 57
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        xxx.xxx.xxx.xxx
    }
    track_script {
        check_haproxy
    }
}
haproxy globals:
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log 127.0.0.1 local2 debug
    tune.chksize 32768   # don't get me started... dev requirement because of antiquated requirement not coded away
    tune.bufsize 32768   # refer to previous statement
    tune.ssl.default-dh-param 2048
    max-spread-checks 20000
    tune.maxpollevents 10000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 40000
    user haproxy
    group haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
defaults:
defaults
    mode http
    log global
    option httplog
    option log-health-checks
    option dontlognull
    option http-server-close
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 60000
    timeout connect 10s
    timeout client 60000
    timeout server 60000
    timeout http-keep-alive 30s
    timeout check 30s
    maxconn 30000
    errorfile 503 /etc/haproxy/errorfiles/503.http
The answer was a bit silly. The internal DNS entry for the load balancer was incorrect, so remoting to it was impossible; I only got in when I tried to ssh into the machine during a period when the website was throwing these errors. It turned out that the old load balancer still had the IP addresses configured in its network scripts (i.e. /etc/sysconfig/network-scripts/ifcfg-eth0:0 through ifcfg-eth0:20).
So the new instances would work whenever I restarted Keepalived, because the restart claimed the IP addresses; then the old instance would take them back (subsequently causing failures, because the old instance didn't have the new site entries in it).
I stopped haproxy on the old instance, removed the /etc/sysconfig/network-scripts/ifcfg-eth0:* files from the old server, restarted keepalived on the new cluster, and everything is working as it should.
Feeling a little stupid right now.
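A quick way to catch this kind of address fight (a sketch; the interface name is taken from the config above, and the xxx placeholders stand for the real VIPs) is to check which machines currently hold a VIP:

    # Run on each server; the VIP should appear on exactly one of them.
    ip -brief addr show ens160

    # From another host, iputils arping in duplicate-address-detection
    # mode (-D) can reveal two MACs answering for the same IP:
    arping -D -I ens160 xxx.xxx.xxx.xxx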

HAProxy: set header and path for an incoming HTTPS request

I have a URL https://abc.example.com/a/b/c that I need to rewrite to https://example.com/abc/a/b/c.
To achieve this I have the following configuration in the frontend:
mode http
http-request set-var(req.rewrite_repo) req.hdr(host),lower,regsub(\.example\.com$,) if { hdr_end(host) -i .example.com }
http-request set-path %[var(req.rewrite_repo)]/v2%[path] if { var(req.rewrite_repo) -m found }
http-request set-header Host example.verizon.com if { var(req.rewrite_repo) -m found }
Now if I go to http://abc.example.com it works fine, but https://abc.example.com does not work. Could you please help me with it?
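One thing worth checking (an assumption, since the frontend's bind lines are not shown in the question): http-request rules only apply to traffic that actually reaches this frontend as plain HTTP, so for HTTPS to work HAProxy must terminate TLS on a 443 bind. Roughly (the certificate path is hypothetical):

    frontend main
        bind *:80
        # TLS termination is required for HAProxy to inspect and rewrite
        # HTTPS requests; the certificate path below is an assumption.
        bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
        mode http
        http-request set-var(req.rewrite_repo) req.hdr(host),lower,regsub(\.example\.com$,) if { hdr_end(host) -i .example.com }
        ...

Without a 443 bind, HTTPS connections never hit these rules at all.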

How to setup Varnish to work with Apache and Tomcat

I have an Ubuntu 12.0 server running Varnish 4 on port 80 and Apache 2.4 on port 8080.
I installed Tomcat 7 on port 8181, which runs only one Liferay site.
I would like to configure Varnish to work with Tomcat as well.
How do I set this up?
My current setup is this:
/etc/default/varnish
DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
/etc/varnish/default.vcl
backend default {
    .host = "123.456.789.000";
    .port = "8080";
    .connect_timeout = 580s;
    .first_byte_timeout = 580s;
    .between_bytes_timeout = 580s;
}
If I point my browser to 123.456.789.000:8181 the Tomcat site works. I will set my registrar's DNS to point "www.mytomcatsite.com" at the server, but how can I avoid the ":8181" in the URL?
With Apache everything works fine.
TIA.
From the Varnish documentation:
We add a new backend:
backend java {
    .host = "127.0.0.1";
    .port = "8000";
}
Now we need to tell Varnish where to send the different URLs. Let's look at vcl_recv:
sub vcl_recv {
    if (req.url ~ "^/java/") {
        set req.backend_hint = java;
    } else {
        set req.backend_hint = default;
    }
}
If you want this routing to be done on the basis of virtual hosts you just need to inspect req.http.host:
sub vcl_recv {
    if (req.http.host ~ "foo.com") {
        set req.backend_hint = foo;
    } elsif (req.http.host ~ "bar.com") {
        set req.backend_hint = bar;
    }
}
See:
https://www.varnish-cache.org/docs/trunk/users-guide/vcl-backends.html#multiple-backends
https://www.varnish-cache.org/docs/trunk/users-guide/vcl-backends.html#backends-and-virtual-hosts-in-varnish
Note: This is for Varnish 4. The VCL syntax will be slightly different for Varnish 3.
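Adapted to the setup in the question (a sketch; it assumes Tomcat stays on port 8181 and that www.mytomcatsite.com resolves to this server), the host-based variant would look roughly like:

    backend tomcat {
        .host = "127.0.0.1";
        .port = "8181";
    }

    sub vcl_recv {
        # Route the Liferay/Tomcat site by Host header; everything
        # else keeps going to the existing Apache backend "default".
        if (req.http.host ~ "(www\.)?mytomcatsite\.com") {
            set req.backend_hint = tomcat;
        } else {
            set req.backend_hint = default;
        }
    }

Since Varnish itself listens on port 80, visitors hitting www.mytomcatsite.com never need the :8181 suffix.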

Meteor on httpd (Apache/2.4.6 CentOS) proxy and WebSockets

I can't work out how to get WebSockets working when I deploy my Meteor app online. I keep getting this error:
WebSocket connection to 'ws://website.com/sockjs/***/********/websocket' failed: Unexpected response code: 400
I think this is because Apache sits in front of my Meteor app. I know Apache 2.4 had issues proxying ws://, but that should be resolved by modules/mod_proxy_wstunnel.so, which I have enabled (along with modules/mod_proxy.so, of course).
Here's my config. I'm running Meteor 1.2.1 as a systemd service (/etc/systemd/system/meteor.service) like so:
[Unit]
Description=meteor nodejs daemon
After=network.target remote-fs.target
[Service]
User=root
ExecStart=/usr/bin/node /home/root/www/main.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=meteor
Environment=ROOT_URL=http://website.com
Environment=PORT=3000
Environment=NODE_ENV=production
Environment=MONGO_URL=mongodb://127.0.0.1:27017/meteor
[Install]
WantedBy=multi-user.target
This is the output of httpd -v
Server version: Apache/2.4.6 (CentOS)
Server built: Aug 28 2015 22:11:18
And this is the relevant part in my vhost config (/etc/httpd/conf/httpd.conf) for website.com:
<VirtualHost my.ser.ver.ip:8080>
    ServerName website.com
    ServerAlias www.website.com
    ProxyRequests Off
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
    <Proxy *>
        Allow from all
    </Proxy>
</VirtualHost>
I've already tried adding the RewriteCond as suggested here, but with no success...
Any ideas? I'm also having issues getting OAuth to work with the accounts-facebook package, and I suspect it's for the same reason: something wrong in my proxy settings?
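For reference, the commonly suggested mod_proxy_wstunnel rewrite (the exact rule the asker tried isn't shown; this is a sketch assuming Meteor listening on port 3000) looks something like this inside the VirtualHost:

    RewriteEngine On
    # Proxy WebSocket upgrade requests through mod_proxy_wstunnel;
    # plain HTTP requests keep using the ProxyPass rule above.
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule ^/(.*)$ ws://localhost:3000/$1 [P,L]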
Solved the mystery. Of course it was my bad: I forgot all about Varnish.
I had Varnish on port 80 forwarding requests to Apache, which was in turn proxying them to node.js. I resolved it by removing Apache from the chain and configuring Varnish to talk straight to node.js for that specific domain.
This is what I did:
Implemented this default.vcl in /etc/varnish/
Removed import directors and all the content inside sub vcl_init {} (as I only have a single server)
Replaced set req.backend_hint = vdir.backend(); in sub vcl_recv {} with:
if (req.http.Host ~ "^(www\.)?website\.com") {
    set req.backend_hint = nodejs;
} else {
    set req.backend_hint = apache;
}
Created the two backends like so:
backend apache {
    .host = "127.0.0.1";
    .port = "8080";
    .max_connections = 300;
    .probe = {
        .request =
            "HEAD / HTTP/1.1"
            "Host: localhost"
            "Connection: close";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
    .first_byte_timeout = 300s;
    .connect_timeout = 5s;
    .between_bytes_timeout = 2s;
}

backend nodejs {
    .host = "127.0.0.1";
    .port = "3000";
    .connect_timeout = 1s;
    .first_byte_timeout = 2s;
    .between_bytes_timeout = 60s;
    .max_connections = 800;
}
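One caveat worth adding for the WebSocket traffic that started this question: Varnish does not pass WebSocket upgrades through its normal fetch path. The Varnish 4 documentation recommends piping Upgrade requests, roughly (a sketch following the documented recipe):

    sub vcl_recv {
        if (req.http.Upgrade ~ "(?i)websocket") {
            return (pipe);
        }
    }

    sub vcl_pipe {
        # Carry the Upgrade header through to the backend so the
        # WebSocket handshake can complete.
        if (req.http.upgrade) {
            set bereq.http.upgrade = req.http.upgrade;
        }
    }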