Bypass Transparent Squid With iptables

I'm using Squid for DNS/web filtering. I have configured Squid behind my GCP Cloud NAT in transparent mode to intercept HTTP and HTTPS web traffic, and I have added only the rules below to redirect HTTP and HTTPS traffic to Squid.
iptables -t nat -A PREROUTING -s 0.0.0.0/0 -p tcp --dport 80 -j REDIRECT --to-port 3129
iptables -t nat -A PREROUTING -s 0.0.0.0/0 -p tcp --dport 443 -j REDIRECT --to-port 3130
As far as I have learned, Squid is a web proxy that only handles HTTP, HTTPS, and FTP requests; it doesn't understand SMTP, UDP, or any other protocol. The iptables rules above work for HTTP and HTTPS, but everything else, such as SMTP and UDP requests, is getting blocked. Since Squid can't be told to handle SMTP or UDP, I only want HTTP and HTTPS handled by Squid, and the rest of the traffic should bypass Squid and go directly out through my GCP Cloud NAT.
Can anybody tell me which iptables rules I need so that only ports 80 and 443 are redirected to Squid, while requests on all other ports bypass Squid and go straight out via the GCP Cloud NAT?
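A minimal sketch of the kind of rule set I have in mind (assuming the Squid VM is the clients' default route, eth0 is its network interface, and IP forwarding is enabled; the interface name is an assumption, not my exact setup):

sysctl -w net.ipv4.ip_forward=1
# Intercept only web traffic arriving from the clients
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3130
# Forward and source-NAT everything else (SMTP, UDP, ...) towards Cloud NAT
iptables -A FORWARD -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE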

Related

Is there a way to use a forwarding proxy as a backend (including authentication) in HAProxy?

I am quite new to HAProxy and want to achieve the following setup / packet flow:
Client -> HAProxy (as reverse proxy) -> Forwarding Proxy (HAProxy, IIS, Squid...) -> Internet -> server.example.com
I would like to have encrypted connections with TLS/SSL from the Client -> HAProxy and from HAProxy -> server.example.com
This means that the forwarding proxy needs to support the HTTP CONNECT method, so it can establish a TCP tunnel and transmit packets without trying to interpret them. Over this tunnel I should be able to send bytes to server.example.com, TLS/SSL encrypted, i.e. HTTPS.
Furthermore, authentication against the forwarding proxy may be needed, e.g. HTTP Basic authentication.
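For illustration, the kind of request the forwarding proxy has to accept looks roughly like this (the host and the Basic credentials are just the placeholders used later in this post):

CONNECT server.example.com:443 HTTP/1.1
Host: server.example.com:443
Proxy-Authorization: Basic dXNlcjpwYXNzd29yZA==

After a 200 response, everything sent on that connection is relayed verbatim to server.example.com:443.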
The software stack of my test setup is as follows:
Client Firefox 76.0.1
HA-Proxy version 1.6.3 2015/12/25
Squid Cache: Version 3.5.27
I have setup this HAProxy configuration:
global
# Standard settings
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
# Tuning
maxconn 2000
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
ssl-default-bind-options no-sslv3
# Ensure a secure enough DH paramset
tune.ssl.default-dh-param 2048
defaults
log global
mode http
option httplog
option dontlognull
# Redispatch 503 errors on backends, reducing the number of 503 reaching clients
option redispatch
# We want to stall clients for as long as possible, before giving
# up with 503:
timeout connect 5m
# Clients must be acceptably responsive
timeout client 1m
# Server not as much...
timeout server 5m
# HTTPS server
frontend https-in
bind :443 ssl crt-ignore-err all crt /etc/haproxy/ssl/certkey.pem
# Don't serve HTTP directly, but redirect to same URL in https
redirect scheme https code 301 if !{ ssl_fc }
default_backend backend-proxy
backend backend-proxy
# Create the Authorization / Proxy-Authorization header value
# echo -n "user:password" | base64
http-request add-header Proxy-Authorization "Basic dXNlcjpwYXNzd29yZA=="
# We need to use the CONNECT method
http-request set-method CONNECT
# The proxyserver needs to know the full server name
http-request set-path server.example.com:443
server proxy 192.168.1.1:8080
In my test setup I use a Squid server as a forwarding proxy with the following configuration:
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid/squid-passwd
acl basic_client proxy_auth REQUIRED
http_access deny !basic_client
http_access allow basic_client
http_access deny all
http_port 8080
coredump_dir /var/spool/squid
Using the Squid forwarding proxy from a regular browser on the same subnet, including authentication, works fine.
So when the first request from the client comes in at HAProxy, it gets forwarded via the backend-proxy backend to the forwarding proxy (Squid). The CONNECT succeeds, as I see in the Squid log.
1591166403.966 60146 192.168.1.10 TCP_TUNNEL/200 39 CONNECT server.example.com:443 test HIER_DIRECT/6.7.8.9 -
(IP addresses were replaced with generic values)
The HAProxy log shows, that the correct backend is used:
Jun 3 08:39:57 localhost haproxy[3547]: 192.168.1.20:39398 [03/Jun/2020:08:38:56.855] https-in~ backend-proxy/proxy 154/0/13/209/60375 200 39 - - ---- 0/0/0/0/0 0/0 "GET /someurl HTTP/1.1"
(IP addresses were replaced with generic values)
So far so good. But I am unable to establish successful communication with server.example.com from my client. I think I have to use a second backend, which no longer mangles the requests (exchanging method and path) but instead uses the tunnel established through the forwarding proxy to transmit the request.
How can I save the 'state' of the communication to my backend / proxy server in HAProxy, so the request can be resent to another backend?
How to extract and use the TCP port from the response of the Forwarding Proxy?
Is there a way to check, if the TCP tunnel on the Forwarding Proxy is still opened or do I need to request it using CONNECT every time before I want to use it?
EDIT:
I solved the situation by using stunnel as an intermediary to handle the TCP tunnel creation with CONNECT against the Forwarding Proxy.
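Roughly, the stunnel side looks like this; this is a sketch of the idea rather than my exact config, with the proxy address and target host being the placeholders from above. stunnel accepts plain traffic from HAProxy, issues the CONNECT against the forwarding proxy (including Basic authentication) and then speaks TLS to the target:

[connect-tunnel]
client = yes
accept = 127.0.0.1:8443
connect = 192.168.1.1:8080
protocol = connect
protocolHost = server.example.com:443
protocolUsername = user
protocolPassword = password

The HAProxy backend then just points a plain server line at 127.0.0.1:8443 instead of rewriting the method and path.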
If you have an upstream HTTP proxy (like Squid, not a SOCKS proxy) and you want HAProxy to accept connections and open the tunnel through the upstream proxy on behalf of the clients (because the clients cannot issue the HTTP CONNECT themselves), then this functionality does not exist in HAProxy today.
I created a branch that does this via the server keyword proxy-tunnel.
With the example config below, clients that connect to HAProxy on port 20025 cause HAProxy to establish an HTTP CONNECT tunnel through the upstream proxy 172.16.0.99:50443 to 172.16.0.2:2023:
listen SMTP-20025
bind 0.0.0.0:20025
server TEST_SERVERVIA_PROXY 172.16.0.2:2023 proxy-tunnel 172.16.0.99:50443

Haproxy TLS terminating and passthrough based on sni

I have a similar path for the requests:
client mydomain.com -> nlb:443 -> haproxy -> cloudfront
client a.mydomain.com -> nlb:443 -> haproxy -> target_group_a
The main idea is to do TLS passthrough for the main domain name and send it to CloudFront without TLS termination. Requests to a.mydomain.com should go to target_group_a, with TLS terminated by HAProxy. So my config for this is:
frontend main
bind *:443
mode tcp
option tcplog
log global
tcp-request inspect-delay 5s
acl is_main req_ssl_sni -i "${pDomainName}"
acl is_a req_ssl_sni -m beg "a"
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend main if is_main
use_backend a if is_a
backend main
mode tcp
option ssl-hello-chk
server cloudfront "${pCloudFrontUrl}:443" check resolvers aws
backend a
mode tcp
server local 127.0.0.1:9666 send-proxy
frontend a
bind *:9666 ssl crt server.pem ca-file ca.pem verify required accept-proxy
mode http
default_backend proxy_a
backend proxy_a
mode http
server elb "${pServer}:80" check resolvers aws
The main record passes through successfully, I get CloudFront SSL termination, and everything is okay, but not for a.mydomain.com.
I also tried to see which SNI HAProxy captures, but I only got capture0: - in the logs. I did this (right after the tcp-request inspect-delay line):
tcp-request content capture req_ssl_sni len 15
log-format "capture0: %[capture.req.hdr(0)]"
and that is strange, because the routing works.
I've tried a lot of possibilities. For now I get the error SSL peer handshake failed, the server most likely requires a client certificate to connect, but if I make frontend a listen on another port in http mode, everything works fine.
Maybe I'm missing something basic, but I've been stuck on this for ages and hope someone can help.
For anyone who is suffering, or will suffer, with this situation: make sure you are testing with the GNU/OpenSSL build of curl (or build it with the proper libraries), because it doesn't work for me with the BSD (SecureTransport) curl. My curl version and libraries:
curl 7.66.0 (x86_64-apple-darwin17.7.0) libcurl/7.66.0 SecureTransport zlib/1.2.11
Release-Date: 2019-09-11
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets
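To check which TLS library your curl is built against, and to verify the SNI routing without curl at all, something like the following works (hostnames are the placeholders from the question):

curl --version
openssl s_client -connect a.mydomain.com:443 -servername a.mydomain.com </dev/null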

Full SSH proxy over a reverse SSH connection

There are tools like sshuttle that allow proxying of all traffic through an outbound SSH connection. In our case, we can only SSH into the server, and would like to then proxy all outbound traffic over the inbound SSH connection. Is this possible?
It's possible using SSH port forwarding. In this scenario, let's say you can log in to HostB via SSH. The command
ssh -L 2001:HostD:143 user@HostB
will use the SSH tunnel to forward all traffic sent to your local port 2001 to port 143 of HostD. And as you can see, you can replace HostD with localhost, which would instead forward the traffic to port 143 of HostB.
Image credit goes to https://www.youtube.com/watch?v=JKrO5WABdoY&t=251s
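A quick sketch of how a local client would then use the tunnel (HostB and HostD are the placeholder names from above):

# open the forward from the machine you are SSHing from
ssh -L 2001:HostD:143 user@HostB
# any local client can now reach HostD:143 through the tunnel
nc 127.0.0.1 2001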

SSL Will not Authenticate for domain of

I am running an Ubuntu 12.04 server with Apache 2.2 and OpenSSL 1.0.1 (recent). I am attempting to serve a self-signed certificate for HTTPS browsing. The server is also running Webmin and a Tomcat application server.
Currently, HTTPS requests to the primary server do not work, returning ERR_CONNECTION_REFUSED.
I am currently using virtual hosts to specify locations for HTTPS connections. HTTPS only works for my Webmin portal and not for any other location on the web server. I had assumed this was a port conflict between miniserv and Apache, but there doesn't appear to be any conflict that I can determine. I have checked for other possible web servers that may be using SSL (such as Jetty or nginx), but there don't appear to be any.
Is there any way to determine which services are associated with which ports? Failing that, is there any way to determine which services are currently using SSL?
Thanks in advance.
To find out which services are listening on SSL run:
netstat -tulpn | grep :443
It will generate output like:
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1607/https
It sounds like it could also be a conflict in the way you have set up SSL for the virtual hosts. Often vhost config can be a bit funny if you're sharing the same certificate for multiple vhosts.
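For reference, a minimal Apache 2.2 SSL vhost looks roughly like this (paths and names are illustrative, not taken from the question); the point is that both Listen 443 and SSLEngine on have to be present for Apache itself, and not just Webmin's miniserv, to answer on 443:

Listen 443
NameVirtualHost *:443
<VirtualHost *:443>
ServerName example.com
SSLEngine on
SSLCertificateFile /etc/ssl/certs/example.crt
SSLCertificateKeyFile /etc/ssl/private/example.key
DocumentRoot /var/www/example
</VirtualHost>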
Edit:
Also another good one for finding what is using a given port is:
lsof -i :443 | grep LISTEN
Which generates output as:
httpd 1132 apache 5u IPv6 22762080 0t0 TCP *:https (LISTEN)
httpd 3084 apache 5u IPv6 22762080 0t0 TCP *:https (LISTEN)
httpd 3312 apache 5u IPv6 22762080 0t0 TCP *:https (LISTEN)
httpd 3555 apache 5u IPv6 22762080 0t0 TCP *:https (LISTEN)
httpd 3593 apache 5u IPv6 22762080 0t0 TCP *:https (LISTEN)

HAProxy wildcard SSL backend forward issue

I am using HAProxy 1.5-dev21. I have purchased a wildcard SSL (for example: *.foo.com).
I want all traffic arriving from the Internet on port 443 to be redirected to the internal network according to the domain name; the backends are several web servers running plain HTTP (for example: abc.foo.com:443 -> 192.168.10.10:80, def.foo.com:443 -> 192.168.10.11:80).
However, whatever the incoming domain name, HAProxy passes all traffic to the default backend.
My config works well if I am not using SSL.
The following is my simplified config file:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
defaults
log global
mode http
option tcplog
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend https-in
mode tcp
bind :443 ssl crt /etc/haproxy/foo.com.pem
use_backend abc if { hdr_end(host) -i abc.foo.com }
use_backend def if { hdr_end(host) -i def.foo.com }
default_backend application-backend
backend abc
mode tcp
server Server1 192.168.10.10:80
backend def
mode tcp
server Server2 192.168.10.11:80
backend application-backend
mode tcp
server server3 192.168.10.12:80
You're using tcp mode while trying to access HTTP content.
Please turn on 'mode http' and it should work.
Baptiste
When you have SSL, you can't use hdr_end. Here is how I do it:
frontend domain.com
bind 10.50.81.131:443 ssl crt domain.com ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP
bind 10.50.81.131:80
mode http
maxconn 300
option httpclose
option forwardfor
reqadd X-Forwarded-Proto:\ https if { ssl_fc }
use_backend first_farm if { ssl_fc_sni sub1.domain.com }
use_backend second_farm if { ssl_fc_sni sub2.domain.com }
default_backend default_farm
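The matching backends are then just plain HTTP pools, roughly like this (the server names and addresses are illustrative, not from my setup):

backend first_farm
mode http
server web1 10.50.81.10:80 check
backend second_farm
mode http
server web2 10.50.81.11:80 check
backend default_farm
mode http
server web3 10.50.81.12:80 check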