lighttpd reverse proxy - all responses are blank pages

I have a lighttpd server (1.4.35) configured as a reverse proxy in front of a Play Framework (v1.2.5.3) application running on the same server, and it is working fine. Here's the lighttpd config:
$SERVER["socket"] == ":8009" {
proxy.debug = 1
proxy.server = ("" => (("host" => "127.0.0.1", "port" => 9000 )))
}
I want to move the Play app to another server, but when I try to access it via lighttpd, every request comes back with status 200, Content-Length: 0, and a blank page. The new lighttpd config is:
$SERVER["socket"] == ":8009" {
proxy.debug = 1
proxy.server = ("" => (("host" => "10.10.1.102", "port" => 9000 )))
}
I have tested that the Play application is accessible from the server lighttpd is running on (e.g. wget http://10.10.1.102:9000 from the lighttpd server returns the correct content).
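A plain wget is not quite the same request lighttpd makes, though: mod_proxy in lighttpd 1.4 forwards the client's original Host header to the backend unchanged, so that difference is worth ruling out. A closer reproduction from the lighttpd server, with the public hostname the browser uses as a placeholder:
# send the Host header the proxy would forward; frontend.example.com:8009 is a placeholder
wget -S -O - --header='Host: frontend.example.com:8009' http://10.10.1.102:9000/
If that also comes back empty, the blank response is likely coming from how the Play app handles the unexpected Host header rather than from lighttpd itself.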
Lighttpd's proxy.debug output is:
when accessing locally:
2014-12-14 07:43:56: (mod_proxy.c.1144) proxy - start
2014-12-14 07:43:56: (mod_proxy.c.1185) proxy - ext found
2014-12-14 07:43:56: (mod_proxy.c.1319) proxy - found a host 127.0.0.1 9000
2014-12-14 07:43:56: (mod_proxy.c.398) connect delayed: 13
2014-12-14 07:43:56: (mod_proxy.c.1000) proxy: fdevent-out 1
2014-12-14 07:43:56: (mod_proxy.c.1029) proxy - connect - delayed success
2014-12-14 07:43:56: (mod_proxy.c.969) proxy: fdevent-in 4
2014-12-14 07:43:56: (mod_proxy.c.667) proxy - have to read: 2785
2014-12-14 07:43:56: (mod_proxy.c.969) proxy: fdevent-in 4
2014-12-14 07:43:56: (mod_proxy.c.667) proxy - have to read: 0
when accessing 10.10.1.102:9000:
2014-12-14 07:42:42: (mod_proxy.c.1144) proxy - start
2014-12-14 07:42:42: (mod_proxy.c.1185) proxy - ext found
2014-12-14 07:42:42: (mod_proxy.c.1319) proxy - found a host 10.10.1.102 9000
2014-12-14 07:42:42: (mod_proxy.c.398) connect delayed: 10
2014-12-14 07:42:42: (mod_proxy.c.1000) proxy: fdevent-out 1
2014-12-14 07:42:42: (mod_proxy.c.1029) proxy - connect - delayed success
2014-12-14 07:42:42: (mod_proxy.c.969) proxy: fdevent-in 4
2014-12-14 07:42:42: (mod_proxy.c.667) proxy - have to read: 0
Any ideas why lighttpd isn't getting any content from the second server?

For those who find this question and have the same problem: I switched to nginx and had it running in 15 minutes. I'm not sure why lighttpd couldn't do the same.
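For reference, a minimal nginx equivalent of the lighttpd config above (same listen port and backend; a sketch, not verified against this particular app):
server {
    listen 8009;
    location / {
        # hand everything to the Play app on the other host
        proxy_pass http://10.10.1.102:9000;
        # forward the client's original Host header, as lighttpd's mod_proxy does
        proxy_set_header Host $host;
    }
}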

Related

Authentication failure with OAuth to Traefik dashboard

I am trying to get Traefik set up in Docker and am having a heck of a time. I am following this guide and using Cloudflare (DNS only for traefik.mydomain.com). When I try to connect, I get "This site can't be reached: oauth.mydomain.com's server IP address could not be found".
wget https://traefik.mydomain.com/dashboard
--2020-09-26 19:19:38-- https://traefik.mydomain.com/dashboard
Resolving traefik.mydomain.com (traefik.mydomain.com)... <ip address>
Connecting to traefik.mydomain.com (traefik.mydomain.com)|<ip address>|:443... connected.
HTTP request sent, awaiting response... 307 Temporary Redirect
Location: https://accounts.google.com/o/oauth2/auth?client_id=6597174190-33npvgec044jtcrj4scmfgt561.apps.googleusercontent.com&prompt=select_account&redirect_uri=https%3A%2F%2Foauth.mydomain.com%2F_oauth&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&state=fc4c85a1e11f4914247d1e7c95b031%3Agoogle%3Ahttps%3A%2F%2Ftraefik.mydomain.com%2Fdashboard [following]
--2020-09-26 19:19:38-- https://accounts.google.com/o/oauth2/auth?client_id=6597114190-33npkhvge44jtcrj4scmuafgt561.apps.googleusercontent.com&prompt=select_account&redirect_uri=https%3A%2F%2Foauth.mydomain.com%2F_oauth&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&state=fc4c85a1e11f4914247d1e7c94a5b031%3Agoogle%3Ahttps%3A%2F%2Ftraefik.mydomain.com%2Fdashboard
Resolving accounts.google.com (accounts.google.com)... 172.217.1.205, 2607:f8b0:400f:805::200d
Connecting to accounts.google.com (accounts.google.com)|172.217.1.205|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://accounts.google.com/AccountChooser?oauth=1&continue=https%3A%2F%2Faccounts.google.com%2Fsignin%2Foauth%2Flegacy%2Fconsent%3Fauthuser%3Dunknown%26part%3DAJi8hAPSHE1cMVuaeTQ61pcXpMEfDhbxN02IAjg5jH0GYWG7yCVc5EC9ug_kuK3j7hUBGCiY_q_SGL-0xSPDtiliIthwpmOTvP_MV5upFvDdYTpTTlraXSxx_7f8vhJteA8UjJoKSqgeUvFWns_BdFn8z73XALchawMrWA1vVl0xJYpUYHUXxD3K0zl4TbcgpVOljSfZM0vkQAHwTm54OjNiw51GTMCJAwiGwh_ANodLXY1n07UrO6-AgJ1pEeRksrlKs-O2W2Az1Fj4QWMej3OJ8HiHfRVlBt8c8zStbROoFMIce9ldHm5FF-l54b3xQcBp4xLi6ABqcrciv_Y0TAFuuwwotfgqrl1_uMHfyX9KJogk_gntcEiG2489OMNwFinOVAPUCg1Z-gn-ps7g_oBl4MB-FzsIiVpfyy_qRD7SMyhvnVe4Bj-%26as%3DS-2012888342%253A160116957872%23 [following]
--2020-09-26 19:19:38-- https://accounts.google.com/AccountChooser?oauth=1&continue=https%3A%2F%2Faccounts.google.com%2Fsignin%2Foauth%2Flegacy%2Fconsent%3Fauthuser%3Dunknown%26part%3DAJi8hAPSk6oHE1cMVRfegIuaeTQ61pcXpMEfD2FXah02IAjg5GYWG7yCVc5EC9ug_kuK3j7hUBGCiY_q_SGL-0xSPDtiliIthwpmOTvP_MV5upFvDdYTpTTSxx_7f8vhJteA8UjJoKSqgeUvFWns_BdFn8z73XALchawMrWA1vVXbAl0xJYpUYHUXzl4TbcgpVOljSfZM0vkQAHwTmhD54OjNiw51GTMCJAwiGwh_ANodLXY1n07UrO6-AgJ1pEeRksrlKs-O2W2Az1Fj4QWMej3OJ8HiHfRVlBt8c8zStbROoFMIce9ldHm5FF-l54b3xQcBpABqcrciv_Y0TAFuuwwotfgqrl1_uMHfyX9KJogk_gntcEiG2489OMNwFAPUCg1Z-gn-ps7g_oBl4MB-FzsIiVpfyy_qRDBteJ7SMyhvnVe4Bj-%26as%3DS-2012888342%253A1601169578900872%23
Reusing existing connection to accounts.google.com:443.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://accounts.google.com/ServiceLogin?continue=https%3A%2F%2Faccounts.google.com%2Fsignin%2Foauth%2Flegacy%2Fconsent%3Fauthuser%3Dunknown%26part%3DAJi8hAPSkE1cMVRIuaeTQ61ppMEfD2FXahbxN02IAjg5jH0GYWG7yCVc5EC9ug_kuK3j7hUBGCiY_q_SGL-0xSPDtiliIthwpmOTvP_MV5upFvDdYTpTTlraXSxx_7f8vhJteA8UjJoKSqgs_BdFn8z73XALchawMrWA1vVXbAl0xJYpUYHUXxD3K0zl4TbcgpVOljSfZM0vkQAHwTmhD54OjNiw51GTMCJAwiGwh_ANodLXY1n07UrO6-AgJ1pEeRksrlKs-O2W2Az1Fj4QWMej3OJ8HiHBt8c8zStbROoFMIce9ldHm5FF-l54b3xQcBp4xLi6ABqcrciv_Y0TAFuuwwotfgqrl1_uMHfyX9KJntcEiG2489OMNwFinOVAPUCg1Z-gn-ps7g_oBl4MB-FzsIiVpfyy_qRDBteJ7SMyhvnVe4Bj-%26as%3DS-2012888342%253A1601178900872%23&sacu=1&oauth=1&rip=1 [following]
--2020-09-26 19:19:39-- https://accounts.google.com/ServiceLogin?continue=https%3A%2F%2Faccounts.google.com%2Fsignin%2Foauth%2Flegacy%2Fconsent%3Fauthuser%3Dunknown%26part%3DAJi8hAPSk6ogIuaeTQ61pcXpMEfD2FXahbxN02IAjg5jH0GYWG7yCVc5EC9ug_kuK3j7hUBGCiY_q_SGL-0xSPDtiliIthwpmOTvP_MV5upFvDdYTpTTlraXSxx_7f8vhJteA8UjJoKSqgeUvFWns_BdFn8z73XALchawMrWA1vVXbAl0xJYpUYHUXxD3K0zl4jSfZM0vkQAHwTmhD54OjNiw51GTMCJAwiGwh_ANodLXY1n07UrO6-AgJ1pEeRksrlKs-O2W2Az1Fj4QWMej3OVlBt8c8zStbROoFMIce9ldHm5FF-l54b3xQcBp4xLi6ABqcrciv_Y0TAFuuwwotfgqrl1_uKJogk_gntcEiG2489OMNwFinOVAPUCg1Z-gn-ps7g_oBl4MB-FzsIiVpfyy_qRDBteJ7SMyhvnVe4Bj-%26as%3DS-2012888342%253A1601169578900872%23&sacu=1&oauth=1&rip=1
Reusing existing connection to accounts.google.com:443.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘dashboard.1’
dashboard.1 [ <=> ] 58.82K --.-KB/s in 0.03s
2020-09-26 19:19:39 (1.64 MB/s) - ‘dashboard.1’ saved [60236]
The Docker log says:
level=debug msg="Remote error http://oauth:4181. StatusCode: 307"
middlewareType=ForwardedAuthType middlewareName=middlewares-oauth@file
This is my docker-compose.yml file:
version: "3.3"
########################### NETWORKS
networks:
t2_proxy:
external:
name: t2_proxy
default:
driver: bridge
########################### SERVICES
services:
# All services / apps go below this line
# Traefik 2 - Reverse Proxy
traefik:
container_name: traefik
image: traefik:chevrotin # the chevrotin tag refers to v2.2.x but introduced a breaking change in 2.2.2
restart: unless-stopped
command: # CLI arguments
- --global.checkNewVersion=true
- --global.sendAnonymousUsage=false
- --entryPoints.http.address=:80
- --entryPoints.https.address=:443
# Allow these IPs to set the X-Forwarded-* headers - Cloudflare IPs: https://www.cloudflare.com/ips/
- --entrypoints.https.forwardedHeaders.trustedIPs=173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22
- --entryPoints.traefik.address=:8080
- --api=true
# - --api.insecure=true
# - --serversTransport.insecureSkipVerify=true
- --log=true
- --log.level=DEBUG # (Default: error) DEBUG, INFO, WARN, ERROR, FATAL, PANIC
- --accessLog=true
- --accessLog.filePath=/traefik.log
- --accessLog.bufferingSize=100 # Configuring a buffer of 100 lines
- --accessLog.filters.statusCodes=400-499
- --providers.docker=true
- --providers.docker.endpoint=unix:///var/run/docker.sock
- --providers.docker.defaultrule=Host(`{{ index .Labels "com.docker.compose.service" }}.$DOMAINNAME`)
- --providers.docker.exposedByDefault=false
- --providers.docker.network=t2_proxy
- --providers.docker.swarmMode=false
- --providers.file.directory=/rules # Load dynamic configuration from one or more .toml or .yml files in a directory.
# - --providers.file.filename=/path/to/file # Load dynamic configuration from a file.
- --providers.file.watch=true # Only works on top level files in the rules folder
# - --certificatesResolvers.dns-cloudflare.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory # LetsEncrypt Staging Server - uncomment when testing
- --certificatesResolvers.dns-cloudflare.acme.email=$CLOUDFLARE_EMAIL
- --certificatesResolvers.dns-cloudflare.acme.storage=/acme.json
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.provider=cloudflare
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.resolvers=1.1.1.1:53,1.0.0.1:53
# networks:
# t2_proxy:
# ipv4_address: 192.168.90.254 # You can specify a static IP
networks:
- t2_proxy
security_opt:
- no-new-privileges:true
ports:
- target: 80
published: 80
protocol: tcp
mode: host
- target: 443
published: 443
protocol: tcp
mode: host
- target: 8080
published: 8080
protocol: tcp
mode: host
volumes:
- $DOCKERDIR/traefik2/rules:/rules
- /var/run/docker.sock:/var/run/docker.sock:ro
- $DOCKERDIR/traefik2/acme/acme.json:/acme.json # cert location - you must touch this file and change permissions to 600
- $DOCKERDIR/traefik2/traefik.log:/traefik.log # for fail2ban - make sure to touch file before starting container
- $DOCKERDIR/shared:/shared
environment:
- CF_API_EMAIL=$CLOUDFLARE_EMAIL
- CF_API_KEY=$CLOUDFLARE_API_KEY
labels:
- "traefik.enable=true"
# HTTP-to-HTTPS Redirect
- "traefik.http.routers.http-catchall.entrypoints=http"
- "traefik.http.routers.http-catchall.rule=HostRegexp(`{host:.+}`)"
- "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
# HTTP Routers
- "traefik.http.routers.traefik-rtr.entrypoints=https"
- "traefik.http.routers.traefik-rtr.rule=Host(`traefik.$DOMAINNAME`)"
- "traefik.http.routers.traefik-rtr.tls=true"
# - "traefik.http.routers.traefik-rtr.tls.certResolver=dns-cloudflare" # Comment out this line after first run of traefik to force the use of wildcard certs
- "traefik.http.routers.traefik-rtr.tls.domains[0].main=$DOMAINNAME"
- "traefik.http.routers.traefik-rtr.tls.domains[0].sans=*.$DOMAINNAME"
# - "traefik.http.routers.traefik-rtr.tls.domains[1].main=$SECONDDOMAINNAME" # Pulls main cert for second domain
# - "traefik.http.routers.traefik-rtr.tls.domains[1].sans=*.$SECONDDOMAINNAME" # Pulls wildcard cert for second domain
## Services - API
- "traefik.http.routers.traefik-rtr.service=api#internal"
## Middlewares
- "traefik.http.routers.traefik-rtr.middlewares=chain-oauth#file"
# Google OAuth - Single Sign On using OAuth 2.0
oauth:
container_name: oauth
image: thomseddon/traefik-forward-auth:latest
restart: unless-stopped
networks:
- t2_proxy
security_opt:
- no-new-privileges:true
environment:
- CLIENT_ID=$GOOGLE_CLIENT_ID
- CLIENT_SECRET=$GOOGLE_CLIENT_SECRET
- SECRET=$OAUTH_SECRET
- COOKIE_DOMAIN=$DOMAINNAME
- INSECURE_COOKIE=false
- AUTH_HOST=oauth.$DOMAINNAME
- URL_PATH=/_oauth
- WHITELIST=$MY_EMAIL
- LOG_LEVEL=debug
- LOG_FORMAT=text
- LIFETIME=2592000 # 30 days
- DEFAULT_ACTION=auth
- DEFAULT_PROVIDER=google
labels:
- "traefik.enable=true"
## HTTP Routers
- "traefik.http.routers.oauth-rtr.entrypoints=https"
- "traefik.http.routers.oauth-rtr.rule=Host(`oauth.$DOMAINNAME`)"
- "traefik.http.routers.oauth-rtr.tls=true"
## HTTP Services
- "traefik.http.routers.oauth-rtr.service=oauth-svc"
- "traefik.http.services.oauth-svc.loadbalancer.server.port=4181"
## Middlewares
- "traefik.http.routers.oauth-rtr.middlewares=chain-oauth#file"
Finally, the end of my middlewares.toml file looks like this:
[http.middlewares.middlewares-oauth]
[http.middlewares.middlewares-oauth.forwardAuth]
address = "http://oauth:4181" # Make sure you have the OAuth service in docker-compose.yml
trustForwardHeader = true
authResponseHeaders = ["X-Forwarded-User"]
I searched around and checked everything that had already been suggested, but no luck. It seems like it's got to be something small, though.
In Cloudflare, I changed oauth.mydomain.com from "Proxied" to "DNS Only" and now I am no longer getting redirected.
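A quick way to confirm the record is usable again (hostnames are the placeholders from the question):
# the auth host has to resolve publicly, otherwise the OAuth redirect dead-ends
dig +short oauth.mydomain.com
# an unauthenticated request to the forward-auth path should now answer (a redirect to Google is expected)
curl -I https://oauth.mydomain.com/_oauth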

pagekite.py [flying] DynDNS updates may be incomplete, will retry

I am trying to make my localhost:80 available on the internet using pagekite, with this config in ~/.pagekite.rc:
## NOTE: This file may be rewritten/reordered by pagekite.py.
#
##[ Default kite and account details ]##
kitename = myemail@gmail.com
kitesecret = my_kite_secret
##[ Front-end settings: use pagekite.net defaults ]##
defaults
##[ Back-end service examples ... ]##
#
service_on = https:asldkjdk39090.pagekite.me:localhost:80:my_kite_secret
END
I run pagekite:
# pagekite.py
>>> Hello! This is pagekite.py v0.5.9.3. [CTRL+C = Stop]
Connecting to front-end relay 54.84.55.54:443 ...
- Protocols: http http2 http3 https websocket irc finger httpfinger raw
- Protocols: minecraft
- Ports: 79 80 443 843 2222 3000 4545 5222 5223 5269 5670 6667 8000 8080
- Ports: 8081 8082 8083 9292 25565
- Raw ports: virtual
~<> Flying localhost:80 as https://asldkjdk39090.pagekite.me/
Trying localhost:80 as https://asldkjdk39090.pagekite.me/
<< pagekite.py [flying] DynDNS updates may be incomplete, will retry...
Then I request https://asldkjdk39090.pagekite.me/ and it gives an error:
$ curl https://asldkjdk39090.pagekite.me/
curl: (6) Could not resolve host: asldkjdk39090.pagekite.me
I don't clearly understand why it's not working or how to fix it. I expect pagekite to pass requests to my localhost:80 when I request https://asldkjdk39090.pagekite.me/, but it doesn't.
Update
With this config it's working:
## NOTE: This file may be rewritten/reordered by pagekite.py.
#
##[ Default kite and account details ]##
kitename = my_kite_name
kitesecret = my_kite_secret
##[ Front-end settings: use pagekite.net defaults ]##
defaults
##[ Back-end service examples ... ]##
#
service_on = http:my_kite_name.pagekite.me:localhost:80:my_kite_secret
END
where my_kite_name is the name I created on the settings page.
Then curl https://my_kite_name.pagekite.me/ is proxied to my localhost as expected.
So it works for pre-created names, but not for a random name like asldkjdk39090, which I want to use as an on-the-fly subdomain without registering it on the settings page.
On-the-fly subdomains aren't supported by pagekite.net.
You always have to pre-register, either using the website or the built-in registration tool in pagekite.py itself. Unfortunately, on some modern distros the built-in pagekite.py registration is currently broken because our API server is obsolete and modern versions of OpenSSL refuse to connect to it.
We are working on fixing that, obviously, but it will take some time because of dependencies.
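So, with pagekite.net's hosted service, the working pattern is to create the kite name on the settings page first and then fly it, for example (the kite name is a placeholder for one that already exists in the account):
# expose local port 80 under an already-registered kite
pagekite.py 80 my_kite_name.pagekite.me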

How can I setup a proxy in a Selenium Chrome container?

I have a docker-compose.yml file that sets the well-known environment variables for reaching our corporate proxy:
---
version: '2.2'
# adapted from:
# https://github.com/SeleniumHQ/docker-selenium/wiki/Getting-Started-with-Docker-Compose
# docker-compose --force-recreate
services:
  chrome:
    privileged: True
    image: "selenium/standalone-chrome:3.11.0"
    ports:
      - "4444:4444"
    volumes:
      - /dev/shm:/dev/shm
    environment:
      - TZ="UT"
      - http_proxy=http://proxy.lan:8080
      - https_proxy=http://proxy.lan:8080
      - no_proxy=
      #- SE_OPTS=-Dhttp.proxyHost=proxy.lan -Dhttp.proxyPort=8080
    network_mode: host
When I run wget inside the container, the proxy is used as expected:
--2018-05-02 12:30:45-- http://google.com/
Resolving proxy.lan (proxy.lan)... 192.168.33.141
Connecting to proxy.lan (proxy.lan)|192.168.33.141|:8080... connected.
Proxy request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.com/ [following]
--2018-05-02 12:30:45-- http://www.google.com/
Reusing existing connection to proxy.lan:8080.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘/dev/null’
/dev/null [ <=> ] 10.81K --.-KB/s in 0s
2018-05-02 12:30:46 (310 MB/s) - ‘/dev/null’ saved [11073]
However, when I try to run the container with SE_OPTS="-Dhttp.proxyHost=proxy.lan -Dhttp.proxyPort=8080", I see a stack trace:
Exception in thread "main" com.beust.jcommander.ParameterException: Was passed main parameter '"-Dhttp.proxyHost=proxy.lan' but no main parameter was defined in your arg class.
There is an unmerged PR, but I fear the Selenium developers may not feel the urgency of supporting a proxy for testing in a corporate environment. Maybe there is an alternative solution.
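One alternative that sidesteps SE_OPTS entirely is to request the proxy per session through Chrome's own --proxy-server switch in the capabilities. A rough sketch against the standalone server's session endpoint (not verified against this exact image; the proxy address is the one from the compose file above):
curl -s -X POST http://localhost:4444/wd/hub/session \
  -H 'Content-Type: application/json' \
  -d '{"desiredCapabilities": {"browserName": "chrome",
        "chromeOptions": {"args": ["--proxy-server=http://proxy.lan:8080"]}}}'
Any WebDriver client can set the same thing through its ChromeOptions equivalent, which keeps the proxy choice in the test code rather than in the container.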

How to configure Knox to make it pass Authorization header to a backend service?

As discussed in my other question, there is no support for websocket authentication in Knox, but as a temporary solution we could handle authentication in our backend service. Our test has shown, however, that Knox does not pass the Authorization header to the backend.
[client]$ curl -i -u '<user>:<password>' https://knox-server/gateway/default/myservice/ping
# 8090 is our backend port
[knox-server]$ ngrep -W byline port 8090
interface: eth0
filter: ( port 8090 ) and ((ip || ip6) || (vlan && (ip || ip6)))
#
T <knox-server>:59118 -> <myservice>:8090 [AP]
GET /ping?doAs=<user> HTTP/1.1.
X-Forwarded-For: <client>.
X-Forwarded-Proto: https.
X-Forwarded-Port: 443.
X-Forwarded-Host: <knox-server>.
X-Forwarded-Server: <knox-server>.
X-Forwarded-Context: /gateway/default.
User-Agent: curl/7.54.0.
Accept: */*.
Host: <myservice>:8090.
Connection: Keep-Alive.
Accept-Encoding: gzip,deflate.
.
#
T <myservice>:8090 -> <knox-server>:59118 [AP]
HTTP/1.1 200 OK.
Date: Sat, 14 Oct 2017 14:27:58 GMT.
X-Application-Context: myservice:prod:8090.
Content-Type: text/plain;charset=utf-8.
Content-Length: 4.
.
PONG
How should I configure Knox (0.12.0 from HDP 2.6.2) to make it pass the Authorization header to the backend for websocket connections?
While writing this question I realised that there is a ticket, KNOX-895, which resolves the issue of passing cookies and headers to a backend service in Knox 0.14.0.
[EDIT]
I cloned the Knox git repo (commit 92b1505a), which includes KNOX-895 (2d236e78), and ran it locally with a websocket service added to the sandbox topology.
[tulinski]$ wscat -n --auth 'user:password' -c wss://localhost:8443/gateway/sandbox/echows
[tulinski]$ sudo ngrep -W byline host echo.websocket.org
#
T 192.168.0.16:59952 -> 174.129.224.73:80 [AP]
GET / HTTP/1.1.
Host: echo.websocket.org.
Upgrade: websocket.
Connection: Upgrade.
Sec-WebSocket-Key: Z4Qa9Dxwr6Qvq2QAicsT5Q==.
Sec-WebSocket-Version: 13.
Pragma: no-cache.
Cache-Control: no-cache.
Authorization: Basic dXNlcjpwYXNzd29yZA==.
.
##
T 174.129.224.73:80 -> 192.168.0.16:59952 [AP]
HTTP/1.1 101 Web Socket Protocol Handshake.
Connection: Upgrade.
Date: Mon, 16 Oct 2017 14:23:49 GMT.
Sec-WebSocket-Accept: meply+6cIyjbH+Vk2OsAqKJDWic=.
Server: Kaazing Gateway.
Upgrade: websocket.
.
The Authorization header is passed to the backend service.
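As a sanity check, the Basic credentials in that header decode to exactly what wscat sent:
$ echo 'dXNlcjpwYXNzd29yZA==' | base64 --decode
user:password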

How to configure apache server to allow wget with proxy?

I'm totally new to the Apache httpd stuff.
I set up my host ServerHost1 as a file server with httpd:
# httpd -v
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built: Dec 2 2014 08:09:42
I have put the file TestFile.txt at /var/www/html/TestDir/TestFile.txt.
I modified part of httpd.conf as follows:
<Directory />
Order deny,allow
Allow from all
</Directory>
On a test host TestHost1, which has full Internet access, I can download my file with wget:
TestHost1]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:39:12-- http://ServerHost1/TestDir/TestFile.txt
Resolving ServerHost1 (ServerHost1)... <IP address>
Connecting to ServerHost1 (ServerHost1)|<IP address>|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2859976598 (2.7G) [application/octet-stream]
Saving to: ‘TestFile.txt’
2% [> ] 60,645,376 24.0MB/s
On TestHost2, a host sitting on a semi-isolated network, I have to use a proxy for wget to work. It works fine with Google:
TestHost2]# wget google.ca
--2016-03-17 13:53:26-- http://google.ca/
Resolving proxy.com (proxy.com)... <ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.ca/ [following]
--2016-03-17 13:53:26-- http://www.google.ca/
Reusing existing connection to proxy.com:3128.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
[ <=> ] 19,928 --.-K/s in 0.1s
2016-03-17 13:53:27 (159 KB/s) - ‘index.html’ saved [19928]
However, when I try to get my file from ServerHost1, I get ERROR 503: Service Unavailable:
TestHost2]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:57:13-- http://ServerHost1/TestDir/TestFile.txt
Resolving proxy.com (proxy.com)...<ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 503 Service Unavailable
2016-03-17 13:57:13 ERROR 503: Service Unavailable.
So the questions are:
(1) Why am I seeing 503 Service Unavailable when the file is apparently available (since I can download it from TestHost1)?
(2) How do I configure my httpd.conf so that TestHost2 can wget the file from ServerHost1?
Maybe try ProxyRequests, as described in the Apache docs: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
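For what it's worth, the 503 in the trace above is returned by proxy.com:3128 itself, which suggests the corporate proxy simply cannot reach ServerHost1. The ProxyRequests setup the linked docs describe (turning httpd into a forward proxy) looks roughly like this; the Require line is a placeholder and should be restricted to your own network:
# httpd.conf - requires mod_proxy and mod_proxy_http to be loaded
ProxyRequests On
ProxyVia On
<Proxy "*">
    # only internal clients may use the forward proxy
    Require ip 10.0.0.0/8
</Proxy>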