I've set up my Varnish server as follows:
backend web1 {.host = "XXX.XXX.XXX.XXX"; .port = "80";}
backend web2 {.host = "XXX.XXX.XXX.XXX"; .port = "80";}
backend web3 {.host = "XXX.XXX.XXX.XXX"; .port = "80";}
backend web1_ssl {.host = "XXX.XXX.XXX.XXX"; .port = "443";}
backend web2_ssl {.host = "XXX.XXX.XXX.XXX"; .port = "443";}
backend web3_ssl {.host = "XXX.XXX.XXX.XXX"; .port = "443";}
director default_director round-robin {
    { .backend = web1; }
    { .backend = web2; }
    { .backend = web3; }
}
director ssl_director round-robin {
    { .backend = web1_ssl; }
    { .backend = web2_ssl; }
    { .backend = web3_ssl; }
}
# Respond to incoming requests.
sub vcl_recv {
    # Set the director to cycle between web servers.
    set req.grace = 120s;
    if (req.http.X-Forwarded-Proto == "https") {
        set req.http.X-Forwarded-Port = "443";
        set req.backend = ssl_director;
    } else {
        set req.http.X-Forwarded-Port = "80";
        set req.http.X-Forwarded-Proto = "http";
        set req.backend = default_director;
    }
    ...
}
This works perfectly if I hit my IP address (without SSL) in the browser, but if I enable Pound (config below):
ListenHTTPS
    Address XXX.XXX.XXX.XXX # Local IP of the Varnish web server
    Port 443
    Cert "/etc/apache2/ssl/apache.pem"
    AddHeader "X-Forwarded-Proto: https"
    HeadRemove "X-Forwarded-Proto"
    Service
        BackEnd
            Address 127.0.0.1
            Port 80
        End
    End
End
I get a 503 every time I try to hit the local IP address (from varnishlog -o):
11 RxURL c /favicon.ico
11 RxProtocol c HTTP/1.1
11 RxHeader c Host: XXX.XXX.XXX (Varnish Server IP Address)
11 RxHeader c Connection: keep-alive
11 RxHeader c Accept: */*
11 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.107 Safari/537.36
11 RxHeader c Accept-Encoding: gzip,deflate,sdch
11 RxHeader c Accept-Language: en-US,en;q=0.8
11 RxHeader c X-Forwarded-Proto: https
11 RxHeader c X-SSL-cipher: DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD
11 RxHeader c X-Forwarded-For: XXX.XXX.XXX.XXX (My Local machine IP)
11 VCL_call c recv lookup
11 VCL_call c hash
11 Hash c /favicon.ico
11 Hash c 198.61.252.81
11 VCL_return c hash
11 VCL_call c miss fetch
11 Backend c 14 ssl_director web2_ssl
11 FetchError c http read error: -1 0 (Success)
11 VCL_call c error deliver
11 VCL_call c deliver deliver
11 TxProtocol c HTTP/1.1
11 TxStatus c 503
11 TxResponse c Service Unavailable
11 TxHeader c Server: Varnish
...
11 ReqEnd c 1175742305 1391779282.930887222 1391779282.934647560 0.000097752 0.003678322 0.000082016
11 SessionClose c error
I looked at my http listeners and I see this:
root@machine:/etc/apache2/ssl# lsof -i -n | grep http
pound 7947 www-data 5u IPv4 63264 0t0 TCP XXX.XXX.XXX.XXXX:https (LISTEN)
pound 7948 www-data 5u IPv4 63264 0t0 TCP XXX.XXX.XXX.XXXX:https (LISTEN)
varnishd 8333 nobody 7u IPv4 64977 0t0 TCP *:http (LISTEN)
varnishd 8333 nobody 8u IPv6 64978 0t0 TCP *:http (LISTEN)
varnishd 8333 nobody 13u IPv4 65029 0t0 TCP XXX.XXX.XXX.XXXX:37493->YYYY.YYYY.YYYY.YYYY:http (CLOSE_WAIT)
apache2 19433 root 3u IPv4 31020 0t0 TCP *:http-alt (LISTEN)
apache2 19438 www-data 3u IPv4 31020 0t0 TCP *:http-alt (LISTEN)
apache2 19439 www-data 3u IPv4 31020 0t0 TCP *:http-alt (LISTEN)
pound 19669 www-data 5u IPv4 31265 0t0 TCP 127.0.0.1:https (LISTEN)
pound 19670 www-data 5u IPv4 31265 0t0 TCP 127.0.0.1:https (LISTEN)
Where XXX.XXX.XXX.XXX is the Varnish web server's internal IP address, and YYYY.YYYY.YYYY.YYYY is the IP address of one of the backend systems defined in the VCL.
Any idea why I keep getting 503s?
UPDATE
As noted, Varnish doesn't support SSL, so Pound hands the traffic from port 443 off to Varnish on port 80; once that has happened, Varnish can't then fetch from the backends over port 443 (the ssl_director). Removing the ssl_director and making default_director the only director worked perfectly.
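For reference, a minimal sketch of the simplified vcl_recv (Varnish 3 syntax, reusing the directors from the question); the X-Forwarded-Proto header set by Pound is kept so the backends still know the original scheme:
sub vcl_recv {
    set req.grace = 120s;
    # Pound has already terminated TLS, so always fetch over plain HTTP.
    set req.backend = default_director;
    if (req.http.X-Forwarded-Proto == "https") {
        set req.http.X-Forwarded-Port = "443";
    } else {
        set req.http.X-Forwarded-Port = "80";
        set req.http.X-Forwarded-Proto = "http";
    }
}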
Varnish does not support HTTPS for its backend requests - any communication between Varnish and Apache must be plain HTTP.
What I found works best is to configure Apache to speak plain HTTP on port 443. This allows Apache to generate correct URLs, such as when it needs to redirect the browser.
Here's how you might configure it:
# Listen on port 443, but speak plain HTTP
Listen X.X.X.X:443 http
# Setting HTTPS=on is helpful for ensuring correct behavior of scripting
# languages such as PHP
SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on
<VirtualHost X.X.X.X:443>
# Specifying "https://" in the ServerName ensures that whenever
# Apache generates a URL, it uses "https://your.site.com/" instead
# of "http://your.site.com:443/"
ServerName https://your.site.com
</VirtualHost>
You will of course need to remove any mod_ssl directives from your Apache configuration.
Related
I am testing the GStreamer webrtcbin Android example. On the local network everything is OK, but across networks WebRTC stalls. The ICE candidates sent from Android are all typ host.
Got ice server: candidate:1 1 UDP 2015363327 127.0.0.1 42258 typ host index: 0
Got ice server: candidate:2 1 TCP 1015021823 127.0.0.1 9 typ host tcptype active index: 0
Got ice server: candidate:3 1 TCP 1010827519 127.0.0.1 36241 typ host tcptype passive index: 0
Got ice server: candidate:4 1 UDP 2015363583 10.0.2.16 40513 typ host index: 0
Got ice server: candidate:5 1 TCP 1015022079 10.0.2.16 9 typ host tcptype active index: 0
Got ice server: candidate:6 1 TCP 1010827775 10.0.2.16 52791 typ host tcptype passive index: 0
Got ice server: candidate:7 1 UDP 2015363839 10.0.2.15 38413 typ host index: 0
Got ice server: candidate:8 1 TCP 1015022335 10.0.2.15 9 typ host tcptype active index: 0
Got ice server: candidate:9 1 TCP 1010828031 10.0.2.15 42225 typ host tcptype passive index: 0
#define STUN_SERVER " stun-server=stun://47.104.15.123:3478 "
#define TURN_SERVER " turn-server=turn://jianxi:jianxi#47.104.15.123:3478 "
webrtc->pipe =
    gst_parse_launch ("webrtcbin bundle-policy=max-bundle name=sendrecv "
        STUN_SERVER TURN_SERVER
        /* ... rest of the pipeline description elided ... */, &error);
If I set GST_WEBRTC_ICE_TRANSPORT_POLICY_RELAY, Android will not send any ICE candidates. The STUN/TURN server itself is fine, but I cannot capture any STUN packets with Wireshark.
g_signal_connect (webrtc->webrtcbin, "on-ice-candidate",
G_CALLBACK (send_ice_candidate_message), webrtc);
g_signal_connect (webrtc->webrtcbin, "notify::ice-gathering-state",
G_CALLBACK (on_ice_gathering_state_notify), NULL);
g_signal_connect (webrtc->webrtcbin, "notify::ice-connection-state",
G_CALLBACK (on_ice_gathering_state_notify), NULL);
The GStreamer notify callback:
static void
on_ice_gathering_state_notify (GstElement * webrtcbin, GParamSpec * pspec,
    gpointer user_data)
{
  GstWebRTCICEConnectionState ice_connect_state;
  GstWebRTCICEGatheringState ice_gather_state;
  gchar *stunser = NULL, *turnser = NULL;
  const gchar *new_state = "unknown";

  /* Print the configured STUN/TURN servers for debugging. */
  g_object_get (webrtcbin, "stun-server", &stunser, NULL);
  if (stunser) {
    gst_print ("stun-server: %s\n", stunser);
    g_free (stunser);
  }
  g_object_get (webrtcbin, "turn-server", &turnser, NULL);
  if (turnser) {
    gst_print ("turn-server: %s\n", turnser);
    g_free (turnser);
  }

  /* Read both states; this callback is wired to both notify signals. */
  g_object_get (webrtcbin, "ice-gathering-state", &ice_gather_state, NULL);
  g_object_get (webrtcbin, "ice-connection-state", &ice_connect_state, NULL);
  switch (ice_gather_state) {
    case GST_WEBRTC_ICE_GATHERING_STATE_NEW:
      new_state = "new";
      break;
    case GST_WEBRTC_ICE_GATHERING_STATE_GATHERING:
      new_state = "gathering";
      break;
    case GST_WEBRTC_ICE_GATHERING_STATE_COMPLETE:
      new_state = "complete";
      break;
  }
  gst_print ("ICE gathering state changed to %s, %d\n", new_state,
      ice_connect_state);
}
2022-11-23 11:35:50.239 1638-5461 GLib+stdout org.freedesktop.gstreamer.webrtc I stun-server: stun://47.104.15.123:3478
2022-11-23 11:35:50.239 1638-5461 GLib+stdout org.freedesktop.gstreamer.webrtc I turn-server: turn://jianxi:jianxi#47.104.15.123:3478
2022-11-23 11:35:50.239 1638-5461 GLib+stdout org.freedesktop.gstreamer.webrtc I ICE gathering state changed to complete, 4
The final ICE connection state is 4, i.e. GST_WEBRTC_ICE_CONNECTION_STATE_FAILED.
In most cases this means there is a problem connecting to the STUN/TURN server from your device: either the host is unreachable via UDP, or authentication is failing.
The easiest way to test is the trickle-ICE page from your device's browser. Enter the address and credentials of your STUN/TURN server and check whether any relay/srflx/prflx candidates show up.
If they do, then it's a configuration issue on the GStreamer side.
If they don't, try tcpdump and inspect the STUN requests/responses. Maybe you're not getting responses at all, or there are error responses that give you an idea of what's wrong.
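For example, run something like this on the TURN server (address and port taken from the question) to see whether STUN requests from the device arrive and whether responses go back out:
# watch for STUN/TURN traffic between the device and the server
tcpdump -i any -n port 3478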
I have a vue-cli app that I'm trying to run inside Laravel Homestead.
What I have:
My hosts on host machine:
127.0.0.1 localhost
127.0.1.1 PC
192.168.2.10 myvueapp.local
hosts inside VM:
127.0.0.1 localhost
127.0.0.1 myvueapp.local
127.0.1.1 homestead homestead
Vagrant version: 2.2.4, Homestead: v8.3.2, vue --version: 3.7.0
npm run serve executes without problems inside the VM, but I get
We're sorry but myvueapp doesn't work properly without JavaScript
enabled. Please enable it to continue.
as the response body of the request:
//response headers
Request URL: https://myvueapp.local/
Request Method: GET
Status Code: 200
Remote Address: 192.168.2.10:443
Referrer Policy: no-referrer-when-downgrade
Browser page is blank.
Also there is one favicon request:
Request URL: https://myvueapp.local/%3C%=%20BASE_URL%20%%3Efavicon.ico
Request Method: GET
Status Code: 400 Bad Request
Remote Address: 192.168.2.10:443
Somehow BASE_URL isn't interpolated in index.html:
<link rel="icon" href="<%= BASE_URL %>favicon.ico">
My vue.config.js:
module.exports = {
  devServer: {
    host: 'myvueapp.local',
    https: true
  }
}
Homestead.yaml:
ip: "192.168.2.10"
#...
sites:
- map: myvueapp.local
to: /home/vagrant/path/to/myvueapp.local/public
#...
ports:
- send: 8080
to: 80
The port where Vue is served (8080, inside the VM) is listening:
lsof -i :8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 3022 vagrant 22u IPv4 31440 0t0 TCP localhost:http-alt (LISTEN)
Nginx config:
server {
    listen 80;
    listen 443 ssl http2;
    server_name .myvueapp.local;
    root "/path/to/myvueapp.local/public";
    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
}
P. S. It runs ok when I'm serving it from my host machine.
What I've tried:
changing the host:
module.exports = {
  devServer: {
    host: '0.0.0.0', // <-- here
    https: true
  }
}
but it didn't help.
Edit #1
I've moved a bit further: this Nginx config now allows me to access the Vue app served inside the VM from the host machine:
location / {
    try_files $uri $uri/ /index.html =404;
    proxy_pass http://localhost:8080; # <-- the "Local" URL printed by npm run serve, without the trailing slash
    # App running at:
    # - Local: http://localhost:8080/
    #          ^^^^^^^^^^^^^^^^^^^^^
}
But there is still a problem: hot-reload doesn't work.
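One likely cause is that the proxy only forwards plain HTTP, while hot-reload relies on the webpack-dev-server WebSocket. A sketch of the extra Nginx location this usually takes (assuming vue-cli's default /sockjs-node HMR endpoint):
# forward the HMR websocket that webpack-dev-server exposes
location /sockjs-node/ {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}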
I have been struggling to implement an RDP probe to check multiple ports on Windows machines using the Prometheus Blackbox exporter.
So far I have managed to check DNS, ping, and ports 80 and 8080, but I cannot manage to test 3389!
As a rule of thumb, I would like to be able to probe any port that has a service running on these hosts.
My blackbox.yml is:
modules:
  http_2xx:
    prober: http
    http:
  http_get_2xx:
    prober: http
    http:
      method: GET
  http_post_2xx:
    prober: http
    timeout: 5s
    http:
      method: POST
      headers:
        Content-Type: application/json
      body: '{}'
  tcp_connect:
    prober: tcp
  pop3s_banner:
    prober: tcp
    tcp:
      query_response:
        - expect: "^+OK"
      tls: true
      tls_config:
        insecure_skip_verify: false
  ssh_banner:
    prober: tcp
    tcp:
      query_response:
        - expect: "^SSH-2.0-"
  irc_banner:
    prober: tcp
    tcp:
      query_response:
        - send: "NICK prober"
        - send: "USER prober prober prober :prober"
        - expect: "PING :([^ ]+)"
          send: "PONG ${1}"
        - expect: "^:[^ ]+ 001"
  icmp:
    prober: icmp
  dns_test:
    prober: dns
    timeout: 5s
    dns:
      query_name: google.com
      preferred_ip_protocol: ip4
And my prometheus.yml 3389 port probe entry is:
- job_name: "rdp-dev-status"
metrics_path: /probe
params:
module: [dns_test]
static_configs:
- targets:
- nostradata-dvmh-prodweb-01
# file_sd_configs:
# - files:
# - /opt/prometheus/tools/targets/rdp-dev-targets.yml
relabel_configs:
# Ensure port is 22, pass as URL parameter
- source_labels: [__address__]
regex: (.*)(:.*)?
replacement: ${1}:3389
target_label: __param_target
# Make instance label the target
- source_labels: [__param_target]
target_label: instance
# Actually talk to the blackbox exporter though
- target_label: __address__
replacement: PROD-NIFI:9115
module: [dns_test]
Using a DNS probe is probably not going to work with RDP. Try the tcp_connect module instead.
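For illustration, a sketch of what that scrape job could look like, reusing the tcp_connect module already defined in the blackbox.yml above (hostnames taken from the question; adjust to taste):
- job_name: "rdp-dev-status"
  metrics_path: /probe
  params:
    module: [tcp_connect]
  static_configs:
    - targets:
      - nostradata-dvmh-prodweb-01:3389
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: PROD-NIFI:9115
You can also test the module directly against the exporter before wiring it into Prometheus, e.g. curl "http://PROD-NIFI:9115/probe?module=tcp_connect&target=nostradata-dvmh-prodweb-01:3389".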
I tested Traefik in Docker mode and everything went fine. Now I need to use a "normal" backend, meaning requests to port 88 (controlled by Traefik) should be forwarded to port 8080. But it does not work as expected.
curl -v -H Host:myhost 127.0.0.1:88 (404 Not Found, expected the whoami answer):
$ curl -v -H Host:myhost 127.0.0.1:88
* Rebuilt URL to: 127.0.0.1:88/
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 88 (#0)
> GET / HTTP/1.1
> Host:myhost
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Fri, 12 Jan 2018 09:13:27 GMT
< Content-Length: 19
<
404 page not found
* Connection #0 to host 127.0.0.1 left intact
Traefik is executed as ./traefik2 --logLevel=DEBUG --debug -c traefik.toml
The backend is sudo docker service create -d --name whoami --constraint=node.role==manager --publish 8080:80 --replicas 1 emilevauge/whoami
Any idea?
traefik.toml
debug=true
logLevel = "DEBUG"
[traefikLog]
filePath = "tl.txt"
[accessLog]
filePath = "al.txt"
[entryPoints]
[entryPoints.http]
address = ":88"
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.routes.backend1]
rule = "Host:myhost"
[backends]
[backends.backend1]
[backends.backend1.servers.server1]
url = "http://127.0.0.1:8080"
curl 127.0.0.1:8080 (docker emilevauge/whoami, works as expected)
$ curl 127.0.0.1:8080
Hostname: 9134668598ed
IP: 127.0.0.1
IP: 10.255.0.7
IP: 10.255.0.8
IP: 172.18.0.3
GET / HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: curl/7.47.0
Accept: */*
$ cat al.txt
192.168.99.1 - - [12/Jan/2018:09:03:39 +0000] "GET / HTTP/1.1" - - - "curl/7.57.0" 1 - - 0ms
192.168.99.1 - - [12/Jan/2018:09:04:03 +0000] "GET / HTTP/1.1" - - - "curl/7.57.0" 2 - - 0ms
192.168.99.1 - - [12/Jan/2018:09:12:19 +0000] "GET / HTTP/1.1" - - - "curl/7.57.0" 3 - - 0ms
127.0.0.1 - - [12/Jan/2018:09:13:27 +0000] "GET / HTTP/1.1" - - - "curl/7.47.0" 4 - - 0ms
$ cat tl.txt
time="2018-01-12T09:03:35Z" level=info msg="Using TOML configuration file /home/cluster/traefik.toml
"
time="2018-01-12T09:03:35Z" level=info msg="Traefik version v1.5.0-rc4 built on 2018-01-04_02:28:22P
M"
time="2018-01-12T09:03:35Z" level=info msg="
Stats collection is disabled.
Help us improve Traefik by turning this feature on :)
More details on: https://docs.traefik.io/basic/#collected-data
"
time="2018-01-12T09:03:35Z" level=debug msg="Global configuration loaded {"LifeCycle":{"RequestAccep
tGraceTimeout":0,"GraceTimeOut":0},"GraceTimeOut":0,"Debug":true,"CheckNewVersion":true,"SendAnonymo
usUsage":false,"AccessLogsFile":"","AccessLog":{"file":"al.txt","format":"common"},"TraefikLogsFile"
:"","TraefikLog":{"file":"tl.txt","format":"common"},"LogLevel":"DEBUG","EntryPoints":{"http":{"Netw
ork":"","Address":":88","TLS":null,"Redirect":null,"Auth":null,"WhitelistSourceRange":null,"Compress
":false,"ProxyProtocol":null,"ForwardedHeaders":{"Insecure":true,"TrustedIPs":null}}},"Cluster":null
,"Constraints":[],"ACME":null,"DefaultEntryPoints":["http"],"ProvidersThrottleDuration":2000000000,"
MaxIdleConnsPerHost":200,"IdleTimeout":0,"InsecureSkipVerify":false,"RootCAs":null,"Retry":null,"Hea
lthCheck":{"Interval":30000000000},"RespondingTimeouts":null,"ForwardingTimeouts":null,"Web":null,"D
ocker":null,"File":null,"Marathon":null,"Consul":null,"ConsulCatalog":null,"Etcd":null,"Zookeeper":n
ull,"Boltdb":null,"Kubernetes":null,"Mesos":null,"Eureka":null,"ECS":null,"Rancher":null,"DynamoDB":
null,"ServiceFabric":null,"Rest":null,"API":null,"Metrics":null,"Ping":null}"
time="2018-01-12T09:03:35Z" level=info msg="Preparing server http &{Network: Address::88 TLS:<nil> R
edirect:<nil> Auth:<nil> WhitelistSourceRange:[] Compress:false ProxyProtocol:<nil> ForwardedHeaders
:0x1cb52950} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-01-12T09:03:35Z" level=info msg="Starting server on :88"
Kindly solved by ldez. The config must look like this (the [file] section was missing; without it, Traefik never loads the static [frontends] and [backends] blocks):
defaultEntryPoints = ["http"]
debug=true
logLevel = "DEBUG"
[traefikLog]
filePath = "tl.txt"
[accessLog]
filePath = "al.txt"
[entryPoints]
[entryPoints.http]
address = ":88"
[file]
[backends]
[backends.backend1]
[backends.backend1.servers.server1]
url = "http://127.0.0.1:8080"
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.routes.test_1]
rule = "Host:myhost"
I have set up Varnish 4 to run on port 8180 while Apache is configured to run on port 80.
The problem with my setup is that when I browse my domain
http://mydomain.com:8180/
I get a (301) permanent redirect to http://mydomain.com/.
Due to this redirect I am unable to see the difference between calling the cached domain http://mydomain.com:8180/ versus the uncached domain http://mydomain.com/.
My Varnish config:
DAEMON_OPTS="-a :8180 \
    -T localhost:6082 \
    -f /etc/varnish/default.vcl \
    -S /etc/varnish/secret \
    -s malloc,1G"
.......
Also the VCL:
backend mydomain {
    .host = "x.x.x.x";
    .port = "80";
    .connect_timeout = 60s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
    .max_connections = 800;
}
.......
The response headers show that Apache is the one doing the redirect:
HTTP/1.1 301 Moved Permanently
Date: Fri, 04 Sep 2015 11:58:04 GMT
Server: Apache
X-Powered-By: PHP/5.3.3
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Pragma: no-cache
X-Pingback: http://mydomain.com/xmlrpc.php
Location: http://mydomain.com/
Vary: Accept-Encoding
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Varnish: 32795
Age: 0
Via: 1.1 varnish-v4
Connection: keep-alive
My question is: how do I stop the redirect?
Fixed this by adding
set req.http.host = "mydomain.com";
in the VCL as shown below.
if (req.http.host ~ "mydomain.com:8180") {
    set req.http.host = "mydomain.com";
    set req.backend_hint = mydomain;
}
By doing this we ensure that the request host is recognized by Apache, and hence Apache will not redirect.
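A quick way to confirm the fix (using the placeholder domain from the question): the response should now be a 200 with the Via: 1.1 varnish-v4 header rather than a 301.
curl -I http://mydomain.com:8180/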