I have this v2.8-2 configuration file that I want to convert to the new YAML format. Documentation is hard to find.
User "root"
Group "root"
LogLevel 5
Alive 10
Control "/var/run/poundctl.socket"
# Redirect all http requests on port 80 to https on port 443
ListenHTTP
Address 0.0.0.0
Port 80
Err500 "/usr/local/etc/pound_error_500"
Err503 "/usr/local/etc/pound_error_500"
Service
Redirect 301 "https://localhost"
End
End
# Redirect all requests on port 443 to the webapp on port 9443
ListenHTTPS
Address 0.0.0.0
Port 443
Err500 "/usr/local/etc/pound_error_500"
Err503 "/usr/local/etc/pound_error_500"
Cert "/etc/pound/certbot/combined-for-pound.pem"
Disable SSLv3
Ciphers "EECDH+ECDSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:ECDH+AESGCM:ECDH+AES256:ECDH+AES128:ECDH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!eNULL:!LOW:!aNULL:!MD5:!DSS"
SSLAllowClientRenegotiation 0
SSLHonorCipherOrder 1
HeadRemove "X-Forwarded-Proto"
HeadRemove "x-forwarded-proto"
AddHeader "x-forwarded-proto: https"
Service
BackEnd
Address 127.0.0.1
Port 9000
End
End
End
This is my attempt at translating the above to the new format:
Global:
- Err500: /usr/local/etc/pound_error_500
- Group: root
- User: root
Backends:
- &be
Address: 127.0.0.1
Port: 9000
HTTPListeners:
- Address: 0.0.0.0
Port: 80
Services:
- Backends:
- *be
HTTPSListeners:
- Address: 0.0.0.0
Certificates:
- /etc/pound/certbot/combined-for-pound.pem
Port: 443
Services:
- Backends:
- *be
This is the output from running Pound from the command line:
$ sudo pound -d 5
debug option 5 ./src/config.c:642
start get_others ./src/config.c:574
start get_backends ./src/config.c:123
addr 127.0.0.1 ./src/config.c:139
port 9000 ./src/config.c:142
push ./src/config.c:168
start get_https ./src/config.c:499
address 0.0.0.0 ./src/config.c:520
start get_certificates ./src/config.c:461
start get_one(/etc/pound/certbot/combined-for-pound.pem) ./src/config.c:376
get_one add pattern scalacourses\.com ./src/config.c:406
get_one add pattern home\.scalacourses\.com ./src/config.c:432
get_one add pattern scalacourses\.com ./src/config.c:432
get_one add pattern www\.scalacourses\.com ./src/config.c:432
get_one: added 4 patterns ./src/config.c:446
port 443 ./src/config.c:523
start get_services ./src/config.c:209
push ./src/config.c:258
push ./src/config.c:562
Prepare backends ./src/pound.c:153
Starting resurrector thread ./src/util.c:80
Prepare listeners ./src/pound.c:185
Prepare services for listener 0 ./src/pound.c:188
7F1B0827A640 start service ./src/http.c:45
7F1B0827A640 Null session: ./src/http.c:52
7F1B07A79640 thr_http start ./src/http.c:535
7F1B07A79640 start loop ./src/http.c:539
7F1B07278640 thr_http start ./src/http.c:535
7F1B07278640 start loop ./src/http.c:539
7F1B06A77640 thr_http start ./src/http.c:535
7F1B06A77640 start loop ./src/http.c:539
7F1B06276640 thr_http start ./src/http.c:535
7F1B06276640 start loop ./src/http.c:539
7F1B05A75640 thr_http start ./src/http.c:535
7F1B05A75640 start loop ./src/http.c:539
7F1B05274640 thr_http start ./src/http.c:535
7F1B05274640 start loop ./src/http.c:539
7F1B04A73640 thr_http start ./src/http.c:535
7F1B04A73640 start loop ./src/http.c:539
7F1B04272640 thr_http start ./src/http.c:535
7F1B04272640 start loop ./src/http.c:539
7F1B04272640 peer address 192.168.1.1 ./src/http.c:549
7F1B04272640 start sni ./src/util.c:157
7F1B04272640 sni for scalacourses.com ./src/util.c:165
7F1B04272640: found match at 0 ./src/util.c:169
Segmentation fault
The segmentation fault happens after I attempt to connect to the website from another machine by using https://scalacourses.com/
$ lynx https://scalacourses.com/
Looking up scalacourses.com
Making HTTPS connection to scalacourses.com
Alert!: Unable to make secure connection to remote host.
lynx: Can't access startfile https://scalacourses.com/
After the core dump, the socket remains in use for a few minutes before Pound can be restarted:
Listener 0.0.0.0:https: can't bind socket
When I try to access without SSL (http://scalacourses.com/ and http://www.scalacourses.com/) Pound does not appear to respond, and I get:
$ lynx http://scalacourses.com/
Looking up scalacourses.com
Making HTTP connection to scalacourses.com
Sending HTTP request.
HTTP request sent; waiting for response.
HTTP/1.1 302 Found
Data transfer complete
HTTP/1.1 302 Found
Using http://www.scalacourses.com/
Looking up www.scalacourses.com
Making HTTP connection to www.scalacourses.com
Alert!: Unable to connect to remote host.
lynx: Can't access startfile http://scalacourses.com/
Here is the man page for pound v3:
POUND(8) System Manager's Manual POUND(8)
NAME
pound - HTTP/HTTPS reverse-proxy and load-balancer
SYNOPSIS
pound [-v] [-c] [-d level] [-f config_file] [-p pid_file]
DESCRIPTION
Pound is a reverse-proxy load balancing server. It accepts requests
from HTTP/HTTPS clients and distributes them to one or more Web
servers. The HTTPS requests are decrypted and passed to the back-ends
as plain HTTP.
If more than one back-end server is defined, Pound chooses one of them
randomly. By default, Pound keeps track of associations between clients
and back-end servers (sessions).
GENERAL PRINCIPLES
In general Pound needs three types of objects defined in order to func‐
tion: listeners, services and back-ends.
Listeners
A listener is a definition of how Pound receives requests from
the clients (browsers). Two types of listeners may be defined:
regular HTTP listeners and HTTPS (HTTP over SSL/TLS) listeners.
At the very least a listener must define the address and port to
listen on, with additional requirements for HTTPS listeners.
Services
A service is the definition of how the requests are answered.
When a request is received Pound attempts to match them to each
service in turn. The services may define their own conditions as
to which requests they can answer: typically this involves cer‐
tain URLs (images only, or a certain path) or specific headers
(such as the Host header).
Back-ends
The back-ends are the actual servers for the content requested.
By itself, Pound supplies no responses - all contents must be
received from a "real" web server. The back-end defines how the
server should be contacted.
Multiple back-ends may be used within a service, in which case
Pound will load-balance between the available back-ends.
If a back-end fails to respond it will be considered "dead", in
which case Pound will stop sending requests to it. Dead back-
ends are periodically checked for availability, and once they
respond again they are "resurrected" and requests are sent again
their way. If no back-ends are available (none were defined, or
all are "dead") then Pound will reply with "503 Service Unavail‐
able", without checking additional services.
The connection between Pound and the back-ends is always via
HTTP, regardless of the actual protocol used between Pound and
the client.
OPTIONS
Options available (see also below for configuration file options):
-v Print version: Pound will exit immediately after printing the
current version.
-c Check only: Pound will exit immediately after parsing the con‐
figuration file. This may be used for running a quick syntax
check before actually activating a server.
-d level
Debug mode: if level is greater than 0 error messages will be
sent to stdout and Pound will stay in the foreground. Level 0
(default) are the regular log messages, level 1 and up will pro‐
duce more detailed information.
-f config_file
Location of the configuration file (see below for a full de‐
scription of the format). Default: /etc/pound/pound.yaml
-p pid_file
Location of the pid file. Pound will write its own pid into
this file. Normally this is used for shell scripts that control
starting and stopping of the daemon. Default:
/var/run/pound.pid
One (or more) copies of Pound should be started at boot time. Use "big
iron" if you expect heavy loads: while Pound is as light-weight as we
know how to make it, with a lot of simultaneous requests it will use
quite a bit of CPU and memory. Multiple CPUs are your friend.
CONFIGURATION FILE
The configuration file is in standard YAML syntax. There are four
blocks of directives: Global directives (they affect the settings for
the entire program instance), Backends directives, defining the avail‐
able backends, HTTPlisteners directives (they define which requests
Pound will listen for), and HTTPSlisteners directives (same as HTTPlis‐
tener but via TLS).
Global Directives
User: user_name
Specify the user Pound will run as (must be defined in
/etc/passwd).
Group: group_name
Specify the group Pound will run as (must be defined in
/etc/group).
RootJail: directory_path_and_name
Specify the directory that Pound will chroot to at runtime.
Please note that SSL may require access to /dev/urandom, so make
sure you create a device by that name, accessible from the root
jail directory. Pound may also require access to /dev/syslog or
similar.
Err404: path_to_file
Specify a path to an HTML file to be returned in case of a 404
error.
Err405: path_to_file
Specify a path to an HTML file to be returned in case of a 405
error.
Err500: path_to_file
Specify a path to an HTML file to be returned in case of a 500
error.
Backends
A back-end is a definition of a single back-end server Pound will use
to reply to incoming requests. Each backend must be marked with an an‐
chor. The following directives are available:
Address: address
The address that Pound will connect to. This can be a numeric IP
address, or a symbolic host name that must be resolvable at run-
time. This is a mandatory parameter.
Port: port
The port number that Pound will connect to. This is a mandatory
parameter.
Timeout: number
How long to wait for a backend (server) to complete an operation. Default: 15 seconds.
Threads: number
How many threads will be used to service requests to this back‐
end. See also below for remarks on performance tuning. Default:
8 threads.
HeadAdd: header
A header to add to each reply received from this backend. The
header is a string.
HTTPListeners
An HTTP listener defines an address and port that Pound will listen on
for HTTP requests. The following directives are available:
Address: address
The address that Pound will listen on. This can be a numeric IP
address, or a symbolic host name that must be resolvable at run-
time. This is a mandatory parameter. The address 0.0.0.0 may be
used as an alias for 'all available addresses on this machine',
but this practice is strongly discouraged.
Port: port
The port number that Pound will listen on. This is a mandatory
parameter.
Client: value
Define how long Pound will wait for client activity. Default: 5
seconds.
Threads: value
Define how many threads Pound will use to service client re‐
quests. Default: 8 threads.
Services:
This defines a service. This service will be used only by this
listener.
Services
The following directives are allowed in a service definition:
URL: pattern
The service will only be used if the request URL matches the
given pattern.
HeadRequire: pattern
Use the service only if any of the request headers matches the
given pattern.
HeadDeny: pattern
Use the service only if none of the request headers matches the
given pattern.
Session: number
How long to keep the client sessions (in seconds). Sessions are
a long term association between a client IP address and a spe‐
cific backend in this service. A value of 0 seconds means no
sessions are kept. Default: 0.
BackEnds:
A list of references to previously defined backends.
HTTPSListeners
All HTTPListeners directives are also available in the HTTPSListener
blocks.
The following additional directives are available:
Certificates:
A file name or a list of file names. Each file must contain a
certificate, optionally additional chained certificates up to a
known certificate authority, and the private key corresponding
to the certificate. Note: the private key should probably not
be password-protected, as Pound normally starts as a daemon and
cannot ask for the password at start-up time.
Ciphers:
A list of acceptable cipher names for this listener. The negoti‐
ation with the client will result in one of these ciphers being
used, or the hand-shake will fail.
ADDITIONAL REMARKS
High-availability
Pound attempts to keep track of active back-end servers, and will tem‐
porarily disable servers that do not respond (though not necessarily
dead: an overloaded server that Pound cannot establish a connection to
will be considered dead). However, every 60 seconds (compile-time op‐
tion), an attempt is made to connect to the dead servers in case they
have become active again. If this attempt succeeds, connections will be
initiated to them again.
The clients that happen upon a dead backend server will just receive a
503 Service Unavailable message.
Security
In general, Pound does not read or write to the hard-disk. The excep‐
tions are reading the configuration file and (possibly) the server cer‐
tificate file(s) and error message(s), which are opened read-only on
startup, read, and closed; secondly the pid file which is opened on
start-up, written to and immediately closed. Following this there is
no disk access whatsoever, so using a RootJail directive is only for
extra security bonus points.
Pound tries to sanitise all HTTP/HTTPS requests: the request itself,
the headers and the contents are checked for conformance to the RFC's
and only valid requests are passed to the back-end servers. This is not
absolutely fool-proof - as the recent Apache problem with chunked
transfers demonstrated. However, given the current standards, this is
the best that can be done - HTTP is an inherently weak protocol.
Additional Notes
Pound uses the system log for messages (default facility LOG_DAEMON -
compile-time option). The format is very similar to other web servers,
so if you want to use a log tool:
fgrep pound /var/log/messages | cut -d ':' -f 4- | your_log_tool
(assuming messages is your log file; it may be syslog or something else,
depending on your configuration).
Pound deals with (and sanitizes) HTTP/1.1 requests. Thus a single con‐
nection to an HTTP/1.1 client is kept, while the connection to the
back-end server is (re)opened as necessary.
Unless you start Pound as root it won't be able to listen on privileged
ports. That applies even if you do start it as root but set the User to
something else.
There is no point in setting User to root: either you start as root, so
you already are, or you are not allowed to setuid(0).
Performance Tuning Considerations
The two important factors in tuning the performance are the number of
threads for the backends and the number of threads for the listeners.
The number of backend threads defines how many requests may be issued
in parallel to a specific backend server, but also backend priorities.
Increasing it may overload the web server, but setting it too low will
cause longer waiting queues for servicing requests. Please note that you
may define several backends for the same server in order to use them in
separate services.
The number of listener threads defines how many client requests can be
serviced in parallel. If this number is too low for your load clients
may be faced with long waiting times even when the backends are almost
idle.
EXAMPLES
The simplest configuration, with Pound used strictly to sanitise re‐
quests:
Backends:
- &be
Address: 10.1.1.100
Port: 80
HTTPListeners:
- Address: 123.1.2.3
Port: 80
Services:
- Backends:
- *be
HTTPSListeners:
The same thing, but with HTTPS:
Backends:
- &be
Address: 10.1.1.100
Port: 80
HTTPListeners:
HTTPSListeners:
- Address: 123.1.2.3
Port: 443
Services:
- Backends:
- *be
Certificates: "cert.pem"
Client: 60
Ciphers:
- TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384
- TLS-DHE-RSA-WITH-3DES-EDE-CBC-SHA
- TLS-DHE-RSA-WITH-AES-128-CBC-SHA
- TLS-RSA-WITH-CAMELLIA-128-CBC-SHA
- TLS-RSA-WITH-AES-128-CCM
- TLS-RSA-WITH-AES-256-GCM-SHA384
- TLS-RSA-WITH-RC4-128-MD5
- TLS-RSA-WITH-3DES-EDE-CBC-SHA
To distribute the HTTP/HTTPS requests to three Web servers, where the
third one is a newer and faster machine:
Backends:
- &be0
Address: 10.1.1.100
Port: 80
Threads: 8
- &be1
Address: 10.1.1.101
Port: 80
Threads: 8
- &be2
Address: 10.1.1.102
Port: 80
Threads: 12
HTTPListeners:
HTTPSListeners:
- Address: 123.1.2.3
Port: 80
Threads: 32
Services:
- Backends:
- *be0
- *be1
- *be2
Certificates:
- "cert1.pem"
- "cert2.pem"
To separate between image requests and other Web content:
Backends:
- &text
Address: 10.1.1.100
Port: 80
Threads: 16
- &images
Address: 10.1.1.101
Port: 80
Threads: 16
HTTPListeners:
- Address: 123.1.2.3
Port: 80
Threads: 32
Services:
- URL: ".*.(gif|jpg|png)"
Backends:
- *images
- Session: 300
Backends:
- *text
HTTPSListeners:
FILES
/var/run/pound.pid
this is where Pound will attempt to record its process id.
/etc/pound/pound.yaml
the default configuration file (compile-time option).
AUTHOR
Written by Robert Segall, Apsis GmbH.
REPORTING BUGS
Report bugs to <roseg@apsis.ch>.
COPYRIGHT
Copyright © 2002-2020 Apsis GmbH.
This is free software; see the source for copying conditions. There is
NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE.
pound Jan 2010 POUND(8)
Possible issue -- YAML is extremely white-space-sensitive.
The documentation says this:
Backends:
- &be
Address: 10.1.1.100
Port: 80
but you have this:
Backends:
- &be
Address: 127.0.0.1
Port: 9000
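If the indentation is indeed the culprit, here is a rough sketch of the question's config laid out the way the manual's EXAMPLES section indents its blocks. The exact whitespace of the original attempt isn't visible here, so treat this purely as a guess; it does not address the segfault itself:
Global:
- Err500: /usr/local/etc/pound_error_500
- Group: root
- User: root
Backends:
- &be
  Address: 127.0.0.1
  Port: 9000
HTTPListeners:
- Address: 0.0.0.0
  Port: 80
  Services:
  - Backends:
    - *be
HTTPSListeners:
- Address: 0.0.0.0
  Port: 443
  Certificates:
  - /etc/pound/certbot/combined-for-pound.pem
  Services:
  - Backends:
    - *be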
This may not be the answer you're looking for, but my suggestion is to stay on pound v2 and put it to rest when the time comes.
Version 3 was a complete rewrite of the codebase. It has some critical bugs (e.g. pound3 calls setuid before bind, which breaks setups that use unprivileged users; I reported it back in May with no response) and a new config format that isn't feature-complete, so some configs simply cannot be ported because the functionality isn't there.
I did a bit of digging on a fresh install and ended up with the same conclusion you reached in another comment. I was very fond of the project because of how simple it was, and I had lots of fun with it over the years, but I think the project is dead and it's time to move on unless someone is willing to maintain a fork.
Pound is effectively a dead project. Anything that Pound can do as a reverse proxy, Apache http and nginx can do, and do better.
I chose nginx. You can read about the gory details here: https://www.mslinn.com/blog/2022/07/08/reverse-proxy.html
Related
I currently have an HTTPS Load Balancer setup operating with a 443 Frontend, Backend and Health Check that serves a single host nginx instance.
When navigating directly to the host via browser the page loads correctly with valid SSL certs.
When trying to access the site through the load balancer IP, I receive a 502 - Server Error message. I checked the Google logs and I notice "failed_to_pick_backend" errors at the load balancer. I also notice that it is failing health checks.
Some digging around leads me to these two links: https://cloudplatform.googleblog.com/2015/07/Debugging-Health-Checks-in-Load-Balancing-on-Google-Compute-Engine.html
https://github.com/coreos/bugs/issues/1195
Issue #1 - Not sure if google-address-manager is running on the server
(RHEL 7). I do not see an entry for the HTTPS load balancer IP in the
routes. The Google SDK is installed. This is a Google-provided image
and if I update the IP address in the console, it also gets updated on
the host. How do I check if google-address-manager is running on
RHEL7?
[root@server]# ip route ls table local type local scope host
10.212.2.40 dev eth0 proto kernel src 10.212.2.40
127.0.0.0/8 dev lo proto kernel src 127.0.0.1
127.0.0.1 dev lo proto kernel src 127.0.0.1
Output of all google services
[root@server]# systemctl list-unit-files
google-accounts-daemon.service enabled
google-clock-skew-daemon.service enabled
google-instance-setup.service enabled
google-ip-forwarding-daemon.service enabled
google-network-setup.service enabled
google-shutdown-scripts.service enabled
google-startup-scripts.service enabled
Issue #2: Not receiving a 200 OK response. The certificate is valid
and the same on both the LB and server. When running curl against the
app server I receive this response.
root@server.com curl -I https://app-server.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Thoughts?
You should add firewall rules for the health check service -
https://cloud.google.com/compute/docs/load-balancing/health-checks#health_check_source_ips_and_firewall_rules and make sure that your backend service listens on the load balancer ip (easiest is bind to 0.0.0.0) - this is definitely true for an internal load balancer, not sure about HTTPS with an external ip.
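For example, a sketch of such a firewall rule (the rule name and network are placeholders; the source ranges are the documented health-check ranges):
gcloud compute firewall-rules create allow-lb-health-checks \
  --network default \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --allow tcp:443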
A couple of updates and lessons learned:
I have found out that "google-address-manager" is now deprecated and replaced by "google-ip-forwarding-daemon", which is running.
[root@server ~]# sudo service google-ip-forwarding-daemon status
Redirecting to /bin/systemctl status google-ip-forwarding-daemon.service
google-ip-forwarding-daemon.service - Google Compute Engine IP Forwarding Daemon
Loaded: loaded (/usr/lib/systemd/system/google-ip-forwarding-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-12-22 20:45:27 UTC; 17h ago
Main PID: 1150 (google_ip_forwa)
CGroup: /system.slice/google-ip-forwarding-daemon.service
└─1150 /usr/bin/python /usr/bin/google_ip_forwarding_daemon
There is an active firewall rule allowing IP ranges 130.211.0.0/22 and 35.191.0.0/16 for port 443. The target is also properly set.
Finally, the health check was using the default "/" path. The developers had put authentication in front of the site during development, so when I bypassed the SSL cert error I received a 401 Unauthorized when running curl. This was the root cause of the issue we were experiencing. To remedy it, we modified the nginx basic-authentication configuration to disable authentication on a new route (e.g. /health).
Once the nginx configuration was updated and the health check's path was pointed at the new /health route, we received valid 200 responses. This allowed the health check to report healthy instances and allowed the LB to pass traffic through.
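For anyone hitting the same thing, a minimal sketch of the kind of nginx change we made (the location name and response are illustrative, not our exact config):
# inside the server { } block that fronts the app
location /health {
    auth_basic off;   # bypass the basic auth that protects the rest of the site
    access_log off;
    return 200 'OK';
}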
The default installation instructions show how to set up a server on port 80 using HTTP and WS (i.e. unencrypted).
The agent installation shows that TLS-enabled servers are possible (I'd link it here, but I'm not allowed to).
The server configuration options show that DRONE_SERVER_CERT and DRONE_SERVER_KEY are available http://readme.drone.io/0.5/install/server-configuration/
Are there any fuller instructions to set this up? e.g. have port 80 forward to port 443 and have all agents talking to the server over encrypted channels.
If you were using certificates with drone 0.4 it will be the same configuration, although the names perhaps changed slightly. You will need to pass the following variables to your container:
DRONE_SERVER_CERT=/path/to/drone.cert
DRONE_SERVER_KEY=/path/to/drone.key
These certificates will exist on your host machine, which means their paths need to be mounted into your drone server:
--volume=/path/to/drone.cert:/path/to/drone.cert
--volume=/path/to/drone.key:/path/to/drone.key
You can also instruct Docker to expose 443 and forward to drone's default port 8000
-p 443:8000
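Putting those pieces together, a sketch of the full server invocation might look like this (the image tag and paths are illustrative, and any other environment variables you already pass stay as they are):
docker run -d \
  -e DRONE_SERVER_CERT=/path/to/drone.cert \
  -e DRONE_SERVER_KEY=/path/to/drone.key \
  --volume=/path/to/drone.cert:/path/to/drone.cert \
  --volume=/path/to/drone.key:/path/to/drone.key \
  -p 443:8000 \
  drone/drone:0.5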
When you configure the agent, you will of course need to update the configuration to use wss. You can read more in the agent docs, but essentially something like this:
DRONE_SERVER=wss://drone.server.com/ws/broker
And finally, if you get cert errors I recommend including the cert chain in your bundle. Bottom line, drone does not parse certs. Drone uses http.ListenAndServeTLS(cert, key). So any cert issues are coming from the standard library directly, and questions should therefore be directed to the Go support channels.
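If the chain is the issue, one simple way to build the bundle (file names are placeholders) is to concatenate the server certificate followed by the intermediate(s) into the file you mount as DRONE_SERVER_CERT:
cat server.crt intermediate.crt > /path/to/drone.cert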
On the backend I have a Kubernetes node with a Service of type NodePort listening on port 32656. If I create a firewall rule for <node_ip>:32656 to allow traffic, I can open the backend in the browser at this address: http://<node_ip>:32656.
What I try to achieve now is creating an HTTP Load Balancer and link it to the above backend. I use the following script to create the infrastructure required:
#!/bin/bash
GROUP_NAME="gke-service-cluster-61155cae-group"
HEALTH_CHECK_NAME="test-health-check"
BACKEND_SERVICE_NAME="test-backend-service"
URL_MAP_NAME="test-url-map"
TARGET_PROXY_NAME="test-target-proxy"
GLOBAL_FORWARDING_RULE_NAME="test-global-rule"
NODE_PORT="32656"
PORT_NAME="http"
# instance group named ports
gcloud compute instance-groups set-named-ports "$GROUP_NAME" --named-ports "$PORT_NAME:$NODE_PORT"
# health check
gcloud compute http-health-checks create --format none "$HEALTH_CHECK_NAME" --check-interval "5m" --healthy-threshold "1" --timeout "5m" --unhealthy-threshold "10"
# backend service
gcloud compute backend-services create "$BACKEND_SERVICE_NAME" --http-health-check "$HEALTH_CHECK_NAME" --port-name "$PORT_NAME" --timeout "30"
gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" --instance-group "$GROUP_NAME" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "1"
# URL map
gcloud compute url-maps create "$URL_MAP_NAME" --default-service "$BACKEND_SERVICE_NAME"
# target proxy
gcloud compute target-http-proxies create "$TARGET_PROXY_NAME" --url-map "$URL_MAP_NAME"
# global forwarding rule
gcloud compute forwarding-rules create "$GLOBAL_FORWARDING_RULE_NAME" --global --ip-protocol "TCP" --ports "80" --target-http-proxy "$TARGET_PROXY_NAME"
But I get the following response from the Load Balancer accessed through the public IP in the Frontend configuration:
Error: Server Error
The server encountered a temporary error and could not complete your
request. Please try again in 30 seconds.
The health check is left with default values: (/ and 80) and the backend service responds quickly with a status 200.
I have also created the firewall rule to accept any source and all ports (tcp) and no target specified (i.e. all targets).
Considering that I get the same result (Server Error) regardless of the port I choose in the instance group, the problem should be somewhere in the configuration of the HTTP Load Balancer (something with the health checks, maybe?).
What am I missing from completing the linking between the frontend and the backend?
I assume you actually have instances in the instance group, and the firewall rule is not specific to a source range. Can you check your logs for a google health check? (UA will have google in it).
What version of kubernetes are you running? Fyi there's a resource in 1.2 that hooks this up for you automatically: http://kubernetes.io/docs/user-guide/ingress/, just make sure you do these: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md.
More specifically: in 1.2 you need to create a firewall rule, service of type=nodeport (both of which you already seem to have), and a health check on that service at "/" (which you don't have, this requirement is alleviated in 1.3 but 1.3 is not out yet).
Also note that you can't put the same instance into 2 loadbalanced IGs, so to use the Ingress mentioned above you will have to cleanup your existing loadbalancer (or at least, remove the instances from the IG, and free up enough quota so the Ingress controller can do its thing).
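For reference, a minimal Ingress for that era of the API might look something like this (the service name is whatever your NodePort service is called; this is a sketch, not a drop-in config):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: my-nodeport-service   # your existing Service of type NodePort
    servicePort: 80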
There can be a few things wrong here, as already mentioned:
firewall rules need to be open to all hosts, or they need to have the same network tag as the machines in the instance group
by default, the node should return 200 at /; configuring readiness and liveness probes to change this did not work for me
It seems you are doing by hand things that can all be automated, so I can really recommend:
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
This shows the steps that do the firewall and portforwarding for you, which also may show you what you are missing.
I noticed myself, when using an app on 8080 exposed on 80 (like one of the deployments in the example), that the load balancer stayed unhealthy until I had / returning 200 (and /healthz, which I also added). So basically that container now exposes a webserver on port 8080 returning that, and the other config wires it up to port 80.
When it comes to firewall rules, make sure they are open to all machines or make the network tag match, or they won't work.
The 502 error is usually from the loadbalancer that will not pass your request if the health check does not pass.
Could you make your service type LoadBalancer (http://kubernetes.io/docs/user-guide/services/#type-loadbalancer), which would set this all up automatically? This assumes you have the flag set for Google Cloud.
After you deploy, describe the service and it should give you the endpoint that was assigned.
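A minimal sketch of such a service (names, labels and ports are placeholders for your app):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80          # port exposed by the cloud load balancer
    targetPort: 8080  # port your pods actually listen on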
This is not so much a question as a request for confirmation that what I did is right, and whether it is safe.
What I have found by googling so far is that you cannot run rtorrent through a proxy. You can either put the HTTP requests through a proxy or use tsocks; in both cases the actual transfers are either done directly or not done at all. So the only viable solution proposed so far is a VPN, which I wanted to avoid.
What I did was use an HTTP proxy for the HTTP part and port forwarding for the actual download part. For example, let's assume the following:
192.168.1.10 --> Local machine with the actual rtorrent
remote.machine.com --> The remote machine used as a proxy
Procedure:
I created 2 ssh tunnels
ssh -N -D 9090 user@remote.machine.com
ssh -R 9091:localhost:9091 user@remote.machine.com
On the local machine I installed polipo as the HTTP proxy and configured it to use a SOCKS proxy on remote.machine.com.
I edited the following lines in /etc/polipo/config so that it uses the SOCKS proxy.
socksParentProxy = "localhost:9090"
socksProxyType = socks5
I also changed the HTTP proxy port for extra security, again in /etc/polipo/config:
proxyPort = 9080
On the local machine I changed ~/.rtorrent.rc as follows:
#Proxy of the http requests through polipo
http_proxy=localhost:9080
# The ip address reported to the tracker.
#Really important, in order to get connections for downloads
ip = remote.machine.com
# The ip address the listening socket and outgoing connections is
# bound to.
bind = 192.168.1.10
# Port range to use for listening.
port_range = 9091-9091
# Start opening ports at a random position within the port range.
port_random = no
The system seems to work. I connect to the trackers and I have up and down traffic. So the questions are:
Can I be confident that all traffic concerning rtorrent goes through remote.machine.com?
Did I miss something?
Are there any problems or concerns regarding this method?
As far as I see, you have covered inbound connections, as well as outgoing HTTP traffic, but any outbound peer-to-peer connections will be created directly, not through any tunnel. Currently, rtorrent does not appear to support passing outbound P2P connections through a tunnel or proxy of any kind, so in order to handle these, you'll need some other mechanism.
You mentioned tsocks and that it does not work – not even in addition to the rtorrent configuration you have set up above? (Although with tsocks you should be able to drop the HTTP proxy part.)
If that fails, there are alternatives to tsocks mentioned on the tsocks project page. A slightly more involved alternative would be to create a new loopback interface (lo:1 with IP 127.0.0.2), bind your rtorrent to that one and use something like sshuttle to direct all traffic originating on that interface through an SSH tunnel. Unfortunately, sshuttle doesn't let you restrict its operation to a specific interface at the moment, though, so you'd have to fiddle with the iptables rules it creates to make them match your needs. I assume a patch adding this feature to sshuttle would be welcome.
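A rough sketch of that alternative (commands are illustrative and distro-dependent; the address and remote host are placeholders):
# add a second loopback address and bind rtorrent to it (in ~/.rtorrent.rc: bind = 127.0.0.2)
sudo ifconfig lo:1 127.0.0.2 netmask 255.0.0.0 up
# push traffic through an SSH tunnel with sshuttle; restricting it to traffic
# originating from 127.0.0.2 still requires adjusting the iptables rules it creates
sshuttle -r user@remote.machine.com 0.0.0.0/0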
As a side note, you can create multiple port forwards and SOCKS proxies in a single SSH connection, like this:
ssh -N -D 9090 -R 0.0.0.0:9091:localhost:9091 myself@my.example.com
I'm trying to set up JMeter in distributed mode.
I have a server running on an EC2 instance, and I want the master to run on my local computer.
I had to jump through some hoops to get RMI working correctly on the server, but that was solved by setting "java.rmi.server.hostname" to the IP of the EC2 instance.
The next (and hopefully last) problem is the server communicating back to the master.
The problem is that because I am doing this from an internal network, the master is sending its local/internal ip address (192.168.1.XXX) when it should be sending back the IP of my external connection (92.XXX.XXX.XXX).
I can see this in the jmeter-server.log:
ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: 192.168.1.50; nested exception is:
That host IP is wrong. It should be the 92.XXX.XXX.XX address. I assume this is because in the master logs I see the following:
2012/07/29 20:45:25 INFO - jmeter.JMeter: IP: 192.168.1.50 Name: XXXXXX.local FullName: 192.168.1.50
And this IP is sent to the server during RMI setup.
So I think I have two options:
Tell the master to send the external IP
Tell the server to connect on the external IP of the master.
But I can't see where to set these commands.
Any help would be useful.
For the benefit of future readers, don't take no for an answer. It is possible! Plus you can keep your firewall in place.
In this case, I did everything over port 4000.
How to connect a JMeter client and server for distributed testing with Amazon EC2 instance and local dev machine across different networks.
Setup:
JMeter 2.13 Client: local dev computer (different network)
JMeter 2.13 Server: Amazon EC2 instance
I configured distributed client / server JMeter connectivity as follows:
1. Added a port forwarding rule on my firewall/router:
Port: 4000
Destination: JMeter client private IP address on the LAN.
2. Configured the "Security Group" settings on the EC2 instance:
Type: Allow: Inbound
Port: 4000
Source: JMeter client public IP address (my dev computer/network public IP)
Update: If you already have SSH connectivity, you could use an SSH tunnel for the connection, which avoids needing to add the firewall rules:
$ ssh -i ~/.ssh/54-179-XXX-XXX.pem -o ServerAliveInterval=60 -R 4000:localhost:4000 jmeter@54.179.XXX.XXX
3. Configured client $JMETER_HOME/bin/jmeter.properties file RMI section:
note only the non-default values that I changed are included here:
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# Remote Hosts - comma delimited
# Add EC2 JMeter server public IP address:Port combo
remote_hosts=127.0.0.1,54.179.XXX.XXX:4000
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=4000
# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (The Controler)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
client.rmi.localport=4000
# To change the default port (1099) used to access the server:
server.rmi.port=4000
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=4000
4. Configured remote server $JMETER_HOME/bin/jmeter.properties file RMI section as follows:
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=4000
# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (The Controler)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
client.rmi.localport=4000
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=4000
5. Started the JMeter server/slave with:
jmeter-server -Djava.rmi.server.hostname=54.179.XXX.XXX
where 54.179.XXX.XXX is the public IP address of the EC2 server
6. Started the JMeter client/master with:
jmeter -Djava.rmi.server.hostname=121.73.XXX.XXX
where 121.73.XXX.XXX is the public IP address of my client computer.
7. Ran a JMeter test suite.
JMeter GUI log output
Success!
I had a similar problem: the JMeter server tried to connect to the wrong address for sending the results of the test (it tried to connect to localhost).
I solved this by setting the following parameter when starting the JMeter master:
-Djava.rmi.server.hostname=xx.xx.xx.xx
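For example, something along these lines (the test plan name is a placeholder; -r tells the master to start the configured remote servers):
jmeter -n -t test_plan.jmx -r -Djava.rmi.server.hostname=xx.xx.xx.xx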
It looks as though this won't work. Distributed JMeter Testing explains the requirements for load testing in a distributed environment. Numbers 2 and 3 are particular to your use case, I believe.
The firewalls on the systems are turned off.
All the clients are on the same subnet.
The server is in the same subnet, if 192.x.x.x or 10.x.x.x ip addresses are used.
Make sure JMeter can access the server.
Make sure you use the same version of JMeter on all the systems. Mixing versions may not work correctly.
This might be very late in the game, but still: I'm running this with JMeter 5.3.
Here is how to get it to work with the slaves in AWS and the controller on your local machine.
Make sure your slave has the proper localports and hostname. The hostname on the slave should be the ec2 instance public dns.
Make sure AWS has proper security policies.
For the controller (which is your local machine), make sure you run with the parameter '-Djava.rmi.server.hostname=<your public IP>'. You can get the IP by googling "my public ip address". It is definitely not one of those 192.168.x.x or 172.x.x.x private addresses.
Then you have to configure your modem/router to port-forward to the machine acting as your controller. The port can be obtained from the slave log (the lines with "FINE: RMI RenewClean..."; yes, you have to set the log level to verbose). Alternatively, set up a DMZ pointing at your controller machine. That is dangerous but convenient just for the duration of the test; don't forget to turn it off afterwards.
Then it should work.