I want to set up the following infrastructure: Nginx as SSL terminator/load balancer in front of a Varnish cache, which itself uses Apache backends.
Now I want to load balance the traffic in the following way: the primary backend should be Varnish. The first backup should be the Apache backend directly. And a second backup should simply show a maintenance page if both the Varnish and the Apache backends are down.
I have tested the proxy_pass upstream in the following ways:
upstream backends {
server VARNISH-HOST;
server APACHE-HOST backup;
server MAINTENANCE-HOST backup;
}
With this setup and Varnish offline, the system cycles through both backup backends round-robin - it does not try the first backup and, only if that is offline too, the second backup.
Another configuration:
upstream backends {
server VARNISH-HOST weight=10;
server APACHE-HOST;
server MAINTENANCE-HOST backup;
}
But I am not sure if this is the right way. What is the best/right way to set up the upstream backends so they respond the way I want? Any advice?
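For reference, one approach I'm considering (untested, host names are placeholders) keeps only Varnish and Apache in the upstream and falls back to the maintenance host via error_page, since nginx round-robins between multiple backup servers instead of trying them in order:
upstream backends {
    server VARNISH-HOST;          # primary: Varnish
    server APACHE-HOST backup;    # first fallback: Apache directly
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key omitted

    location / {
        proxy_pass http://backends;
        # try the next upstream server on connection errors and 5xx responses
        proxy_next_upstream error timeout http_502 http_503 http_504;
        # intercept backend 5xx responses so error_page applies to them too
        proxy_intercept_errors on;
        # if Varnish and Apache are both unavailable, show the maintenance page
        error_page 502 503 504 = @maintenance;
    }

    location @maintenance {
        proxy_pass http://MAINTENANCE-HOST;
    }
}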
Regards.
Related
I have an application that's setup like this:
HAPROXY -> VARNISH -> APACHE (MOD_EVENT) -> PHP_FPM (REDIS + MYSQL)
HAProxy for TLS termination, Varnish for caching.
I've enabled HTTP/2 in HAProxy, Varnish and Apache:
HAProxy: added alpn h2,http/1.1 to the frontend, added proto h2 to the backend
Varnish: added this flag: -p feature=+http2
Apache: installed mod_http2 and added Protocols h2 h2c http/1.1.
What I understand from the documentation is that HAProxy supports end-to-end HTTP/2, while Varnish only supports HTTP/2 on the frontend.
So after Varnish the HTTP/2 request becomes HTTP/1.1, and Apache receives HTTP/1.1 requests, as I've confirmed through the logs.
My question is:
Should I strive for end-to-end HTTP/2? Is that a desirable thing to do for performance? Or should I just not even enable HTTP/2 on the backend HAProxy connection, since Varnish won't pass it through anyway?
I've been thinking about it for some time.
In theory, once the HTTP/2 connection reaches HAProxy, I think we probably won't benefit from HTTP/2 multiplexing anymore, as the rest of the request travels within the datacenter... and network latencies are so much smaller within a datacenter, right? Just curious if anyone else ran into the same question.
The main reason why we use HTTP/2 is to prevent head-of-line blocking. The multiplexing aspect of H2 helps reduce the blocking.
However, when Varnish communicates with the origin server, the goal is to cache the response and avoid sending more requests to the origin.
The fact that HTTP/1 is used between Varnish and the origin shouldn't be a big problem, because Varnish is the only client that is served there. The head-of-line blocking will hardly ever occur.
I have implemented Varnish for my web pages.
Now I have one doubt: can I call the Apache server before serving cached content to the user?
In this case, the request flow will be varnish -> nginx -> apache
OR
Can I serve Varnish from the Apache level? In this case, the request flow will be
nginx -> apache -> varnish
You can set it up any way you want. All depends on your particular requirements.
Typically you would want:
Nginx (for SSL) -> Varnish (caching) -> Apache (for .htaccess)
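As a rough sketch, the Nginx piece of that chain could look like this (the domain, certificate paths, and the assumption that Varnish listens on 127.0.0.1:6081 are placeholders):
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # hand everything to Varnish, which in turn talks to Apache
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}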
I currently have a server running Apache httpd 2.2 serving ~1000 WebSocket connections. I'm trying to scale this up to around ~10K WebSockets on the same hardware. I thought I'd be able to place an nginx reverse proxy on the front end, and that nginx would only connect to the backend when there was incoming traffic, while maintaining the connection to the outside world. However, right now the connection seems to be continuous (i.e., once the WebSocket upgrade is complete, an httpd process is tied up until the connection is broken). Am I misunderstanding how nginx should do WebSocket proxying, or do I have something misconfigured?
NGINX supports WebSockets by creating a tunnel between the client and the backend server, so nginx will not terminate the connection to the backend/frontend until the client/server terminates it.
See: https://www.nginx.com/blog/websocket-nginx/ for more info.
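For reference, a typical WebSocket proxy block looks roughly like this (the upstream name and timeout are placeholders); it only tunnels the connection, it does not detach the client from the backend:
location /ws/ {
    proxy_pass http://websocket_backend;
    # WebSocket upgrades require HTTP/1.1 and the Upgrade/Connection headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # keep long-lived connections open (the default read timeout is 60s)
    proxy_read_timeout 3600s;
}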
I set up WordPress on an Apache server and configured it; it runs smoothly.
Now I want to set up an nginx proxy to serve static files.
So I have some questions:
Do I need to duplicate the WordPress uploads folder and put it on the nginx server?
Or should I try to cache all static files on the nginx server?
On the Apache server I use the deflate, expires, pagespeed and opcache modules. So if I add an nginx proxy to serve static files, should I remove the deflate, expires and pagespeed modules? Because we can do this work on the nginx server.
If you use Nginx, the Apache HTTPD server is good but redundant. Nginx can communicate with PHP-FPM directly, which is the most efficient solution so far. With that option you can (see the sketch after this list):
improve performance
simplify deployment procedure
set up gzip and other headers in one place
serve static content more efficiently
reduce overall memory usage
utilise the Nginx cache (with a WordPress plugin to invalidate its cache on page content updates)
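A minimal sketch of that setup, assuming a standard WordPress document root and a local PHP-FPM socket (paths and domain are placeholders):
server {
    listen 80;
    server_name example.com;
    root /var/www/wordpress;
    index index.php;

    # compress text assets in nginx instead of mod_deflate
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;

    # serve static files straight from disk with a long expiry (replaces mod_expires)
    location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg|woff2?)$ {
        expires 30d;
        access_log off;
    }

    # WordPress pretty permalinks
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # hand PHP straight to PHP-FPM, no Apache in between
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}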
I am looking for a load balancer for my web application that supports a master-slave kind of configuration or algorithm.
For now I am using the Apache proxy, but with the round-robin LB method.
I am not sure if the Apache load balancer has master-slave support or a module for it?
Here is what I want exactly: forward all requests to one backend server, and once the master server is down, the slave or other server acts as a hot standby.
Please suggest any open-source load balancer I can use for my above requirement.
You can use nginx with its Upstream module.
Example configuration:
upstream myBackend {
    server main.example.com:8080;
    server back.example.com:8080 backup;
}
server {
    location / {
        proxy_pass http://myBackend;
    }
}
While the first server (main.example.com) is up, nginx will use it. When it goes down, nginx will use the second server. You can read the linked manual page for various other tuning parameters (for example, when to mark a server as failed). Nginx supports HTTPS both for incoming connections and for connections to the proxied backend.
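For example, failure detection can be tuned per server with max_fails and fail_timeout (the values below are only illustrative):
upstream myBackend {
    # consider main failed after 3 errors within 30s, then retry it after 30s
    server main.example.com:8080 max_fails=3 fail_timeout=30s;
    server back.example.com:8080 backup;
}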
EDIT: For Apache, this seems to be possible in version 2.4 using the Proxy Balancer. I have not tested this config. For more details, see the manual for ProxyPass.
ProxyPass "/" "balancer://hotcluster/"
<Proxy "balancer://hotcluster">
BalancerMember "http://1.2.3.4:8000"
# The server below is on hot standby
BalancerMember "http://1.2.3.6:8000" status=+H
</Proxy>