I would like to use Varnish for Symfony and for my eZ Platform CMS. I have followed this tutorial to set up Varnish: http://book.varnish-software.com/4.0/chapters/Getting_Started.html#exercise-install-varnish
So I have the following working setup:
Varnish listening on port 80
Varnish Uses backend on localhost:8080
Apache listening on localhost:8080
I have also set up my eZ Platform ezplatform.yml and ezplatform.conf to disable the default cache and enable Varnish (I guess).
I added these two lines to ezplatform.conf, following the documentation at https://doc.ez.no/display/TECHDOC/Using+Varnish:
SetEnv USE_HTTP_CACHE 0
SetEnv TRUSTED_PROXIES "0.0.0.0"
I put 0.0.0.0 as the Varnish server IP address because netstat -nlpt gives me the following addresses for the Varnish processes:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 11151/varnishd
tcp 0 0 127.0.0.1:1234 0.0.0.0:* LISTEN 11151/varnishd
So I guess this is the right value.
Then, I added the following lines to my ezplatform.yml (following the documentation above):
ezpublish:
    http_cache:
        purge_type: http
    siteaccess:
        list: [site]
        groups:
            http_cache:
                purge_servers: 0.0.0.0:80
Varnish and httpd restarted fine. Then I checked whether Varnish was being used by the local website by looking at the HTTP headers, and I got this:
Via: "1.1 varnish-v4"
X-Varnish: "32808"
Which is, I guess, good progress.
Anyway, in the Symfony profiler, I still see the following information:
Cache Information
Default Cache default
Available Drivers Apc, BlackHole, FileSystem, Memcache, Redis, SQLite
Total Requests 349
Total Hits 349
Cache Service: default
Drivers FileSystem
Calls 349
Hits 349
Doctrine Adapter false
Cache In-Memory true
Is it normal to still see this? Shouldn't it say something like Default Cache: varnish instead of default? How can I check whether Varnish is currently handling my site instead of the Symfony default cache?
Btw, here is my VCL file:
#
# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and https://www.varnish-cache.org/trac/wiki/VCLExamples for more examples.
# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;
import directors;
# Default backend definition. Set this to point to your content server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
sub vcl_init {
    new backs = directors.hash();
    backs.add_backend(default, 1);
}
sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.
    set req.http.X-cookie = req.http.cookie;
    if (req.http.Cookie !~ "Logged-In") {
        unset req.http.Cookie;
    }
    if (req.url ~ "\.(png|gif|jpg|css|js|html)$") {
        unset req.http.Cookie;
    }
}
sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    #
    # Here you clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend does.
}
sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
}
Even though it is not finished, I don't understand how the Symfony default cache is still active, since I disabled it in the configuration file.
Thank you for your help.
If you see something like
Via: "1.1 varnish-v4"
X-Varnish: "32808"
your Varnish is working. Why disable the Symfony cache, though? You can use both... It might or might not make sense; that pretty much depends on the application logic.
If you want to gain more insight into your Varnish during development, you can add the following lines:
sub vcl_deliver {
    # Add a debug header to see whether it was a HIT or a MISS
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    # Show how often the object has produced a hit so far (reset on miss)
    set resp.http.X-Cache-Hits = obj.hits;
}
I have multiple small Flask apps. I want to run each of them on a different port on the same server (single domain).
For example, say I have 3 Flask apps:
tasks.py --> has API endpoints with /task only
users.py --> has API endpoints with /user only
analysis.py --> has API endpoints with /analysis only
domain name : api.test.com
I want to run tasks.py on port 8080, users.py on port 5000 and analysis.py on, say, port 4500.
I want to configure uWSGI and Nginx so that when I hit api.test.com/task/xxxx the request is directed to port 8080 (where tasks.py is running),
and similarly api.test.com/user/xxxx to port 5000 and api.test.com/analysis/xxxx to port 4500.
It seems to me that you could have one single uWSGI instance with one port for that, but if you prefer this approach, you can proceed as follows.
Suppose you already have several uWSGI instances running on different ports: 8080, 5000 and 4500.
Then you need to create an Nginx config with approximately the following content (please read the comments):
# webserver configuration
server {
    # port to be listened on by the web server
    listen 80;
    # domain name with its aliases, or the IP
    server_name api.test.com;
    # SET UP THREE LOCATION SECTIONS, ONE FOR EACH ACTION
    location /task {
        # AND SPECIFY THE NEEDED ADDRESS:PORT HERE
        uwsgi_pass YOUR_SERVER_IP:8080;
        # ONE MAY ALSO USE A UNIX SOCKET (IF ON THE SAME SERVER) INSTEAD OF A PORT
        # uwsgi_pass unix:/path/to/the/socket.sock
        include /etc/nginx/uwsgi_params; # standard params with environment variables required by a uwsgi app
    }
    location /user {
        uwsgi_pass YOUR_SERVER_IP:5000;
        include /etc/nginx/uwsgi_params;
    }
    location /analysis {
        uwsgi_pass YOUR_SERVER_IP:4500;
        include /etc/nginx/uwsgi_params;
    }
}
I hope you know how to run three uWSGI instances, one for each of the ports.
In my opinion, this is extremely ugly: to add any new action you will have to edit the Nginx config again. So I don't recommend this solution, and only offer this answer as a demonstration of Nginx's capabilities in combination with other web servers.
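For completeness, here is a minimal sketch of what one of those uWSGI instances could look like as an ini file. The file name tasks.ini, the module name tasks and the attribute app are assumptions about how the Flask object is exposed:

```ini
; hypothetical tasks.ini -- the users/analysis instances differ only in module and socket
[uwsgi]
module = tasks:app        ; assumes tasks.py defines a Flask object named "app"
socket = 0.0.0.0:8080     ; uwsgi-protocol socket, matching uwsgi_pass in the Nginx config
master = true
processes = 2
die-on-term = true
```

Each instance would then be started with something like uwsgi --ini tasks.ini. Note that uwsgi_pass speaks the binary uwsgi protocol, which is why the ini file uses socket rather than http.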
I have a Perl HTTPS application which runs behind an Elastic Load Balancer (ELB). The HTTPS is terminated by the ELB, which then forwards the requests as HTTP to my instances.
The issue is that, because the instance itself is accessed via HTTP, when I use the CGI module to generate self-referencing URLs, they incorrectly use HTTP instead of HTTPS, so form POSTs fail.
GET requests are OK because they're redirected, but anything that uses POST doesn't work.
If I check my CGI environment variables, I have the following...
HTTP_X_FORWARDED_PORT = 443
HTTP_X_FORWARDED_PROTO = https
REQUEST_SCHEME = http
SERVER_PROTOCOL = HTTP/1.1
The CGI module is presumably using REQUEST_SCHEME or SERVER_PROTOCOL to determine that the URLs should use http://.
Is there some way I can fudge the environment variables at the Apache or NGINX level to convince CGI that the site is in fact HTTPS?
I found a relatively simple way to do it.
I just SetEnv HTTPS 'on' in the Apache config. The CGI module is now satisfied that it's HTTPS, so creates URLs accordingly.
I set the other environment variables to match just to be safe, but it seems to be only HTTPS that CGI uses.
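For reference, the whole fix is a single directive in the vhost. A minimal sketch, assuming a standard Apache virtual host (the server name is a placeholder):

```apache
<VirtualHost *:80>
    # placeholder server name
    ServerName example.com
    # Tell CGI.pm that the original client connection was HTTPS,
    # even though the ELB forwards the request to us over plain HTTP
    SetEnv HTTPS on
</VirtualHost>
```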
This is the part of CGI.pm that is responsible for determining if HTTPS is being used:
sub https {
    my ($self, $parameter) = self_or_CGI(@_);
    if ( defined($parameter) ) {
        $parameter =~ tr/-a-z/_A-Z/;
        if ( $parameter =~ /^HTTPS(?:_|$)/ ) {
            return $ENV{$parameter};
        }
        return $ENV{"HTTPS_$parameter"};
    }
    return wantarray
        ? grep { /^HTTPS(?:_|$)/ } keys %ENV
        : $ENV{'HTTPS'};
}
protocol consults the return value of this method:
return 'https' if uc($self->https()) eq 'ON';
So, your solution of SetEnv HTTPS on would work.
However, if you wanted your program to respect HTTP_X_FORWARDED_PROTO, you could monkey patch CGI:
#!/usr/bin/env perl
use strict;
use warnings;
use CGI qw();

{
    no warnings 'redefine';
    my $original_https = \&CGI::https;
    *CGI::https = sub {
        goto $original_https unless my $proto = $ENV{HTTP_X_FORWARDED_PROTO};
        return 'on' if lc($proto) eq 'https';
        return 'off';
    };
}

my $cgi = CGI->new;
print $cgi->self_url, "\n";
$ HTTP_X_FORWARDED_PROTO=https ./monkey.pl
https://localhost:80
$ ./monkey.pl
http://localhost
Of course, this points out that CGI now appends the port number, and because it does not pay attention to HTTP_X_FORWARDED_PORT, it likely gets that wrong as well. If I am not mistaken, for that you'd need to monkey patch virtual_port.
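Following the same pattern, here is an untested sketch of what a virtual_port monkey patch might look like, assuming the ELB sets HTTP_X_FORWARDED_PORT as shown in the question:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use CGI qw();

{
    no warnings 'redefine';
    my $original_virtual_port = \&CGI::virtual_port;
    *CGI::virtual_port = sub {
        # Prefer the port the client actually connected to, as reported by the ELB;
        # fall back to the original method when the header is absent
        goto $original_virtual_port
            unless my $port = $ENV{HTTP_X_FORWARDED_PORT};
        return $port;
    };
}

my $cgi = CGI->new;
print $cgi->self_url, "\n";
```

With HTTP_X_FORWARDED_PORT=443 set, self_url should then stop appending a spurious :80 to the https URL.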
I am currently involved in an implementation of Varnish with a load balancer as the back-end, which forwards traffic to multiple web servers.
I am trying to achieve:
Public Traffic -> haproxy/DNS -> [Varnish (x2) / nginx(ssl) ] -> Loadbalancer -> Web server(x4)
I am able to configure Varnish and Nginx as an SSL/443 terminator for one domain
(i.e. if I point DNS to the Varnish interface and access it, the web server serves the page).
Varnish config
backend loadbalancer { .host = "xxx.xxx.xxx.xxx"; .port = "80"; }
backend loadbalancer_ssl { .host = "xxx.xxx.xxx.xxx"; .port = "443"; }
sub vcl_recv {
    # Set the director to cycle between web servers.
    if (server.port == 443) {
        set req.backend = loadbalancer_ssl;
    } else {
        set req.backend = loadbalancer;
    }
}
# And other vcl rules for security and other.
Nginx Config
location / {
    # Pass the request on to Varnish.
    proxy_pass http://127.0.0.1;
    proxy_http_version 1.1;
    # SSL certificate and config
}
=> How would I configure Varnish as the DNS entry point with SSL termination for multiple domains?
=> Is it possible to somehow configure Varnish to accept all connections and pass SSL through to the web server directly? (So that I don't have to worry about multiple interfaces for SSL support.)
=> Or is there any standard approach to achieve this with a 443 terminator?
Note: why I am trying to achieve this: to create multiple layers of security and to use existing hardware devices.
Already in place:
All servers have multiple interfaces for SSL (using lighttpd).
Load balancer -> hardware -> which balances load between those web servers.
Any experts sharing their views would be great.
I have decided to go with Nginx as the SSL terminator, and I answer my own questions below. I am posting this update in case anyone finds it useful.
From my above query:
How would I configure Varnish as the DNS entry point with SSL termination for multiple domains?
=> We need a socket listening for HTTPS: either Nginx, Pound, or anything else that can read SSL.
(I was not quite convinced about using this point as the SSL terminator, but I think it is fine, because beyond that level I now plan to make everything an internal zone.)
=> Is it possible to somehow configure Varnish to accept all connections and pass SSL through to the web server directly? (So that I don't have to worry about multiple interfaces for SSL support.)
One can achieve this either with multiple interfaces (if you have multiple domains),
or with the same interface if you are dealing with subdomains.
Or you can create one secure page for all SSL-required pages (it depends on the trade-off).
=> Or is there any standard approach to achieve this with a 443 terminator?
I decided to go with Nginx to use the features it provides (in terms of the security layer).
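For anyone looking for the shape of that setup, here is a minimal sketch of an Nginx SSL-terminating server block sitting in front of Varnish; the domain, certificate paths and the Varnish address are placeholders:

```nginx
server {
    listen 443 ssl;
    # placeholder domain; repeat this block (or extend server_name) per terminated domain
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.com.key;  # placeholder path

    location / {
        proxy_pass http://127.0.0.1:80;  # Varnish listening here
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The X-Forwarded-Proto header lets the backend (or Varnish itself) distinguish requests that originally arrived over HTTPS.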
I have a dedicated server that hosts my own websites. I have installed Varnish with the default VCL file. Now I want to tell Varnish to do the following:
Cache only the following static file types: .js, .css, .jpg, .png, .gif. Those are the file types the server actually serves, not URLs ending with those extensions.
Do not cache files bigger than 1 MB.
Caching of any file should expire in 1 day (or whatever period).
Caching may happen only when Apache sends a 200 HTTP code.
Otherwise leave the request intact so it is served by Apache or whatever backend.
What should I write in the VCL file to achieve those requirements? Or what else should I do?
You can do all of this in the vcl_fetch subroutine. This should be considered pseudocode.
if (beresp.http.content-type ~ "text/javascript|text/css|image/.*") {
    if (std.integer(beresp.http.Content-Length, 0) < /* max size in bytes here */) {
        if (beresp.status == 200) { /* backend returned 200 */
            set beresp.ttl = 86400s; /* cache for one day */
            return (deliver);
        }
    }
}
set beresp.ttl = 120s;
return (hit_for_pass); /* won't be cached */
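For what it's worth, on Varnish 4 the same logic would go in vcl_backend_response rather than vcl_fetch. A sketch under that assumption, using the question's 1 MB limit and one-day TTL:

```vcl
import std;  # std.integer() lives in the std vmod

sub vcl_backend_response {
    if (beresp.http.Content-Type ~ "text/javascript|text/css|image/") {
        if (std.integer(beresp.http.Content-Length, 0) < 1048576) {  # 1 MB
            if (beresp.status == 200) {
                set beresp.ttl = 1d;  # cache for one day
                return (deliver);
            }
        }
    }
    # Everything else: mark uncacheable (hit-for-pass) for two minutes
    set beresp.ttl = 120s;
    set beresp.uncacheable = true;
    return (deliver);
}
```

In VCL 4.0 there is no return (hit_for_pass) from the backend side; setting beresp.uncacheable achieves the same effect.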
What I did:
1- Isolate all static content on another domain (i.e. the domain serving the dynamic pages is different from the domain serving static content).
2- Assign another dedicated IP address to the domain that serves static content.
3- Tell Varnish to listen only on that IP (i.e. the static-content IP) on port 80.
4- Use the Apache conf to control the caching period for each static content type (Varnish will just obey those headers).
Pros:
1- Varnish will not even listen to or process the requests it should leave intact. Those requests (for dynamic pages) go directly to Apache, since Apache listens on the original IP (performance).
2- No need to change the default VCL file (only if you want to debug), which is helpful for those who don't know the principles of the VCL language.
3- You control everything from the Apache conf.
Cons:
1- You have to buy a new dedicated IP if you don't have a spare one.
thanks
I keep getting this error in Apache's error log:
[client 127.0.0.1] Client sent malformed Host header
exactly every 5 minutes. This has been happening since we installed Varnish on our server, but I can't understand why, or how to fix it. I even set Apache's error_log verbosity to debug, but no other useful information is provided. Any idea?
Our Varnish configuration is very basic:
backend default {
    .host = "127.0.0.1";
    .port = "9001";
}
sub vcl_recv {
    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = client.ip;
}
We have several virtual hosts that run on port 9001.
Can anyone tell me more about this error and how to resolve or at least investigate about it?
Varnish performs health checks on your backends, and these might need to be configured more precisely for Apache to accept them. If this doesn't solve your problem, try logging the User-Agent header in Apache to find out who is making the malformed request.
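If the error does line up with a health probe, one approach is to give the probe a complete request with an explicit Host header. A sketch, where the probe URL, Host value and 5-minute interval are assumptions chosen to match the reported pattern:

```vcl
backend default {
    .host = "127.0.0.1";
    .port = "9001";
    .probe = {
        # A full HTTP/1.1 request so Apache receives a well-formed Host header
        .request =
            "GET / HTTP/1.1"
            "Host: localhost"
            "Connection: close";
        .interval = 5m;  # assumption: matches the observed 5-minute errors
        .timeout = 2s;
    }
}
```

A probe defined with only .url sends a bare request line, which some Apache configurations reject as a malformed Host header; spelling out the full request avoids that.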