How to fix: Varnish not enabling on Apache

This is a new server setup. After installing Varnish, testing does not show the expected result, and it does not look like Varnish is configured correctly in front of Apache.
The server is CentOS 7, running Apache 2.4, Redis, RabbitMQ and Varnish 5.2.
I have followed the instructions to make Varnish listen on port 80: I set VARNISH_LISTEN_PORT=80 in /etc/varnish/varnish.params and changed the default backend to .port = "8080" in /etc/varnish/default.vcl (both shown below):
backend default {
.host = "164.160.89.188";
.port = "8080";
}
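For reference, the matching lines in /etc/varnish/varnish.params look roughly like this (a sketch: only VARNISH_LISTEN_PORT=80 is quoted above, the other values are the stock EL7 defaults, and the Apache Listen change is implied by the backend port rather than shown in the question):
# /etc/varnish/varnish.params (only the relevant lines)
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80
VARNISH_STORAGE="malloc,256M"
# Apache itself has to move off port 80, e.g. Listen 8080 in httpd.conf,
# so that it matches the backend .port above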
When I restart Varnish and Apache and run the command
curl -I http://localhost
I get the following results:
HTTP/1.1 200 OK
Date: Wed, 12 Jun 2019 12:45:59 GMT
Last-Modified: Wed, 30 Jan 2019 02:03:25 GMT
Content-Type: text/html
Vary: Accept-Encoding
Pragma: no-cache
Expires: -1
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Accept-Ranges: bytes
Connection: keep-alive
I should be getting something like this:
X-Varnish: 13
Age: 0
Via: 1.1 varnish-v5
Varnish status shows the following:
varnish.service - Varnish Cache, a high-performance HTTP accelerator
Loaded: loaded (/usr/lib/systemd/system/varnish.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-06-12 13:40:35 SAST; 3h 25min ago
Main PID: 4074 (varnishd)
CGroup: /system.slice/varnish.service
├─4074 /usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
└─4084 /usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
Jun 12 13:40:35 server2.co.za systemd[1]: Starting Varnish Cache, a high-performance HTTP accelerator...
Jun 12 13:40:35 server2.co.za varnishd[4074]: Platform: Linux,3.10.0,x86_64,-junix,-smalloc,-smalloc,-hcritbit
Jun 12 13:40:35 server2.co.za varnishd[4073]: Debug: Platform: Linux,3.10.0,x86_64,-junix,-smalloc,-smalloc,-hcritbit
Jun 12 13:40:35 server2.co.za varnishd[4074]: Child (4084) Started
Jun 12 13:40:35 server2.co.za varnishd[4073]: Debug: Child (4084) Started
Jun 12 13:40:35 server2.co.za varnishd[4074]: Child (4084) said Child starts
Jun 12 13:40:35 server2.co.za systemd[1]: Started Varnish Cache, a high-performance HTTP accelerator.

It seems Apache sends a Pragma header. The quick fix is to unset beresp.http.Pragma in vcl_backend_response; that removes the Pragma header and lets Varnish start caching content. You might still want to check why Apache sends the header in the first place.
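A minimal sketch of that change in /etc/varnish/default.vcl (VCL 4.0 syntax, as used by Varnish 5.x):
sub vcl_backend_response {
    # Apache is sending "Pragma: no-cache"; drop it so Varnish is free to cache the object
    unset beresp.http.Pragma;
    # the "Cache-Control: no-store, no-cache, ..." header shown in the curl output above
    # may also keep the built-in VCL from caching and might need the same treatment
}
Restart Varnish afterwards, re-run curl -I http://localhost and look for the X-Varnish, Age and Via headers.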

Related

Yum can't get repo updates but curl can get the URL

In an Oracle Linux docker image, I can get this URL with curl:
# curl -I https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/repodata/repomd.xml
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: application/xml
ETag: "28f9cc1bb8a41a0e928e2e36ff54b46f:1675908836.186431"
Last-Modified: Thu, 09 Feb 2023 02:12:54 GMT
Server: AkamaiNetStorage
Content-Length: 3697
Date: Mon, 13 Feb 2023 15:51:36 GMT
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
But I can't pull updates with yum:
# yum update
Loaded plugins: ovl, ulninfo
https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 503 - Service Unavailable
Any ideas on how to debug the HTTP GET request that yum makes?
Thank you for your help.
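This one has no answer in the thread, but a sketch of a common way to debug it: yum goes through urlgrabber, which can dump its HTTP activity when URLGRABBER_DEBUG is set, and yum honours proxy= lines in /etc/yum.conf and the .repo files that a plain curl call never sees.
# show yum's underlying HTTP activity (urlgrabber debug output goes to stderr)
URLGRABBER_DEBUG=1 yum -v makecache 2>&1 | less
# proxy= lines in yum.conf or .repo files apply to yum but not to curl
grep -ri proxy /etc/yum.conf /etc/yum.repos.d/
env | grep -i proxy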

HAproxy 1.5 fails to proxy properly to backend

I'm puzzled why this FE doesn't seem to connect me to the BE through my HAproxy:
defaults
    mode http
    log global
    option httplog
    option dontlognull
    source 0.0.0.0 usesrc clientip # transparent proxy mode

frontend fe-kb
    bind :8081 ssl crt /etc/haproxy/ssl/ssl-key.pem
    default_backend be-kb

backend be-kb
    server afnB afnB:1080 check
I get this in HA http log:
Jan 9 17:25:04 localhost haproxy[17266]: <ip redacted>:51396 [09/Jan/2016:17:24:44.544] fe-kb~ be-kb/afnB 31/0/-1/-1/20036 503 212 - - cC-- 0/0/0/0/3 0/0 "GET / HTTP/1.1"
I can connect fine from the HAProxy host's CLI (SELinux is disabled):
[root@hapA ~]# telnet afnB 1080
Trying 10.45.69.14...
Connected to afnB.
Escape character is '^]'.
GET / HTTP/1.0
HTTP/1.1 200 OK
Server: nginx/1.9.9
Date: Sat, 09 Jan 2016 16:40:44 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Wed, 09 Dec 2015 15:05:19 GMT
Connection: close
ETag: "5668432f-264"
Accept-Ranges: bytes
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Figured it out: since I was trying to do transparent proxying of the client source IPs, the backend's default route has to go back through my HAProxy box, and it didn't ;)
Thanks for watching!
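For anyone hitting the same thing, a rough sketch of what that fix implies (the 10.45.69.13 address for the HAProxy box is made up for illustration; only afnB's 10.45.69.14 appears above). With source 0.0.0.0 usesrc clientip the backend sees the original client IP, so its replies must be routed back through the proxy host instead of the normal gateway:
# on the backend (afnB): send reply traffic back via the HAProxy machine
ip route replace default via 10.45.69.13
# the proxy host itself also needs a TPROXY-enabled HAProxy build and non-local bind
sysctl -w net.ipv4.ip_nonlocal_bind=1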

HTTP pipelining request text example

Below is an example HTTP/1.1 call with a single page requested:
GET /jq.js HTTP/1.1
Host: 127.0.0.1
Accept: */*
I understand that with HTTP pipelining, multiple requests can be sent without breaking the connection.
Can someone post a text example of how such a request would be sent to the server? I want to be able to do it over the command line or with PHP sockets.
Does support for pipelining need to be enabled on the web server as well?
Is pipelining supported by major web servers (Apache, nginx) by default, or does it need to be enabled?
From the W3C HTTP/1.1 protocol details:
8.1.2.2 Pipelining
A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.
Clients which assume persistent connections and pipeline immediately after connection establishment SHOULD be prepared to retry their connection if the first pipelined attempt fails. If a client does such a retry, it MUST NOT pipeline before it knows the connection is persistent. Clients MUST also be prepared to resend their requests if the server closes the connection before sending all of the corresponding responses.
Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2). Otherwise, a premature termination of the transport connection could lead to indeterminate results. A client wishing to send a non-idempotent request SHOULD wait to send that request until it has received the response status for the previous request.
So the first requirement is that you should be in keep-alive mode: add the Connection: keep-alive header to your requests, although some web servers may still accept pipelining without it. On the other hand, the server may or may not accept your connection in keep-alive mode. So at any time, keep-alive or not, you may send, say, 3 pipelined requests on one connection and get only one response back.
From this gist we can find a nice way to test it with telnet.
Asking for keep-alive with the Connection: keep-alive header:
(echo -en "GET /index.html HTTP/1.1\nHost: foo.com\nConnection: keep-alive\n\nGET /index.html HTTP/1.1\nHost: foo.com\n\n"; sleep 10) | telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.lan.
Escape character is '^]'.
HTTP/1.1 200 OK
Date: Sun, 27 Oct 2013 17:51:58 GMT
Server: Apache/2.2.22 (Debian)
Last-Modified: Sun, 04 Mar 2012 15:00:29 GMT
ETag: "56176e-3e-4ba6c121c4761"
Accept-Ranges: bytes
Content-Length: 62
Vary: Accept-Encoding
Keep-Alive: timeout=5, max=100 <======= Keepalive!
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8
<html>
<body>
<h1>test</h1>
</body>
</html>
HTTP/1.1 200 OK
Date: Sun, 27 Oct 2013 17:51:58 GMT
Server: Apache/2.2.22 (Debian)
Last-Modified: Sun, 04 Mar 2012 15:00:29 GMT
ETag: "56176e-3e-4ba6c121c4761"
Accept-Ranges: bytes
Content-Length: 62
Vary: Accept-Encoding
Content-Type: text/html; charset=utf-8
<html>
<body>
<h1>test</h1>
</body>
</html>
It works.
Without asking for keep-alive:
(echo -en "GET /index.html HTTP/1.1\nHost: foo.com\n\nGET /index.html HTTP/1.1\nHost: foo.com\n\n"; sleep 10) | telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.lan.
Escape character is '^]'.
HTTP/1.1 200 OK
Date: Sun, 27 Oct 2013 17:49:37 GMT
Server: Apache/2.2.22 (Debian)
Last-Modified: Sun, 04 Mar 2012 15:00:29 GMT
ETag: "56176e-3e-4ba6c121c4761"
Accept-Ranges: bytes
Content-Length: 62
Vary: Accept-Encoding
Content-Type: text/html; charset=utf-8
<html>
<body>
<h1>test</h1>
</body>
</html>
HTTP/1.1 200 OK
Date: Sun, 27 Oct 2013 17:49:37 GMT
Server: Apache/2.2.22 (Debian)
Last-Modified: Sun, 04 Mar 2012 15:00:29 GMT
ETag: "56176e-3e-4ba6c121c4761"
Accept-Ranges: bytes
Content-Length: 62
Vary: Accept-Encoding
Content-Type: text/html; charset=utf-8
<html>
<body>
<h1>test</h1>
</body>
</html>
Connection closed by foreign host.
Same result: I did not ask for it, but it looks like a keep-alive answer (the connection closes after 5 s, which is the timeout value set in Apache), and a pipelined answer, since I get my two pages.
Now, if I prevent any keep-alive connection in Apache by setting:
KeepAlive Off
And restarting it:
(echo -en "GET /index.html HTTP/1.1\nHost: foo.com\nConnection: keep-alive\n\nGET /index.html HTTP/1.1\nHost: foo.com\n\n"; sleep 10) | telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.lan.
Escape character is '^]'.
HTTP/1.1 200 OK
Date: Sun, 27 Oct 2013 18:02:41 GMT
Server: Apache/2.2.22 (Debian)
Last-Modified: Sun, 04 Mar 2012 15:00:29 GMT
ETag: "56176e-3e-4ba6c121c4761"
Accept-Ranges: bytes
Content-Length: 62
Vary: Accept-Encoding
Connection: close
Content-Type: text/html; charset=utf-8
<html>
<body>
<h1>test</h1>
</body>
</html>
Connection closed by foreign host.
Only one response... So the server can reject my request for pipelining.
Now, for support in servers and browsers, I think your Wikipedia source tells you enough :-)

Switch from Apache Prefork MPM to Worker MPM on CentOS 6.3

So I switched from prefork to worker and now all I am getting is 500 errors when trying to access my site:
HTTP/1.0 500 Internal Server Error
Date: Tue, 16 Apr 2013 05:55:08 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.3
Cache-Control: max-age=31536000
Expires: Wed, 16 Apr 2014 05:55:08 GMT
Vary: Accept-Encoding,User-Agent
Connection: close
Content-Type: text/html; charset=UTF-8
Any idea? What did I miss?
This is what I did:
Uncommented HTTPD=/usr/sbin/httpd.worker in /etc/sysconfig/httpd
and installed the thread-safe PHP build: yum install php-zts
Now I just get 500 errors; worst of all, I can't find any logs with any errors...
You should try using FastCGI instead of PHP ZTS to use the Apache worker MPM. As suggested here:
there is a way to get the performance benefits of using a threaded MPM and still use PHP: using FastCGI
and, in the official documentation:
If you want to use a threaded MPM, look at a FastCGI configuration where PHP is running in its own memory space.
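A rough sketch of what that can look like with mod_fcgid on CentOS 6 / Apache 2.2 (the package names, the /usr/bin/php-cgi path and the DocumentRoot are assumptions, not from the quoted answer):
# yum install mod_fcgid php-cli   (php-cli ships /usr/bin/php-cgi)
# /etc/httpd/conf.d/fcgid-php.conf
AddHandler fcgid-script .php
FcgidWrapper /usr/bin/php-cgi .php
<Directory "/var/www/html">
    Options +ExecCGI
    AllowOverride All
</Directory>
You would also remove php-zts (or its LoadModule line) so that .php requests go through the FastCGI wrapper rather than the in-process ZTS module.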

Passenger and Apache: X-Died: bad chunk size

I am upgrading an old Rails application to the most recent versions of Ruby, Rails, Authlogic, Passenger etc. The new version works OK with WEBrick, but I cannot get it to work with Apache 2.2 and Passenger 3.0.7 (the old one worked nicely). The problem is that the requests return a bad chunk size header and no content:
GET http://www.xxx.xxx:6090/user_sessions/new returns:
Cache-Control: max-age=0, private, must-revalidate
Connection: close
Date: Wed, 03 Aug 2011 20:15:17 GMT
ETag: "3fca75eeb48caf4fe548695a588b916d"
Server: Apache/2.2.16 (Unix) Phusion_Passenger/3.0.7
Content-Type: text/html; charset=utf-8
Client-Aborted: die
Client-Date: Wed, 03 Aug 2011 20:15:17 GMT
Client-Peer: xxx.xxx.14.54:6090
Client-Response-Num: 1
Client-Transfer-Encoding: chunked
Set-Cookie:
_postliste_session=BAh7B0kiD3Nlc3Npb25faWQGOgZFRiIlNjQ3MzJmMWE3NDY3ZGQ2YWYwZWEzMjBmMjliYzk5NDZJIhBfY3NyZl90b2tlbgY7AEZJIjFQLy94REd2MGxqMFNlYm1HUWNvOGh5THE4NU5RY2xXODVEWFVLU2EvMUxvPQY7AEY%3D--a1dc7975c23d0139ab92a20a88395ea3c4bc7304; path=/; HttpOnly
Status: 200
X-Died: Bad chunk-size in HTTP response: %lx at /local/share/perl5/vendor_perl/5.8.8/Net/HTTP/Methods.pm line 484.
X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
X-Runtime: 0.035096
X-UA-Compatible: IE=Edge,chrome=1
I fail to see the way out and would greatly appreciate suggestions to solve the problem.
Regards
Per
Solved:
Installed Apache 2.2.19 from scratch with a fresh APU and APR:
./configure --prefix=/site/opt/apache-test --with-included-apr
Installed Passenger 3.0.8 and took care to use the new APU and APR.
Don't know if the APU/APR stuff was significant, but the reinstallation solved the problem of bad chunking of the HTTP messages.
Per
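A sketch of that rebuild sequence (only the --prefix comes from the post above; the make/install steps and the APXS2 hint for pointing the Passenger installer at a specific Apache build are assumptions):
# build Apache 2.2.19 with the bundled APR/APR-util
./configure --prefix=/site/opt/apache-test --with-included-apr
make && make install
# install Passenger 3.0.8 and compile its Apache module against that build
gem install passenger -v 3.0.8
export APXS2=/site/opt/apache-test/bin/apxs   # assumption: installer picks up this apxs
passenger-install-apache2-module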