Make Varnish bypass large files - Apache

I have Varnish installed with the default settings on my Apache web server. Apache is listening on port 8080 and Varnish is listening on port 80.
I have a few downloadable files on the website, sized 100 MB, 500 MB, and 1 GB.
The 1 GB file is not working: when you click on it, the browser reports an unavailable page or a connection closed by the server. The other two work fine, but I'm not sure this is the correct way to serve them.
How do I make Varnish bypass these files so they are fetched directly from the web server?
Thank you.

This can be done by checking the Content-Length of the backend response: if it is larger than some threshold, tag the request with a marker and restart the transaction.
For example, to pipe files with a Content-Length of 10,000,000 bytes (eight digits) or more:
sub vcl_fetch {
    # ...
    if (beresp.http.Content-Length ~ "[0-9]{8,}") {
        set req.http.x-pipe-mark = "1";
        return(restart);
    }
    # ...
}
The restart sends the transaction back to request receiving and parsing (vcl_recv), where we can check for our marker and pipe the request:
sub vcl_recv {
    # ...
    if (req.http.x-pipe-mark && req.restarts > 0) {
        return(pipe);
    }
    # ...
}

In Varnish 4, vcl_fetch is replaced by vcl_backend_response; see https://www.varnish-cache.org/docs/trunk/whats-new/upgrade-4.0.html
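Note that Varnish 4 has no return(restart) on the backend side, so the marker trick above does not port directly. As an alternative (my sketch, not part of the original answer), a common Varnish 4 recipe is to keep large objects out of the cache entirely and let them stream through; the 10,000,000-byte threshold mirrors the example above:
vcl 4.0;
import std;

sub vcl_backend_response {
    # std.integer() parses the header, falling back to 0 when it is missing.
    if (std.integer(beresp.http.Content-Length, 0) >= 10000000) {
        # Hit-for-pass: never cache this object. Streaming is on by
        # default in Varnish 4, so the file is relayed to the client
        # as it arrives from Apache instead of being buffered.
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
}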

Related

Docker Rocket.Chat REST API upload file error 413 Entity Too Large

I am using the Rocket.Chat REST API. Everything works well, but when I upload a file via the REST API it returns error 413 Request Entity Too Large, while uploads of any size from the website work fine.
After testing various scenarios, I concluded that files of 1 MB or less upload successfully, while anything larger than 1 MB triggers the 413 Request Entity Too Large error.
I upload the file from Postman using this URL:
https://rocket.chat.url/api/v1/rooms.upload/RoomId
Headers:
Content-Type:application/x-www-form-urlencoded
X-Auth-Token:User-Token
X-User-Id:User-Id
Form-Data:
file - selected file
HTML error result:
<html>
<head><title>413 Request Entity Too Large</title></head>
<body bgcolor="white">
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.10.3 (Ubuntu)</center>
</body>
</html>
When the file is inserted successfully, it returns the following:
{
"success": true
}
After checking many scenarios and searching many URLs, I arrived at the following solution.
I am running Rocket.Chat in Docker, and I appended one line to the nginx config file.
Solution:
1. Log in to the Ubuntu server.
2. Run sudo nano /etc/nginx/nginx.conf.
3. Add or update client_max_body_size in the http block:
http {
    client_max_body_size 8M;  # use your required limit instead of 8M
    # other lines...
}
4. Restart nginx with service nginx restart or systemctl restart nginx.
5. Upload the larger file again; it now succeeds.
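To re-test the upload from the command line after raising the limit, here is a curl equivalent of the Postman request above (host, token, file path, and room ID are the question's placeholders; -F sends the file as multipart/form-data):
# Upload a file to a room via the Rocket.Chat REST API.
curl -X POST \
  -H "X-Auth-Token: User-Token" \
  -H "X-User-Id: User-Id" \
  -F "file=@/path/to/large-file.bin" \
  https://rocket.chat.url/api/v1/rooms.upload/RoomId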

Is it possible to implement OIDC in front of Nginx Stream with OpenResty?

I would like to know if it is possible to use the OpenResty OIDC module as an authentication proxy within an NGINX stream configuration.
(I don't have access to NGINX Plus, unfortunately.)
I have used NGINX stream configurations in the past to proxy access to upstream TCP resources, and it works like a charm.
I am currently looking at implementing an OIDC proxy in front of various resources, both static HTML and dynamic apps, because we have an in-house OIDC IDAM provider. I came across OpenResty, and in particular the lua-resty-oidc module. Thanks to some wonderful guides (https://medium.com/@technospace/nginx-as-an-openid-connect-rp-with-wso2-identity-server-part-1-b9a63f9bef0a , https://developers.redhat.com/blog/2018/10/08/configuring-nginx-keycloak-oauth-oidc/ ), I got this working in no time for static pages, using an http server nginx config.
I can't get it working for stream configurations, though. The stream module appears to be enabled as standard in OpenResty, but from digging around I don't think the access_by_lua_block directive is allowed in the stream context.
This may simply not be supported, which is fair enough when building on other people's great work, but I wondered whether there is any intention to support it in OpenResty / lua-resty-oidc in the future, or whether anyone knows of a good workaround.
This was my naive attempt to get it working, but the server complains about the access_by_lua_block directive at run time:
2019/08/22 08:20:44 [emerg] 1#1: "access_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:49
nginx: [emerg] "access_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:49
events {
    worker_connections 1024;
}

stream {
    lua_package_path "/usr/local/openresty/?.lua;;";
    resolver 168.63.129.16;
    lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    lua_ssl_verify_depth 5;

    # cache for discovery metadata documents
    lua_shared_dict discovery 1m;
    # cache for JWKs
    lua_shared_dict jwks 1m;

    upstream geyser {
        server geyser-api.com:3838;
    }

    server {
        listen 443 ssl;
        ssl_certificate /usr/local/openresty/nginx/ssl/nginx.crt;
        ssl_certificate_key /usr/local/openresty/nginx/ssl/nginx.key;

        access_by_lua_block {
            local opts = {
                redirect_uri_path = "/redirect_uri",
                discovery = "https://oidc.provider/discovery",
                client_id = "XXXXXXXXXXX",
                client_secret = "XXXXXXXXXXX",
                ssl_verify = "no",
                scope = "openid",
                redirect_uri_scheme = "https",
            }
            local res, err = require("resty.openidc").authenticate(opts)
            if err then
                ngx.status = 500
                ngx.say(err)
                ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
            end
            ngx.req.set_header("X-USER", res.id_token.sub)
        }

        proxy_pass geyser;
    }
}
Anyone have any advice?
I don't think that's possible.
However, to be sure, you should try creating an issue on the official GitHub repository:
https://github.com/zmartzone/lua-resty-openidc/issues
They helped me solve a similar issue before.
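For contrast, here is a sketch of the http-context setup the question reports working for static pages (assembled from the stream config above; hostnames, paths, and the opts table are the question's placeholders):
http {
    lua_package_path "/usr/local/openresty/?.lua;;";
    resolver 168.63.129.16;
    lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    lua_ssl_verify_depth 5;
    lua_shared_dict discovery 1m;
    lua_shared_dict jwks 1m;

    server {
        listen 443 ssl;
        ssl_certificate /usr/local/openresty/nginx/ssl/nginx.crt;
        ssl_certificate_key /usr/local/openresty/nginx/ssl/nginx.key;

        location / {
            # access_by_lua_block is an http/location-phase directive,
            # which is why it works here but not under stream {}.
            access_by_lua_block {
                -- same opts table and resty.openidc authenticate()
                -- call as in the stream attempt above
            }
            proxy_pass http://geyser-api.com:3838;
        }
    }
}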

SSL redirect changes client IP address read from HTTPResponse

I am using the Perfect framework for my server-side application running on an AWS EC2 instance. I use the following code to get the client IP address:
open static func someapi(request: HTTPRequest, _ response: HTTPResponse) {
    var clientIP = request.remoteAddress.host
}
This was working fine until I installed an SSL certificate on my EC2 instance and started redirecting incoming traffic to port 443.
Now this code gives me the IP of my own server; I think that, due to the redirect, Perfect somehow believes the request comes from itself.
Is there another method to get the client IP address, or do I have to try something else?
Thanks!
For anyone struggling with the same problem: when there is a redirect or proxy in front, the original client IP can be found in the X-Forwarded-For header field (xForwardedFor in Perfect), like the following:
var clientIP = request.remoteAddress.host
let forwardInfoResult = request.headers.filter { (item) -> Bool in
    item.0 == HTTPRequestHeader.Name.xForwardedFor
}
if let forwardInfo = forwardInfoResult.first {
    clientIP = forwardInfo.1
}
Hope this helps somebody, cheers!
Perhaps you should ask the people you are paying for support, and who manage the infrastructure, how it works before asking us?
The convention, where an HTTP connection is terminated somewhere other than the origin server, is to inject an X-Forwarded-For header. If such a header is already present, each intermediate server appends the address of the peer that connected to it, so the original client's IP address is the leftmost entry in the list.
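Since X-Forwarded-For can hold a comma-separated chain ("client, proxy1, proxy2"), here is a hedged Perfect sketch (my own helper, not from the answer) that takes the leftmost entry and falls back to the socket address:
import Foundation
import PerfectHTTP

// Take the leftmost X-Forwarded-For entry (the original client), or fall
// back to the socket address when the header is absent. Note: clients can
// spoof this header, so only trust it behind your own proxy.
func originalClientIP(from request: HTTPRequest) -> String {
    if let xff = request.header(.xForwardedFor),
       let first = xff.split(separator: ",").first {
        return first.trimmingCharacters(in: .whitespaces)
    }
    return request.remoteAddress.host
}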

HaProxy Transparent Proxy To AWS S3 Static Website Page

I am using HAProxy to balance a cluster of servers, and I am attempting to add a maintenance page to the HAProxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. The question I have is: how can I use a maintenance page hosted remotely in an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e. without the HAProxy 'redir' server definition)?
If I have servers a, b, and c, and all of them go down for maintenance, then I want all requests to be resolved by server definition d (labeled with 'backup'), pointing at a static address on S3. Note that I don't want request paths to carry over and be evaluated on S3; it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 }            -- if the number of usable servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up;
#   "s3-fallback" is the arbitrary name we gave the server in the config file
# (together, these mean the backup is the one server still up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients connect over HTTPS and the S3 connection is HTTP, you definitely want to strip them.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
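Putting the pieces together, the whole backend might look like this (a sketch assembled from the directives above; the a/b/c server addresses are hypothetical placeholders):
backend web
    server a 192.0.2.11:80 check
    server b 192.0.2.12:80 check
    server c 192.0.2.13:80 check
    # backup server pointing at the S3 website endpoint
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
    # rewrite the request only when the fallback is the lone surviving server
    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    # keep browsers and search engines from treating the error page as real content
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }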
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in global configuration, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that although you want to "transparently proxy a request," I don't think "transparent proxy" is the correct phrase for what you're trying to do. A "transparent proxy" implies that the client or the server (or both) see each other's IP addresses on the connection and believe they are communicating directly, with no proxy in between, because of some skullduggery done by the proxy and/or network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for.

How to forward syslog-ng logs to a remote facility

I have a host running syslog-ng. It does all its stuff locally fine (creating log files etc.). However, I would like to forward ALL of its logs to a remote machine, specifically to one facility on the remote machine (local4). I tried playing around with rewrite (set-facility) and with templates inside the destination (syntax errors), but to no avail.
destination remote_server {
    udp("172.18.192.8" port(514));
    udp("172.18.192.9" port(514));
};

rewrite r_local4 {
    set-facility(local4);
};

filter f_alllogs {
    level(debug...emerg);
};

log {
    source(local);
    filter(f_alllogs);
    rewrite(r_local4);
    destination(remote_server);
};
AFAIK, currently it is not possible to modify the facility of a message in syslog-ng.
Is there a special reason you want to do it?
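If plain forwarding is enough (dropping the facility rewrite), a minimal sketch built from the question's own config, assuming a source named local as in the question:
destination remote_server {
    udp("172.18.192.8" port(514));
    udp("172.18.192.9" port(514));
};

log {
    # forward everything from the local source unchanged
    source(local);
    destination(remote_server);
};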