I have tried to include Edge Side Includes tags (<esi:include>) in my Nuxt project, but the element is not rendered when the page is served through Varnish.
My vue file:
<div>
  <Header/>
  <esi:include src="http://widget-webapp/header-bar" />
  <Footer/>
</div>
I have added the ESI tag to ignoredElements in the Nuxt config to suppress the unknown-element warning.
vue: {
  config: {
    productionTip: false,
    devtools: process.env.NODE_ENV === 'prod' || process.env.NODE_ENV === 'stage' ? false : true,
    ignoredElements: ['esi:include']
  }
},
I have seen that there is a library for React. Is there any library or way to include ESI tags in Vue SSR? Any help would be appreciated. Thanks.
VCL code for ESI:
if (bereq.http.host ~ "example\.com$") {
    set beresp.grace = 24h;
    # Enable ESI
    if (beresp.http.content-type ~ "html|xml") {
        set beresp.do_esi = true;
    }
}
Suggested approach
Keep in mind that Varnish doesn't parse ESI tags automatically. You need to instruct Varnish to do this for the right pages using VCL logic.
Have a look at https://www.varnish-software.com/developers/tutorials/example-vcl-template/#14-esi-support. It describes a common ESI configuration for Varnish that is part of our example VCL file.
This is the code:
sub vcl_recv {
    set req.http.Surrogate-Capability = "key=ESI/1.0";
}

sub vcl_backend_response {
    if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
        unset beresp.http.Surrogate-Control;
        set beresp.do_esi = true;
    }
}
Meaning of the Surrogate headers
What this snippet does is check whether the backend sent a Surrogate-Control header containing an ESI/1.0 key before the object is stored in the cache.
It allows the application to send a header like Surrogate-Control: content="ESI/1.0", which Varnish processes and uses to activate ESI processing.
However, the web application has no guarantees that there will be a caching server that supports ESI in front of it. That's why Varnish exposes a Surrogate-Capability header that announces the capabilities it has on the edge.
The header that Varnish sends is Surrogate-Capability: "key=ESI/1.0".
Using the Surrogate headers in your application
In your application you should check whether a Surrogate-Capability request header is sent that contains the term ESI/1.0. If that is the case, you can expose <esi:include src="/xyz" /> tags.
Just make sure you set the Surrogate-Control: content="ESI/1.0" response header so that Varnish knows that it should process them.
If the Surrogate-Capability request header is not set or doesn't contain the right terms, you shouldn't expose ESI tags and instead render that content on the web server, rather than on the edge.
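To make that concrete, here is a minimal sketch of such a check for a Nuxt project, written as an Express-style server middleware. This is an assumption about how you might wire it up, not a specific Nuxt API; useEsi is a hypothetical flag name your renderer could check:

// Hedged sketch: detect an ESI-capable edge via Surrogate-Capability and
// announce ESI markup via Surrogate-Control.
module.exports = function surrogateMiddleware(req, res, next) {
  const capability = req.headers['surrogate-capability'] || '';
  if (capability.includes('ESI/1.0')) {
    // An ESI-capable cache (e.g. Varnish) is in front of us: tell it
    // this response contains ESI markup it should process.
    res.setHeader('Surrogate-Control', 'content="ESI/1.0"');
    req.useEsi = true;  // render <esi:include> tags
  } else {
    req.useEsi = false; // fetch and inline the fragment server-side instead
  }
  next();
};

Your components can then branch on that flag: emit the <esi:include> tag when it is set, and render the fragment on the web server otherwise.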
Update: handling ESI with the VCL code you provided
Based on the VCL code you provided, ESI processing will only take place if the request host ends in example.com.
I'm not sure whether this is a redacted host name, but please make sure the Host header of the requests you send actually matches this pattern.
ESI parsing also only happens when the Content-Type response header contains html or xml, so please ensure the response is actually served as HTML or XML.
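To tie this back to your configuration, here is a hedged VCL sketch that folds the Surrogate handshake into your existing host and content-type checks (the host name is your redacted example):

sub vcl_recv {
    # Announce ESI support to the backend
    set req.http.Surrogate-Capability = "key=ESI/1.0";
}

sub vcl_backend_response {
    if (bereq.http.host ~ "example\.com$" &&
        beresp.http.Surrogate-Control ~ "ESI/1.0" &&
        beresp.http.content-type ~ "html|xml") {
        unset beresp.http.Surrogate-Control;
        set beresp.do_esi = true;
    }
}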
How to debug?
The assumptions described in the paragraphs above are pretty standard and might not be 100% applicable to your situation. That's why running the proper debugging commands makes sense.
Assuming that the homepage of your service contains ESI tags, the following varnishlog command can be used to debug.
Please adjust the URL to the one you're using and attach the output of the command to your question. This output can be used to figure out what's really going on.
sudo varnishlog -g request -q "ReqURL eq '/'"
Related
I just want to make sure I have no issue here. Does anybody know what causes the following log messages?
2017-03-17 07:59:17.5838|1|Microsoft.AspNetCore.Hosting.Internal.WebHost|INFO|Request starting HTTP/1.1 GET http://192.168.20.57:8081/hardware/configuration/active application/json
2017-03-17 07:59:17.5868|4|Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware|DEBUG|The request path /hardware/configuration/active does not match a supported file type
I'm exposing the Web API by Kestrel only (no IISIntegration).
The request header contains
GET /hardware/configuration/active HTTP/1.1
Accept: application/json, text/plain, */*
Content-Type: application/json
Defining
[Produces("application/json")]
explicitly in my controller has no effect.
Set StaticFileOptions.ServeUnknownFileTypes to true:
app.UseStaticFiles(new StaticFileOptions()
{
    FileProvider = new PhysicalFileProvider("mypath"),
    ServeUnknownFileTypes = true // <<<<<<
});
or find out where the known types can be extended (a sketch of the latter follows).
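For the latter, one way to extend the known types is FileExtensionContentTypeProvider, sketched here under the assumption that you're using the standard ASP.NET Core static files package (the .myext extension is a hypothetical example):

using Microsoft.AspNetCore.StaticFiles;
using Microsoft.Extensions.FileProviders;

// In Startup.Configure: map additional extensions to MIME types
// instead of serving everything unknown.
var provider = new FileExtensionContentTypeProvider();
provider.Mappings[".myext"] = "application/octet-stream"; // hypothetical extension

app.UseStaticFiles(new StaticFileOptions()
{
    FileProvider = new PhysicalFileProvider("mypath"),
    ContentTypeProvider = provider
});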
The request comes in via Kestrel (your first log row) and then goes through the middleware pipeline until it reaches the Web API middleware.
As you can see in the second log row (Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware), at that point it had reached the StaticFileMiddleware, not the Web API middleware.
Probably the static file handler finds a file in the wwwroot folder and therefore returns this message? Or, even before checking the file system, it checks whether the path has an allowed/known file extension; since that is not the case here, it logs this second message and passes the request along.
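If that reading is right, the debug line is harmless: the static file middleware logs it and then invokes the next middleware. A sketch of the typical ordering (method names from the ASP.NET Core 1.x era, matching your 2017 logs):

public void Configure(IApplicationBuilder app)
{
    // Runs first; short-circuits only when a matching file exists,
    // otherwise it logs the DEBUG line and passes the request on.
    app.UseStaticFiles();

    // Your API controllers handle /hardware/configuration/active here.
    app.UseMvc();
}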
I'm making an Ajax.request to a remote PHP server in a Sencha Touch 2 application (wrapped in PhoneGap).
The response from the server is the following:
XMLHttpRequest cannot load http://nqatalog.negroesquisso.pt/login.php. Origin http://localhost:8888 is not allowed by Access-Control-Allow-Origin.
How can I fix this problem?
I wrote an article on this issue a while back, Cross Domain AJAX.
The easiest way to handle this if you have control of the responding server is to add a response header for:
Access-Control-Allow-Origin: *
This will allow cross-domain Ajax. In PHP, you'll want to modify the response like so:
<?php header('Access-Control-Allow-Origin: *'); ?>
You can just put the Header set Access-Control-Allow-Origin * setting in the Apache configuration or htaccess file.
It should be noted that this effectively disables CORS protection, which very likely exposes your users to attack. If you don't know that you specifically need to use a wildcard, you should not use it, and instead you should whitelist your specific domain:
<?php header('Access-Control-Allow-Origin: http://example.com') ?>
If you don't have control of the server, you can simply add this argument to your Chrome launcher: --disable-web-security (newer Chrome versions also require --user-data-dir pointing at a scratch profile for the flag to take effect).
Note that I wouldn't use this for normal "web surfing". For reference, see this post: Disable same origin policy in Chrome.
Once you use PhoneGap to actually build the application and load it onto the device, this won't be an issue.
If you're using Apache just add:
<IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "*"
</IfModule>
in your configuration. This will cause all responses from your webserver to be accessible from any other site on the internet. If you intend to only allow services on your host to be used by a specific server you can replace the * with the URL of the originating server:
Header set Access-Control-Allow-Origin "http://my.origin.host"
If you have an ASP.NET / ASP.NET MVC application, you can include this header via the Web.config file:
<system.webServer>
  ...
  <httpProtocol>
    <customHeaders>
      <!-- Enable Cross Domain AJAX calls -->
      <remove name="Access-Control-Allow-Origin" />
      <add name="Access-Control-Allow-Origin" value="*" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
This was the first question/answer that popped up for me when trying to solve the same problem using ASP.NET MVC as the source of my data. I realize this doesn't solve the PHP question, but it is related enough to be valuable.
I am using ASP.NET MVC. The blog post from Greg Brant worked for me. Ultimately, you create an attribute, [HttpHeaderAttribute("Access-Control-Allow-Origin", "*")], that you are able to add to controller actions.
For example:
using System.Web.Mvc;

public class HttpHeaderAttribute : ActionFilterAttribute
{
    public string Name { get; set; }
    public string Value { get; set; }

    public HttpHeaderAttribute(string name, string value)
    {
        Name = name;
        Value = value;
    }

    // Append the configured header after the action result has executed.
    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        filterContext.HttpContext.Response.AppendHeader(Name, Value);
        base.OnResultExecuted(filterContext);
    }
}
And then using it with:
[HttpHeaderAttribute("Access-Control-Allow-Origin", "*")]
public ActionResult MyVeryAvailableAction(string id)
{
    return Json("Some public result");
}
While Matt Mombrea's answer is correct for the server side, you might run into another problem: whitelist rejection.
You have to configure your phonegap.plist (I am using an old version of PhoneGap).
For Cordova, there might be some changes in the naming and directory layout, but the steps should be mostly the same.
First select Supporting Files > PhoneGap.plist, then under "ExternalHosts" add an entry with a value of perhaps "http://nqatalog.negroesquisso.pt".
I am using * for debugging purposes only.
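In plist XML form, the entry could look roughly like this (a sketch for the old iOS PhoneGap.plist format; the host is the one from the question):

<key>ExternalHosts</key>
<array>
    <string>http://nqatalog.negroesquisso.pt</string>
</array>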
This might be handy for anyone who needs to add an exception for both the 'www' and 'non-www' versions of a referrer:
$referrer = $_SERVER['HTTP_REFERER'];
$parts = parse_url($referrer);
$domain = $parts['host'];

if ($domain == 'google.com') {
    header('Access-Control-Allow-Origin: http://google.com');
} elseif ($domain == 'www.google.com') {
    header('Access-Control-Allow-Origin: http://www.google.com');
}
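As a hedged alternative to the Referer: browsers send an Origin header on cross-origin requests, which is a more reliable basis for this decision. A sketch with the same two hosts:

<?php
// Echo the origin back only if it is on the whitelist.
$allowed = array('http://google.com', 'http://www.google.com');
if (isset($_SERVER['HTTP_ORIGIN']) && in_array($_SERVER['HTTP_ORIGIN'], $allowed, true)) {
    header('Access-Control-Allow-Origin: ' . $_SERVER['HTTP_ORIGIN']);
    header('Vary: Origin'); // keep caches from mixing up per-origin responses
}
?>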
If you're writing a Chrome Extension and get this error, then be sure you have added the API's base URL to your manifest.json's permissions block, example:
"permissions": [
"https://itunes.apple.com/"
]
I will give you a simple solution for this one. In my case I don't have access to the server, and in that case you can change the security policy in your Google Chrome browser to allow Access-Control-Allow-Origin. This is very simple:
Create a Chrome browser shortcut.
Right-click the shortcut icon -> Properties -> Shortcut -> Target.
Simply paste in "C:\Program Files\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files --disable-web-security.
The location may differ. Now open Chrome by clicking on that shortcut.
I've run into this a few times when working with various APIs. Often a quick fix is to add "&callback=?" to the end of the URL. Sometimes the ampersand has to be a character code, and sometimes a "?": "?callback=?" (see Forecast.io API Usage with jQuery).
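What this trick actually does is switch jQuery into JSONP mode, so it only works if the API supports JSONP. A minimal sketch (the URL is illustrative):

// jQuery treats "callback=?" as a JSONP placeholder and fills in a
// generated callback name; the server must wrap its JSON accordingly.
$.getJSON('https://api.example.com/data?callback=?', function (data) {
  console.log(data);
});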
This is because of same-origin policy. See more at Mozilla Developer Network or Wikipedia.
Basically, in your example, you need to load the http://nqatalog.negroesquisso.pt/login.php page only from nqatalog.negroesquisso.pt, not localhost.
If you're under Apache, just add an .htaccess file to your directory with this content:
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Headers "content-type"
Header set Access-Control-Allow-Methods "*"
In Ruby on Rails, you can do in a controller:
headers['Access-Control-Allow-Origin'] = '*'
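If you want this on every action rather than in each controller method, a sketch using a filter (names are illustrative; older Rails versions call this after_filter):

class ApplicationController < ActionController::Base
  # Runs after every action and adds the CORS header to the response.
  after_action :allow_cross_origin

  private

  def allow_cross_origin
    headers['Access-Control-Allow-Origin'] = '*'
  end
end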
If you get this in Angular.js, then make sure you escape your port number like this:
var Project = $resource(
'http://localhost\\:5648/api/...', {'a':'b'}, {
update: { method: 'PUT' }
}
);
See here for more info on it.
You may make it work without modifying the server by having the browser add the header Access-Control-Allow-Origin: * to the HTTP OPTIONS responses.
In Chrome, use this extension. If you are on Mozilla check this answer.
We also had the same problem with a PhoneGap application tested in Chrome.
On Windows machines we used the batch file below every day before opening Chrome.
Remember that before running this you need to close every Chrome instance in Task Manager, or configure Chrome not to run in the background.
BATCH (run in cmd):
cd "D:\Program Files (x86)\Google\Chrome\Application"
chrome.exe --disable-web-security
In Ruby Sinatra
response['Access-Control-Allow-Origin'] = '*'
for every origin, or
response['Access-Control-Allow-Origin'] = 'http://yourdomain.name'
When you receive the request, you can read the origin:
var origin = (req.headers.origin || "*");
Then, when you have to respond, go with something like this:
res.writeHead(206, {
    'Access-Control-Allow-Credentials': true,
    'Access-Control-Allow-Origin': origin,
});
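Be careful with this pattern: echoing back an arbitrary Origin while also sending Access-Control-Allow-Credentials: true effectively disables CORS protection for authenticated requests. A hedged sketch that only reflects whitelisted origins (the origins are placeholders):

var allowedOrigins = ['https://app.example.com', 'https://admin.example.com'];

function corsHeaders(req) {
    var origin = req.headers.origin;
    if (origin && allowedOrigins.indexOf(origin) !== -1) {
        return {
            'Access-Control-Allow-Credentials': true,
            'Access-Control-Allow-Origin': origin,
            'Vary': 'Origin' // keep caches from reusing the response across origins
        };
    }
    return {}; // unknown origin: send no CORS headers at all
}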
I am using HAProxy to balance a cluster of servers, and I am attempting to add a maintenance page to the HAProxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. The question I have is: how can I use a maintenance page hosted remotely on an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e., without the HAProxy 'redir' server definition)?
If I have servers a, b, and c, and all of them go down for maintenance, then I want all requests to be resolved by server definition d (labeled with 'backup'), which points to a static address on S3. Note that I don't want the request paths to carry over and be evaluated on S3; it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 } -- if the number of servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up; "server name" is the arbitrary name we gave the server in the config file
# (which would mean it's the "1" server that is up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients are on HTTPS and the S3 connection is HTTP, you definitely want to strip these.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
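Putting it all together, a hedged sketch of the complete backend (backend name, server names, and addresses are placeholders; the conditions are exactly the ones discussed above):

backend web
    server a 10.0.0.1:80 check
    server b 10.0.0.2:80 check
    server c 10.0.0.3:80 check
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup

    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }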
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in global configuration, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that in spite of the fact that you want to "transparently proxy a request," "transparent proxy" isn't the correct phrase for what you're trying to do: a transparent proxy implies that the client or the server (or both) would see each other's IP addresses on the connection and think they were communicating directly, with no proxy in between, because of some skullduggery done by the proxy and/or network infrastructure to conceal the proxy's existence in the path. This is not what you're looking for.
I have Varnish installed with the default settings on my Apache web server. Apache listens on port 8080 and Varnish on 80.
I have a few downloadable files on the website with sizes of 100MB, 500MB and 1GB.
The 1GB file is not working: when you click on it, you get an "unavailable page" or "connection closed by server" error. The other two are working fine, but I'm not sure if this is the correct way to download them.
How do I make varnish bypass these files and get them directly from the web server?
Thank you.
This can be done by checking the Content-Length in the backend response; if it is larger than some size, tag the request with a marker and restart the transaction.
For example, files with a Content-Length >= 10,000,000 bytes (eight or more digits) should be piped:
sub vcl_fetch {
    ..
    if (beresp.http.Content-Length ~ "[0-9]{8,}") {
        set req.http.x-pipe-mark = "1";
        return(restart);
    }
    ..
}
Then the request is received and parsed again. Here we can check our marker and perform the pipe:
sub vcl_recv {
    ..
    if (req.http.x-pipe-mark && req.restarts > 0) {
        return(pipe);
    }
    ..
}
In Varnish 4, vcl_fetch is replaced by vcl_backend_response; see https://www.varnish-cache.org/docs/trunk/whats-new/upgrade-4.0.html.
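A hedged sketch of how the same trick could look in Varnish 4. The backend side cannot restart the transaction there, so this sketch marks the response, restarts from vcl_deliver, and pipes on the second pass (the x-large-file and x-pipe-mark header names are arbitrary markers):

sub vcl_backend_response {
    if (beresp.http.Content-Length ~ "[0-9]{8,}") {
        set beresp.http.x-large-file = "1";
        set beresp.uncacheable = true; # don't cache the oversized object
    }
}

sub vcl_deliver {
    if (resp.http.x-large-file && req.restarts == 0) {
        set req.http.x-pipe-mark = "1";
        return (restart);
    }
}

sub vcl_recv {
    if (req.http.x-pipe-mark && req.restarts > 0) {
        return (pipe);
    }
}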
I would like to mask the Gunicorn version in the Server header, or remove the header altogether.
To change the 'Server:' HTTP header, in your conf.py file:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'Microsoft-IIS/6.0'
And use an invocation along the lines of gunicorn -c conf.py wsgi:app
To remove the header altogether, you can monkey-patch gunicorn by replacing its http response class with a subclass that filters out the header. This might be harmless, but is probably not recommended. Put the following in conf.py:
from gunicorn.http import wsgi

class Response(wsgi.Response):
    def default_headers(self, *args, **kwargs):
        headers = super(Response, self).default_headers(*args, **kwargs)
        return [h for h in headers if not h.startswith('Server:')]

wsgi.Response = Response
Tested with gunicorn 18
This hasn't been clearly written here, so I can confirm that the easiest way for the latest version of Gunicorn (20.1.x) is to add the following lines into the configuration file:
import gunicorn
gunicorn.SERVER = 'undisclosed'
For newer releases (20.0.4): Create a gunicorn.conf.py file with the content below in the directory from where you will run the gunicorn command:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'My WebServer'
It's better to change it to something unique than remove it. You don't want to risk, e.g., spiders thinking you're noncompliant. Changing it to the name of software you aren't using can cause similar problems. Making it unique will prevent the same kind of assumptions ever being made. I recommend something like this:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'intentionally-undisclosed-gensym384763'
You can edit __init__.py to set SERVER_SOFTWARE to whatever you want. But I'd really like the ability to disable this with a flag so I didn't need to reapply the patch when I upgrade.
My monkey-patch-free solution involves wrapping the default_headers method:
import gunicorn.http.wsgi
from functools import wraps

def wrap_default_headers(func):
    @wraps(func)
    def default_headers(*args, **kwargs):
        # Drop the Server header from whatever the original method returns.
        return [header for header in func(*args, **kwargs)
                if not header.startswith('Server: ')]
    return default_headers

gunicorn.http.wsgi.Response.default_headers = wrap_default_headers(
    gunicorn.http.wsgi.Response.default_headers)
This doesn't directly answer the question, but it can address the issue as well, without monkey-patching Gunicorn.
If you are using Gunicorn behind a reverse proxy, as usually happens, you can set, add, remove, or replace a response header coming downstream from the backend; in our case, the Server header.
I guess every Webserver should have an equivalent feature.
For example, in Caddy 2 (currently in beta) it would be something as simple as:
https://localhost {
    reverse_proxy unix//tmp/foo.sock {
        header_down Server intentionally-undisclosed-12345678
    }
}
For completeness I still add a minimal (but fully working) Caddyfile that handles the Server header modification even in the manual http->https redirect process (Caddy 2 does the redirect automatically if you don't override it), which can be a bit tricky to figure out correctly.
http://localhost {
    # Fact: the `header` directive has lower priority than `redir` (which
    # means it's evaluated later), so the header wouldn't be changed (and
    # "Caddy" would be shown instead of the faked value).
    #
    # To override the directive ordering only for this server, instead of
    # changing the "order" option globally, put the configuration inside a
    # route directive.
    # ref.
    # https://caddyserver.com/docs/caddyfile/options
    # https://caddyserver.com/docs/caddyfile/directives/route
    # https://caddyserver.com/docs/caddyfile/directives#directive-order
    route {
        header Server intentionally-undisclosed-12345678
        redir https://{host}{uri}
    }
}

https://localhost {
    reverse_proxy unix//tmp/foo.sock {
        header_down Server intentionally-undisclosed-12345678
    }
}
To check if it works, just use curl: curl --insecure -I http://localhost and curl --insecure -I https://localhost (--insecure because the localhost certificates are automatically generated and self-signed).
It's so simple to set up that you could also consider using it in development (with gunicorn --reload), especially if it resembles your staging/production environment.