How to prevent Gunicorn from returning a 'Server' http header?

I would like to mask the version or remove the header altogether.

To change the 'Server:' http header, in your conf.py file:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'Microsoft-IIS/6.0'
And use an invocation along the lines of gunicorn -c conf.py wsgi:app
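To verify the change, you can inspect the header from a Python shell (a quick sketch; it assumes gunicorn's default bind of 127.0.0.1:8000):
import urllib.request

# Fetch any page and print the Server response header
resp = urllib.request.urlopen('http://127.0.0.1:8000/')
print(resp.headers.get('Server'))  # should show the masked value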
To remove the header altogether, you can monkey-patch gunicorn by replacing its http response class with a subclass that filters out the header. This might be harmless, but is probably not recommended. Put the following in conf.py:
from gunicorn.http import wsgi

class Response(wsgi.Response):
    def default_headers(self, *args, **kwargs):
        headers = super(Response, self).default_headers(*args, **kwargs)
        return [h for h in headers if not h.startswith('Server:')]

wsgi.Response = Response
Tested with gunicorn 18.

This hasn't been clearly stated here, so I'm going to confirm that the easiest way for the latest version of Gunicorn (20.1.x) is to add the following lines to your configuration file:
import gunicorn
gunicorn.SERVER = 'undisclosed'
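If you need a single conf.py that works across gunicorn versions, a small hedge (my own sketch, not from the docs) is to set both attributes; assigning an unused module attribute is harmless:
import gunicorn

# Newer gunicorn (>= 20.1) reportedly reads gunicorn.SERVER; older
# releases read SERVER_SOFTWARE (per the answers above).
gunicorn.SERVER = 'undisclosed'
gunicorn.SERVER_SOFTWARE = 'undisclosed'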

For newer releases (20.0.4): Create a gunicorn.conf.py file with the content below in the directory from where you will run the gunicorn command:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'My WebServer'

It's better to change it to something unique than remove it. You don't want to risk, e.g., spiders thinking you're noncompliant. Changing it to the name of software you aren't using can cause similar problems. Making it unique will prevent the same kind of assumptions ever being made. I recommend something like this:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'intentionally-undisclosed-gensym384763'
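If you want the value to be freshly unique per deployment, here is a small sketch (my own illustration, not from the original answer) that generates a random suffix at startup:
import uuid
import gunicorn

# A random 8-hex-digit suffix stands in for the 'gensym' part
gunicorn.SERVER_SOFTWARE = 'intentionally-undisclosed-' + uuid.uuid4().hex[:8]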

You can edit gunicorn's __init__.py to set SERVER_SOFTWARE to whatever you want. But I'd really like the ability to disable this with a flag so I wouldn't need to reapply the patch when I upgrade.

My monkey-patch-free solution involves wrapping the default_headers method:
import gunicorn.http.wsgi
from functools import wraps

def wrap_default_headers(func):
    @wraps(func)
    def default_headers(*args, **kwargs):
        return [header for header in func(*args, **kwargs)
                if not header.startswith('Server: ')]
    return default_headers

gunicorn.http.wsgi.Response.default_headers = wrap_default_headers(
    gunicorn.http.wsgi.Response.default_headers)

This doesn't directly answer the question, but it can address the issue as well, without monkey-patching gunicorn.
If you are running gunicorn behind a reverse proxy, as usually happens, you can set, add, remove, or replace a response header coming downstream from the backend, in our case the Server header.
I would guess every web server has an equivalent feature.
For example, in Caddy 2 (currently in beta) it would be something as simple as:
https://localhost {
    reverse_proxy unix//tmp/foo.sock {
        header_down Server intentionally-undisclosed-12345678
    }
}
For completeness, I'll still add a minimal (but fully working) Caddyfile that handles the Server header modification even in the manual http->https redirect process (Caddy 2 does the redirect automatically if you don't override it), which can be a bit tricky to figure out correctly.
http://localhost {
    # Fact: the `header` directive has less priority than `redir` (which
    # means it's evaluated later), so the header wouldn't be changed (and
    # `Caddy` would be shown instead of the faked value).
    #
    # To override the directive ordering only for this server, instead of
    # changing the "order" option globally, put the configuration inside a
    # route directive.
    # ref.
    # https://caddyserver.com/docs/caddyfile/options
    # https://caddyserver.com/docs/caddyfile/directives/route
    # https://caddyserver.com/docs/caddyfile/directives#directive-order
    route {
        header Server intentionally-undisclosed-12345678
        redir https://{host}{uri}
    }
}

https://localhost {
    reverse_proxy unix//tmp/foo.sock {
        header_down Server intentionally-undisclosed-12345678
    }
}
To check that it works, just use curl: curl --insecure -I http://localhost and curl --insecure -I https://localhost (--insecure because localhost certs are automatically generated as self-signed).
It's so simple to set up that you could also consider using it in development (with gunicorn --reload), especially if it resembles your staging/production environment.

angular2 call remote rest api

I'm making an Ajax.request to a remote PHP server in a Sencha Touch 2 application (wrapped in PhoneGap).
The response from the server is the following:
XMLHttpRequest cannot load http://nqatalog.negroesquisso.pt/login.php. Origin http://localhost:8888 is not allowed by Access-Control-Allow-Origin.
How can I fix this problem?
I wrote an article on this issue a while back, Cross Domain AJAX.
The easiest way to handle this if you have control of the responding server is to add a response header for:
Access-Control-Allow-Origin: *
This will allow cross-domain Ajax. In PHP, you'll want to modify the response like so:
<?php header('Access-Control-Allow-Origin: *'); ?>
You can just put the Header set Access-Control-Allow-Origin * setting in the Apache configuration or htaccess file.
It should be noted that this effectively disables CORS protection, which very likely exposes your users to attack. If you don't know that you specifically need to use a wildcard, you should not use it, and instead you should whitelist your specific domain:
<?php header('Access-Control-Allow-Origin: http://example.com'); ?>
If you don't have control of the server, you can simply add this argument to your Chrome launcher: --disable-web-security.
Note that I wouldn't use this for normal "web surfing". For reference, see this post: Disable same origin policy in Chrome.
Once you use PhoneGap to actually build the application and load it onto the device, this won't be an issue.
If you're using Apache, just add:
<IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "*"
</IfModule>
in your configuration. This will cause all responses from your webserver to be accessible from any other site on the internet. If you intend to only allow services on your host to be used by a specific server you can replace the * with the URL of the originating server:
Header set Access-Control-Allow-Origin: http://my.origin.host
If you have an ASP.NET / ASP.NET MVC application, you can include this header via the Web.config file:
<system.webServer>
  ...
  <httpProtocol>
    <customHeaders>
      <!-- Enable Cross Domain AJAX calls -->
      <remove name="Access-Control-Allow-Origin" />
      <add name="Access-Control-Allow-Origin" value="*" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
This was the first question/answer that popped up for me when trying to solve the same problem using ASP.NET MVC as the source of my data. I realize this doesn't solve the PHP question, but it is related enough to be valuable.
I am using ASP.NET MVC. The blog post from Greg Brant worked for me. Ultimately, you create an attribute, [HttpHeaderAttribute("Access-Control-Allow-Origin", "*")], that you are able to add to controller actions.
For example:
public class HttpHeaderAttribute : ActionFilterAttribute
{
    public string Name { get; set; }
    public string Value { get; set; }

    public HttpHeaderAttribute(string name, string value)
    {
        Name = name;
        Value = value;
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        filterContext.HttpContext.Response.AppendHeader(Name, Value);
        base.OnResultExecuted(filterContext);
    }
}
And then using it with:
[HttpHeaderAttribute("Access-Control-Allow-Origin", "*")]
public ActionResult MyVeryAvailableAction(string id)
{
    return Json("Some public result");
}
While Matt Mombrea is correct for the server side, you might run into another problem, which is whitelisting rejection.
You have to configure your phonegap.plist. (I am using an old version of PhoneGap.)
For Cordova, there might be some changes in the naming and directory, but the steps should be mostly the same.
First select Supporting Files > PhoneGap.plist,
then under "ExternalHosts",
add an entry with a value of, say, "http://nqatalog.negroesquisso.pt".
I am using * for debugging purposes only.
This might be handy for anyone who needs to make an exception for both the 'www' and 'non-www' versions of a referrer:
$referrer = $_SERVER['HTTP_REFERER'];
$parts = parse_url($referrer);
$domain = $parts['host'];

if ($domain == 'google.com')
{
    header('Access-Control-Allow-Origin: http://google.com');
}
else if ($domain == 'www.google.com')
{
    header('Access-Control-Allow-Origin: http://www.google.com');
}
If you're writing a Chrome Extension and get this error, then be sure you have added the API's base URL to your manifest.json's permissions block, example:
"permissions": [
"https://itunes.apple.com/"
]
I will give you a simple solution for this one. In my case I don't have access to the server. In that case you can change the security policy in your Google Chrome browser to allow Access-Control-Allow-Origin. This is very simple:
Create a Chrome browser shortcut.
Right-click the shortcut icon -> Properties -> Shortcut -> Target.
Simply paste in "C:\Program Files\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files --disable-web-security.
The location may differ. Now open Chrome by clicking on that shortcut.
I've run into this a few times when working with various APIs. Often a quick fix is to add "&callback=?" to the end of a string. Sometimes the ampersand has to be a character code, and sometimes a "?": "?callback=?" (see Forecast.io API Usage with jQuery)
This is because of same-origin policy. See more at Mozilla Developer Network or Wikipedia.
Basically, in your example, you need to load the http://nqatalog.negroesquisso.pt/login.php page only from nqatalog.negroesquisso.pt, not localhost.
If you're under Apache, just add an .htaccess file to your directory with this content:
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Headers "content-type"
Header set Access-Control-Allow-Methods "*"
In Ruby on Rails, you can do in a controller:
headers['Access-Control-Allow-Origin'] = '*'
If you get this in Angular.js, then make sure you escape your port number like this:
var Project = $resource(
    'http://localhost\\:5648/api/...', {'a':'b'}, {
        update: { method: 'PUT' }
    }
);
See here for more info on it.
You may make it work without modifying the server by making the browser include the header Access-Control-Allow-Origin: * in the HTTP OPTIONS responses.
In Chrome, use this extension. If you are on Mozilla check this answer.
We also had the same problem with a PhoneGap application tested in Chrome.
On Windows machines we used the batch file below every day before opening Chrome.
Remember that before running this you need to close every instance of Chrome in the Task Manager, or configure Chrome not to run in the background.
BATCH: (use cmd)
"D:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security
In Ruby Sinatra
response['Access-Control-Allow-Origin'] = '*'
for everyone or
response['Access-Control-Allow-Origin'] = 'http://yourdomain.name'
When you receive the request you can do:
var origin = (req.headers.origin || "*");
Then when you have to respond, go with something like this:
res.writeHead(
    206,
    {
        'Access-Control-Allow-Credentials': true,
        'Access-Control-Allow-Origin': origin,
    }
);

HaProxy Transparent Proxy To AWS S3 Static Website Page

I am using haproxy to balance a cluster of servers, and I am attempting to add a maintenance page to the haproxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. The question I have is: how can I use a maintenance page hosted remotely on an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e., via the haproxy 'redir' server definition)?
If I have servers: a, b, c. All servers go down for maintenance then I want all requests to be resolved by server definition d (which is labeled with 'backup') to a static address on S3. Note, that I don't want paths to carry over and be evaluated on s3, it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 } -- if the number of servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up; "server name" is the arbitrary name we gave the server in the config file
# (which would mean it's the "1" server that is up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients are HTTPS and the S3 connection is HTTP, you definitely want to strip these.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
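Putting it all together, the backend might look something like this (a sketch assembled from the directives above; the a/b/c server addresses and the backend name are placeholders):
backend www_pool
    server a 10.0.0.11:80 check
    server b 10.0.0.12:80 check
    server c 10.0.0.13:80 check
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }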
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in global configuration, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that in spite of the fact that you want to "transparently proxy a request," I don't think the phrase "transparent proxy" is the correct one for what you're trying to do. A "transparent proxy" implies that the client or the server (or both) would see each other's IP addresses on the connection and think they were communicating directly, with no proxy in between, because of some skullduggery done by the proxy and/or network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for.

Apache, LDAP and WSGI encoding issue

I am using Apache 2.4.7 with mod_wsgi 3.4 on Ubuntu 14.04.2 (x86_64) and python 3.4.0. My python app relies on apache to perform user authentication against our company’s LDAP server (MS Active Directory 2008). It also passes some additional LDAP data to the python app using the OS environment. In the apache config, I query the LDAP like so:
…
AuthLDAPURL "ldap://server:389/DC=company,DC=lokal?sAMAccountName,sn,givenName,mail,memberOf?sub?(objectClass=*)"
AuthLDAPBindDN …
AuthLDAPBindPassword …
AuthLDAPRemoteUserAttribute sAMAccountName
AuthLDAPAuthorizePrefix AUTHENTICATE_
…
This passes some user data to my WSGI script where I handle the info as follows:
# Make sure the packages from the virtualenv are found
import site
site.addsitedir('/home/user/.virtualenvs/ispot-cons/lib/python3.4/site-packages')

# Patch path for app (so that libispot can be found)
import sys
sys.path.insert(0, '/var/www/my-app/')

import os
from libispot.web import app as _application

def application(environ, start_response):
    os.environ['REMOTE_USER'] = environ.get('REMOTE_USER', "")
    os.environ['REMOTE_USER_FIRST_NAME'] = environ.get('AUTHENTICATE_GIVENNAME', "")
    os.environ['REMOTE_USER_LAST_NAME'] = environ.get('AUTHENTICATE_SN', "")
    os.environ['REMOTE_USER_EMAIL'] = environ.get('AUTHENTICATE_MAIL', "")
    os.environ['REMOTE_USER_GROUPS'] = environ.get('AUTHENTICATE_MEMBEROF', "")
    return _application(environ, start_response)
I can then access this info in my python app using os.environ.get(…). (BTW: If you have a more elegant solution, please let me know!)
The problem is that some of the user names contain special characters (German umlauts, e.g., äöüÄÖÜ) that are not encoded correctly. So, for example, the name Tölle arrives in my python app as TÃ¶lle.
Obviously, this is an encoding problem, because
$ echo "TÃ¶lle" | iconv --from utf-8 --to latin1
gives me the correct Tölle.
Another observation that might help: in my apache logs I found the character ü represented as \xc3\x83\xc2\xbc.
I told my Apache in /etc/apache2/envvars to use LANG=de_DE.UTF-8 and python 3 is utf-8 aware as well. I can’t seem to specify anything about my LDAP server. So my question is: where is the encoding getting mixed up and how do I mend it?
It is bad practice to copy the values to os.environ on each request, as this will fail miserably if the WSGI server runs with a multithreaded configuration, with concurrent requests interfering with each other. Look at thread locals instead.
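A minimal sketch of the thread-local idea (the names here are illustrative, not from the original app):
import threading

# Each thread serving a request sees its own attributes on this object,
# so concurrent requests no longer clobber the process-wide os.environ.
_request = threading.local()

def application(environ, start_response):
    _request.user = environ.get('REMOTE_USER', "")
    _request.email = environ.get('AUTHENTICATE_MAIL', "")
    # elsewhere in the app, read _request.user instead of os.environ
    return _application(environ, start_response)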
As to the issue of encoded data from LDAP, if I understand the problem, you would need to do:
"TÃ¶lle".encode('latin-1').decode('utf-8')

How to handle https traffic using libmproxy?

I want to implement a proxy server that intercepts both http and https requests. I came across libmproxy (http://mitmproxy.org/doc/scripting/libmproxy.html), which is SSL-capable. I started with the simplest proxy, which just prints the headers of all requests and responses and forwards them to clients and servers normally.
#!/usr/bin/env python
from libmproxy import controller, proxy
import os

class Master(controller.Master):
    def __init__(self, server):
        controller.Master.__init__(self, server)
        self.stickyhosts = {}

    def run(self):
        try:
            return controller.Master.run(self)
        except KeyboardInterrupt:
            self.shutdown()

    def handle_request(self, msg):
        print "handle request.................................................."
        print msg.headers
        msg.reply()

    def handle_response(self, msg):
        print "handle response................................................."
        print msg.headers
        msg.reply()

config = proxy.ProxyConfig(
    cacert = os.path.expanduser("~/.mitmproxy/mitmproxy-ca.pem")
)
server = proxy.ProxyServer(config, 1234)
m = Master(server)
m.run()
Then I configure the http and ssl proxy in Firefox to 127.0.0.1 port 1234. http seems to work fine, as I can see all the headers being printed out. However, when the browser sends https requests, the proxy server does not print anything at all, and the browser displays a "the connection was interrupted" error.
Further investigation reveals that the https requests go through the proxy server but not controller.Master. I see that proxy.ProxyHandler.establish_ssl() is called when there is an https request, but the request does not go through controller.Master.handle_request(). Despite establish_ssl() being called, the browser does not seem to get any response back. I tested this with https://www.google.com.
First, how can I make proxy.ProxyHandler work properly with https requests/responses? Second, how can I modify controller.Master so that it can intercept https requests? I'm also open to other tools on top of which I can build a custom http/https proxy server.
You need to install the mitmproxy CA in the browser you are testing with.
Please see details here ("Installing the mitmproxy CA" section):
http://mitmproxy.org/doc/ssl.html
This solved the problem for me.

Twisted listenSSL virtualhosts

Currently using a really simple Twisted NameVirtualHost coupled with some JSON config files to serve really basic content in one Site object. The resources being served by Twisted are all WSGI objects built in flask.
I was wondering how to go about wrapping the connections to these domains with an SSLContext. Since reactor.listenSSL takes one and only one context, it isn't readily apparent how to give each domain/subdomain its own crt/key pair. Is there any way to set up named virtual hosting with SSL for each domain that doesn't require proxying? I can't find any Twisted examples that use NameVirtualHost with SSL, and the only thing I could get to work is hooking the reactor listening on port 443 with only one domain's context.
I was wondering if anyone has attempted this?
My simple server without any SSL processing:
https://github.com/DeaconDesperado/twsrv/blob/master/service.py
TLS (the name for the modern protocol which replaces SSL) only very recently supports the feature you're looking for. The feature is called Server Name Indication (or SNI). It is supported by modern browsers on modern platforms, but not some older but still widely used platforms (see the wikipedia page for a list of browsers with support).
Twisted has no specific, built-in support for this. However, it doesn't need any. pyOpenSSL, upon which Twisted's SSL support is based, does support SNI.
The set_tlsext_servername_callback pyOpenSSL API gives you the basic mechanism to build the behavior you want. This lets you define a callback which is given access to the server name requested by the client. At this point, you can specify the key/certificate pair you want to use for the connection. You can find an example demonstrating the use of this API in pyOpenSSL's examples directory.
Here's an excerpt from that example to give you the gist:
from OpenSSL.SSL import Context, TLSv1_METHOD

def pick_certificate(connection):
    try:
        key, cert = certificates[connection.get_servername()]
    except KeyError:
        pass
    else:
        new_context = Context(TLSv1_METHOD)
        new_context.use_privatekey(key)
        new_context.use_certificate(cert)
        connection.set_context(new_context)

server_context = Context(TLSv1_METHOD)
server_context.set_tlsext_servername_callback(pick_certificate)
You can incorporate this approach into a customized context factory and then supply that context factory to the listenSSL call.
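A minimal sketch of such a context factory (the class name and the certificates mapping are illustrative; it assumes pyOpenSSL key/cert objects):
from OpenSSL.SSL import Context, TLSv1_METHOD

class SNIContextFactory(object):
    """Pick a certificate per connection based on the requested server name."""

    def __init__(self, certificates):
        # certificates: dict mapping server name -> (PKey, X509) pair
        self._certificates = certificates

    def getContext(self):
        ctx = Context(TLSv1_METHOD)
        ctx.set_tlsext_servername_callback(self._pick_certificate)
        return ctx

    def _pick_certificate(self, connection):
        pair = self._certificates.get(connection.get_servername())
        if pair is not None:
            key, cert = pair
            new_context = Context(TLSv1_METHOD)
            new_context.use_privatekey(key)
            new_context.use_certificate(cert)
            connection.set_context(new_context)
You could then pass it to the listener, e.g. reactor.listenSSL(443, site, SNIContextFactory(certs)), where site and certs are whatever your application already has.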
Just to add some closure to this one, and for future searches, here is the example code for the echo server from the examples that prints the SNI:
from twisted.internet import ssl, reactor
from twisted.internet.protocol import Factory, Protocol

class Echo(Protocol):
    def dataReceived(self, data):
        self.transport.write(data)

def pick_cert(connection):
    print('Received SNI: ', connection.get_servername())

if __name__ == '__main__':
    factory = Factory()
    factory.protocol = Echo

    with open("keys/ca.pem") as certAuthCertFile:
        certAuthCert = ssl.Certificate.loadPEM(certAuthCertFile.read())

    with open("keys/server.key") as keyFile:
        with open("keys/server.crt") as certFile:
            serverCert = ssl.PrivateCertificate.loadPEM(
                keyFile.read() + certFile.read())

    contextFactory = serverCert.options(certAuthCert)
    ctx = contextFactory.getContext()
    ctx.set_tlsext_servername_callback(pick_cert)

    reactor.listenSSL(8000, factory, contextFactory)
    reactor.run()
And because getting OpenSSL to work can always be tricky, here is the OpenSSL statement you can use to connect to it:
openssl s_client -connect localhost:8000 -servername hello_world -cert keys/client.crt -key keys/client.key
Running the above python code against pyOpenSSL==0.13, and then running the s_client command above, will print this to the screen:
('Received SNI: ', 'hello_world')
There is now a txsni project that takes care of finding the right certificate per request: https://github.com/glyph/txsni