Strict-Transport-Security not working, or am I testing it wrong? - apache

I read an article about the HTTP Strict-Transport-Security header at https://www.globalsign.com/en/blog/what-is-hsts-and-how-do-i-use-it/ and I am trying to see whether our production Jetty server can use it. Before modifying the production code, I wrote a simple Java program that runs Jetty and adds the header. The curl call below also dumps the header, but in my browser I can still access my web page via plain HTTP.
Here is the sample code I wrote:
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BlockingServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("application/json");
        response.setStatus(HttpServletResponse.SC_OK);
        response.addHeader("X-Xss-Protection", "1; mode=block");
        response.addHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains; preload");
        response.getWriter().println("{ \"status\": \"ok\"}");
    }
}
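(For context: the embedded-Jetty wiring is not shown above; a minimal setup consistent with the curl call below, with the server on port 8090 and the servlet mapped to /status, would be roughly the following sketch.)

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletHandler;

public class JettyMain {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8090);              // plain-HTTP connector on port 8090
        ServletHandler handler = new ServletHandler();
        handler.addServletWithMapping(BlockingServlet.class, "/status");
        server.setHandler(handler);
        server.start();
        server.join();                                 // block until the server exits
    }
}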
Now here is my curl call and the output:
curl http://localhost:8090/status -I
HTTP/1.1 200 OK
Date: Fri, 13 Apr 2018 01:28:20 GMT
Content-Type: application/json
X-Xss-Protection: 1; mode=block
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Content-Length: 18
Server: Jetty(9.4.3.v20170317)
Also, when I type http://localhost:8090/status in my browser, I can still see the JSON output. After adding the header I expected plain HTTP to stop working, since my server is not running HTTPS at all; if I explicitly type https I cannot access the URL.
What is wrong in my understanding?

Related

How to assign Set-Cookie response parameter to Secure using .NET Core 2.1

I am having issues getting the Set-Cookie response header marked Secure in my .NET Core 2.1 application.
In my Startup.cs I have set CookieSecurePolicy to Always within ConfigureServices():
services.AddSession(options =>
{
    options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
});
However, when I run cURL, the Set-Cookie header still displays my IP and path with no Secure attribute, like so:
lewallen$ cURL -il https://example.com
HTTP/1.1 200 OK
Date: Wed, 11 Sep 2019 17:47:54 GMT
Content-Type: text/html; charset=utf-8
Server: Kestrel
Transfer-Encoding: chunked
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
Set-Cookie: SERVERID=IP ADDRESS EXPOSED HERE; path=/ <-- this should be secure
Cache-control: private
I've also tried adding it to my services.Configure() as well:
services.Configure<CookiePolicyOptions>(options =>
{
    options.CheckConsentNeeded = context => false;
    options.MinimumSameSitePolicy = SameSiteMode.None;
    options.Secure = CookieSecurePolicy.Always; <-- Placed here
});
But the Set-Cookie header still does not display as Secure. What am I missing? Thank you!
The Secure flag indicates that the cookie must not be transmitted over plain HTTP. CookieSecurePolicy.Always means that it will only be transmitted when the connection is secure (HTTPS).
There is no built-in additional encryption for actual cookie values. If you are using HTTPS, then they are encrypted by virtue of being in the header, which is itself encrypted, at least in transit. Once on the client machine, though, the cookie exists exactly as you sent it.
If you want to actually encrypt the value, you need to do that before you set the cookie, and then literally set the cookie with the pre-encrypted value. Then, when you read the cookie, you will need to decrypt it yourself as well.
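As an illustration of that encrypt-before-set / decrypt-after-read idea (the concept is language-agnostic; this is a sketch in Java using AES-GCM, and in ASP.NET Core specifically you would more likely reach for the Data Protection API instead of rolling your own):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class CookieCrypto {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Encrypt a cookie value before putting it into Set-Cookie.
    static String encrypt(SecretKey key, String plaintext) throws Exception {
        byte[] iv = new byte[12];                       // fresh random nonce per cookie
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] out = ByteBuffer.allocate(iv.length + ciphertext.length)
                .put(iv).put(ciphertext).array();
        return Base64.getUrlEncoder().withoutPadding().encodeToString(out);
    }

    // Decrypt the value when reading the cookie back from a request.
    static String decrypt(SecretKey key, String encoded) throws Exception {
        byte[] data = Base64.getUrlDecoder().decode(encoded);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, data, 0, 12));
        return new String(cipher.doFinal(data, 12, data.length - 12), StandardCharsets.UTF_8);
    }
}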

UseExceptionHandler() middleware in .NET Core MVC prevents response headers setting

Startup.cs:
// ...
app.Use(async (context, next) =>
{
    context.Response.Headers.Add("X-Frame-Options", "DENY");
    context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
    context.Response.Headers.Add("Server", "ololo");
    await next();
});
if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); }
else { app.UseExceptionHandler("/Home/Error"); }
app.UseStaticFiles();
app.UseAuthentication();
// ...
When everything is fine, I get the following headers, as expected:
HTTP/1.1 200 OK
Connection: close
Date: Mon, 30 Jul 2018 18:39:33 GMT
Content-Type: text/html; charset=utf-8
Server: ololo
Transfer-Encoding: chunked
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
So Server, X-Frame-Options and X-Content-Type-Options headers are overridden.
But if I have an unhandled exception in my code, then I get these headers:
HTTP/1.1 500 Internal Server Error
Connection: close
Date: Mon, 30 Jul 2018 18:35:49 GMT
Content-Type: text/html; charset=utf-8
Server: Kestrel
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Expires: -1
So headers are not overridden.
Why is that? Is it by design? Does the exception middleware work differently, so that the request doesn't go through the whole pipeline?
dotnet --info
.NET Command Line Tools (2.1.4)
Product Information:
Version: 2.1.4
Commit SHA-1 hash: 5e8add2190
Microsoft .NET Core Shared Framework Host
Version : 2.0.5
Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54
A more reliable way to set the headers in any case would be to use the OnStarting callback. See the docs:
Adds a delegate to be invoked just before response headers will be sent to the client.
public async Task Invoke(HttpContext context)
{
    context.Response.OnStarting(() =>
    {
        context.Response.Headers.Add("X-Frame-Options", "DENY");
        context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
        context.Response.Headers.Add("Server", "ololo");
        return Task.CompletedTask;
    });
    await _next(context);
}
OnStarting is invoked just before the response headers are written to the wire, which allows you to set the headers after the exception middleware has handled the exception.

CSS is not always gzipped - why?

In my Firefox or Chrome, if I check the HTTP headers, the result always includes Content-Encoding: gzip. But I have customers reporting that they see "transfer-encoding: chunked" instead and the response is not gzipped.
http://www.example.com/public/css/style.min.css
If I or the customer run an online gzip compression check, it confirms that gzip is active:
https://checkgzipcompression.com = gzip!
But if I use a checker like this one, http://onlinecurl.com/, I also get transfer-encoding: chunked.
Request:
GET /style/css.css HTTP/1.1
Host: www.example.com
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
User-Agent: ...
Accept: */*
Referer: http://www.example.com/
Accept-Encoding: gzip, deflate
Accept-Language: ...
Cookie: ...
Response:
HTTP/1.1 200 OK
Age: 532948
cache-control: public, max-age=604800
Content-Type: text/css
Date: Wed, 28 Jun 2017 12:35:07 GMT
ETag: "5349e8d595dfd21:0"
Last-Modified: Wed, 07 Jun 2017 13:56:17 GMT
Server: Microsoft-IIS/7.5
Vary: X-UA,Accept-Encoding, User-Agent
X-Cache: HIT
X-Cache-Hits: 6327
X-CacheReason: Static-js-css.
X-Powered-By: ASP.NET
X-Served-By: ip-xxx-xxx-xxx-xx.name.xxx
x-stale: true
X-UA-Device: pc
X-Varnish: 993020034 905795837
X-Varnish-beresp-grace: 43200.000
X-Varnish-beresp-status: 200
X-Varnish-beresp-ttl: 604800.000
transfer-encoding: chunked
Connection: keep-alive
Why are some responses not gzipped when they should be? This is my Varnish config (the part relevant to gzip):
if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|flv|swf)$") {
        # No point in compressing these
        remove req.http.Accept-Encoding;
    } elsif (req.http.Accept-Encoding ~ "gzip") {
        set req.http.Accept-Encoding = "gzip";
    } elsif (req.http.Accept-Encoding ~ "deflate") {
        set req.http.Accept-Encoding = "deflate";
    } else {
        # unknown algorithm
        remove req.http.Accept-Encoding;
    }
}
# Enabling GZIP
if (beresp.http.Content-Type ~ "(text/css|application/x-javascript|application/javascript)") {
    set beresp.do_gzip = true;
}
if (beresp.http.Content-Encoding ~ "gzip") {
    if (beresp.http.Content-Length == "0") {
        unset beresp.http.Content-Encoding;
    }
}
set beresp.http.Vary = regsub(beresp.http.Vary, "(?i)^(.*?)X-Forwarded-URI,?(.*)$", "\1\2");
set beresp.http.Vary = regsub(beresp.http.Vary, "(?i)^(.*?)User-Agent,?(.*)$", "\1\2");
set beresp.http.Vary = regsub(beresp.http.Vary, "^(.*?),?$", "X-UA,\1");
set beresp.http.Vary = regsub(beresp.http.Vary, "^(.*?),?$", "\1");
Any ideas? Thank you.
Responses will only be gzipped if the request indicates that it can accept a gzipped response, via the Accept-Encoding header. So perhaps your online curl tool is not sending that header, and it may be the same for your clients who are seeing this. Do you really have customers reporting that they are not getting gzipped responses?
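If you want to check that behaviour yourself, a small probe along these lines will show whether the response encoding tracks the request header (a sketch; the URL is the one from the question, and note that the desktop JDK's HttpURLConnection does not send Accept-Encoding on its own):

import java.net.HttpURLConnection;
import java.net.URL;

public class GzipProbe {
    public static void main(String[] args) throws Exception {
        probe(true);   // advertise gzip support
        probe(false);  // send no Accept-Encoding at all
    }

    static void probe(boolean acceptGzip) throws Exception {
        URL url = new URL("http://www.example.com/public/css/style.min.css");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        if (acceptGzip) {
            conn.setRequestProperty("Accept-Encoding", "gzip");
        }
        // Content-Encoding should be "gzip" only when the request advertised it.
        System.out.println("Accept-Encoding sent: " + acceptGzip
                + " -> Content-Encoding: " + conn.getHeaderField("Content-Encoding"));
        conn.disconnect();
    }
}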
Update
Ah, I see what you're doing now. Are you using a recent version of Varnish? There's no need to do all this yourself any more; Varnish handles it all natively. All you need to do is set do_gzip to true for the content types you want compressed, and Varnish takes care of the rest, including the Accept-Encoding header. See the documentation here.
So just remove all of your gzip/encoding related code except the part directly under # Enabling GZIP:
# Enabling GZIP
if (beresp.http.Content-Type ~ "(text/css|application/x-javascript|application/javascript)") {
    set beresp.do_gzip = true;
}
That will probably get everything working; it works fine for me that way. The best amount of VCL is as little as possible: Varnish is very good at handling things itself. Don't forget to restart Varnish or otherwise clear the cache for this site after making the change.
In case it's useful, I use the following VCL for this:
if (
    beresp.status == 200
    && beresp.http.content-type ~ "\b((text/(html|plain|css|javascript|xml|xsl))|(application/(javascript|xml|xhtml\+xml)))\b"
) {
    set beresp.do_gzip = true;
}
This checks for more content types that can benefit from compression, including HTML. I don't bother with application/x-javascript as it's ancient and no longer used.
On another note, are you sure you need to be modifying the Vary header in the way that you are doing there?

Why does the alps (or profile) path in spring-data-rest return a JSON body with a non-matching Content-Type: text/html header?

Right now the default Content-Type of my spring-data-rest (spring-boot 1.4.3.RELEASE) provided controllers is application/hal+json, which makes sense. If I use Chrome, I get application/hal+json for the root of my application, since Chrome uses an Accept header of "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8". However, the /profile (formerly /alps) URLs return text/html even though the response body is JSON, so the Content-Type does not match the body. If you specifically ask for only application/json, then you get the correct response header.
Here is the incorrectly working case (returns text/html when the document/body returned is NOT text/html):
$ http --verbose "http://localhost:8080/v1/profile/eldEvents" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
GET /v1/profile/eldEvents HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/0.9.2
HTTP/1.1 200
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Location, X-Auth, Authorization
Access-Control-Allow-Methods: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, PATCH
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Location
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: text/html;charset=UTF-8
Date: Fri, 03 Feb 2017 01:16:14 GMT
Expires: 0
Pragma: no-cache
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
Transfer-Encoding: chunked
X-Application-Context: application
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
{
  "alps" : {
    "version" : "1.0",
    "descriptors" : [ {
      "id" : "eldEvent-representation",
      "href" : "http://localhost:8080/v1/profile/eldEvents",
      "descriptors" : [ {
        "name" : "sequenceId",
        "type" : "SEMANTIC"
      }, {
...
I cut out the rest of the response; you can see from the above that it is JSON data.
I believe the correct Content-Type for the above request should be something similar to "application/json".
If this is still relevant for you: I solved this by manually overriding the response content type for all requests against /profile/* that have no content type defined.
import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;
import org.springframework.util.AntPathMatcher;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class ProfileContentTypeFilter extends OncePerRequestFilter
{
    private static final AntPathMatcher matcher = new AntPathMatcher();

    @Override
    protected void doFilterInternal (HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException
    {
        if (request.getContentType() == null && matcher.match("/profile/*", request.getRequestURI()))
        {
            // Override response content type for unspecified requests on profile endpoints
            response.setContentType(MediaType.APPLICATION_JSON_VALUE);
        }
        filterChain.doFilter(request, response);
    }
}
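If component scanning doesn't pick the filter up, or you want to scope it explicitly, one possible way to register it in Spring Boot is via a FilterRegistrationBean. A sketch, assuming the filter class above (drop its @Component annotation in that case so it isn't registered twice):

import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FilterConfig {
    @Bean
    public FilterRegistrationBean profileContentTypeFilter() {
        // Explicit registration, limited to the profile endpoints.
        FilterRegistrationBean registration = new FilterRegistrationBean(new ProfileContentTypeFilter());
        registration.addUrlPatterns("/profile/*");
        return registration;
    }
}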

How to correctly handle multiple Set-Cookie headers in Hyper?

I'm using Hyper to send HTTP requests, but when multiple cookies are included in the response, Hyper will combine them into one, which then breaks the parsing procedure.
For example, here's a simple PHP script:
<?php
setcookie("hello", "world");
setcookie("foo", "bar");
Response using curl:
$ curl -sLD - http://local.example.com/test.php
HTTP/1.1 200 OK
Date: Sat, 24 Dec 2016 09:24:04 GMT
Server: Apache/2.4.25 (Unix) PHP/7.0.14
X-Powered-By: PHP/7.0.14
Set-Cookie: hello=world
Set-Cookie: foo=bar
Content-Length: 0
Content-Type: text/html; charset=UTF-8
However for the following Rust code:
let client = Client::new();
let response = client.get("http://local.example.com/test.php")
    .send()
    .unwrap();
println!("{:?}", response);

for header in response.headers.iter() {
    println!("{}: {}", header.name(), header.value_string());
}
...the output will be:
Response { status: Ok, headers: Headers { Date: Sat, 24 Dec 2016 09:31:54 GMT, Server: Apache/2.4.25 (Unix) PHP/7.0.14, X-Powered-By: PHP/7.0.14, Set-Cookie: hello=worldfoo=bar, Content-Length: 0, Content-Type: text/html; charset=UTF-8, }, version: Http11, url: "http://local.example.com/test.php", status_raw: RawStatus(200, "OK"), message: Http11Message { is_proxied: false, method: None, stream: Wrapper { obj: Some(Reading(SizedReader(remaining=0))) } } }
Date: Sat, 24 Dec 2016 09:31:54 GMT
Server: Apache/2.4.25 (Unix) PHP/7.0.14
X-Powered-By: PHP/7.0.14
Set-Cookie: hello=worldfoo=bar
Content-Length: 0
Content-Type: text/html; charset=UTF-8
This seems really weird to me. I used Wireshark to capture the response, and there are two Set-Cookie headers in it. I also checked the Hyper documentation but got no clue...
I noticed Hyper internally uses a VecMap<HeaderName, Item> to store the headers. So do they concatenate them into one? Then how should I split them into individual cookies afterwards?
I think that Hyper prefers to keep the cookies together in order to make it easier to do some extra stuff with them, like checking a cryptographic signature with CookieJar (cf. this implementation outline).
Another reason might be to keep the API simple. Headers in Hyper are indexed by type and you can only get a single instance of that type with Headers::get.
In Hyper, you'd usually access a header by using a corresponding type. In this case the type is SetCookie. For example:
if let Some(&SetCookie(ref cookies)) = response.headers.get() {
    for cookie in cookies.iter() {
        println!("Got a cookie. Name: {}. Value: {}.", cookie.name, cookie.value);
    }
}
Accessing the raw header value of Set-Cookie makes less sense, because then you'd have to reimplement proper parsing of quotes and cookie attributes yourself (cf. RFC 6265, section 4.1).
P.S. Note that in hyper 0.10 the cookie is no longer parsed, because the crate that was used for the parsing pulls in the openssl dependency hell.
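For contrast, and purely as an illustration in Java rather than Rust: a client that exposes repeated headers as a list keeps the two Set-Cookie lines distinct, which is what RFC 6265 effectively requires, since Set-Cookie values cannot be safely joined with commas. A sketch against the PHP example above:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class SetCookieProbe {
    public static void main(String[] args) throws Exception {
        // Same URL as the PHP example above.
        URL url = new URL("http://local.example.com/test.php");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // getHeaderFields() keeps repeated headers as separate list entries,
        // so each Set-Cookie line comes back on its own.
        Map<String, List<String>> headers = conn.getHeaderFields();
        for (String value : headers.getOrDefault("Set-Cookie", Collections.emptyList())) {
            System.out.println("Set-Cookie: " + value);
        }
        conn.disconnect();
    }
}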