How to correctly handle multiple Set-Cookie headers in Hyper?

I'm using Hyper to send HTTP requests, but when multiple cookies are included in the response, Hyper combines them into one, which then breaks parsing.
For example, here's a simple PHP script:
<?php
setcookie("hello", "world");
setcookie("foo", "bar");
Response using curl:
$ curl -sLD - http://local.example.com/test.php
HTTP/1.1 200 OK
Date: Sat, 24 Dec 2016 09:24:04 GMT
Server: Apache/2.4.25 (Unix) PHP/7.0.14
X-Powered-By: PHP/7.0.14
Set-Cookie: hello=world
Set-Cookie: foo=bar
Content-Length: 0
Content-Type: text/html; charset=UTF-8
However, for the following Rust code:
let client = Client::new();
let response = client.get("http://local.example.com/test.php")
    .send()
    .unwrap();
println!("{:?}", response);
for header in response.headers.iter() {
    println!("{}: {}", header.name(), header.value_string());
}
...the output will be:
Response { status: Ok, headers: Headers { Date: Sat, 24 Dec 2016 09:31:54 GMT, Server: Apache/2.4.25 (Unix) PHP/7.0.14, X-Powered-By: PHP/7.0.14, Set-Cookie: hello=worldfoo=bar, Content-Length: 0, Content-Type: text/html; charset=UTF-8, }, version: Http11, url: "http://local.example.com/test.php", status_raw: RawStatus(200, "OK"), message: Http11Message { is_proxied: false, method: None, stream: Wrapper { obj: Some(Reading(SizedReader(remaining=0))) } } }
Date: Sat, 24 Dec 2016 09:31:54 GMT
Server: Apache/2.4.25 (Unix) PHP/7.0.14
X-Powered-By: PHP/7.0.14
Set-Cookie: hello=worldfoo=bar
Content-Length: 0
Content-Type: text/html; charset=UTF-8
This seems really weird to me. I used Wireshark to capture the response, and there are two Set-Cookie headers in it. I also checked the Hyper documentation but got no clue...
I noticed Hyper internally uses a VecMap<HeaderName, Item> to store the headers. So does it concatenate them into one? Then how should I split them back into individual cookies afterwards?

I think that Hyper prefers to keep the cookies together in order to make it easier to do extra processing on them, like checking a cryptographic signature with CookieJar (cf. this implementation outline).
Another reason might be to keep the API simple: headers in Hyper are indexed by type, and Headers::get only returns a single instance of that type.
In Hyper, you'd usually access a header by using a corresponding type. In this case the type is SetCookie. For example:
use hyper::header::SetCookie;

if let Some(&SetCookie(ref cookies)) = response.headers.get() {
    for cookie in cookies.iter() {
        println!("Got a cookie. Name: {}. Value: {}.", cookie.name, cookie.value);
    }
}
Accessing the raw header value of Set-Cookie makes less sense, because then you'd have to reimplement proper parsing of quoting and cookie attributes yourself (cf. RFC 6265, section 4.1).
P.S. Note that in Hyper 0.10 the cookie is no longer parsed, because the crate that was used for the parsing drags in the openssl dependency hell.
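That said, if you do need the individual header lines (for example on Hyper 0.10, where the typed parsing is gone), the raw API stores one byte buffer per received header line, so the two Set-Cookie lines stay separate. A minimal sketch, assuming Hyper 0.9/0.10's Headers::get_raw:

if let Some(raw) = response.headers.get_raw("Set-Cookie") {
    // `raw` is a slice with one Vec<u8> per Set-Cookie line the server sent
    for line in raw {
        println!("raw cookie line: {}", String::from_utf8_lossy(line));
    }
}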

Related

Google safe browsing API not returning threat URLs

I'm sending requests to the Google Safe Browsing API. I believe I'm following their documentation correctly, and I've tried regenerating my key.
I'm sending the request below:
POST https://safebrowsing.googleapis.com/v4/threatMatches:find?key=AIxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1
User-Agent: Fiddler
Host: safebrowsing.googleapis.com
Content-Length: 511
{
  "client": {
    "clientId": "yourcompanyname",
    "clientVersion": "1.5.2"
  },
  "threatInfo": {
    "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
    "platformTypes": ["WINDOWS"],
    "threatEntryTypes": ["URL"],
    "threatEntries": [
      {"url": "http://www.urltocheck1.org/"},
      {"url": "http://malware.testing.google.test"},
      {"url": "http://www.urltocheck2.org/"},
      {"url": "http://www.urltocheck3.com/"}
    ]
  }
}
And I'm getting back an empty response, which is not what I'm expecting given the URLs supplied and that I'm following their example.
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Date: Wed, 08 Sep 2021 15:05:59 GMT
Server: scaffolding on HTTPServer2
Cache-Control: private
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Accept-Ranges: none
Vary: Accept-Encoding
Content-Length: 3
{}
https://transparencyreport.google.com/safe-browsing/search?url=malware.testing.google.test
https://developers.google.com/safe-browsing/v4/lookup-api
You need to pass your API key, and you need to pass a URL that is actually flagged as malware (for example in place of "http://www.urltocheck1.org/"). If a URL is not known malware, the API returns an empty object. Try the following URL with your code, and also search for and test other known malware sites:
https://testsafebrowsing.appspot.com/s/malware.html
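For instance, substituting that test URL into the threatEntries of the request body from the question should produce a non-empty match response:

{
  "client": {
    "clientId": "yourcompanyname",
    "clientVersion": "1.5.2"
  },
  "threatInfo": {
    "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
    "platformTypes": ["WINDOWS"],
    "threatEntryTypes": ["URL"],
    "threatEntries": [
      {"url": "https://testsafebrowsing.appspot.com/s/malware.html"}
    ]
  }
}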

I get "kotlinx/coroutines/io/ByteReadChannel" when I try to decode a json on server side with ktor and moshi

In Application.kt I install Moshi in Application.module:
install(ContentNegotiation) {
    moshi()
}
I declared a simple test class, and in the route I try to decode it:
data class Test(val testString: String)

fun Route.test() {
    post(TEST_ENDPOINT) {
        val testReceive = call.receive<Test>()
        call.respond(testReceive)
    }
}
The request is a POST with the following headers and body:
Accept: */*
Accept-Encoding: gzip, deflate
Content-Type: application/json
Accept-Language: en-gb
{
    "testString": "dasdada"
}
Response headers:
HTTP/1.1 500 Internal Server Error
Date: Mon, 05 Oct 2020 11:55:58 GMT
Content-Length: 37
Connection: keep-alive
Content-Type: text/plain; charset=UTF-8
Server: ktor-server-core/1.4.1 ktor-server-core/1.4.1
Response body:
kotlinx/coroutines/io/ByteReadChannel
Any suggestion or comment is appreciated.
Moshi 1.0.1 relies on some outdated Ktor API.
Consider either moving back to Ktor 1.3.2 or (better) using another JSON handler; several are available out of the box: Gson, Jackson, and kotlinx.serialization.
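For example, switching the ContentNegotiation plugin over to Gson is a small change in Application.module (a sketch, assuming the io.ktor:ktor-gson artifact is on the classpath):

install(ContentNegotiation) {
    gson() // replaces moshi(); the Jackson and kotlinx.serialization handlers work the same way
}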

Why is CSS not always gzipped?

In my Firefox or Chrome, if I check the HTTP headers, the response always has Content-Encoding: gzip. But I have customers reporting that they see "transfer-encoding: chunked" instead and the responses are not gzipped.
http://www.example.com/public/css/style.min.css
If I or the customer run an online gzip compression check, it confirms gzip is active:
https://checkgzipcompression.com = gzip!
But if I use a checker like this one, http://onlinecurl.com/, I also get the transfer-encoding: chunked.
Request:
GET /style/css.css HTTP/1.1
Host: www.example.com
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
User-Agent: ...
Accept: */*
Referer: http://www.example.com/
Accept-Encoding: gzip, deflate
Accept-Language: ...
Cookie: ...
Response:
HTTP/1.1 200 OK
Age: 532948
cache-control: public, max-age=604800
Content-Type: text/css
Date: Wed, 28 Jun 2017 12:35:07 GMT
ETag: "5349e8d595dfd21:0"
Last-Modified: Wed, 07 Jun 2017 13:56:17 GMT
Server: Microsoft-IIS/7.5
Vary: X-UA,Accept-Encoding, User-Agent
X-Cache: HIT
X-Cache-Hits: 6327
X-CacheReason: Static-js-css.
X-Powered-By: ASP.NET
X-Served-By: ip-xxx-xxx-xxx-xx.name.xxx
x-stale: true
X-UA-Device: pc
X-Varnish: 993020034 905795837
X-Varnish-beresp-grace: 43200.000
X-Varnish-beresp-status: 200
X-Varnish-beresp-ttl: 604800.000
transfer-encoding: chunked
Connection: keep-alive
Why are some responses not gzipped when they should be? This is my Varnish config (the part relevant to gzip):
if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|flv|swf)$") {
        # No point in compressing these
        remove req.http.Accept-Encoding;
    } elsif (req.http.Accept-Encoding ~ "gzip") {
        set req.http.Accept-Encoding = "gzip";
    } elsif (req.http.Accept-Encoding ~ "deflate") {
        set req.http.Accept-Encoding = "deflate";
    } else {
        # unknown algorithm
        remove req.http.Accept-Encoding;
    }
}

# Enabling GZIP
if (beresp.http.Content-Type ~ "(text/css|application/x-javascript|application/javascript)") {
    set beresp.do_gzip = true;
}

if (beresp.http.Content-Encoding ~ "gzip") {
    if (beresp.http.Content-Length == "0") {
        unset beresp.http.Content-Encoding;
    }
}

set beresp.http.Vary = regsub(beresp.http.Vary, "(?i)^(.*?)X-Forwarded-URI,?(.*)$", "\1\2");
set beresp.http.Vary = regsub(beresp.http.Vary, "(?i)^(.*?)User-Agent,?(.*)$", "\1\2");
set beresp.http.Vary = regsub(beresp.http.Vary, "^(.*?),?$", "X-UA,\1");
set beresp.http.Vary = regsub(beresp.http.Vary, "^(.*?),?$", "\1");
Any ideas, thank you.
Responses will only be gzipped if the request indicates that it can accept a gzipped response, which is signalled by the Accept-Encoding header in the request. So perhaps your online curl tool is not sending that header, and the same may be true for the clients who are seeing this. Do you really have customers reporting that their responses are not gzipped?
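You can reproduce both cases with curl against the stylesheet URL from the question; the --compressed flag makes curl send an Accept-Encoding: deflate, gzip request header:

# Without Accept-Encoding: expect an uncompressed (possibly chunked) response
curl -sLD - -o /dev/null http://www.example.com/public/css/style.min.css

# With Accept-Encoding (added by --compressed): expect Content-Encoding: gzip
curl -sLD - -o /dev/null --compressed http://www.example.com/public/css/style.min.css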
Update
Ah, I see what you're doing now. Are you using a recent version of Varnish? There's no need to do all this yourself any more; Varnish handles it all natively. All you need to do is set do_gzip to true for the content types where you want it, and Varnish takes care of the rest, including the Accept-Encoding header. See the documentation here.
So just remove all of your gzip/encoding related code except the part directly under # Enabling GZIP:
# Enabling GZIP
if (beresp.http.Content-Type ~ "(text/css|application/x-javascript|application/javascript)") {
    set beresp.do_gzip = true;
}
And that will probably get everything working; it works fine for me that way. The best amount of VCL is as little as possible: Varnish is very good at handling things itself. Don't forget to restart Varnish or otherwise clear the cache for this site after making the change.
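For instance, one blunt way to invalidate everything in a running instance (a sketch; the exact command depends on your Varnish version):

varnishadm "ban req.url ~ ."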
In case it's useful, I use the following VCL for this:
if (
    beresp.status == 200
    && beresp.http.content-type ~ "\b((text/(html|plain|css|javascript|xml|xsl))|(application/(javascript|xml|xhtml\+xml)))\b"
) {
    set beresp.do_gzip = true;
}
This checks for more content types that can benefit from compression, including HTML. I don't bother with application/x-javascript as it's ancient and no longer used.
On another note, are you sure you need to be modifying the Vary header in the way that you are doing there?

backbone.js fetch is not updating the model

I have read other posts here, and it looks like most of the time this boils down to fetch being async. I don't think that is my problem, because (1) I test the results in the success callback of fetch, and (2) when I console.log(model.toJSON()) in the JS console later, it is still not updated.
Note: I am getting a good JSON response from the API, and I can get at the data by putting parse in my model declaration, like so:
parse: function(data){
    alert(data.screenname);
}
Here is my code. Why is the model not being updated by the fetch call?
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
<script src="vendor/components/underscore/underscore-min.js"></script>
<script src="vendor/components/backbone/backbone-min.js"></script>
</head>
<body>
Hello
<script>
$.ajaxSetup({
    beforeSend: function(xhr){
        xhr.setRequestHeader("Authorization", "Basic " + window.btoa("username" + ":" + "password"));
    }
});

var User = Backbone.Model.extend({
    parse: function(data){
        alert(data.screenname);
    },
    urlRoot: 'http://api.myapi.com/user'
});

var user = new User({id: '1'});
user.fetch({
    // Backbone passes (model, response, options) to a model's success callback
    success: function(model, response, options){
        console.log(response);
        console.log(user.toJSON());
    }
});
</script>
</body>
</html>
When I log response, it shows good JSON coming back, but user.toJSON() just shows the id as 1.
I can use parse in the model declaration to manually assign each value in the model from the response, but that seems like a dumb way to do it. I was under the impression that fetch() was supposed to populate the model with the result from the server.
UPDATED
Here is the response I get back from the server:
{"id":1,"email":"test@email.com","password":"pass","screenname":"myname","id_zipcode":1,"id_city":1,"date_created":"2014-12-25 12:12:12"}
Here are the response headers from my api
HTTP/1.1 200 OK
Date: Tue, 04 Feb 2014 18:31:41 GMT
Server: Apache/2.2.24 (Unix) DAV/2 PHP/5.5.6 mod_ssl/2.2.24 OpenSSL/0.9.8y
X-Powered-By: PHP/5.5.6
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Authorization, Origin, X-Requested-With, Content-Type, Accept
Set-Cookie: PHPSESSID=pj1hm0c2ubgaerht3i5losga4; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Length: 139
Keep-Alive: timeout=5, max=99
Connection: Keep-Alive
Content-Type: application/json; charset=utf-8
You have overridden your parse() method to effectively do nothing. It should return all the attributes to set on your model; since you are not returning anything, nothing gets set on the model.
It should look like this.
var User = Backbone.Model.extend({
    parse: function(data){
        alert(data.screenname);
        return data; // all attributes in data will be set on the model
    },
    urlRoot: 'http://api.myapi.com/user'
});
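If you don't need to transform the server response at all, you can drop the parse override entirely; Backbone's default implementation is effectively:

parse: function(response, options) {
    return response; // the response is used as the model's attributes unchanged
}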

Why does RestSharp throw an error when deserializing a boolean response?

When I make a request in RestSharp like so:
var response = client.Execute<bool>(request);
I get the following error:
"Unable to cast object of type 'System.Boolean' to type 'System.Collections.Generic.IDictionary`2[System.String,System.Object]'."
This is the complete HTTP response, per Fiddler:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Mon, 01 Apr 2013 15:09:14 GMT
Content-Length: 5
false
It appears that everything is kosher with the response, so what gives?
Also, if I'm doing something stupid in my Web API controller by returning a simple value instead of an object, and fixing that would solve my problem, feel free to suggest it.
RestSharp will only deserialize valid JSON, and false on its own is not a valid JSON document (according to RFC 4627, the top level must be an object or an array). The server will need to return something like the following at the least:
{ "foo": false }
And you'll need a class like the following to deserialize into:
public class BooleanResponse
{
public bool Foo { get; set; }
}
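Alternatively, if you can't change what the server returns, you can skip typed deserialization and parse the plain-text body yourself. A minimal sketch against RestSharp's untyped Execute, where client and request are the objects from the question:

// Execute without a generic type parameter and parse the raw body directly.
var rawResponse = client.Execute(request);
bool value = bool.Parse(rawResponse.Content); // body is the literal text "false"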