RestSharp keeps appending the User-Agent header until the target API returns 400 Bad Request - Headers too long

I recently upgraded RestSharp to 107.3.0 in the hope of fixing some connectivity issues (as mentioned in the documentation).
After 1 hour of being live on production with this upgrade, my targeted API started throwing '400 Bad Request - Headers too long'. A recycle was only a temporary fix.
When I used Fiddler to inspect my API calls locally, I noticed that the User-Agent header contains:
RestSharp/107.3.0.0 RestSharp/107.3.0.0 RestSharp/107.3.0.0 (...)
... and each subsequent call appends the agent name to the growing header value until the header is, well, just too long.
Has anyone else experienced this before?

It turns out I was creating a new RestClient(httpClient) object on every call while re-using the same HttpClient. Creating a RestClient instance adds some default headers to the underlying HttpClient, so each new instance appended the User-Agent value again.
Re-using the RestClient as well fixed the behavior.
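To illustrate, a minimal sketch of the broken and fixed patterns (the field and method names here are made up; only the RestClient/HttpClient lifetimes matter):

using System.Net.Http;
using System.Threading.Tasks;
using RestSharp;

// Problematic pattern: a new RestClient over the same shared HttpClient on every call.
// Each RestClient constructor adds its default User-Agent to the HttpClient,
// so the header grows with each new instance.
private static readonly HttpClient SharedHttpClient = new HttpClient();

public async Task<string?> GetBroken(string url)
{
    var client = new RestClient(SharedHttpClient);   // new instance per call: header keeps growing
    var response = await client.ExecuteAsync(new RestRequest(url));
    return response.Content;
}

// Fix: create the RestClient once and re-use it for the lifetime of the service.
private static readonly RestClient SharedRestClient = new RestClient(SharedHttpClient);

public async Task<string?> GetFixed(string url)
{
    var response = await SharedRestClient.ExecuteAsync(new RestRequest(url));
    return response.Content;
}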

Related

After update of React Admin Simple Rest Data Provider all requests return 416 Range Not Satisfiable

After updating ra-data-simple-rest from 3.3.2 to 3.10.4, every request to our API for lists started to fail with a 416 Range Not Satisfiable error.
The backend did not change and we are still allowing and returning the Content-Range header as prescribed by the React Admin documentation,
i.e.:
headers: {
  'Access-Control-Expose-Headers': 'Content-Range',
  'Content-Range': 'myresource 0-50/32'
}
I checked the source code of the React Admin simple rest data provider and, from my understanding, the change is related to a new Range header that is now sent because of some behavior in Chrome (see the commit history).
Since then, though, all the queries made via React Admin fail in the browser with a 416 Range Not Satisfiable error. Checking with cURL and Postman, the requests go through and contain every necessary header, properly exposed.
By manually removing that new Range header just before sending the request with fetchUtils, the problem disappears.
After researching and experimenting, we found that the Content-Range approach to pagination used by React Admin is somewhat hacky and that it is better to use X-Total-Count.
We removed the Content-Range header entirely and used X-Total-Count instead (we specified the parameter in the dataProvider/httpClient as documented), and the problem was gone.
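For reference, this is roughly the configuration we ended up with; the API URL is a placeholder, and the third countHeader argument is the documented way to make the provider read the total from X-Total-Count (the backend must also expose that header via Access-Control-Expose-Headers):

// dataProvider.js -- sketch; the API URL is a placeholder
import simpleRestProvider from 'ra-data-simple-rest';
import { fetchUtils } from 'react-admin';

const httpClient = (url, options = {}) => {
    if (!options.headers) {
        options.headers = new Headers({ Accept: 'application/json' });
    }
    // add auth headers here if needed
    return fetchUtils.fetchJson(url, options);
};

// Passing 'X-Total-Count' as the countHeader stops the provider from relying on
// Content-Range (and, in the versions we looked at, from sending the Range request header).
export const dataProvider = simpleRestProvider('https://api.example.com', httpClient, 'X-Total-Count');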
What I don't understand is why that was happening. I can't see anything in the code that prevents me from still using Content-Range, and unfortunately I did not find anything in the changelog, the upgrade guide, or the issues that mentions the problem we had.
So I am wondering whether we were doing something wrong in our backend or in the way we use the provider, or whether the code has a bug that nobody has experienced before.

jMeter issue when using Cookie manager and Regular expression extractor

So basically I need to extract an auth token from the response headers of the 1st HTTP request and then use the extracted value in the cookies of the 2nd (and all following) HTTP requests.
The issue is that I have a Cookie Manager set for the whole controller, and instead of the actual value I get the name of the variable in my cookie: ".authToken=${auth}".
I am guessing the reason is that the variable is not declared when the test reaches the Cookie Manager, but I would expect JMeter to be smart enough to pick up the variable once the Regular Expression Extractor has set it.
Structure:
Thread
  Cache Manager
  Cookie Manager (Cookie Policy: compatibility; Implementation: HC3)
  Controller
    HTTP Request
      Regular Expression Extractor
    HTTP Request (I need to use the value extracted above in the Request Cookie here)
    HTTP Request (I need to use the same value in the Request Cookie here)
    HTTP Request (I need to use the same value in the Request Cookie here)
    .....
Details:
All the HTTP requests are recorded with the HttpClient3.1 implementation.
I'm pretty sure I have everything configured correctly (variable names, regular expression), since it works in one very specific case:
The only time it seemed to work correctly was when I had a Cookie Manager inside the HTTP request and disabled the 'main' Cookie Manager (the one for the whole controller). Then the value got extracted correctly, but that would be a really silly workaround for such a basic requirement, and I also have many HTTP requests (over 100) where I need to use the extracted value.
JMeter doesn't need to use the variable before it's declared by the Regular Expression Extractor; I made sure the domain is correct and that the variable is first used after it should have been extracted.
Another workaround I thought of would be having separate threads, linking them and passing the variable between them, launching the next one once the data gets extracted, but that seems a little too drastic.
What I tried:
Splitting the HTTP requests into 2 different controllers and using 2 different Cookie Managers - got "${auth}" instead of a value
Defining a user variable above the controller and then using the "Apply to: JMeter Variable" option - again got just the string "${auth}" instead of a value
Moving the Cookie Manager to a position after the HTTP request used for the extraction - again "${auth}" instead of a value
Setting different cookie policies (not all of them, but a few)
Setting "CookieManager.save.cookies=true" in jmeter.properties (and it is still set to true)
Any help/ideas are appreciated. I have been trying to figure this out for about an hour and I think I must be missing something very simple.
Alright, finally got this resolved after roughly 2 hours.
Thanks to this article, I was able to do what I needed:
https://capacitas.wordpress.com/2013/06/11/thats-the-way-the-cookie-crumbles-jmeter-style-part-2/
In a nutshell: you need to use a Beanshell PreProcessor and add the cookie manually.
Here is the Beanshell script in case the site dies:
import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;

// Get the Cookie Manager attached to the current sampler
CookieManager manager = sampler.getCookieManager();

// Cookie(name, value, domain, path, secure, expires) -- replace "CookieName",
// "Domain" and "Path" with the real cookie name, domain and path for your site
Cookie cookie = new Cookie("CookieName", vars.get("YourExtractedVariable"), "Domain", "Path", false, 0);
manager.add(cookie);

Caching Github API calls

I have a general question related to caching of API calls, in this instance calls to the Github API.
Let's say I have a page in my app that shows the filenames of a repo, and the content of the README. This means that I will have to do a few API calls in order to retrieve that.
Now, let's say I want to add something like memcached in between, so I'm not doing these calls over and over, if I don't need to.
How would you normally go about this? If I don't enable a webhook on GitHub, I have no way of knowing whether the cache should expire. I could always make a single call to get the current sha of HEAD, and if it hasn't changed, use the cache instead. But that works at the repo level, not at the file level.
I can imagine I could do something like that with the object SHAs, but if I need to call the API anyway to get those, it defeats the purpose of caching.
How would you go about it? I know a service like prose.io has no caching right now, but if it should, what would the approach be?
Thanks
Would just using HTTP caching be good enough for your use case? The purpose of HTTP caching is not just to provide a way of skipping requests when you already have a fresh response; it also enables you to quickly validate whether the response you already have in cache is still valid, without the server sending the complete response again if it is.
Looking at GitHub API responses, I can see that GitHub correctly sets the relevant HTTP headers (ETag, Last-Modified, Cache-Control).
So, you just do a GET, e.g. for:
GET https://api.github.com/users/izuzak/repos
and this returns:
200 OK
...
ETag:"df739f00c5053d12ef3c625ad6b0fd08"
Last-Modified:Thu, 14 Feb 2013 22:31:14 GMT
...
Next time - you do a GET for the same resource, but also supply the relevant HTTP caching headers so that it is actually a conditional GET:
GET https://api.github.com/users/izuzak/repos
...
If-Modified-Since:Thu, 14 Feb 2013 22:31:14 GMT
If-None-Match:"df739f00c5053d12ef3c625ad6b0fd08"
...
And lo and behold - the server returns a 304 Not Modified response and your HTTP client will pull the response from its cache:
304 Not Modified
So, the GitHub API does HTTP caching right and you should use it. Granted, you have to use an HTTP client that supports HTTP caching as well. The best thing is that if you get a 304 Not Modified response, GitHub does not decrease your remaining API call quota. See: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#conditional-requests
The GitHub API also sets the Cache-Control: private, max-age=60 header, so you have 60 seconds of freshness -- which means that requests for the same resource made less than 60 seconds apart will not even be sent to the server.
Your reasoning about making a single conditional GET request to a resource that surely changes if anything in the repo changed (a resource showing the sha of HEAD, for example) sounds reasonable -- since if that resource hasn't changed, you don't have to check the individual files, because they surely haven't changed either.
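If your HTTP client doesn't handle this transparently, here is a rough sketch of doing the conditional GET by hand; the in-memory Map stands in for whatever cache (memcached, etc.) you actually use, and the URL is just the example from above:

// Rough sketch of a manual conditional GET using fetch.
const cache = new Map(); // url -> { etag, lastModified, body }

async function cachedGet(url) {
  const entry = cache.get(url);
  const headers = {};
  if (entry) {
    if (entry.etag) headers['If-None-Match'] = entry.etag;
    if (entry.lastModified) headers['If-Modified-Since'] = entry.lastModified;
  }

  const response = await fetch(url, { headers });

  if (response.status === 304 && entry) {
    return entry.body; // not modified: serve from cache, API quota is not decreased
  }

  const body = await response.json();
  cache.set(url, {
    etag: response.headers.get('ETag'),
    lastModified: response.headers.get('Last-Modified'),
    body,
  });
  return body;
}

// cachedGet('https://api.github.com/users/izuzak/repos').then(console.log);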

Objective-C iOS NSURLConnection statusCode 400 No body returned

Not really sure where to start here other than to dive into CF (I REALLY don't want to do that), but...
I have an NSURLConnection signing OAuth2 requests to an ASP.NET WebAPI resource server; this resource server returns a JSON body AND status code 400. I have yet to find a way to parse the data from the response when it returns code 400.
Does anyone here have any ideas? I would prefer to keep using NSURLConnection, as this is only an OAuth2 consumer class. My other code uses RestKit, but I do not want the OAuth2 side to require that library.
The process to parse data from a request which returns status 400 should be identical to that of a request returning status 200.
Simply note the status code in -connection:didReceiveResponse: and allow the request to continue; you will receive any additional data that the server sends in -connection:didReceiveData: as usual. Finally, you'll get a -connectionDidFinishLoading: call when all data has been received, and you can parse the JSON there.
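A minimal sketch of that delegate flow, assuming your class keeps statusCode and responseData properties and is set as the connection's delegate:

// Sketch only: assumes @property NSInteger statusCode; and @property NSMutableData *responseData;
- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
{
    self.statusCode = [(NSHTTPURLResponse *)response statusCode];  // e.g. 400
    self.responseData = [NSMutableData data];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [self.responseData appendData:data];  // the body is still delivered for 4xx responses
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    NSError *error = nil;
    id json = [NSJSONSerialization JSONObjectWithData:self.responseData options:0 error:&error];
    // inspect self.statusCode and the parsed JSON here, even when the status was 400
}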
Does your HTTP request Accept header specify "application/json"? If so, then this is probably an IIS bug and not iOS.
Interestingly enough, MVC4's ActionResult is broken in the RTM. Switching it over to pure WebAPI and fine-tuning the response, I was able to get it to return the proper data; it was likely serializing the JSON improperly, which clients in other languages weren't catching.

WebAPI hangs indefinitely when receiving a POST with incorrect Content-Length in header

I have a project set up using ASP.NET WebAPI on top of Azure, and I am having a problem whenever I make an HTTP POST where the Content-Length in the header is too long.
Normally I would have just ignored this problem, because you should be setting the Content-Length correctly on a POST, but it turns out that when this happens it causes the session to hang indefinitely, and then the Azure emulator crashes.
I have a custom JSON formatter which extends MediaTypeFormatter, and I set a breakpoint on the first line of my implementation of OnReadFromStreamAsync(). However, the breakpoint is never hit because the hang happens before the JSON deserializer is ever reached.
I really have no idea where this hang is coming from, because I receive no exception, just an indefinite hang and the occasional Azure emulator crash.
Thank you in advance for any help or insight you might provide!
This sounds like a bug. The good thing is that you can get updated developer bits from CodePlex.
There is a chance that what you're experiencing is related to one of these:
WebAPI: Stream uploading under webhost is not working
DevDiv 388456 -- WebHost should not use Transfer-Encoding chunked when content length is known
Zero ContentLength without content type header in body is throwing
If the updated bits don't fix your problem, I suggest you try the standard media formatters to rule your formatter in or out. Failing that, submit an issue.
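For instance, a quick way to rule the custom formatter in or out is to temporarily drop it from the configuration and let the built-in JSON formatter handle the request; a sketch, assuming your formatter is registered on the HttpConfiguration (MyJsonFormatter is a placeholder name):

using System.Linq;
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Remove the custom formatter (placeholder name) so the default
        // JsonMediaTypeFormatter handles JSON; if the hang disappears,
        // the custom formatter is the likely culprit.
        var custom = config.Formatters.OfType<MyJsonFormatter>().FirstOrDefault();
        if (custom != null)
        {
            config.Formatters.Remove(custom);
        }
    }
}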