After updating the React Admin Simple REST Data Provider, all requests return 416 Range Not Satisfiable - react-admin

After updating ra-data-simple-rest from 3.3.2 to 3.10.4, every list request to our API started to fail with Error 416 Range Not Satisfiable.
The backend did not change, and we still allow and return the Content-Range header as prescribed by the React Admin documentation,
i.e.:
headers: {
    'Access-Control-Expose-Headers': 'Content-Range',
    'Content-Range': 'myresource 0-50/32'
}
I checked the source code of ra-data-simple-rest and, from what I understand, the change is that the provider now also sends a Range request header, because of some behavior in Chrome (see the commit history).
Since then, though, all queries made via React Admin fail in the browser with Error 416 Range Not Satisfiable. Checking with cURL and Postman, the requests go through and contain every necessary header, properly exposed.
By manually removing that new Range header just before sending the request with fetchUtils, the problem disappears.
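Roughly, the stripping looked like this (a sketch only; it assumes the provider passes its headers as a Headers instance, which is what we observed, and we plugged this httpClient into simpleRestProvider as its second argument):

import { fetchUtils } from 'react-admin';

// Drop the Range header the provider now adds for list requests,
// then delegate to the standard fetchJson.
const httpClient = (url, options = {}) => {
    if (options.headers && typeof options.headers.delete === 'function') {
        options.headers.delete('Range');
    }
    return fetchUtils.fetchJson(url, options);
};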
After researching and experimenting, we found that the Content-Range approach to pagination used by React Admin is somewhat hacky and that it is better to use X-Total-Count.
We removed the Content-Range header entirely and used X-Total-Count instead (we specified the parameter in the dataProvider/httpClient as documented), and the problem was gone.
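Concretely, the provider setup boiled down to something like this (the API URL is a placeholder; the third argument is the countHeader parameter described in the ra-data-simple-rest docs):

import { fetchUtils } from 'react-admin';
import simpleRestProvider from 'ra-data-simple-rest';

// Read the list total from X-Total-Count instead of Content-Range.
// The backend must expose it: 'Access-Control-Expose-Headers': 'X-Total-Count'.
const dataProvider = simpleRestProvider(
    'https://api.example.com', // placeholder API URL
    fetchUtils.fetchJson,
    'X-Total-Count'
);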
What I don't understand is why this was happening. I can't see anything in the code that prevents me from still using Content-Range, and unfortunately I did not find anything in the changelog or upgrade guide (nor in the issues) that mentions the problem we had.
So I am wondering whether we were doing something wrong in our backend or in the way we use the provider, or whether the code has a bug that nobody has run into before.

Related

RestSharp infinitely adds User-Agent header until target API returns 400 Bad Request - Headers too long

I recently upgraded RestSharp to 107.3.0 in the hope of fixing some connectivity issues (as mentioned in the documentation).
After an hour of being live in production with this upgrade, my targeted API started throwing '400 Bad Request - Headers too long'. A recycle was only a temporary fix.
When I used Fiddler to inspect my API calls (locally), I noticed that the User-Agent header contains:
RestSharp/107.3.0.0 RestSharp/107.3.0.0 RestSharp/107.3.0.0 (...)
...and each subsequent call appends the agent name to the growing header value until the header is, well, just too long.
Has anyone else experienced this before?
It turns out I was creating a new RestClient(httpClient) object for each call while reusing the same HttpClient. Creating an instance of RestClient initializes some default headers, which kept appending the user agent value to the shared client.
Reusing the RestClient as well fixed the behavior.
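In other words, the fix was to construct the RestClient once and reuse it, roughly like this (a simplified sketch; the class and endpoint names are made up):

using System.Net.Http;
using System.Threading.Tasks;
using RestSharp;

public class ApiService
{
    // Creating a RestClient adds default headers (including User-Agent) to the
    // underlying HttpClient, so build it once and reuse it for every call.
    private static readonly HttpClient SharedHttpClient = new HttpClient();
    private static readonly RestClient Client = new RestClient(SharedHttpClient);

    public async Task<string> GetResourceAsync()
    {
        var request = new RestRequest("some/endpoint"); // placeholder endpoint
        var response = await Client.ExecuteAsync(request);
        return response.Content;
    }
}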

Request URI too long on spartacus services

I've been trying to make use of the service.getNavigation() method, but apparently the request URI is too long, which causes this error:
Request-URI Too Long
The requested URL's length exceeds the capacity limit for this server.
Is there a Spartacus config that can resolve this issue?
Or is this supposed to be handled in the cloud (CCv2) config?
I'm not sure which service you're talking about specifically or what data you're passing to it. For starters, please read this: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/414
Additionally, it would benefit everyone if you could say something about the service you're using and the data you're trying to pass/get.
The navigation component fires a request for all componentIds. If you have a navigation with a lot of (root?) elements, the resulting HTTP GET request might exceed the maximum URL length for the given client or server.
The initial implementation of component loading actually used a POST request, but the impression was that we would not need to support requests with so many components. I guess we were wrong.
Luckily, the legacy POST-based request is still in the code base: it's OccCmsComponentAdapter.findComponentsByIdsLegacy.
The easiest way for you to use this code is to provide a CustomOccCmsComponentAdapter that extends OccCmsComponentAdapter. You can then override the findComponentsByIds method and simply call super.findComponentsByIdsLegacy, passing in a copy of the arguments.
A cleaner way would be to override the CmsComponentConnector and delegate the load directly to adapter.findComponentsByIdsLegacy. I would not start there, as it's more complicated. Do a POC with the first suggested approach.
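A rough sketch of that first approach (import paths and signatures may differ between Spartacus versions, so adapt as needed):

import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { CmsComponent, OccCmsComponentAdapter, PageContext } from '@spartacus/core';

@Injectable()
export class CustomOccCmsComponentAdapter extends OccCmsComponentAdapter {
  // Delegate to the legacy POST-based endpoint to avoid overly long GET URLs.
  findComponentsByIds(
    ids: string[],
    pageContext: PageContext
  ): Observable<CmsComponent[]> {
    return super.findComponentsByIdsLegacy([...ids], pageContext);
  }
}

// Then register it so it replaces the default adapter, e.g.:
// providers: [{ provide: CmsComponentAdapter, useClass: CustomOccCmsComponentAdapter }]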

Not able to use a string over 260 characters as a segment of a URL in .NET Core

I'm making a request that works great and does what it's supposed to. The actual authorization is provided using headers and works as expected too. This is its URL.
https://localhost:44385/api/security/check
By coincidence, I happened to replace the verbatim string check with the actual token, so the URL changed to
https://localhost:44385/api/security/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ...
All in all, the token happens to be 475 characters long. Then, when executing that call, I get the error message as follows.
Error: connect ECONNREFUSED 127.0.0.1:44300
I don't understand the issue, and the status code 400 tells me only that the request is bad. Is it purely due to the length of the URL? It seems a bit too short for that (there is a limit, but we're talking about a few thousand characters)...
The signature of the receiving method in the controller looks like this. It resides in the controller with path Security.
[HttpHead("{check}"), Authorize]
public IActionResult IsAuthorized(string check) { ... }
I also tried GET instead of HEAD, with the same result. It's difficult to learn more about the error based on 400 Bad Request alone. It's a bit of a 'something went wrong somewhere' kind of error.
After some experimenting, I can confirm that it's not the length of the URL as such, but rather the length of the segment between slashes. The first request below works, the second does too, but the third doesn't. The xxx part is precisely 260 characters and the yyy part is precisely 261.
https://localhost:44385/api/test/xxx
https://localhost:44385/api/testtest/xxx
https://localhost:44385/api/test/yyy
What is this about?! It's as if a string parameter in a method in my Web API can't be longer than 260 characters. Not 256, which at least would make some kind of sense...
Googling gave a very wide range of scattered hits and nothing I could relate to. Postman provides pretty much the same, limited information. The browser's network tab gives even less.
I'm a bit confused about how to learn more, how to diagnose this further, and/or what to google for. Since it's a non-problem for the production environment, I can't bother my colleagues - the question is purely academic.
The limit you're hitting is UrlSegmentMaxLength (260).
This is all the way down in Http.Sys and only configurable in the registry:
https://support.microsoft.com/en-us/help/820129/http-sys-registry-settings-for-windows
Workaround: break it up into multiple path segments, or move it to the query or body. Or use Kestrel without IIS.
Resource: https://github.com/aspnet/AspNetCore/issues/2823#issuecomment-360921436
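For instance, a sketch of the "move it to the query" workaround applied to the controller above (route and parameter names are just illustrative):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]"), ApiController]
public class SecurityController : ControllerBase
{
    // Keep the path segment short and pass the long token in the query string,
    // which is not constrained by Http.Sys's per-segment UrlSegmentMaxLength limit.
    [HttpGet("check"), Authorize]
    public IActionResult IsAuthorized([FromQuery] string token)
    {
        // ... same validation as before ...
        return Ok();
    }
}

// Called as: GET https://localhost:44385/api/security/check?token=eyJhbGciOi...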
Here's a related post:
Setting UrlSegmentMaxLength from the command line

drupal_http_get_header() not returning headers

Using Drupal 7.x, I'm trying to retrieve a custom header (locally it was added via httpd.conf; on production servers it is added by Varnish: X-Country-Code).
When outputting drupal_http_get_header(), all I get in the returned array is x-ip-whitelist as an array value (nothing else: no status, connection, x-generator, nada).
Any ideas? I'm wondering if my custom module is being fired too soon, before the response is created, which might explain (in my limited understanding) why there are no headers available.
If the call to drupal_http_get_header() is firing 'too early' to pick up headers, is there a way to rank my module so that it gets called later? That might mess up other stuff my module uses, however.

How to poll for updates with JSONP?

I have a web server that updates its data once per minute, and I want to make that data available to clients of all types. In order to reduce bandwidth, I set up the PHP script to support conditional GETs, using If-Modified-Since and/or If-None-Match. The idea is that clients can poll every 30 seconds and thereby be sure that they won't miss anything, but also won't get duplicate data.
That all works great for most types of clients, and I've verified that it works with clients that support the standard HTTP conditional GET semantics.
But it doesn't work with JavaScript, because JSONP inserts a <script> tag into the DOM and lets the browser handle things, and there's no support (at least, none that I know of) for conditional GETs in <script> tags.
So I modified my PHP script to support passing an etag value. The returned data contains an etag value that's unique for that minute. When the JavaScript client receives data from the server, it saves the etag value so it can use that value in subsequent requests. The request takes the form:
http://api.mydomain.com/script.php?fmt=json&callback=jscallback&etag=ab79bc65e
If the etag of the data doesn't match the passed etag, then I send the new data.
This all works well and was surprisingly easy to code up using jQuery. My dilemma, though, is what to do if the etag matches. I see two choices:
Return an HTTP 304 (Not Modified)
Return an HTTP 200 (OK), but with the returned data containing just the header information (modified date, etag, etc.) and no actual data items.
If I do the first, then the JavaScript client code is greatly simplified. The browser seems to work just fine if it gets a 304 response to an injected <script> tag. But ... something bothers me about this solution. I don't know what it is, but it seems like I'm depending on behavior that could be browser-specific. Some browser might decide to report an error if it gets a 304.
Doing the second would require a little bit more work on the server, slightly more bandwidth, and would require the clients to check the data to see if the data was updated. It's more work for everybody, but it seems cleaner.
So, to my question. If you were writing a JavaScript client to get this data, which would you prefer? A silent failure that never calls your "success" callback? Or a "success" return that has no data (beyond status) in it? A third option?
Absent any discussion from others, I went with my gut here and implemented the second option. The web server returns an HTTP 200, and the data contains a "Not Modified" status code along with header information, but no records. That makes the JavaScript just slightly more complicated, but prevents me from depending on undocumented behavior.
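For what it's worth, the client ended up looking roughly like this (the status, etag, and items fields and the handleItems function are names from my own payload and code, not anything standard):

var lastEtag = null;

function poll() {
    $.ajax({
        url: 'http://api.mydomain.com/script.php',
        dataType: 'jsonp',
        jsonpCallback: 'jscallback',
        data: { fmt: 'json', etag: lastEtag },
        success: function (response) {
            if (response.status === 'Not Modified') {
                return; // nothing new this minute
            }
            lastEtag = response.etag;    // remember for the next request
            handleItems(response.items); // application-specific rendering
        }
    });
}

setInterval(poll, 30000); // poll every 30 seconds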