We are developing a private app for use with our Shopify store. To ensure we don't cross the API limits, we've implemented a basic configurable delay per API call.
We started with the documented API limit of 500 calls every 5 minutes, which maps to a delay of 600 ms per call. However, after 50 calls the server stops responding to our HTTP GET requests.
Even after we increased the delay to 1200 ms per API call, it still fails after 50 calls.
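For reference, the per-call delay is just the rate-limit window divided by the call budget; here is a minimal sketch of that mapping (class and method names are mine):

```java
class RateLimitDelay {
    // Spread a call budget evenly across a rate-limit window.
    static long delayMs(int callsPerWindow, long windowMs) {
        return windowMs / callsPerWindow;
    }

    public static void main(String[] args) {
        // 500 calls per 5 minutes -> 600 ms between calls
        System.out.println(delayMs(500, 5 * 60 * 1000L)); // prints 600
    }
}
```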
We are using Shopify4J on a store that is in a trial period (myfirststore-3).
I've looked at the wiki, the API docs, the forums, and SO, but there is no mention of any limit other than the official 500/5 min one.
Are we running into a different call limit for private apps or trial stores?
It seems the problem is in the Java client implementation itself. We figured this out by moving all the initializations inside our loop.
After making that change, we were able to make up to 500 API calls per 5 minutes, as documented.
We had added the Apache HttpClient library to the Shopify4J package to make it work on our backend servers, and it probably needs some tweaking. Of course this is not a long-term solution to our problem, but it does answer this question.
Once we figure out the problem in our code, we will post a comment here.
I need to get some data from a REST API in my GraphQL API. For that I'm extending RESTDataSource from apollo-datasource-rest.
From what I understood, RESTDataSource automatically caches requests, but I'd like to verify that this is actually happening. Is there a way to know whether my request is getting its data from the cache or hitting the REST API?
I noticed that the first request takes some time, but the following ones are much faster, and also that the didReceiveResponse method is not called every time I make a query. Is that because the data is loaded from the cache?
I'm using apollo-server-express.
Thanks for your help!
You can time the requests like the following (note that this.get returns a promise, so await it before stopping the timer):
console.time('restdatasource get req')
await this.get(url)
console.timeEnd('restdatasource get req')
Now, if the time is under 100-150 milliseconds, that should be a request coming from the cache.
You can also monitor the network tab in the browser console. You will be able to see which endpoints the application is calling; if it uses cached data, no new request to your endpoint will be logged.
If you are trying to verify this locally, one good option is to set up a local proxy so that you can see all the network calls being made (no network call means the response was read from the cache). You can then configure your app, following the Apollo documentation, to forward all outgoing calls through a proxy like mitmproxy.
I've been trying to make use of the service.getNavigation() method, but apparently the request URI is too long, which causes this error:
Request-URI Too Long
The requested URL's length exceeds the capacity limit for this server.
Is there a spartacus config that can resolve this issue?
Or is this supposed to be handled in the cloud (ccv2) config?
Not sure which service you are talking about specifically or what data you are passing to it. For starters, please read this: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/414
Additionally, it would benefit everyone if you could say something about the service you're using and the data you are trying to pass/get.
The navigation component fires a request for all componentIds. If your navigation has a lot of (root?) elements, the resulting HTTP GET request might exceed the maximum URL length allowed by the client or server.
The initial implementation of loading components was actually done by a POST request, but the impression was that we would not need to support requests with so many components. I guess we were wrong.
Luckily, the legacy POST-based request is still in the code base: OccCmsComponentAdapter.findComponentsByIdsLegacy.
The easiest way for you to use this code is to provide a CustomOccCmsComponentAdapter that extends OccCmsComponentAdapter. You can then override the findComponentsByIds method and simply call super.findComponentsByIdsLegacy, passing in a copy of the arguments.
A cleaner way would be to override the CmsComponentConnector and delegate the load directly to adapter.findComponentsByIdsLegacy. I would not start here, as it's more complicated; do a POC with the first suggested approach.
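For illustration, the override in the first approach looks roughly like this; the classes below are self-contained stand-ins for the real Spartacus ones (which live in @spartacus/core and return Observables), so only the delegation pattern is real:

```typescript
// Stand-in for the real OccCmsComponentAdapter from @spartacus/core.
class OccCmsComponentAdapter {
  // Default GET-based lookup: every component id ends up in the query string.
  findComponentsByIds(ids: string[]): string {
    return `GET /cms/components?componentIds=${ids.join(',')}`;
  }

  // Legacy POST-based lookup: the ids travel in the request body,
  // so the URL stays short no matter how many components there are.
  findComponentsByIdsLegacy(ids: string[]): string {
    return `POST /cms/components (body carries ${ids.length} ids)`;
  }
}

// Custom adapter that routes every lookup through the legacy POST request.
class CustomOccCmsComponentAdapter extends OccCmsComponentAdapter {
  findComponentsByIds(ids: string[]): string {
    return this.findComponentsByIdsLegacy([...ids]); // pass a copy of the arguments
  }
}
```

In a real app you would register the custom adapter in place of OccCmsComponentAdapter via Angular dependency injection.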
I am using Karate 0.9.0. I need to limit requests to 5 per second in my test suite due to a threshold limit at the gateway. Is this possible in Karate? If yes, how?
Here is the suggestion: configure headers as a JavaScript function. Within the function body, use a Java singleton (with a static method) to track how many requests have been sent and how much "sleep" needs to be added to maintain the required throttling/threshold.
You will need some Java skills to do this, all the best. The documentation has details on how to call Java code.
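A minimal sketch of such a singleton (the class and method names are mine, not part of Karate): each caller blocks just long enough to keep throughput at or below 5 requests per second, and the headers function would invoke it, e.g. via Java.type, before every request:

```java
class RequestThrottle {
    private static final long MIN_INTERVAL_MS = 200; // 1000 ms / 5 requests
    private static long lastCallAt = 0;

    // Blocks until at least MIN_INTERVAL_MS has elapsed since the last call.
    static synchronized void acquire() {
        long now = System.currentTimeMillis();
        long waitMs = (lastCallAt + MIN_INTERVAL_MS) - now;
        if (waitMs > 0) {
            try {
                Thread.sleep(waitMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            now = System.currentTimeMillis();
        }
        lastCallAt = now;
    }
}
```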
I got into work today and got a telling off from my boss because the company website wasn't loading properly on mobile. I looked in the console and saw the following error:
Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience. For more help, check https://xhr.spec.whatwg.org/.
I clicked the link https://xhr.spec.whatwg.org/#xmlhttprequest and the update was made yesterday (7th June 2016). How do I fix this error?
Usually this message alone should not prevent a page from rendering.
It just states that you should not use Synchronous XMLHttpRequest.
Synchronous requests slow down rendering, since the site has to wait for each request to complete, one after another.
To "fix" that, all your AJAX requests must be made asynchronous, e.g. by passing true as the third argument to xhr.open() (asynchronous is also the default).
I've implemented a controller method which makes a couple of requests to a third-party API, which is quite slow. Further, I've utilized one of Thin's asynchronous features:
# This informs thin that the request will be handled asynchronously
self.response_body = ''
self.status = -1
Thread.new do
  # This will be the response to the client
  env['async.callback'].call('200', {}, "Response body")
end
(blog post about it)
However, I'm curious whether this could be implemented without using Thin, or, to be more precise, whether it could be accomplished with Apache/Phusion Passenger.
Any suggestions, pointers, links, comments or answers are appreciated. Thanks
Not sure whether this is possible now with Passenger 4. In this article they announced having made a complete redesign to support the evented model. As they also have plans to support Node.js, I would expect the above method to work.
However if you look at this post from them, they clearly say:
... There is another way to support high I/O concurrency though: multi-threading ...
And so this leaves multithreaded servers as the only serious options for handling streaming support in Rails apps....
Rails is just not designed for the evented process model, but it supports the multi-threaded model quite well. And a multithreaded setup can be achieved with Passenger Enterprise.
Another option might be to extract this problem to another application (see Railscast).
So, for example, instead of directly calling a third-party API in your controller, which will spend most of its time blocking on the I/O call, you process this request in a background job. The user gets an immediate response and then directly subscribes to some Faye message channel. In your background job, when the third-party call is ready, you publish the response to this channel via Faye.
PROFIT.
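That flow can be sketched with plain Ruby threads, using a Queue as a stand-in for the Faye channel (all names here are illustrative):

```ruby
# Stand-in for a Faye channel: the job publishes here, the client subscribes.
CHANNEL = Queue.new

# "Controller action": start the slow third-party call in a background
# thread and return to the user immediately.
def controller_action
  Thread.new do
    sleep 0.1                                          # simulates the slow API call
    CHANNEL << { status: 200, body: 'Response body' }  # publish when ready
  end
  'request accepted'                                   # immediate response
end

puts controller_action   # returns right away
message = CHANNEL.pop    # the subscriber blocks until the job publishes
puts message[:body]
```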