RESTDataSource - How to know if a response comes from a GET request or from the cache - express

I need to get some data from a REST API in my GraphQL API. For that I'm extending RESTDataSource from apollo-datasource-rest.
From what I understood, RESTDataSource automatically caches requests, but I'd like to verify that this is indeed happening. Is there a way to know whether my request is getting its data from the cache or hitting the REST API?
I noticed that the first request takes some time, but the following ones are much faster, and also that the didReceiveResponse method is not called every time I make a query. Is that because the data is loaded from the cache?
I'm using apollo-server-express.
Thanks for your help!

You can time the requests as follows:
console.time('restdatasource get req')
await this.get(url) // await the promise so the timer measures the full request
console.timeEnd('restdatasource get req')
Now, if the time is under 100-150 milliseconds, that should be a request coming from the cache.
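For reference, a minimal sketch of where that timing could live (MoviesAPI, the base URL and the movies/ path are made-up names; exact types depend on your apollo-datasource-rest version):

import { RESTDataSource } from 'apollo-datasource-rest';

class MoviesAPI extends RESTDataSource {
  baseURL = 'https://example.com/api/'; // hypothetical REST API

  async didReceiveResponse(response: any, request: any) {
    // Log every response that reaches this hook; the question notes it is not
    // called on every query, so a missing log line is another hint that the
    // data came from the cache rather than from the REST API.
    console.log(`didReceiveResponse: ${response.status} ${request.url}`);
    return super.didReceiveResponse(response, request);
  }

  async getMovie(id: string) {
    console.time(`GET movie ${id}`);
    const movie = await this.get(`movies/${id}`);
    console.timeEnd(`GET movie ${id}`);
    return movie;
  }
}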

You can monitor the network tab in the developer console. You will be able to see which endpoints the application is calling. If it uses cached data, no new request to your endpoint will be logged.

If you are trying to verify this locally, one good option is to set up a local proxy so that you can see all the network calls being made (no network call means the response was read from the cache). You can then configure your app, following the Apollo documentation, to forward all outgoing calls through a proxy like mitmproxy.
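If you go the proxy route, the Apollo documentation describes routing outgoing traffic through the global-agent package; a minimal sketch (the port is whatever mitmproxy listens on, 8080 by default):

// proxy-setup.ts - run this before any data source makes a request
import { bootstrap } from 'global-agent';

bootstrap(); // outgoing http/https traffic now honors the GLOBAL_AGENT_* env vars

// then start the server with the proxy address, e.g.:
//   GLOBAL_AGENT_HTTP_PROXY=http://localhost:8080 node dist/index.js
// (for HTTPS interception you will also need to trust mitmproxy's CA
//  certificate, e.g. via NODE_EXTRA_CA_CERTS)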

Related

Request URI too long on spartacus services

I've been trying to make use of the service.getNavigation() method, but apparently the request URI is too long, which causes this error:
Request-URI Too Long
The requested URL's length exceeds the capacity limit for this server.
Is there a spartacus config that can resolve this issue?
Or is this supposed to be handled in the cloud (ccv2) config?
Not sure which service you are talking about specifically or what data you are passing to it. For starters, please read this: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/414
Additionally, it would benefit everyone if you could say something about the service you're using and the data you are trying to pass/get.
The navigation component fires a request for all component IDs. If your navigation has a lot of (root?) elements, the resulting HTTP GET request can exceed the maximum URL length the client or server accepts.
The initial implementation of loading components was actually done by a POST request, but the impression was that we would not need to support requests with so many components. I guess we were wrong.
Luckily, the legacy POST-based request is still in the code base: it's OccCmsComponentAdapter.findComponentsByIdsLegacy.
The easiest way for you to use this code is to provide a CustomOccCmsComponentAdapter that extends OccCmsComponentAdapter. Then you can override the findComponentsByIds method and simply call super.findComponentsByIdsLegacy, passing in a copy of the arguments.
A cleaner way would be to override the CmsComponentConnector and delegate the load directly to adapter.findComponentsByIdsLegacy. I would not start there, as it's more complicated. Do a POC with the first suggested approach.
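A rough sketch of the first approach (method names follow the answer above; exact signatures, imports and the module you provide this in vary between Spartacus versions, so treat it as a starting point):

import { Injectable, NgModule } from '@angular/core';
import { Observable } from 'rxjs';
import {
  CmsComponent,
  CmsComponentAdapter,
  OccCmsComponentAdapter,
  PageContext,
} from '@spartacus/core';

@Injectable()
export class CustomOccCmsComponentAdapter extends OccCmsComponentAdapter {
  // Delegate to the legacy POST-based implementation to avoid overly long GET URLs.
  findComponentsByIds(ids: string[], pageContext: PageContext): Observable<CmsComponent[]> {
    return super.findComponentsByIdsLegacy(ids, pageContext);
  }
}

@NgModule({
  providers: [
    { provide: CmsComponentAdapter, useClass: CustomOccCmsComponentAdapter },
  ],
})
export class CustomCmsAdapterModule {}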

Securing API keys in Nuxtjs

I've researched and found three different possibilities for solving my case: I'd like to make an async API call (using dotenv variables to store the credentials) and commit the returned data to Vuex on app init, keeping the credentials secure.
Currently I'm attempting using serverMiddleware, but I'm having trouble accessing the context. Is this possible? Currently just getting a "store is not defined" error.
Also, after researching, I keep seeing that it's not a good idea to use regular middleware, as running any code on the client side exposes the env variable... But I'm confused. Doesn't if (!process.client) { ... } take care of this? Or am I missing the bigger picture?
Additionally, if it does turn out to be okay to use middleware to secure the credentials, would using the separate-env-module be wise to make doubly sure that nothing gets leaked client-side?
Thanks, I'm looking forward to understanding this more thoroughly.
You can use serverMiddleware.
You can do it like this:
client -> serverMiddleware -> serverMiddleware calls the API.
That way the API key is not in the client bundle but remains on the server.
Example:
remote api is: https://maps.google.com/api/something
your api: https://awesome.herokuapp.com
Since your own API has access to environment variables and you don't want the API key included in the generated client-side build, you create a serverMiddleware that will proxy the request for you.
In the end, your client just makes a call to https://awesome.herokuapp.com/api/maps, and that endpoint calls https://maps.google.com/api/something?apikey=123456 and returns the response back to you.
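A minimal sketch of that setup (the /api/maps path, the upstream URL and MAPS_API_KEY are placeholders; it assumes Node 18+ where fetch is global, otherwise swap in axios or node-fetch):

// server-middleware/maps.js - registered in nuxt.config.js below
export default async function (req, res) {
  // the key is read from the server's environment and never reaches the client bundle
  const url = `https://maps.google.com/api/something?apikey=${process.env.MAPS_API_KEY}`;
  const upstream = await fetch(url);
  res.setHeader('Content-Type', upstream.headers.get('content-type') || 'application/json');
  res.end(await upstream.text());
}

// nuxt.config.js
export default {
  serverMiddleware: [{ path: '/api/maps', handler: '~/server-middleware/maps.js' }],
};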

Proper way to cache data from API call with nodejs

I am using Node.js to write a web service that calls an API for some data, but I am limited by the API to a number of calls per month, so I wish to cache the data I retrieve. I can then serve the cached data and re-fetch from the API at a timed interval.
Is this a good approach to this problem? And what caching framework should I use? I looked at node-redis, but I don't think a key-value store is appropriate for the data.
Thanks!
I would disagree with you regarding Redis. Redis is a very powerful key-value store that can easily be used for what you want. It is designed to have stuff dumped in it and taken out again. In your situation, you can easily cache the API response by saving it into Redis with the query as the key (if this is a REST API you're calling, you could just use the URL or the serialized request data as the key) and caching the response as a stringified JSON object (or an XML string if you happen to be getting that).
You can also set an expiry on the cached data, and it will be cleared when the time expires.
You could then wrap your API call in a helper function that checks the cache and returns the value if it's present. If it's not, it makes the API request, adds the result to the cache, and then returns it.
This is probably the most straightforward solution and seems to cover your use case pretty well.
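For illustration, a sketch of that helper (assuming node-redis v4 and Node 18+ global fetch; the URL and TTL are placeholders):

import { createClient } from 'redis';

const redis = createClient();          // defaults to localhost:6379
const TTL_SECONDS = 60 * 60;           // serve cached data for up to an hour

async function getWithCache(url: string) {
  const cached = await redis.get(url); // the request URL doubles as the cache key
  if (cached) return JSON.parse(cached);

  const response = await fetch(url);   // only hits the rate-limited API on a cache miss
  const data = await response.json();
  await redis.set(url, JSON.stringify(data), { EX: TTL_SECONDS });
  return data;
}

// usage, after `await redis.connect()` during startup:
//   const payload = await getWithCache('https://api.example.com/things');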

Caching Github API calls

I have a general question related to caching of API calls, in this instance calls to the Github API.
Let's say I have a page in my app that shows the filenames of a repo, and the content of the README. This means that I will have to do a few API calls in order to retrieve that.
Now, let's say I want to add something like memcached in between, so I'm not doing these calls over and over, if I don't need to.
How would you normally go about this? If I don't enable a webhook on GitHub, I have no way of knowing whether the cache should expire. I could always make a single call to get the current sha of HEAD and, if it hasn't changed, use the cache instead. But that works at the repo level, not at the file level.
I can imagine I could do something like that with the object-sha's, but if I need to call the API anyway to get those, it defeats the purpose of caching.
How would you go about it? I know a service like prose.io has no caching right now, but if it should, what would the approach be?
Thanks
Would just using HTTP caching be good enough for your use case? The purpose of HTTP caching is not only to avoid making requests when you already have a fresh response; it also enables you to quickly validate whether the response you already have in the cache is still valid, without the server sending the complete response again if it is still fresh.
Looking at GitHub API responses, I can see that GitHub is correctly setting the relevant HTTP headers (ETag, Last-modified, Cache-control).
So, you just do a GET, e.g. for:
GET https://api.github.com/users/izuzak/repos
and this returns:
200 OK
...
ETag:"df739f00c5053d12ef3c625ad6b0fd08"
Last-Modified:Thu, 14 Feb 2013 22:31:14 GMT
...
Next time - you do a GET for the same resource, but also supply the relevant HTTP caching headers so that it is actually a conditional GET:
GET https://api.github.com/users/izuzak/repos
...
If-Modified-Since: Thu, 14 Feb 2013 22:31:14 GMT
If-None-Match: "df739f00c5053d12ef3c625ad6b0fd08"
...
And lo and behold, the server returns a 304 Not Modified response and your HTTP client will pull the response from its cache:
304 Not Modified
So, the GitHub API does HTTP caching right and you should use it. Granted, you have to use an HTTP client that also supports HTTP caching. The best thing is that if you get a 304 Not Modified response, GitHub does not count it against your remaining API call quota. See: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#conditional-requests
GitHub API also sets the Cache-Control: private, max-age=60 header, so you have 60 seconds of freshness -- which means that requests for the same resource made less than 60 seconds apart will not even be made to the server.
Your reasoning about using a single conditional GET request to a resource that surely changes if anything in the repo changes (a resource showing the sha of HEAD, for example) sounds reasonable: if that resource hasn't changed, you don't have to check the individual files, because they surely haven't changed either.
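As a concrete illustration of the conditional request flow above, here is a sketch using an in-memory map of validators (assumes Node 18+ global fetch; the cache shape is made up):

// naive in-memory cache: URL -> { etag, lastModified, body }
const cache = new Map<string, { etag: string; lastModified: string; body: any }>();

async function getCached(url: string) {
  const entry = cache.get(url);
  const headers: Record<string, string> = {};
  if (entry) {
    headers['If-None-Match'] = entry.etag;
    headers['If-Modified-Since'] = entry.lastModified;
  }

  const res = await fetch(url, { headers });
  if (res.status === 304 && entry) {
    return entry.body; // served from our cache; does not count against the rate limit
  }

  const body = await res.json();
  cache.set(url, {
    etag: res.headers.get('ETag') ?? '',
    lastModified: res.headers.get('Last-Modified') ?? '',
    body,
  });
  return body;
}

// usage: const repos = await getCached('https://api.github.com/users/izuzak/repos');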

Is it possible to keep a session alive using NSURLConnection across different requests?

I am using NSURLConnection to make HTTP requests in my iPhone application. Everything works just fine.
The problem is that after logging in I need to keep the same session to get data from the server.
I read a few posts saying all I need is to use the same instance of NSURLConnection and it will use the same session... If that is true, it doesn't make sense to me, because NSURLConnection is not mutable and there is no method to change the request, and I have to access different pages.
Is there any simple way to keep a session using NSURLConnection?
If you are managing sessions using cookies, there is no need to do anything special to achieve session management. The URL loading system automatically sends any stored cookies appropriate for an NSURLRequest, unless the request specifies not to send cookies. So, your sessions should be managed automatically for you.
However, as Apple's documentation says, if someone has set the cookie-acceptance policy to reject all cookies or to accept cookies only selectively, you might be in a fix (you can change the cookie-acceptance policy yourself too). In such a case, you might resort to URL-based session management, in which you append a session identifier to the URL as a parameter (you can get this identifier as part of the successful log-in response), which can be extracted on the server side. This, however, is considered really bad practice.
Another way, which I have come across more often, is to get a session-identifier as part of the response for a successful log-in and include that identifier in all your subsequent requests as a parameter. Although this would require a major change in the way the server handles the sessions.