Save API response to the browser?

I am fetching some data from an API and need access to this same data on another page, but I don't want to request the same data again from the server.
I am trying to find a more efficient and quicker way to access the data on the second request.
Should I save the first response in the browser using session storage, local storage, or a cookie?

Definitely not a cookie!
Saving data in localStorage or IndexedDB is how it's usually done.
Basically, you need to read about Service Workers. They are used for caching and many other things as well, but most tutorials start with caching - exactly what you need.
I would recommend starting here: Google Progressive Web Apps Training
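Before reaching for Service Workers, the simplest version of this is a small localStorage wrapper with an expiry time. A minimal sketch, where the cache key and the five-minute freshness window are arbitrary choices, and the storage object is passed in so the same logic works with localStorage or sessionStorage:

```javascript
// Sketch: cache a fetched API response so a second page can reuse it.
// The cache key and max age are placeholder choices.
const CACHE_KEY = 'api-cache:/posts';
const MAX_AGE_MS = 5 * 60 * 1000; // treat cached data as fresh for 5 minutes

// `storage` is anything with getItem/setItem (localStorage, sessionStorage).
function readCache(storage, key, maxAgeMs, now = Date.now()) {
  const raw = storage.getItem(key);
  if (raw === null) return null;
  const { savedAt, data } = JSON.parse(raw);
  return (now - savedAt) <= maxAgeMs ? data : null; // stale entries are ignored
}

function writeCache(storage, key, data, now = Date.now()) {
  storage.setItem(key, JSON.stringify({ savedAt: now, data }));
}

// On the first page (browser only; '/api/posts' is a hypothetical endpoint):
//   const res = await fetch('/api/posts');
//   writeCache(localStorage, CACHE_KEY, await res.json());
// On the second page:
//   const cached = readCache(localStorage, CACHE_KEY, MAX_AGE_MS);
//   if (cached === null) { /* fall back to fetch() */ }
```

Use sessionStorage instead of localStorage if the data should not outlive the tab.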

Related

Whatsapp Cloud API Save Session Attributes

I am creating a component in AWS Lambda that is responsible for receiving a WhatsApp message, retrieving the text and sending it to another system.
This other system is capable of connecting to multiple cognitive engines, recovering the user's intention and generating a correct response.
Before getting into the trouble of saving information in DynamoDB, I wanted to find out if it was possible to save a field in the Whatsapp session.
I've read the documentation and done a lot of research in the Postman collection provided by Meta, and I don't see how to do it, or whether it's possible at all.
Basically, I need to save a session ID from the other system so I can keep track of the conversation.
I've read a lot of the documentation and I don't see anything that can help me.
Thank you very much for the help.

Restrict API access to my Frontend app using daily random API key

I'm creating an API that will be accessed solely by my Vue.js application for displaying data like movie times, video on demand links, etc. No private sensitive info, but I would like to avoid other people and bots using my resources to get free data. I know restricting an API to my own one-page frontend application is almost impossible, as someone can always either:
Get the API key from the page source
Spoof the referrer header to the one that my API is restricted to
So I was thinking to "attenuate the damage", i.e. the number of bots using my API, by having the backend server generate an API key every day, at noon for example. Then when the PHP backend serves the Vue.js application, it injects that API key into the Vue.js code, which will use it to query my Python API. If Vue gets an "incorrect API key" error (the case where the page was loaded at 11:59 and a request was sent at 12:01), the Vue.js app would refresh the page to get the new key.
This way, if someone took the API key from the source, it would expire in less than 24 hours anyway. Of course someone could scrape the page to get the API key every day and still use the API, but I feel this would stop a lot of bots and spammers.
Has anyone ever tried anything like this? Does it sound like a viable solution or there is something better to be done that I couldn't find on StackOverflow?
How about server-side rendering from the same location (server, private DNS) as the API server?
You can call fetch(api) within the server function itself.

What's the maximum amount of data that Vuex can store?

I have a blog app using vuex to store the post data that users have visited so that if they visit the same post again they don't need to fetch the data from the server again.
Is it a good idea to store all those post data in vuex?
Will it slow down the app?
Are there any memory leak issues with this approach?
Your store is held completely in memory. That means you have as much storage available as the user's device allows you in memory.
Most apps stay around 30-100 MB of memory usage. You should try to stay in this range as well (nobody likes insanely memory-hungry apps that slow down the computer).
That being said, you probably fetch your blog posts from a server. Hence, your browser will be able to cache these requests so it does not have to load them again.
What you should look into instead is how to set up a browser cache policy. This is set in the headers of your server response, via the Expires header (or the newer Cache-Control header).
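A minimal sketch of what such a cache policy looks like, written as a plain header-building helper (the Express route in the comment and the five-minute max age are assumptions, not anything from the question's app):

```javascript
// Sketch: instead of holding every visited post in Vuex, let the browser's
// HTTP cache do the work by sending caching headers from the server.
function cacheHeaders(maxAgeSeconds, now = new Date()) {
  return {
    // Cache-Control is the modern header; max-age is in seconds.
    'Cache-Control': `public, max-age=${maxAgeSeconds}`,
    // Expires is the older equivalent; Cache-Control wins when both are set.
    'Expires': new Date(now.getTime() + maxAgeSeconds * 1000).toUTCString(),
  };
}

// e.g. in a hypothetical Express handler:
//   app.get('/api/posts/:id', (req, res) => {
//     res.set(cacheHeaders(300)); // repeat visits within 5 minutes hit the cache
//     res.json(loadPost(req.params.id));
//   });
```

With this in place, revisiting a post is served from the browser cache, and the Vuex store only needs to hold what the current view actually uses.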

Is there a way to protect data from being scraped in a PWA?

Let’s say I have a client who has spent a lot of time and money creating a custom database. So there is a need for extra data security. They have concerns that the information from the database could get scraped if they allow access to it from a normal web app. A secure login won’t be enough; someone could log in and then scrape the data. Just like any other web app, a PWA won't protect against this.
My overall opinion is that sensitive data would be better protected on a hybrid app that has to be installed. I am leaning toward React-Native or Ionic for this project.
Am I wrong? Is there a way to protect the data from being scraped in a PWA?
There is no way to protect data that is visible to a browser client, regardless of technology - plain HTML or a PWA/hybrid app.
You can, however, make it more difficult.
Enforce limits on how much information a client can fetch per minute/hour/day. Clients that exceed the limits can be blocked/sued/whatever.
You can return some data as images rather than text. That would make the extraction process a bit more difficult, but it would complicate your app and use more bandwidth.
If we are talking about a native/hybrid app, it can add a few more layers to make things more secure:
Use an HTTPS connection and enforce a check for a valid certificate.
Even better, check for a specific certificate (pinning) so it cannot be replaced by a man-in-the-middle.
I guess an iOS app would be more secure than an Android one, as Android is easier to decompile and run as a modified version with the restrictions removed.
Again, rate limiting seems to be the most cost-effective solution.
On top of rate limiting, you can add some sort of pattern detection. For example, if a client requests data at regular intervals close to the limits, it is logical to think that the requests come from a robot and the data is being scraped.
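The rate-limiting suggestion above can be sketched as a fixed-window counter. The limits, the window size, and keying clients by IP are all arbitrary example choices here:

```javascript
// Sketch of server-side rate limiting (fixed window, in memory).
class RateLimiter {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.windows = new Map(); // clientId -> { windowStart, count }
  }

  // Returns true if the request is allowed, false if the client is over limit.
  allow(clientId, now = Date.now()) {
    const w = this.windows.get(clientId);
    if (!w || now - w.windowStart >= this.windowMs) {
      // First request, or the previous window expired: start a new window.
      this.windows.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    w.count += 1;
    return w.count <= this.maxRequests;
  }
}

// const limiter = new RateLimiter(60, 60_000); // e.g. 60 requests per minute
// if (!limiter.allow(req.ip)) { /* respond 429 Too Many Requests */ }
```

A production setup would typically keep the counters in a shared store such as Redis so the limit holds across server instances, but the logic is the same.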
HTTPS encrypts the data being retrieved from your API, so it could not be 'sniffed' by a man in the middle.
The data stored in the Cache and IndexedDB is not presented in an easily readable form, which makes it harder - though not impossible - to get at.
What you should do is protect access to the data behind authentication.
The only way someone could get to the stored data is by opening the developer tools and viewing the data in IndexedDB. Right now you can only see that a response has been cached in the Cache storage.
Like Alexander says, a hybrid or native application will not protect the data any better than a web app.

Google+: determine changes in the network

I am trying to determine changes in the Google+ network in an efficient manner (profile changes). My first idea was to use the eTags of the People.List and People.Get. My assumption was that the eTag in the List (person) would be the same as the one in the Get. This is not the case!
I would rather not fetch the details of all the people in the network and check the eTag for each of them. I would run out of daily API calls very quickly using that scenario.
Are there any other ways of determining the changes in the network?
Thanks!
I'm not aware of a way to notify your service when changes occur on a user's profile. I don't think that etags will work for what you are trying to do and the client libraries should already be using the etags to manage any query caching. You can perform a few tricks to make queries lighter on your backend though:
Batch API calls
Use a fields filter to get only the data that matters for your application
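As a sketch of the fields-filter suggestion, here is a helper that builds a People.list request URL restricted to a few fields. The endpoint path reflects the Google+ API as discussed in the question, and the chosen field list is just an example:

```javascript
// Sketch: request only id, etag and displayName for each person, plus the
// paging token, instead of the full default response.
function peopleListUrl(userId, fields) {
  const url = new URL(
    `https://www.googleapis.com/plus/v1/people/${encodeURIComponent(userId)}/people/visible`
  );
  url.searchParams.set('fields', fields);
  return url.toString();
}

// Example:
//   peopleListUrl('me', 'items(id,etag,displayName),nextPageToken')
```

Trimming the response this way does not reduce the number of calls, but it makes each call cheaper to transfer and parse while you check eTags.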
If you are running out of quota, you can also request to have your limits raised from the Google APIs console by clicking the Quotas link on the left. The developer relations team from Google+ reviews these requests regularly and will raise your quota limits if your usage justifies it.