How do batchUpdate calls count towards usage limits? - google-sheets-api

Calls like spreadsheets.batchUpdate and spreadsheets.values.batchUpdate can take multiple update actions in a single call.
I read about the Google Sheets API usage limits at https://developers.google.com/sheets/api/limits, but it is not clear whether these calls count as one request or as multiple requests. Could you explain?
Thanks

With spreadsheets.batchUpdate and spreadsheets.values.batchUpdate, even when multiple requests are included in one batch call, the whole batch counts as a single API request against the quota.
For example, when 10 requests are included in the request body of a batchUpdate call and that body is sent, only one API request is consumed.
As for the maximum number of requests in one batchUpdate, I have never investigated this thoroughly, but in my experience a script that sent 100,000 requests in a single batchUpdate ran fine.
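To illustrate, here is a minimal sketch using the google-api-python-client library (an assumption; any client would do), with the spreadsheet ID and credentials file as placeholders. The single batchUpdate call below carries ten updateCells requests but consumes only one request of quota:

from google.oauth2 import service_account
from googleapiclient.discovery import build

SPREADSHEET_ID = "<your-spreadsheet-id>"  # placeholder

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/spreadsheets"],
)
service = build("sheets", "v4", credentials=creds)

# Ten updateCells requests in one body: one HTTP call, one unit of quota.
requests_body = [
    {
        "updateCells": {
            "range": {"sheetId": 0, "startRowIndex": i, "endRowIndex": i + 1,
                      "startColumnIndex": 0, "endColumnIndex": 1},
            "rows": [{"values": [
                {"userEnteredValue": {"stringValue": f"row {i}"}}]}],
            "fields": "userEnteredValue",
        }
    }
    for i in range(10)
]

service.spreadsheets().batchUpdate(
    spreadsheetId=SPREADSHEET_ID,
    body={"requests": requests_body},
).execute()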
If I misunderstood your question, I apologize.

Related

Main difference between GET and POST API calls?

I am getting confused about the difference between GET and POST.
Can you provide a good resource or an explanation with examples?
I'm just getting started with this.
Thank you.
GET is for retrieving data via the URL - for example, example.com?tab=settings
'tab' is the parameter you would use to 'GET' the data from.
POST carries its data in the request body rather than in the URL, so parameters are not exposed in the address bar, history, or logs (which is why it is often called "more secure", though it still needs HTTPS to be safe). A short code example follows the list below.
Other points worth noting:
GET requests can be cached
GET requests remain in the browser history
GET requests can be bookmarked
GET requests should never be used when dealing with sensitive data
GET requests have length restrictions
GET requests are only used to request data (not modify)
POST requests are never cached
POST requests do not remain in the browser history
POST requests cannot be bookmarked
POST requests have no restrictions on data length
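To make the difference concrete, here is a minimal sketch using Python's requests library against the httpbin.org echo service (both choices are assumptions; any HTTP client and endpoint would do):

import requests

# GET: parameters travel in the URL itself, visible in history and logs.
r = requests.get("https://httpbin.org/get", params={"tab": "settings"})
print(r.url)             # https://httpbin.org/get?tab=settings
print(r.json()["args"])  # {'tab': 'settings'}

# POST: the payload travels in the request body, not in the URL.
r = requests.post("https://httpbin.org/post",
                  data={"username": "alice", "password": "s3cret"})
print(r.url)             # https://httpbin.org/post  (nothing in the URL)
print(r.json()["form"])  # {'username': 'alice', 'password': 's3cret'}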

Data storage after API call from Postman/SoapUI

I need to create an automated test setup for some web services, and plan to use SoapUI or Postman for that. My question is pretty basic: what happens to the data after a request is made?
E.g. if the response contains data from a system and is displayed in the Postman UI, will Postman store the response? And what happens to it after the request?
I'm asking for security purposes, and I was not able to find a concrete answer myself. Thank you in advance.
Postman gives you explicit control over whether data is stored. When you run a collection, you can specify in the settings whether to store responses, cookies, etc. Configure it as per your needs.
As per the official site:
"Postman does not track any content of your requests/responses."
These options are under File --> Settings.
You can even avoid using the cloud version if you don't want things synced.
Re SoapUI...
If you call a service once, the data remains in the UI. If you run it a second or third time, only the last response is shown in the UI.
Once you close SoapUI, the request and response data is gone.
However, you can save the data from every request and response by using a DataSink test step, should that be what you want.

Azure QnA Maker - add multiple URLs through REST API

I have a working QnA Maker instance, and I have manually added a few URLs of public websites.
Now I want to add many more URLs. I guess this means mastering the REST API? What method should I call? Any examples to start from?
I found this sample, which got me started:
https://learn.microsoft.com/en-us/azure/cognitive-services/QnAMaker/quickstarts/create-new-kb-python
It's an example of calling the REST API's "Knowledgebase - Create" operation.
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/qnamaker/knowledgebase/create
It crashes if you add more than 10 URLs; eventually I found that create is limited to 10 URLs per call.
Adding more requires a separate REST call - "Knowledgebase - Update" with an "add" node in the request body.
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/qnamaker/knowledgebase/update
I think this is limited to 5 URLs per call. I extended the Python code to loop over my list of URLs and "add" them all (see the sketch below). It seems to work, but each call completes more and more slowly. My guess is that QnA Maker re-runs some internal indexing logic over the whole knowledge base on every update call; if so, that per-call limit is probably counter-productive.
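Here is a minimal sketch of that loop, assuming the requests library; the endpoint, key, and knowledge-base ID are placeholders, and the batch size of 5 is the limit I observed rather than a documented figure:

import time
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com"  # placeholder
SUBSCRIPTION_KEY = "<your-qnamaker-key>"                 # placeholder
KB_ID = "<your-knowledge-base-id>"                       # placeholder

HEADERS = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
           "Content-Type": "application/json"}

def wait_for_operation(operation_id):
    # Updates run asynchronously; poll the Operations endpoint until done.
    url = f"{ENDPOINT}/qnamaker/v4.0/operations/{operation_id}"
    while True:
        state = requests.get(url, headers=HEADERS).json()["operationState"]
        if state in ("Succeeded", "Failed"):
            return state
        time.sleep(5)

def add_urls(urls, batch_size=5):
    # "Knowledgebase - Update" with an "add" node, a few URLs at a time.
    update_url = f"{ENDPOINT}/qnamaker/v4.0/knowledgebases/{KB_ID}"
    for i in range(0, len(urls), batch_size):
        batch = urls[i:i + batch_size]
        resp = requests.patch(update_url, headers=HEADERS,
                              json={"add": {"urls": batch}})
        resp.raise_for_status()
        # Wait for each update to finish before sending the next one.
        print(batch, "->", wait_for_operation(resp.json()["operationId"]))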

What kind of information can be collected about "website third parties"?

I have collected all the requests made by websites, with the aim of identifying third parties through the requests a website makes. I used Selenium and WebDriver to do that.
These requests can be made by the JavaScript present in the source code of the website, can be triggered dynamically by the web page from advertisements, or can be initiated by services such as Google, DoubleClick, or Facebook. These requests help to track the data that these websites share, with or without the user's consent.
You can see an example of the requests made when the browser loads www.focuscamera.com/ in this Excel file:
https://drive.google.com/file/d/16wNA0dFUehrjPww31TAIj8GZUZ05LsIU/view?usp=sharing
My questions are:
1- Which kind of HTTP header fields can be used for my analysis if I intend to gather some info about third parties? My goal is to distinguish and differentiate third-party behavior.
For example, the Content-Length field in a request indicates the size of the entity body. So does a request with a higher Content-Length mean that the third party received and collected more data/information?
2- What exactly does Content-Length indicate? What exactly does the "HTTP request body data" contain?
3- Are there any other HTTP header fields I can use to distinguish and differentiate third-party behavior? (A list of the fields I collect can be found in sheet1 of the Excel file I shared above.)
4- Is there any other information on the internet that I can use to distinguish and differentiate third-party behavior? For example, I use cookiepedia.co.uk to find out what kind of service a third party provides: functionality, performance, or targeting/advertising.
It sounds like you may be reinventing the wheel here. Take a look at https://webbkoll.dataskydd.net; it provides lots of security and privacy analysis of any site you like. You can also generate nice visual request maps using https://requestmap.webperf.tools.
Try those tools on sites like wired.com and forbes.com to see how spectacularly bad it can get!
To answer your questions specifically:
Headers are not massively useful in themselves (it's the request itself that's more interesting), but the important ones from a privacy perspective are Referer and Set-Cookie. Content-Length does indeed tell you how big the request body is; it will always be 0 on a GET request, and so is usually omitted there. Large POST requests indicate that more data is being transmitted, but that may be down to inefficiency rather than anything else.
Content-Length indicates the length (in bytes) of the body of a POST request. An HTTP request body can contain any kind of data: text, images, video, audio, or other formatted data.
There are some, but most headers are functional rather than semantic, concerned with making the request actually work. It's more interesting that requests happen at all than what they contain.
You can't necessarily tell what kind of service a third party is providing from the requests themselves, but the domains the requests go to are more interesting. For example, anything going to doubleclick.net will be ad- and tracking-related, because of what that domain is known to be used for (Webbkoll cites these as "known trackers"). So you're correct that sites like Cookiepedia can help you find out what a particular service does.
The divisions between functional/performance/profiling are mostly made up by ad companies to excuse their behaviour. You can't tell what they are using the data for, only whether they are receiving data and what data they are receiving (because you can see what's in the requests they make using browser developer tools). To clarify: a site could receive your full name and address and do absolutely nothing with it, but you can't tell that from looking at the data that's sent. In privacy terms it's always best to assume the worst (because ad companies absolutely cannot be trusted!), so if they are receiving data, assume it will be abused.
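If you do want to keep rolling your own collection, here is a minimal sketch of the kind of per-host analysis described above, using the selenium-wire package (an assumption; plain Selenium does not expose request headers). Grouping by hostname is deliberately naive; a real analysis would use a public-suffix list to get registrable domains:

from collections import defaultdict
from urllib.parse import urlparse

from seleniumwire import webdriver  # pip install selenium-wire

SITE = "https://www.focuscamera.com/"

driver = webdriver.Chrome()
driver.get(SITE)

site_host = urlparse(SITE).hostname.removeprefix("www.")
bytes_sent = defaultdict(int)
for req in driver.requests:
    host = urlparse(req.url).hostname or ""
    if host == site_host or host.endswith("." + site_host):
        continue  # skip first-party traffic (naive subdomain check)
    # Content-Length is only present on requests with a body (e.g. POSTs).
    bytes_sent[host] += int(req.headers.get("Content-Length", 0))

driver.quit()

# Hosts receiving the most request-body data are candidates for closer review.
for host, total in sorted(bytes_sent.items(), key=lambda kv: -kv[1]):
    print(f"{total:>8} bytes -> {host}")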

Using HTTP for REST API: automatically cacheable?

I was wondering: to make a "RESTful API", you need to satisfy the six architectural constraints listed below:
http://en.wikipedia.org/wiki/Representational_state_transfer#Architectural_constraints
Is it safe to state that when you are creating a REST API over the HTTP protocol, the "cacheable" constraint is automatically satisfied? Because HTTP already provides a cache system "out-of-the-box" through HTTP headers: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
So no need anymore to worry about that?
Maybe sounds like a stupid question, but I want to be sure. :-)
Kind regards!
K.
Let me expand a bit on the challenges of creating correct caching logic:
Typically, the backend of the API is a database holding all kinds of little pieces of information.
The typical presentation within a REST API can be an accumulated view (so, let's say, a user's activity log, containing a list of the last user actions within a portal, something along those lines).
Now, in order to know whether your API URL /user/123/activity has changed (after the timestamp the client sends you in the If-Modified-Since header), you would have to check whether there have been any additional activities after the last request. The overhead of doing that might be the same as simply fetching the result again. So, in a lot of cases, people just don't bother, which is a shame, as proper caching can have a huge impact on web app performance.
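Here is a minimal sketch of that If-Modified-Since handling, using Flask (an assumption; any framework works). latest_activity_at() is a hypothetical helper standing in for the database lookup, which is exactly the overhead described above:

from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)

def latest_activity_at(user_id):
    # Hypothetical: SELECT MAX(created_at) FROM activity WHERE user_id = ...
    return datetime(2024, 1, 1, tzinfo=timezone.utc)

@app.route("/user/<int:user_id>/activity")
def activity(user_id):
    last_modified = latest_activity_at(user_id)
    # Werkzeug parses the If-Modified-Since header into a datetime for us.
    since = request.if_modified_since
    if since is not None and last_modified <= since:
        return "", 304  # client copy is still fresh; skip the full fetch
    response = jsonify({"user": user_id, "actions": []})  # placeholder body
    response.last_modified = last_modified  # sets the Last-Modified header
    return response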
Maybe this gives a bit more detail,
Jan
You are correct: HTTP already gives you the means to identify cacheable elements, but as your API is generated by some server-side logic, you will still need to make sure the code "behind" your API sets the right HTTP headers and, in an ideal world, is ready and able to react to If-Modified-Since requests.
Creating a reliable Last-Modified timestamp, as well as checking against it reliably, is actually quite a feat ;-)
Hope this helps a bit,
Jan