How to get the total number of requests and total download size for a webpage using Selenium 4.0 and Chrome

I am working with Selenium 4.0 and Chrome. Selenium 4 provides a way to interact with the Chrome DevTools (Network domain).
In my case I need to get the following information:
Total requests
Total size of data transferred
Total download size
Total time it took for all the requests to finish
Is there a way to do that? I know we can get individual requests, but can we get totals for those fields too?
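
One way to do it, sketched below with the Python bindings: enable Chrome's performance log (which carries the DevTools Network events) and aggregate over the events. This is a sketch rather than the only route (Selenium 4 also has a typed DevTools API); the URL is a placeholder.

    import json
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    # Ask Chrome to emit DevTools events into the "performance" log.
    options.set_capability("goog:loggingPrefs", {"performance": "ALL"})
    driver = webdriver.Chrome(options=options)

    driver.get("https://example.com")  # placeholder URL

    total_requests = 0
    transferred = 0   # bytes on the wire (headers + encoded bodies)
    decoded = 0       # body bytes after decompression
    first_ts = last_ts = None

    for entry in driver.get_log("performance"):
        msg = json.loads(entry["message"])["message"]
        method, params = msg.get("method", ""), msg.get("params", {})
        if method == "Network.requestWillBeSent":
            total_requests += 1
            ts = params.get("timestamp")
            if ts is not None:
                first_ts = ts if first_ts is None else min(first_ts, ts)
        elif method == "Network.dataReceived":
            decoded += params.get("dataLength", 0)
        elif method == "Network.loadingFinished":
            transferred += params.get("encodedDataLength", 0)
            ts = params.get("timestamp")
            if ts is not None:
                last_ts = ts if last_ts is None else max(last_ts, ts)

    driver.quit()

    print("Total requests:", total_requests)
    print("Total transferred (bytes):", transferred)
    print("Total download size (bytes):", decoded)
    if first_ts is not None and last_ts is not None:
        print("Total time (s):", last_ts - first_ts)

Here "transferred" (encodedDataLength) is bytes over the wire, "download size" (dataLength) is the decompressed body size, and the time figure spans from the first request sent to the last response finished.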

Related

What is the meaning of "Request Sent" in the Google Chrome network tab?

I am trying to debug why my API calls are taking so much time. The Google Chrome network tab shows a request time of 8 minutes. What does that mean: did the browser take 8 minutes to send the request, or did the server take 8 minutes to return the first byte of the response?
A few possible causes:
Your clients have a slow connection, or someone is abusing their bandwidth.
The data they are trying to access is very large.
The server has slow upload speed.
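
If it is unclear which of the two is happening, timing the call outside the browser disambiguates it. A minimal sketch with Python's requests library (the URL is a placeholder): with stream=True the call returns as soon as the headers arrive, so the first measurement approximates time to first byte.

    import time
    import requests

    url = "https://example.com/api"  # placeholder for the slow endpoint

    start = time.monotonic()
    # stream=True returns once headers arrive, before the body is downloaded
    with requests.get(url, stream=True, timeout=600) as resp:
        ttfb = time.monotonic() - start   # ~ time to first byte
        body = resp.content               # now pull the full body
        total = time.monotonic() - start

    print(f"Time to first byte: {ttfb:.1f}s, full download: {total:.1f}s")

If the first number is near eight minutes, the server is slow to produce the response; if only the second is, the transfer itself is the bottleneck.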

How many JSON requests can be made on Google Spreadsheet simultaneously?

I want to make a website that will fetch data from a Google spreadsheet. So, my question is: how many simultaneous requests for JSON data can Google Sheets handle? I will write a script in Google Sheets to automatically update the data in the sheet every hour.
Thanks :)
If you are referring to the request limit, you may check the documentation about Usage Limits.
This version of the Google Sheets API (v4) has a limit of 500 requests per 100 seconds per project, and 100 requests per 100 seconds per user. Limits for reads and writes are tracked separately. There is no daily usage limit.
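
If you do brush against those per-100-second limits, the API answers with HTTP 429 and the standard remedy is to retry with exponential backoff. A rough sketch against the v4 values endpoint; the key, spreadsheet ID, and range below are placeholders:

    import time
    import requests

    API_KEY = "YOUR_API_KEY"            # placeholder
    SPREADSHEET_ID = "YOUR_SHEET_ID"    # placeholder
    CELL_RANGE = "Sheet1!A1:D100"       # placeholder

    URL = (f"https://sheets.googleapis.com/v4/spreadsheets/"
           f"{SPREADSHEET_ID}/values/{CELL_RANGE}?key={API_KEY}")

    def fetch_values(max_attempts=5):
        for attempt in range(max_attempts):
            resp = requests.get(URL)
            if resp.status_code == 429:    # quota exceeded: back off, retry
                time.sleep(2 ** attempt)
                continue
            resp.raise_for_status()
            return resp.json().get("values", [])
        raise RuntimeError("Still rate-limited after retries")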
To view or change usage limits for your project, or to request an increase to your quota, do the following:
If you don't already have a billing account for your project, then create one.
Visit the Enabled APIs page of the API library in the API Console, and select an API from the list.
To view and change quota-related settings, select Quotas. To view usage statistics, select Usage.
Hope this helps!

Google Vision API giving sporadic 403 errors

I have a very basic Python app that calls the Google Vision API and asks for OCR on an image.
It was working fine a few days ago using a basic API key. I have since created a modified version that uses a service account, which also worked.
All my images are ~500 kB.
However, today about 80% of all calls return "403 reauthorized" when I try to run OCR on an image. The remainder run as they always have done...
The Google quota limits page lists:
MB per image: 4 MB
MB per request: 8 MB
Requests per second: 10
Requests per feature per day: 700,000
Requests per feature per month: 20,000,000
Images per second: 8
Images per request: 16
And I am way below all of these limits (by orders of magnitude). Any idea what might be going on?
It seems strange that simply running the same code, with the same input images, sometimes gives a 403 and sometimes does not... perhaps the error is indicative of the API struggling with demand?
Thanks Dave - yes, I have. After much debugging at my end, it seems it was the API that was up the spout :-) All fine now...
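
For anyone who runs into similar sporadic failures, a short backoff-and-retry loop is a cheap safeguard while the service misbehaves. A sketch using the google-cloud-vision client, assuming credentials are configured via GOOGLE_APPLICATION_CREDENTIALS; the image path is a placeholder:

    import time
    from google.api_core import exceptions
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    def ocr_with_retry(path, max_attempts=4):
        with open(path, "rb") as f:
            image = vision.Image(content=f.read())
        for attempt in range(max_attempts):
            try:
                response = client.text_detection(image=image)
                return response.full_text_annotation.text
            except (exceptions.PermissionDenied,
                    exceptions.ResourceExhausted):
                # sporadic 403/429: back off and try again
                time.sleep(2 ** attempt)
        raise RuntimeError("OCR still failing after retries")

    print(ocr_with_retry("scan.png"))  # placeholder image path

(Ordinarily a 403 is not worth retrying; it only makes sense here because the errors were transient on the service side.)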

Kimono Desktop crawl stuck at "Queued"

I'm using Kimono Desktop to crawl a site and post the data to a Firebase endpoint. Everything is working from beginning to end, with the exception of auto-running.
When creating or editing the Kimono API with the Chrome extension, the desktop app crawls and reports "the last crawl was successful". The next run is listed at the set crawl interval (I've tried 5 min, 30 min, and 1 hour on multiple machines), but when that time elapses, Next Auto-Run says "Queued" and never actually runs.
(Screenshot: Kimono Desktop crawl setup)
The API in the screenshot above is set to crawl every five minutes but has been queued for at least 2 hours 55 minutes without running. Out of curiosity, I let one machine go two days without another crawl.
Clicking Start Crawl works fine but defeats the purpose of the auto-run. Ideas?
Unfortunately, as far as I'm aware, the Kimono Desktop version does not support auto-run.
A few people have found success by calling the API on demand with a script at intervals of their own choosing, which seems to be the only way to do this at the moment.
I'm sure more people will find ways to tinker with the desktop version as time goes on, but no luck with auto-run so far for me, unfortunately.
For details on how to use the on-demand calls with the desktop version, the answer HERE should be able to get you going.
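
If you go the scripted route, the shape of it is just a timer around an HTTP call. A rough sketch; KIMONO_START_URL is a hypothetical placeholder standing in for whatever on-demand endpoint the linked answer describes, not a real URL:

    import time
    import requests

    # Hypothetical placeholder: substitute the on-demand endpoint
    # described in the linked answer.
    KIMONO_START_URL = "http://127.0.0.1:8000/ondemand"
    INTERVAL_SECONDS = 5 * 60  # stand-in for a five-minute auto-run

    while True:
        try:
            resp = requests.post(KIMONO_START_URL)
            print("Crawl triggered:", resp.status_code)
        except requests.RequestException as exc:
            print("Trigger failed:", exc)
        time.sleep(INTERVAL_SECONDS)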

Making a Selenium version of ab

I would like to use Selenium instead of ab (ApacheBench) for load testing my server, so I created a Selenium script that requests the same URL over and over again. There is, however, some information I would need to replicate ab more closely:
1) Does requesting the same page in two different tabs count as two concurrent requests?
2) How can I use Selenium to get information such as the "Resource Size" of a request and count the number of requests per second? (I realize that when requesting a page, Selenium uses a real browser, so downloading a page and all of its content involves separate requests.)
3) Also, is there a way to retrieve the number of concurrent connections?
I am currently using the Python bindings, but am willing to use any language with Selenium bindings if I need to.
The short answer is NO, none of what you want to do is particularly easy using WebDriver (aka Selenium 2).
1) The same page in two different tabs will likely utilize the browser's cache, so it will not have the same effect as two real users on different machines hitting the page. You can control the cache operation of some browsers, but maybe not well enough for your purposes.
2 & 3) WebDriver operates against the DOM, which AFAIK does not have access to the low-level network stats you are looking for.