What is the correct way to get the total count of a metric in Graphite?

I am trying to fetch the total number of successful logins using Graphite's render API.
http://localhost/render?target=hitcount(stats_counts.login.success,"99years",true)&from=-99years&format=json
This query is taking too long to execute (~30 seconds).
Is this the correct way to fetch the total number?

It depends on:
The amount of data the API call has to read to render this query.
Granularity: one point per month will be ~30 times faster than one point per day (see the sketch after this list).
Physical host specs: while the query runs, use iostat -x 1 to see whether your disk is at 100% I/O.
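
A minimal sketch of the same call via the render API's JSON output, assuming a local Graphite instance and the Python requests library; the target path comes from the question, and "1month" is only an example of a coarser bucket:

    import requests

    def total_hits(target, bucket, window="-99years"):
        # hitcount() aggregates the counter into one bucket per interval;
        # a coarser bucket means far fewer datapoints to compute and return.
        params = {
            "target": 'hitcount(%s,"%s",true)' % (target, bucket),
            "from": window,
            "format": "json",
        }
        resp = requests.get("http://localhost/render", params=params, timeout=60)
        resp.raise_for_status()
        datapoints = resp.json()[0]["datapoints"]
        # Datapoints arrive as [value, timestamp] pairs; skip empty buckets.
        return sum(v for v, _ts in datapoints if v is not None)

    print(total_hits("stats_counts.login.success", "1month"))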

Related

Kusto Query limitation gets timeout error

I am running a Kusto query in Azure Diagnostics, querying logs from the last week, and the query times out after 10 minutes. Is there a way I can increase the timeout limit? If yes, can someone please guide me through the steps? I downloaded Kusto Explorer but couldn't see an easy way of connecting to my Azure cluster. I need help with how to increase this timeout duration from inside the Azure portal for the query I am running.
It seems like 10 minutes is the maximum value for the timeout.
https://learn.microsoft.com/en-us/azure/azure-monitor/service-limits
Query API limits:
Maximum records returned in a single query: 500,000
Maximum size of data returned: ~104 MB (~100 MiB). The API returns up to 64 MB of compressed data, which translates to up to 100 MB of raw data.
Maximum query running time: 10 minutes. See Timeouts for details.
Maximum request rate: 200 requests per 30 seconds per Azure AD user or client IP address. See Log queries and language.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/api/timeouts
Timeouts
Query execution times can vary widely based on:
The complexity of the query
The amount of data being analyzed
The load on the system at the time of the query
The load on the workspace at the time of the query
You may want to customize the timeout for the query.
The default timeout is 3 minutes, and the maximum timeout is 10 minutes.
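
Per the timeouts page above, the client requests a longer server-side timeout with the Prefer: wait=<seconds> header. A minimal sketch against the Log Analytics query REST API using the Python requests library; the workspace ID, bearer token, and the KQL query itself are placeholders, not values from the question:

    import requests

    WORKSPACE_ID = "<workspace-id>"    # placeholder
    TOKEN = "<aad-bearer-token>"       # placeholder

    resp = requests.post(
        f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            # Raise the server-side timeout from the 3-minute default to the
            # 10-minute maximum (600 seconds).
            "Prefer": "wait=600",
        },
        json={"query": "AppRequests | where TimeGenerated > ago(7d) | count"},
        timeout=660,  # keep the client timeout above the server timeout
    )
    resp.raise_for_status()
    print(resp.json())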

BigQuery Count Appears to be Processing Data

I noticed that running a SELECT count(*) FROM myTable on my larger BQ tables yields long running times, upwards of 30-40 seconds, despite the validator claiming the query processes 0 bytes. This doesn't seem quite right when 500 GB queries run faster. Additionally, total row counts are listed under Details -> Table Info. Am I doing something wrong? Is there a way to get total row counts instantly?
When you run a COUNT(*), BigQuery still needs to allocate resources (such as slots, shards, etc.). You might be hitting some limit that causes a delay; for example, the default slot allocation per project is 2,000 units.
The BigQuery execution plan provides very detailed information about the process, which can help you better understand the source of the delay.
One way to work around this is to use an approximate counting method, described in this link.
This slide deck by Google might also help you.
For more details, see this video on how to understand the execution plan.
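
Since the question notes that the row count already appears under Details -> Table Info, here is a minimal sketch of fetching that same figure from table metadata with the google-cloud-bigquery client (the table name is a placeholder). This is a metadata lookup rather than a query, so it returns essentially instantly:

    from google.cloud import bigquery

    client = bigquery.Client()
    # get_table() is a metadata API call; no query slots are involved.
    table = client.get_table("my-project.my_dataset.myTable")
    print(table.num_rows)  # the figure shown under Details -> Table Info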

Google BigQuery results

I am getting only part of the result from the BigQuery API.
Earlier, I solved the issue of 100,000 records per result using iterators.
However, now I'm stuck on another obstacle.
If I select more than 6-7 columns, I do not get the complete result set.
However, if I select a single column, I get the complete result.
Is there a size limit as well for results in the BigQuery API?
There are some limits for query jobs.
In particular:
Maximum response size: 128 MB compressed
Of course, it is unlimited when writing large query results to a destination table (and then reading from there).
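
A minimal sketch of the destination-table route with the google-cloud-bigquery client; the table names are placeholders. The query writes its full output to a table, and list_rows() then pages through that table, so the 128 MB response cap no longer applies:

    from google.cloud import bigquery

    client = bigquery.Client()
    dest = "my-project.my_dataset.wide_results"  # placeholder destination

    job_config = bigquery.QueryJobConfig(
        destination=dest,
        write_disposition="WRITE_TRUNCATE",  # overwrite on reruns
    )
    job = client.query(
        "SELECT * FROM `my-project.my_dataset.myTable`", job_config=job_config
    )
    job.result()  # wait for the query to finish writing the table

    # Page through the destination table instead of the query response.
    for row in client.list_rows(dest, page_size=10000):
        pass  # process each row here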

How to get cost for a query in BQ

In BigQuery, how can we get the cost for a given query? We are doing a lot of high-compute queries -- https://cloud.google.com/bigquery/pricing#high-compute -- which often multiply the data processed by 2 or more.
Is there a way to get the "Cost" of a query with the result set?
For the API or the CLI you can use the flag --dry_run, which validates the query instead of running it, like so:
cat ../query.sql | bq query --use_legacy_sql=False --dry_run
Output:
Query successfully validated. Assuming the tables are not modified,
running this query will process 9614741466 bytes of data.
For costs, divide the total bytes by 1024^4 to get TiB, multiply by 5 (the on-demand price per TiB), and then multiply by the billing tier you are in; that gives the expected cost ($0.043 in this example).
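
The same arithmetic as a small sketch, using the byte count from the dry-run output above:

    def query_cost_usd(bytes_processed, billing_tier=1, usd_per_tib=5.0):
        tib = bytes_processed / 1024 ** 4        # bytes -> tebibytes
        return tib * usd_per_tib * billing_tier  # on-demand price times tier

    print(f"{query_cost_usd(9614741466):.3f}")   # -> 0.044, i.e. the ~$0.043 above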
If you already ran the query and want to know how much it processed, you can run:
bq show -j (job_id of your query)
And it'll return Bytes Billed and Billing Tier (looks like you still have to do the math for cost computation).
For the web UI, you can install BQMate, which already estimates costs for you (but you still have to adjust for your billing tier).
As a final recommendation, it is sometimes possible to greatly improve query performance just by optimizing how the query processes data (at our company, several queries that used to hit the high-compute tier now bill normally simply by using features such as ARRAYs and STRUCTs).

Measure upload and download speed on iPhone

I would like to measure the upload and download speed of data on an iPhone. Is there any API available to achieve this? Is it correct to measure it by dividing the total bytes received by the time taken for the response?
Yes, it is correct to measure it as total bytes / time taken; that is exactly what the speed is. You might want to take an average if you want to show the download speed continuously, e.g. using 500 bytes and the time it took to download those particular ones.
To do this you could keep a buffer, such as an NSMutableData, which you empty every 2 seconds. Then [buffer length] / 2 tells you how many bytes per second you received during those 2 seconds. When you empty the buffer, of course, append its contents to the data you are downloading.
There is no direct API to get the speed.
Total data received/sent divided by total time will only give you the average speed. There tends to be a lot of variation in speed over time, so if you want a more accurate value, do the speed calculation based on sampling:
(data transferred in 1 minute) / (60 seconds). Use this if you need greater accuracy in the speed calculation; the sampling duration can be changed based on the level of accuracy required.
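
Both answers describe the same windowed-sampling idea. Here it is as a minimal, platform-agnostic sketch in Python; the 2-second window matches the first answer, and on iOS the record() calls would come from the URL-loading delegate callbacks:

    import time

    class SpeedSampler:
        def __init__(self, window_seconds=2.0):
            self.window = window_seconds
            self.bytes_in_window = 0
            self.window_start = time.monotonic()

        def record(self, byte_count):
            # Call this from every "received N bytes" callback.
            self.bytes_in_window += byte_count

        def rate(self):
            # Average bytes/second over the current window, then reset
            # so the next call measures a fresh sample.
            elapsed = time.monotonic() - self.window_start
            rate = self.bytes_in_window / elapsed if elapsed > 0 else 0.0
            self.bytes_in_window = 0
            self.window_start = time.monotonic()
            return rate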