Kusto query hits the timeout limit - kql

I am running a Kusto query in Azure Diagnostics to pull the last week of logs, and the query times out after 10 minutes. Is there a way I can increase the timeout limit? If so, can someone please walk me through the steps? I downloaded Kusto Explorer but couldn't see an easy way to connect to my Azure cluster. How can I increase this timeout duration from inside the Azure portal for the query I am running?

It seems like 10 minutes is the maximum timeout value.
https://learn.microsoft.com/en-us/azure/azure-monitor/service-limits
Query API

Category | Limit | Comments
Maximum records returned in a single query | 500,000 | -
Maximum size of data returned | ~104 MB (~100 MiB) | The API returns up to 64 MB of compressed data, which translates to up to 100 MB of raw data.
Maximum query running time | 10 minutes | See Timeouts for details.
Maximum request rate | 200 requests per 30 seconds per Azure AD user or client IP address | See Log queries and language.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/api/timeouts
Timeouts
Query execution times can vary widely based on:
The complexity of the query
The amount of data being analyzed
The load on the system at the time of the query
The load on the workspace at the time of the query
You may want to customize the timeout for the query.
The default timeout is 3 minutes, and the maximum timeout is 10 minutes.
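If you are calling the Log Analytics query API yourself rather than running the query in the portal, the Timeouts page above explains that the server-side timeout can be raised with a Prefer: wait=<seconds> header, up to that 10-minute ceiling. A minimal sketch in Python; the workspace ID and Azure AD bearer token are placeholders you would supply yourself:

import requests

# Placeholders - substitute your own Log Analytics workspace ID and a valid Azure AD token.
WORKSPACE_ID = "<your-workspace-id>"
TOKEN = "<azure-ad-bearer-token>"

url = f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query"
headers = {
    "Authorization": f"Bearer {TOKEN}",
    # Ask the service to allow up to 600 seconds, the documented maximum.
    "Prefer": "wait=600",
}
body = {
    "query": "AzureDiagnostics | where TimeGenerated > ago(7d) | take 100",
}

# Give the HTTP client slightly more than the server timeout.
resp = requests.post(url, headers=headers, json=body, timeout=660)
resp.raise_for_status()
print(resp.json()["tables"][0]["rows"][:5])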

Related

What are 10 database transaction units in the Azure free trial?

I am looking for a cloud service provider to host a SQL database in and access it through API calls. After looking through multiple providers, I have seen that Azure has a 12-month free trial, but it only includes a 250 GB S0 instance with 10 database transaction units.
Could anyone explain to me what they mean by 10 DB transaction units? Any help is greatly appreciated.
For reference, our database would not be large in scale; it just holds candidate and judge applications, and we get a maximum of 600 candidates per year.
I tried looking up transaction units online and saw that one might be a single REST API call, which seems absurd to me.
Please examine the output of the following query:
SELECT * FROM sys.dm_user_db_resource_governance
That will tell you the following information about the current service tier:
min_cores (cores available on the service)
max_dop (the MAX_DOP value for the user workload)
max_sessions (the maximum number of sessions allowed)
max_db_max_size_in_mb (the maximum max_size value for a data file, in MB)
log_size_in_mb
instance_max_worker_threads (worker thread limit for the SQL Server instance)
The above information gives you the details of what 10 DTUs means in terms of the resources available. You can run this query every time you change the service tier of the database.
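If you would rather pull those numbers programmatically, here is a small sketch that runs the same DMV query from Python with pyodbc; the connection details are placeholders you need to replace with your own Azure SQL server and credentials:

import pyodbc

# Placeholder connection string - point it at your own Azure SQL database.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=your-server.database.windows.net;"
    "DATABASE=your-database;"
    "UID=your-user;PWD=your-password"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM sys.dm_user_db_resource_governance")
    row = cursor.fetchone()
    # Print every column of the DMV next to its value for the current service tier.
    for col, value in zip([d[0] for d in cursor.description], row):
        print(f"{col}: {value}")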

How to interpret query process GB in Bigquery?

I am using a free trial of Google BigQuery. This is the query that I am using:
select *
from `test`.events
where subject_id = 124
  and id = 256064
  and time >= '2166-01-15T14:00:00'
  and time <= '2166-01-15T14:15:00'
  and id_1 in (3655,223762,223761,678,211,220045,8368,8441,225310,8555,8440)
This query is expected to return at most 300 records and not more than that.
However, the query editor shows a message telling me how much data the query will process, and the amount is very large. The table this query operates on is really huge, so does that number just reflect the table size? I ran this query multiple times a day.
Because of this, it resulted in the error below:
Quota exceeded: Your project exceeded quota for free query bytes scanned. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors
How long do I have to wait for this error to go away? Is the daily limit 1 TB? If so, I didn't use anywhere near that - only close to 400 GB.
How do I view my daily usage?
If I can edit the quota, can you let me know which option I should be editing?
Can you help me with the above questions?
According to the official documentation
"BigQuery charges for queries by using one metric: the number of bytes processed (also referred to as bytes read)", regardless of how large the output size is. What this means is that if you do a count(*) on a 1TB table, you will supposedly be charged $5, even though the final output is very minimal.
Note that due to storage optimizations that BigQuery is doing internally, the bytes processed might not equal to the actual raw table size when you created it.
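If you want to know how many bytes a query would scan before spending quota on it, one option is a dry run. A sketch using the google-cloud-bigquery client library (assuming default credentials are configured):

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder query - use your own table and filters here.
sql = """
SELECT *
FROM `test`.events
WHERE subject_id = 124 AND id = 256064
"""

# A dry run validates the query and reports the bytes it would process,
# without actually running it or counting against the free-tier scan quota.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(sql, job_config=job_config)

print(f"This query would process {job.total_bytes_processed / 1e9:.2f} GB")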
For the error you're seeing, go to "IAM & admin" and then "Quotas" in the Google Cloud Console, where you can search for the quotas specific to the BigQuery service.
Hope this helps!
Flavien

What is the API usage (requests per seconds) limit of Amadeus test environment?

I am trying to call Amadeus API in parallel (/v1/shopping/hotel-offers) in the test environment. Unfortunately when I start 3 threads simultaneously, then only the very first one gets the OK response and the others get HTTP 429 Too Many Requests responses.
I have not exceeded the monthly limit quota yet, so that error is really related to the parallel execution.
Does anybody know what the exact limits are (#requests/sec or #requests in parallel)? Is it even possible to have more than one request in flight at a time?
The throttling is not the same depending on the environment:
Test: 10 transactions per second per user (10 TPS/user), with the constraint that you cannot send more than 1 request every 100 ms.
Production: 20 transactions per second per user (20 TPS/user), with the constraint that you cannot send more than 1 request every 50 ms.
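In practice that means you need to space calls at least 100 ms apart in the test environment instead of firing them in parallel. A rough client-side throttle in Python; the access token and query parameters are placeholders to adapt to your own Amadeus credentials:

import time
import requests

# Placeholders - obtain a real OAuth access token from the Amadeus test environment first.
TOKEN = "<amadeus-access-token>"
URL = "https://test.api.amadeus.com/v1/shopping/hotel-offers"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

city_codes = ["PAR", "LON", "NYC"]  # example search parameters

for city in city_codes:
    resp = requests.get(URL, headers=HEADERS, params={"cityCode": city})
    if resp.status_code == 429:
        # Throttled anyway: back off briefly and retry once.
        time.sleep(1)
        resp = requests.get(URL, headers=HEADERS, params={"cityCode": city})
    print(city, resp.status_code)
    # Test environment constraint: no more than 1 request every 100 ms.
    time.sleep(0.1)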

BigQuery GUI - CPU Resource Limit

Is there a way to set the CPU resource limit in BigQuery with Python or the GUI?
I'm getting an error of:
Query exceeded resource limits. 2147706.163729571 CPU seconds were used, and this query must use less than 46300.0 CPU seconds.
Looking at the BigQuery's Python reference page: http://google-cloud-python.readthedocs.io/en/latest/bigquery/reference.html
It looks like there's:
1. maximum_billing_tier
2. maximum_bytes_billed
Those can be set, but there is no CPU-seconds option.
You can no longer set maximum_billing_tier - it is obsolete. As long as you stay below tier 100 you are billed as if it were 1; if you exceed tier 100, the query simply fails.
As for CPU, look at the concept of slots:
Maximum concurrent slots per project for on-demand pricing — 2,000
The default number of slots for on-demand queries is shared among all queries in a single project. As a rule, if you're processing less than 100 GB of queries at once, you're unlikely to be using all 2,000 slots.
To check how many slots you're using, see Monitoring BigQuery Using Stackdriver. If you need more than 2,000 slots, contact your sales representative to discuss whether flat-rate pricing meets your needs.
See more at https://cloud.google.com/bigquery/quotas#query_jobs
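You cannot cap CPU seconds directly, but you can still put a guard on how much a query scans with maximum_bytes_billed, which (unlike maximum_billing_tier) is still supported. A sketch with the google-cloud-bigquery Python client; the table name is a placeholder:

from google.cloud import bigquery

client = bigquery.Client()

# Refuse to run the job if it would bill more than ~10 GB of scanned data.
job_config = bigquery.QueryJobConfig(maximum_bytes_billed=10 * 1024 ** 3)

job = client.query(
    "SELECT * FROM `your_dataset.your_table`",  # placeholder query
    job_config=job_config,
)
# If the query would exceed the cap, the job fails instead of running.
for row in job.result():
    print(row)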

What is the correct way to get the total count of a metric in Graphite

I am trying to fetch the total number of successful logins using Graphite's render API.
http://localhost/render?target=hitcount(stats_counts.login.success,"99years",true)&from=-99years&format=json
This query is taking too long to execute (~ 30 seconds).
Is this the correct way to fetch the total number ?
It depends on:
The amount of data that the API call has to go through to render this query.
Granularity: one point per month will be ~30 times faster than one point per day.
Physical host specs: while the query runs, use iostat -x 1 to see whether your disk is doing 100% I/O.
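As a quick way to see what that render call returns and how long it takes, here is a small sketch in Python that issues the same request and sums the datapoints; it assumes the local Graphite instance from the question:

import requests

# Same render call as in the question, just issued from Python.
params = {
    "target": 'hitcount(stats_counts.login.success,"99years",true)',
    "from": "-99years",
    "format": "json",
}
resp = requests.get("http://localhost/render", params=params, timeout=60)
resp.raise_for_status()

for series in resp.json():
    # Each datapoint is a [value, timestamp] pair; None means no data in that interval.
    total = sum(value for value, _ts in series["datapoints"] if value is not None)
    print(series["target"], "total:", total)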