Query exhausted resources at this scale factor - sql

I was running a SQL query on Amazon Athena and got the following error a couple of times:
Query exhausted resources at this scale factor
This query ran against the "test1" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: *************

Without seeing the query it's hard to say for sure what the problem is, but it is very likely due to an internal issue in Athena that has to do with sorting of large intermediate result sets.
The version of Presto that Athena uses does not have support for sorting datasets that are too big to fit in memory. It used to be the same for aggregations too, but that has been fixed by the Athena team.
The issue most often occurs when you have very wide tables, i.e. many columns, or columns with a lot of data. Each individual row can then represent a big chunk of memory, and if a node runs out of memory while trying to sort its chunk, the query aborts with the "query exhausted resources at this scale factor" error.
If this matches your situation, the only way around it is unfortunately to limit the number of columns or eliminate the sorting. Sometimes you can rearrange the query so that the sorting happens at a different stage, which lowers the memory pressure on the sorting stage.
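For example, if the ORDER BY is only there to pick the most recent rows, one possible rearrangement is to sort a narrow projection first and join the wide columns back afterwards. This is only a sketch, assuming a hypothetical wide table events with an id key and an event_time column:

-- Instead of sorting every wide row:
--   SELECT * FROM events ORDER BY event_time DESC LIMIT 1000
-- sort only (id, event_time), keep the surviving ids, then fetch the wide
-- columns for those rows and re-sort the now-small result.
SELECT e.*
FROM events e
JOIN (
  SELECT id, event_time
  FROM events
  ORDER BY event_time DESC
  LIMIT 1000
) latest ON e.id = latest.id
ORDER BY e.event_time DESC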

Review these tips and try to refine your query.
https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-tips-for-amazon-athena/
This error means that the aggregated results exceeded the allocated resources. I believe the limiting resource is memory.

Related

In BigQuery SQL, how to track resource usage to avoid "Resources exceeded during query execution" error

In BigQuery, "Resources exceeded during query execution: Not enough resources for query planning - too many subqueries or query is too complex." is an error we receive when our queries are too big or complex.
For queries that run successfully, how can we see how close we are to receiving this error? What are the criteria for resources here, and what are our limits? We have a few large queries that may be pushing close to this error, but we don't want to unexpectedly receive the error one day as the source table for this query continues to grow.
It is easy for us to find bytes queried, as the editor makes it quite clear, and after executing a query we can see metrics such as slot time consumed and bytes shuffled in the Execution Details.
Is either slot time consumed or bytes shuffled the relevant metric here, with some limit after which the "Resources exceeded" error is thrown? Or is there some other criterion that determines when this error is thrown?
For added context, none of our queries process a particularly large amount of data; our largest query processes under 20 GB. However, our larger queries do have a lot of sub-queries, with t1 as (), t2 as (), t3 as (), t4 as (), t5 as (), ..., where the sub-queries also reference the previous sub-queries, and I believe this is what is leading to the resources exceeded errors.
Edit: Output from our largest query, the one that prompted this posting. I refactored 2 of the subqueries into their own CTEs, so that this big query run would be successful. This obviously still seems large...
We've encountered this error quite a bit. According to this article: https://medium.com/@jamiekt/what-to-do-about-bigquery-error-resources-exceeded-during-query-execution-e80734b8c9b6 BigQuery computes a complexity score and will output an error if it crosses a threshold.
At this time, there is no easy way to know the complexity score of a query and how close to the limit it is.
However, if a complex query is working, that same query should keep working even if the underlying data continuously increases in size. Or rather, it shouldn't fail with a "Query is too complex" error as the complexity score would remain unchanged, but could end up failing with other errors, such as "The query could not be executed in the allotted memory".
To help alleviate the "Query is too complex" error, it is recommended to use temp tables (https://cloud.google.com/bigquery/docs/writing-results), avoid UNIONs when possible, and reduce the number of CTEs and nested views.
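For instance, instead of chaining t1 ... t5 as CTEs inside a single statement, a BigQuery script can materialize the early steps as temp tables so later steps read a concrete table instead of re-expanding the whole chain. This is only a sketch, with hypothetical project, dataset, and column names:

-- Materialize an early step once, instead of carrying it as a CTE.
CREATE TEMP TABLE t1 AS
SELECT user_id, event_date, COUNT(*) AS events
FROM `my_project.my_dataset.raw_events`
GROUP BY user_id, event_date;

-- Later steps reference the temp table rather than a nested CTE chain.
SELECT event_date, COUNT(DISTINCT user_id) AS active_users
FROM t1
WHERE events > 1
GROUP BY event_date;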

How to Let Spark Handle Bigger Data Sets?

I have a very complex query that needs to join 9 or more tables with some 'group by' expressions. Most of these tables have almost the same number of rows. These tables also have some columns that can be used as the 'key' to partition the tables.
Previously, the app ran fine, but now the data set has 3-4 times as much data as before. My tests showed that if the row count of each table is less than 4,000,000, the application can still run pretty nicely. However, if the count is more than that, the application writes hundreds of terabytes of shuffle data and stalls (no matter how I adjust the memory, partitions, executors, etc.). The actual data is probably just dozens of GBs.
I would think that if the partitioning worked properly, Spark wouldn't shuffle so much and the join could be done on each node. It is puzzling why Spark is not 'smart' enough to do so.
I could split the data set (using the 'key' I mentioned above) into many smaller data sets that can be dealt with independently, but the burden would be on me... and that defeats the very reason to use Spark. What other approaches could help?
I use Spark 2.0 over Hadoop YARN.
My tests showed that if the row count of each table is less than 4,000,000, the application can still run pretty nicely. However, if the count is more than that, the application writes hundreds of terabytes of shuffle data
When joining datasets, if the size of one side is below a certain configurable threshold, Spark broadcasts that entire table to each executor so that the join can be performed locally everywhere. Your observation above is consistent with this. You can also provide the broadcast hint explicitly to Spark, like so: df1.join(broadcast(df2))
Other than that, can you please provide more specifics about your problem?
[Some time ago I was also grappling with joins and shuffles for one of our jobs that had to handle a couple of TBs. We were using RDDs (and not the Dataset API). I wrote about my findings here; they may be of some use to you as you try to reason about the underlying data shuffle.]
Update: according to the documentation, spark.sql.autoBroadcastJoinThreshold is the configurable property key. Its default value is 10 MB, and it does the following:
Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan has been run.
So apparently, this is supported only for Hive tables.
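The same idea can be expressed directly in Spark SQL. The sketch below assumes hypothetical tables fact_orders and dim_customers; the SET command works in any Spark SQL session, while the /*+ BROADCAST(...) */ hint was only added around Spark 2.2, so on the Spark 2.0 mentioned in the question the df1.join(broadcast(df2)) form above is the way to go.

-- Raise the broadcast threshold so a smallish dimension table is shipped to
-- every executor instead of being shuffled (default is 10 MB; value in bytes).
SET spark.sql.autoBroadcastJoinThreshold = 104857600;

-- Ask explicitly for the smaller side to be broadcast (Spark 2.2+ SQL hint).
SELECT /*+ BROADCAST(d) */ f.order_id, d.customer_name
FROM fact_orders f
JOIN dim_customers d
  ON f.customer_id = d.customer_id;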

Getting "Query too large" in BigQuery

I am storing event data in BigQuery, partitioned by day - one table per day. The following query failed:
select count(distinct event)
from TABLE_DATE_RANGE(my_dataset.my_dataset_events_, SEC_TO_TIMESTAMP(1391212800), SEC_TO_TIMESTAMP(1393631999))
Each table is about 8GB in size.
Has anyone else experienced this error? It seems like it's limited by table size, because in this query I've already limited it to just one column. When I use a smaller time range it works, but the whole point of using BigQuery was its support for large datasets.
"Query too large" in this case means that the TABLE_RANGE is getting expanded internally to too many tables, generating an internal query that is too large to be processed.
This has 2 workarounds:
Query fewer tables (could you aggregate these tables into a bigger one?).
Wait until the BQ team solves this issue internally. Instead of using a workaround, you should be able to run this query unchanged. Just not today :).
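For the first workaround, one option is to consolidate the daily tables into larger ones and query those instead. A hedged legacy-SQL sketch (the consolidated table name is hypothetical, and each consolidation query's result would be appended to it via a destination table with large results allowed in the job settings, not in the SQL itself):

-- Consolidate the daily tables a slice at a time into one bigger table,
-- e.g. my_dataset.events_2014_02 (run once per slice).
SELECT *
FROM TABLE_DATE_RANGE(my_dataset.my_dataset_events_,
                      TIMESTAMP('2014-02-01'),
                      TIMESTAMP('2014-02-07'))

-- Once all slices are in, the original aggregation runs against one table:
SELECT COUNT(DISTINCT event)
FROM my_dataset.events_2014_02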

What causes "resources exceeded" in BigQuery?

My query failed with the error "resources exceeded". What causes this error, and how can I fix it?
Update (2016-03-16): For most queries, EACH is no longer required, and may actually increase the likelihood of seeing this error. If you omit the EACH keyword from every JOIN and GROUP BY in your query, the query engine will now dynamically optimize your query to eliminate this error.
There are still corner cases where specifying the EACH keyword can make a query run (or run faster), but generally speaking the BigQuery team recommends that you try your query without EACH first. Pretty soon, the EACH keyword will become a complete no-op.
Original answer: When you use the EACH keyword in JOIN EACH or GROUP EACH BY, or when you use a PARTITION BY clause, BigQuery partitions ("shuffles") your data on the fly according to the join keys or group keys, which allows each worker task to perform its portion of the join or aggregation locally.
The resources exceeded error occurs when one such worker gets too much data and runs over its limit. Generally speaking, the reasons for this error fall into two categories:
Skew: The data is heavily skewed toward one key value (say, a "guest" user ID or a null key), which means that one worker gets all the records for that key and gets overloaded.
Mismatch in data size and worker count: You have too much data for the number of workers that BigQuery assigned your query.
We are working on a number of improvements to help us cope with both scenarios so that you don't need to worry about these issues. For now, though, you can work around the problem with one of the following approaches:
Filter out skewed keys. If your data is skewed because half of your join key values are actually null, you could filter those out by adding WHERE key IS NOT NULL prior to the join.
Reduce the amount of data processed. Filter each side of the join with WHERE ABS(HASH(key)) % 5 == 0 to apply the join to only 1/5 of the data (or whatever fraction you want), and then do the same for == 1, == 2, == 3, == 4 in separate queries. You're manually sharding the data into smaller chunks to make the query go through, but note that you pay 5x as much because you query the same data 5 times (a sketch of this follows below).
Revisit your query. Maybe you can build your query in a completely different way, or compute some intermediate results, to get the answer you want.
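Concretely, the sharding approach from the second workaround looks something like the following legacy-SQL sketch, assuming hypothetical tables my_table_a and my_table_b joined on key:

-- Process roughly 1/5 of the key space per query; filter both sides before
-- the join so each worker only ever sees its shard.
SELECT a.key, a.value, b.other_value
FROM (SELECT key, value FROM my_table_a
      WHERE ABS(HASH(key)) % 5 = 0) a
JOIN EACH (SELECT key, other_value FROM my_table_b
           WHERE ABS(HASH(key)) % 5 = 0) b
ON a.key = b.key
-- Repeat with % 5 = 1, = 2, = 3, = 4 and combine the five result sets.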
I also faced the error
Error: Resources exceeded during query execution
due to using an ORDER BY. More information about that is given by Pentium10:
Using order by on big data databases is not an ordinary operation and at some point it exceeds the attributes of big data resources. You should consider sharding your query or run the order by in your exported data.
As I explained to you today in your other question, adding allowLargeResults will allow you to return large response, but you can't specify a top-level ORDER BY, TOP or LIMIT clause. Doing so negates the benefit of using allowLargeResults, because the query output can no longer be computed in parallel.
To solve it I've gone through 9 steps
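In general such fixes come down to avoiding a top-level ORDER BY over the full result set. A hedged sketch of that pattern, with hypothetical dataset, table, and column names: aggregate or filter first, so the sort only has to handle a small number of rows.

SELECT user_id, total_events
FROM (
  SELECT user_id, COUNT(*) AS total_events
  FROM my_dataset.events
  GROUP BY user_id
)
ORDER BY total_events DESC
LIMIT 1000  -- sorting a bounded, pre-aggregated result is cheap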

Understanding "Resources exceeded during query execution" with GROUP EACH BY in BigQuery

I'm writing a background job to automatically process A/B test data in BigQuery, and I'm finding that I'm hitting "Resources exceeded during query execution" when doing large GROUP EACH BY statements. I saw from "Resources Exceeded during query execution" that reducing the number of groups can make queries succeed, so I split up my data into smaller pieces, but I'm still hitting errors (although less frequently). It would be nice to get a better intuition about what actually causes this error. In particular:
Does "resources exceeded" always mean that a shard ran out of memory, or could it also mean that the task ran out of time?
What's the right way to approximate the memory usage and the total memory I have available? Am I correct in assuming each shard tracks about 1/n of the groups and keeps the group key and all aggregates for each group, or is there another way that I should be thinking about it?
How is the number of shards determined? In particular, do I get fewer shards/resources if I'm querying over a smaller dataset?
The problematic query looks like this (in practice, it's used as a subquery, and the outer query aggregates the results):
SELECT
alternative,
snapshot_time,
SUM(column_1),
...
SUM(column_139)
FROM
my_table
CROSS JOIN
[table containing 24 unix timestamps] timestamps
WHERE last_updated_time < timestamps.snapshot_time
GROUP EACH BY alternative, user_id, snapshot_time
(Here's an example failed job: 124072386181:job_XF6MksqoItHNX94Z6FaKpuktGh4 )
I realize this query may be asking for trouble, but in this case the table is only 22 MB, the query results in under a million groups, and it's still failing with "resources exceeded". Reducing the number of timestamps to process at once fixes the error, but I'm worried that I'll eventually hit a data scale large enough that this approach as a whole will stop working.
As you've guessed, BigQuery chooses a number of parallel workers (shards) for GROUP EACH and JOIN EACH queries based on the size of the tables being operated upon. It is a rough heuristic, but in practice, it works pretty well.
What is interesting about your query is that the GROUP EACH is being done over a larger table than the original table because of the expansion in the CROSS JOIN. Because of this, we choose a number of shards that is too small for your query.
To answer your specific questions:
Resources exceeded almost always means that a worker ran out of memory. This could be a shard or a mixer, in Dremel terms (mixers are the nodes in the computation tree that aggregate results. GROUP EACH BY pushes aggregation down to the shards, which are the leaves of the computation tree).
There isn't a good way to approximate the amount of resources available. This changes over time, with the goal that more of your queries should just work.
The number of shards is determined by the total bytes processed in the query. As you've noticed, this heuristic doesn't work well with joins that expand the underlying data sets. That said, there is active work underway to be smarter about how we pick the number of shards. To give you an idea of scale, your query got scheduled on only 20 shards, which is a tiny fraction of what a larger table would get.
As a workaround, you could save the intermediate result of the CROSS JOIN as a table and run the GROUP EACH BY over that temporary table. That should let BigQuery use the expanded size when picking the number of shards. (If that doesn't work, please let me know; it is possible that we need to tweak our assignment thresholds.)
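A hedged sketch of that two-step workaround, keeping the shape of the query above (my_dataset.expanded is a hypothetical name for the intermediate table, writing to it is configured through the job's destination table and large-results settings rather than in the SQL itself, and the ... column elision mirrors the query above):

-- Step 1: materialize the CROSS JOIN so its expanded size becomes visible.
SELECT
  alternative,
  user_id,
  last_updated_time,
  column_1,
  ...
  column_139,
  timestamps.snapshot_time
FROM
  my_table
CROSS JOIN
  [table containing 24 unix timestamps] timestamps
WHERE last_updated_time < timestamps.snapshot_time

-- Step 2: run the original aggregation over the materialized table, so the
-- number of shards is picked from the expanded data size.
SELECT
  alternative,
  snapshot_time,
  SUM(column_1),
  ...
  SUM(column_139)
FROM
  my_dataset.expanded
GROUP EACH BY alternative, user_id, snapshot_time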