I have a DISTINCT query on a single table:
select distinct d, e, f, a, b, c from t where a = 1 and e = 2;
The number of distinct values in columns a, b, c is high (high cardinality), while d, e, f are low-cardinality columns. My data is in ORC format in S3, and I have external tables in Athena and Redshift Spectrum pointing to the same files.
When the above query is run in Athena it comes back in a couple of seconds, whereas in Redshift Spectrum it takes a couple of minutes.
But when I move column f to the end of the select list, it runs fast in Redshift Spectrum too. This happens only for this particular column; moving d or e to the end makes no difference, i.e. those variants still run long. Column f is a varchar, as are the others, and its maximum length is 30 bytes.
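For reference, the variant that runs fast in Spectrum simply has f moved to the end, e.g.:
select distinct d, e, a, b, c, f from t where a = 1 and e = 2;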
Two questions
(a) Any insight or pointers into this peculiar behavior, where moving column f to the end of the list makes the query run faster, whereas putting it in the middle makes it slower?
(b) Is there a recommended SQL best practice to list columns in decreasing order of cardinality in DISTINCT or GROUP BY clauses? Does it make a difference to execution time whether lower-cardinality columns are put first or mixed in?
Updating your Redshift driver to the latest version can usually bring your Redshift Spectrum speed almost in line with Athena.
https://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html#download-jdbc-driver
This may not be the cause in your use case but it is definitely worth a try!
I have a column in my tables called 'data' that contains JSON like the example below:
{"tt":"452.95","records":[{"r":"IN184366","t":"812812819910","s":"129.37","d":"982.7","c":"83"},{"r":"IN183714","t":"8028028029093","s":"33.9","d":"892","c":"38"}]}
I have written code to unnest it into separate columns like tr, r, s.
Below is the code:
with raw as (
SELECT json_extract_path_text(B.Data, 'records', true) as items
FROM tableB as B where B.date::timestamp between
to_timestamp('2019-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS') AND
to_timestamp('2022-12-31 23:59:59','YYYY-MM-DD HH24:MI:SS')
UNION ALL
SELECT json_extract_path_text(C.Data, 'records', true) as items
FROM tableC as C where C.date-5 between
to_timestamp('2019-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS') AND
to_timestamp('2022-12-31 23:59:59','YYYY-MM-DD HH24:MI:SS')
),
numbers as (
SELECT ROW_NUMBER() OVER (ORDER BY TRUE)::integer - 1 as ordinal
FROM <any_random_table> limit 1000
),
joined as (
select raw.*,
json_array_length(raw.items, true) as number_of_items,
json_extract_array_element_text(
raw.items,
numbers.ordinal::int,
true
) as item
from raw
cross join numbers
where numbers.ordinal <
json_array_length(raw.items, true)
),
parsed as (
SELECT J.*,
json_extract_path_text(J.item, 'tr',true) as tr,
json_extract_path_text(J.item, 'r',true) as r,
json_extract_path_text(J.item, 's',true)::float8 as s
from joined J
)
select * from parsed
The above code works when there is a small number of records, but it takes more than a day to run, CPU utilization (in Redshift) reaches 100%, and disk space usage also reaches 100% when I set the date range to the last two years or the number of records is otherwise large.
Can anyone please suggest an alternative way to unnest JSON objects like this in Redshift?
My query plan is saying:
Nested Loop Join in the query plan - review the join predicates to avoid Cartesian products
Goal: to unnest without using any cross joins.
Input: the data column containing JSON:
"tt":"452.95","records":[{"r":"IN184366","t":"812812819910","s":"129.37","d":"982.7","c":"83"},{"r":"IN183714","t":"8028028029093","s":"33.9","d":"892","c":"38"}]}
Output should be, for example, the tr, r, s columns from the above JSON.
You want to unnest up to 1000 JSON records stored in a JSON array, but the nested loop join is taking too long.
The root issue is likely your data model. You have stored structured records (called "records") inside a semi-structured text element (JSON), within a column of a structured columnar database. You want to perform some operation on these buried records that you haven't described, but here's the problem: columnar databases are optimized for read-centric analytic queries, while expanding these internal JSON records into Redshift rows (records) is fundamentally a write operation. This works against the optimizations of the database.
The size of this expanded data is also large compared to the disk storage on your cluster, which is why the disks are filling up. Your CPUs are likely spinning while unpacking the JSON and managing overloaded disk and memory capacity. When disks are close to full, Redshift shifts into a mode that optimizes disk space utilization at the expense of execution speed. A larger cluster may give you significantly faster execution if it lets you avoid this effect, but that costs money you may not have budgeted. Not an ideal solution.
One change that would improve the speed of your query is not carrying all the data along. You keep raw.* and J.* all through the query, but it is not clear you need them. Since part of the issue is data size during execution, and that execution includes a loop join, you are making it much harder than it needs to be by carrying all this data (including the original JSON).
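As a rough, untested sketch of that trimming (keeping your CTE structure but carrying only the extracted item text through the join; only the tableB branch is shown, and the date filter and placeholder table are left elided as in your original):
with raw as (
    SELECT json_extract_path_text(B.Data, 'records', true) as items
    FROM tableB as B
    where ...  -- same date filter as in your original query
),
numbers as (
    SELECT ROW_NUMBER() OVER (ORDER BY TRUE)::integer - 1 as ordinal
    FROM <any_random_table> limit 1000
),
joined as (
    -- carry forward only the single extracted element, not raw.*
    select json_extract_array_element_text(raw.items, numbers.ordinal::int, true) as item
    from raw
    cross join numbers
    where numbers.ordinal < json_array_length(raw.items, true)
)
select
    json_extract_path_text(item, 'r', true) as r,
    json_extract_path_text(item, 's', true)::float8 as s
from joined;
This does not remove the nested loop join, but it shrinks the rows being carried through it.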
The best way out of this situation is to change your data model and expand these internal JSON records into Redshift rows at ingestion time. JSON is fine for seldom-used information, or for information that is only needed at the end of a query where the data is small. Needing the expanded JSON at the input end of a query, for this much data, is not a good use case for JSON in Redshift. Each of the "records" inside the JSON is a record and needs to be stored as such if you need to work across them as query input.
Now, you want to know if there is some slick way around this issue in your case, and the answer is "unlikely, but maybe". Can you describe how you are using the final values in your query (tr, r, and s)? If you only need some aspect of this data (a max value, a sum, ...) then there may be a way to get to the answer without the large nested loop join. But if you need all the values then there is no other way to get them, AFAIK. A description of what comes next in the data process could open up such an opportunity.
How can I optimize a query of this form, given that I keep running into this OOM error? Or how can I get a better execution plan? If I remove the substring clause, the query works fine, suggesting that it takes a lot of memory.
When the job fails, the Beeline output shows an OOM "Java heap space" error. Reading online suggested increasing HADOOP_HEAPSIZE, but that still results in the error. Another thing I tried was increasing hive.tez.container.size and hive.tez.java.opts (the Tez heap), but the error persists. The YARN logs show "GC overhead limit exceeded", suggesting a combination of not enough memory and/or an extremely inefficient query plan that cannot reclaim enough memory.
I am using Azure HDInsight Interactive Query 4.0, with 20 worker nodes (D13v2, 8 cores and 56 GB RAM each).
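For reference, the tuning attempts mentioned above looked roughly like the following (the values here are only illustrative, not the exact ones I used):
-- illustrative values only
SET hive.tez.container.size=10240;    -- Tez container memory, in MB
SET hive.tez.java.opts=-Xmx8192m;     -- Tez task JVM heap, roughly 80% of the container size
-- plus, at the shell level before starting Beeline: export HADOOP_HEAPSIZE=4096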
Source table
create external table database.sourcetable(
a,
b,
c,
...
(183 total columns)
...
)
PARTITIONED BY (
W string,
X int,
Y string,
Z int
)
Target Table
create external table database.NEWTABLE(
a,
b,
c,
...
(187 total columns)
...
W,
X,
Y,
Z
)
PARTITIONED BY (
aAAA,
bBBB
)
Query
insert overwrite table database.NEWTABLE partition(aAAA, bBBB, cCCC)
select
a,
b,
c,
...
(187 total columns)
...
W,
X,
Y,
Z,
cast(a as string) as aAAA,
from_unixtime(unix_timestamp(b,'yyMMdd'),'yyyyMMdd') as bBBB,
substring(upper(c),1,2) as cCCC
from database.sourcetable
If everything else is okay, try adding distribute by partition key at the end of your query:
from database.sourcetable
distribute by aAAA, bBBB, cCCC
As a result, all rows for a given partition go to the same reducer, so each partition's files are written by a single reducer and far fewer writers are kept open, consuming less memory.
Try sorting the partitioned columns:
SET hive.optimize.sort.dynamic.partition=true;
When enabled, the dynamic partitioning columns will be globally sorted. This way we can keep only one record writer open for each partition value in the reducer, thereby reducing the memory pressure on reducers.
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties
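A minimal sketch of applying this to the insert above (the setting is per-session, so it simply precedes the query; the select list is elided as in the original):
SET hive.optimize.sort.dynamic.partition=true;

insert overwrite table database.NEWTABLE partition(aAAA, bBBB, cCCC)
select
  ...   -- same columns, W, X, Y, Z and the three computed partition columns as above
from database.sourcetable;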
I want to reuse the values of calculated columns in a new, third column.
For example, this query works:
select
countif(cond1) as A,
countif(cond2) as B,
countif(cond1)/countif(cond2) as prct_pass
From
Where
Group By
But when I try to use A,B instead of repeating the countif, it doesn't work because A and B are invalid:
select
countif(cond1) as A,
countif(cond2) as B,
A/B as prct_pass
From
Where
Group By
Can I somehow make the more readable second version work?
Is the first one inefficient?
You should construct a subquery (i.e. a double select) like
SELECT A, B, A/B as prct_pass
FROM
(
SELECT countif(cond1) as A,
countif(cond2) as B
FROM <yourtable>
)
The same amount of data will be processed in both queries.
In the subquery version you compute only two countif()s; if that step takes a long time, doing two instead of four should indeed be more efficient.
Looking at an example using BigQuery public datasets:
SELECT
countif(homeFinalRuns>3) as A,
countif(awayFinalRuns>3) as B,
countif(homeFinalRuns>3)/countif(awayFinalRuns>3) as division
FROM `bigquery-public-data.baseball.games_post_wide`
or
SELECT A, B, A/B as division FROM
(
SELECT countif(homeFinalRuns>3) as A,
countif(awayFinalRuns>3) as B
FROM `bigquery-public-data.baseball.games_post_wide`
)
we can see that doing it all in one query (without a subquery) is actually slightly faster. (I ran the queries 6 times with different values in the inequality; 5 times it was faster and once slower.)
In any case, the efficiency will depend on how taxing it is to compute the condition on your particular dataset.
In June the BQ team announced support for date-partitioned tables, but the guide is missing how to migrate old non-partitioned tables into the new style.
I am looking for a way to update several, if not all, tables to the new style.
Also, apart from DAY-type partitioning, what other options are available? Does the BQ UI support this? I wasn't able to create such a new partitioned table from the BQ Web UI.
What works for me is the following set of queries applied directly in BigQuery (BigQuery > create new query). The new table can live in the same dataset or a new one.
CREATE TABLE dataset.new_table
PARTITION BY DATE(date_column)
AS SELECT * FROM dataset.table_to_copy;
Then as the next step I drop the table:
DROP TABLE dataset.table_to_copy;
I got this solution from https://fivetran.com/docs/warehouses/bigquery/partition-table (using only step 2).
From Pavan's answer: Please note that this approach will charge you the scan cost of the source table for the query as many times as you query it.
From Pentium10's comments: So suppose I have several years of data; I need to prepare a different query for each day and run all of them, and suppose I have 1000 days in history — do I need to pay 1000 times the full query price of the source table?
As we can see, the main problem here is having a full table scan for each and every day. The rest is less of a problem and can easily be scripted in any client of your choice.
So, below is how to partition the table while avoiding a full table scan for each and every day.
The step-by-step below shows the approach.
It is generic enough to extend/apply to any real use case; meanwhile, I am using bigquery-public-data.noaa_gsod.gsod2017 and limiting the "exercise" to just 10 days to keep it readable.
Step 1 – Create the pivot table
In this step we:
a) compress each row's content into a record/array, and
b) put them all into the respective "daily" column.
#standardSQL
SELECT
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170101' THEN r END) AS day20170101,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170102' THEN r END) AS day20170102,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170103' THEN r END) AS day20170103,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170104' THEN r END) AS day20170104,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170105' THEN r END) AS day20170105,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170106' THEN r END) AS day20170106,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170107' THEN r END) AS day20170107,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170108' THEN r END) AS day20170108,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170109' THEN r END) AS day20170109,
ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170110' THEN r END) AS day20170110
FROM (
SELECT d, r, ROW_NUMBER() OVER(PARTITION BY d) AS line
FROM (
SELECT
stn, CONCAT('day', year, mo, da) AS d, ARRAY_AGG(t) AS r
FROM `bigquery-public-data.noaa_gsod.gsod2017` AS t
GROUP BY stn, d
)
)
GROUP BY line
Run the above query in the Web UI with pivot_table (or whatever name you prefer) as the destination.
As we can see, we get a table with 10 columns – one column per day – and the schema of each column is a copy of the schema of the original table.
Step 2 – Process partitions one by one, scanning ONLY the respective column (no full table scan), and insert into the respective partition
#standardSQL
SELECT r.*
FROM pivot_table, UNNEST(day20170101) AS r
Run the above query from the Web UI with the destination table named mytable$20170101.
You can run the same for the next day:
#standardSQL
SELECT r.*
FROM pivot_table, UNNEST(day20170102) AS r
Now you should have mytable$20170102 as the destination table, and so on.
You should be able to automate/script this step with any client of your choice.
There are many variations of how you can use the above approach – it is up to your creativity.
Note: BigQuery allows up to 10,000 columns per table, so 365 columns for the respective days of one year is definitely not a problem here :o)
The only caveat might be a limitation on how far back you can go with new partitions – I heard (but haven't had a chance to check yet) that it is currently no more than 90 days back.
Update
Please note: the version above has a little extra logic that packs all the aggregated cells into as few final rows as possible. This is what
ROW_NUMBER() OVER(PARTITION BY d) AS line
together with
GROUP BY line
and
ARRAY_CONCAT_AGG(…)
accomplish.
This works well when the row size in your original table is not that big, so the final combined row size will still be within the row size limit that BigQuery has (which I believe is 10 MB as of now).
If your source table already has a row size close to that limit, use the adjusted version below.
In this version the grouping is removed, so each row holds a value for only one column:
#standardSQL
SELECT
CASE WHEN d = 'day20170101' THEN r END AS day20170101,
CASE WHEN d = 'day20170102' THEN r END AS day20170102,
CASE WHEN d = 'day20170103' THEN r END AS day20170103,
CASE WHEN d = 'day20170104' THEN r END AS day20170104,
CASE WHEN d = 'day20170105' THEN r END AS day20170105,
CASE WHEN d = 'day20170106' THEN r END AS day20170106,
CASE WHEN d = 'day20170107' THEN r END AS day20170107,
CASE WHEN d = 'day20170108' THEN r END AS day20170108,
CASE WHEN d = 'day20170109' THEN r END AS day20170109,
CASE WHEN d = 'day20170110' THEN r END AS day20170110
FROM (
SELECT
stn, CONCAT('day', year, mo, da) AS d, ARRAY_AGG(t) AS r
FROM `bigquery-public-data.noaa_gsod.gsod2017` AS t
GROUP BY stn, d
)
WHERE d BETWEEN 'day20170101' AND 'day20170110'
As you can see, the pivot table (sparse_pivot_table) is now sparse enough (same 21.5 MB, but 114,089 rows vs. 11,584 rows in pivot_table), so it has an average row size of 190 B vs. 1.9 KB in the initial version – obviously about 10 times less, in line with the number of columns in the example.
So before using this approach, some math needs to be done to project/estimate what can be done and how!
Still, each cell in the pivot table is a sort of JSON representation of a whole row in the original table: it holds not just the values, as the rows in the original table did, but also the schema.
As such it is quite verbose, so the size of a cell can be multiple times bigger than the original size [which limits the usage of this approach ... unless you get even more creative :o) ... and there are still plenty of areas to apply that :o)]
As of today, you can now create a partitioned table from a non-partitioned table by querying it and specifying the partition column. You'll pay for one full table scan on the original (non-partitioned) table. Note: this is currently in beta.
https://cloud.google.com/bigquery/docs/creating-column-partitions#creating_a_partitioned_table_from_a_query_result
To create a partitioned table from a query result, write the results to a new destination table. You can create a partitioned table by querying either a partitioned table or a non-partitioned table. You cannot change an existing standard table to a partitioned table using query results.
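For example, the DDL form of this looks like the following (the same pattern other answers here use), assuming a TIMESTAMP column called date_column as the partitioning column:
CREATE TABLE `dataset.new_table`
PARTITION BY DATE(date_column)
AS SELECT * FROM `dataset.non_partitioned_table`;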
Until the new feature is rolled out in BigQuery, there is another (much cheaper) way to partition the table(s) by using Cloud Dataflow. We used this approach instead of running hundreds of SELECT * statements, which would have cost us thousands of dollars.
Create the partitioned table in BigQuery using the normal partition command
Create a Dataflow pipeline and use a BigQueryIO.Read source to read the table
Use a Partition transform to partition each row
Using a max of 200 shards/sinks at a time (any more than that and you hit API limits), create a BigQueryIO.Write sink for each day/shard that will write to the corresponding partition using the partition decorator syntax - "$YYYYMMDD"
Repeat N times until all data is processed.
Here's an example on Github to get you started.
You still have to pay for the Dataflow pipeline(s), but it's a fraction of the cost of using multiple SELECT * in BigQuery.
If you have date sharded tables today, you can use this approach:
https://cloud.google.com/bigquery/docs/creating-partitioned-tables#converting_dated_tables_into_a_partitioned_table
If you have a single non-partitioned table to be converted to partitioned table, you can try the approach of running a SELECT * query with allow large results and using the table's partition as the destination (similar to how you'd restate data for a partition):
https://cloud.google.com/bigquery/docs/creating-partitioned-tables#restating_data_in_a_partition
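A sketch of that per-partition restatement, assuming a TIMESTAMP column here called ts (an assumed name), with 'allow large results' enabled and the destination table set to the matching partition decorator (e.g. mytable$20170101):
#standardSQL
SELECT *
FROM `dataset.non_partitioned_table`
WHERE DATE(ts) = '2017-01-01'
-- run with destination table mytable$20170101; repeat per day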
Please note that this approach will charge you the scan cost of the source table for the query as many times as you query it.
We are working on something to make this scenario significantly better in the next few months.
CREATE TABLE `dataset.new_table`
PARTITION BY DATE(date_column)
AS SELECT * FROM `dataset.old_table`;
drop table `dataset.old_table`;
ALTER TABLE `dataset.new_table`
RENAME TO old_table;
I need to select data from one table and insert it into another table. Currently the SQL looks something like this:
INSERT INTO A (x, y, z)
SELECT x, y, z
FROM B b
WHERE ...
However, the SELECT is huge, resulting in over 2 million rows, and we think it is taking up too much memory. Informix, the DB in this case, runs out of virtual memory when the query is run.
How would I go about selecting and inserting a set of rows at a time (say 2000), given that I don't think there are any row IDs, etc.?
You can do SELECT FIRST n * FROM table, where n is the number of rows you want, say 2000. Also, in the WHERE clause, add an embedded select that checks the table you are inserting into for rows that already exist, so that the next time the statement is run it will not include already inserted data.
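A sketch of that idea, assuming x identifies a row and that your Informix version accepts FIRST inside an INSERT ... SELECT:
INSERT INTO A (x, y, z)
    SELECT FIRST 2000 b.x, b.y, b.z
    FROM B b
    WHERE ...
      AND NOT EXISTS (SELECT 1 FROM A a WHERE a.x = b.x);  -- skip rows already copied
Rerun the statement until it inserts zero rows.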
I assume that you have some script this is executed from? You can just loop and limit, as long as you order the values returned from the nested select. Here is some pseudocode:
total  = SELECT COUNT(x) FROM B WHERE ...
offset = 0
while (total > 0)
    -- skip the rows copied in earlier passes (or exclude them in the WHERE clause)
    INSERT INTO A (x, y, z)
        SELECT x, y, z FROM B b WHERE ... ORDER BY x SKIP offset LIMIT 2000
    offset = offset + 2000
    total  = total  - 2000
end
I'm almost certain that IDS only lets you use the FIRST clause where the data is returned to the client [1], and that is something you want to avoid if at all possible.
You say you get an out of memory error (rather than, say, a long transaction aborted error)? Have you looked at the configuration of your server to ensure it has a reasonable amount of memory?
It depends in part on how big your data set is and what the constraints are - why you are doing the load across tables. But I would normally aim to determine a way of partitioning the data into loadable subsets and run those sequentially in a loop. For example, if the sequence numbers are between 1 and 10,000,000, I might run the loop ten times, with a condition on the sequence number of 'AND seqnum >= 0 AND seqnum < 1000000', then 'AND seqnum >= 1000000 AND seqnum < 2000000', and so on. Preferably in a language with the ability to substitute the range via variables.
This is a bit of a nuisance, and you want to err on the conservative side in terms of range size (more, smaller partitions rather than fewer, bigger ones, to reduce the risk of running out of memory).
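A sketch of a single pass of that loop, with the range bounds to be substituted by the driving script (seqnum stands for whatever sequence/serial column your table has):
INSERT INTO A (x, y, z)
    SELECT x, y, z
    FROM B b
    WHERE ...
      AND b.seqnum >= 0 AND b.seqnum < 1000000;
-- next pass: AND b.seqnum >= 1000000 AND b.seqnum < 2000000, and so on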
[1] Over-simplifying slightly. A stored procedure would have to count as 'the client', for example, and the communication cost in a stored procedure is (a lot) less than the cost of going to the genuine client.