This table is partitioned by hour:
SELECT *
FROM `blockchain-etl-internal.crypto_ethereum_partitioned.logs_by_topic_0xd78`
WHERE block_timestamp >= '2020-11-14 00:00:00' and block_timestamp < '2020-11-14 01:00:00'
ORDER BY block_timestamp DESC
But whatever filter I specify on block_timestamp, BigQuery scans the entire table. You can compare the table size with the amount of data a query scans to confirm this.
Isn't BigQuery supposed to scan only the data in partitions that match the filter?
This is because all rows in the table are still in the UNPARTITIONED partition and have not yet been repartitioned into their corresponding partitions. Repartitioning is triggered only when there is enough data (at least a certain threshold in bytes) (https://cloud.google.com/bigquery/streaming-data-into-bigquery#streaming_into_partitioned_tables).
At the moment, this threshold is set at 5 GB, while the table holds around 400 MB, as you stated.
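If you want to verify this, the partition metadata shows how many rows are still sitting in that bucket. A minimal sketch, assuming the INFORMATION_SCHEMA.PARTITIONS view is available for your dataset:
SELECT partition_id, total_rows, total_logical_bytes
FROM `blockchain-etl-internal.crypto_ethereum_partitioned.INFORMATION_SCHEMA.PARTITIONS`
WHERE table_name = 'logs_by_topic_0xd78';
-- rows that have not been repartitioned yet show up under the special
-- __UNPARTITIONED__ partition_id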
The following problem occurred in our project, and we cannot solve it.
We have a huge amount of log data, and we are moving from MongoDB to ClickHouse.
Our table is created like this:
CREATE TABLE IF NOT EXISTS logs ON CLUSTER default (
raw String,
ts DateTime64(6) MATERIALIZED toDateTime64(JSONExtractString(raw, 'date_time'), 6),
device_id String MATERIALIZED JSONExtractString(raw, 'device_id'),
level Int8 MATERIALIZED JSONExtractInt(raw, 'level'),
context String MATERIALIZED JSONExtractString(raw, 'context'),
event String MATERIALIZED JSONExtractString(raw, 'event'),
event_code Int64 MATERIALIZED JSONExtractInt(raw, 'event_code'),
data String MATERIALIZED JSONExtractRaw(raw, 'data'),
date Date DEFAULT toDate(ts),
week Date DEFAULT toMonday(ts)
)
ENGINE = ReplicatedReplacingMergeTree()
ORDER BY (device_id, ts)
PARTITION BY week
and I'm running a query like so
SELECT device_id,toDateTime(ts),context,level,event,data
FROM logs
WHERE device_id = 'some_uuid'
ORDER BY ts DESC
LIMIT 10
OFFSET 0;
This is the result: 10 rows in set. Elapsed: 6.23 sec.
And a second query, without ORDER BY, LIMIT, and OFFSET:
SELECT device_id,toDateTime(ts),context,level,event,data
FROM logs
WHERE device_id = 'some_uuid'
This is the result: Elapsed: 7.994 sec. for every 500 rows out of 130,000+.
That is too slow.
It seems that ClickHouse processes all the rows in the table. What is wrong, and what do we need to change to improve ClickHouse's speed?
The same implementation on MongoDB takes 200-500 ms at most.
Egor! When you mentioned "we go to ClickHouse from MongoDB", did you mean you switched from MongoDB to ClickHouse to store your data? Or do you somehow connect to ClickHouse from MongoDB to run the queries you're referring to?
I'm not sure how you ingest your data, but let's focus on the reading part.
For the MergeTree family, ClickHouse writes data in parts. It is therefore vital to have a timestamp as part of your WHERE clause, so ClickHouse can determine which parts it needs to read and skip most of the data you don't need. Otherwise, it will scan all the data.
I would imagine these queries will do the scan faster:
SELECT device_id,toDateTime(ts),context,level,event,data
FROM logs
WHERE device_id = 'some_uuid' AND week = '2021-07-05'
ORDER BY ts DESC
LIMIT 10
OFFSET 0;
SELECT device_id,toDateTime(ts),context,level,event,data
FROM logs
WHERE device_id = 'some_uuid' AND week = '2021-07-05';
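If your ClickHouse version is recent enough, you can confirm the pruning directly. A minimal sketch using EXPLAIN with the indexes setting (supported in newer releases):
EXPLAIN indexes = 1
SELECT device_id, toDateTime(ts), context, level, event, data
FROM logs
WHERE device_id = 'some_uuid' AND week = '2021-07-05';
-- the Indexes section of the output shows how many parts and granules
-- the partition key (MinMax) and the primary key actually selected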
AFAIK, unless you specify the exact partition format, CH will use partitioning by month (i.e. toYYYYMM()) for your CREATE TABLE statement. You can check that by looking at the system.parts table:
SELECT
partition,
name,
active
FROM system.parts
WHERE table = 'logs'
So, if you want to store data in weekly parts, I would imagine the partitioning could look like:
...
ORDER BY (device_id, ts)
PARTITION BY toMonday(week)
This is also a good piece of information: Using Partitions and Primary keys in queries
I have a database that stores data from sensors in a factory. The DB contains about 1.6 million rows per sensor per day. I have the following index on the DB.
CREATE INDEX sensor_name_time_stamp_index ON sensor_data (time_stamp, sensor_name);
I will be running the following query once per day.
SELECT
time_stamp, value
FROM
(SELECT
time_stamp,
value,
lead(value) OVER (ORDER BY time_stamp DESC) as prev_result
FROM
sensor_data WHERE time_stamp between '2021-02-24' and '2021-02-25' and sensor_name = 'sensor8'
ORDER BY
time_stamp DESC) as result
WHERE
result.value IS DISTINCT FROM result.prev_result
ORDER BY
result.time_stamp DESC;
The query returns a list of rows where the value is different from the previous row.
This query takes about 23 seconds to run.
Running on PostgreSQL 10.12 on Aurora serverless.
Questions: Besides the index, are there any other optimisations that I can perform on the DB to make the query run faster?
To support the query optimally, the index must be defined the other way around:
CREATE INDEX ON sensor_data (sensor_name, time_stamp);
Otherwise, PostgreSQL will have to read all index values in the time interval, then discard the ones for the wrong sensor, then fetch the rows from the table.
With the proper column order, only the required rows are scanned in the index.
You asked for other optimizations: since you have to sort rows, increasing work_mem can be beneficial. Other than that, more memory and faster storage certainly won't hurt.
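A minimal sketch of how you might apply both suggestions and check the plan (the index name and the work_mem value are illustrative, not prescriptions):
CREATE INDEX sensor_data_name_ts_idx ON sensor_data (sensor_name, time_stamp);

-- raise the sort memory for this session only; tune the value to your workload
SET work_mem = '256MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT time_stamp, value
FROM sensor_data
WHERE sensor_name = 'sensor8'
  AND time_stamp BETWEEN '2021-02-24' AND '2021-02-25';
-- with the reordered index, the plan should show an index scan on
-- sensor_data_name_ts_idx rather than filtering out other sensors' rows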
I have a table Partitioned by:
HASH(timestamp DIV 43200 )
When I perform this query
SELECT max(id)
FROM messages
WHERE timestamp BETWEEN 1581708508 AND 1581708807
it scans all partitions, even though both numbers 1581708508 and 1581708807, and every number between them, fall in the same partition. How can I make it scan only that partition?
You have discovered one of the reasons why PARTITION BY HASH is useless.
In your situation, the Optimizer sees the range (BETWEEN) and says "punt, I'll just scan all the partitions".
That is, "partition pruning" does not work when the WHERE clause involves a range and you are using PARTITION BY HASH. PARTITION BY RANGE, on the other hand, may be able to prune. But... What's the advantage? It does not make the query any faster.
I have found only four uses for partitioning: http://mysql.rjweb.org/doc.php/partitionmaint . It sounds like your application does not fit any of those cases.
That particular query would best be done without partitioning. Instead have a non-partitioned table with this 'composite' index:
INDEX(timestamp, id)
It must scan all the rows to discover the MAX(id), but with this index, it is
- scanning only the 2-column index, and
- not touching any rows outside the timestamp range.
Hence it will be as fast as possible. Even if PARTITION BY HASH were smart enough to do the desired pruning, it would not run any faster.
In particular, when you ask for a range on the partition key, such as with WHERE timestamp BETWEEN 1581708508 AND 1581708807, the execution looks in all partitions for the desired rows. This is one of the major failings of HASH. Even if it could realize that only one partition is needed, it would be no faster than simply using the index I suggest.
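A minimal sketch of that approach (the index name is illustrative, and it assumes the non-partitioned table suggested above):
ALTER TABLE messages ADD INDEX ts_id (timestamp, id);

EXPLAIN
SELECT MAX(id)
FROM messages
WHERE timestamp BETWEEN 1581708508 AND 1581708807;
-- the plan should report a range scan on ts_id with "Using index",
-- i.e. the query is answered from the covering index without touching rows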
You can determine that individual partition by using modular arithmetic:
MOD(<expression that is the argument of the hash function>, <number of partitions>)
assuming you have 2 partitions
CREATE TABLE messages(ID int, timestamp int)
PARTITION BY HASH( timestamp DIV 43200 )
PARTITIONS 2;
Look up the partition names with:
SELECT CONCAT( 'p',MOD(timestamp DIV 43200,2)) AS partition_name, timestamp
FROM messages;
and determine the related partition name for the value 1581708508 of the timestamp column (assume it is p1). Then use
SELECT MAX(id)
FROM messages PARTITION(p1)
to get all the records in partition p1 only, without needing a WHERE condition such as
WHERE timestamp BETWEEN 1581708508 AND 1581708807
Btw, all partitions can be listed with
SELECT *
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE table_name='messages'
Demo
I have a view that works to pull the most recent data for a Hive history table. The history table is partitioned by day. The way that the view works is very straightforward—it has a subquery that does a max date on the date field (the one that is used as the partition) then filters the table based upon that value. The table contains hundreds of days (partitions), each with many millions of rows. In order to speed up the subquery, I am attempting to limit the partitions that are scanned to the last one created. To account for holiday weekends, I'm going back four days to ensure that the query returns data.
If I hard code the values with dates, the subquery runs very fast, and limits to the partitions correctly.
However, if I attempt to limit the partitions with a subquery to calculate the last partition, it doesn’t recognize the partitions and does a full table scan. The query will return correct results, as the filter works, but it takes a long time because it is not limiting the partitions scanned.
I tried doing the subquery as a WITH statement, then using an INNER JOIN on bus_date, but got the same results—partitions were not utilized.
The behavior is repeatable via a query, so I’ll use that rather than the view to demonstrate:
SELECT *
FROM a.transactions
WHERE bus_date IN (SELECT MAX (bus_date)
FROM a.transactions maxtrans
WHERE bus_date >= date_sub (CURRENT_DATE, 4));
There are no error messages, and the query actually works (filters to pull the correct data), but it scans all partitions so it is extremely slow. How can I limit the query to utilize the partitions identified in the subquery?
I'm still hopeful that someone will have an answer for this, but I did want to post the workaround that I've come up with in case it is useful for someone else.
SELECT *
FROM a.transactions
WHERE bus_date >= date_sub (CURRENT_DATE, 4)
AND bus_date IN (SELECT MAX (bus_date)
FROM a.transactions maxtrans
WHERE bus_date >= date_sub (CURRENT_DATE, 4));
The query is a little clumsy, as it filters on the business date twice. The first filter limits the main set of data to the last four days (which restricts the scan to those partitions and avoids scanning all of them), and the second pins it down to the last day for which data has been loaded (via the MAX bus_date). This is far from perfect, but it performs CONSIDERABLY better than the query that scans all partitions. Thanks.
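If you want to confirm that the workaround really prunes partitions rather than trusting the wall-clock time, Hive's EXPLAIN DEPENDENCY lists the input partitions a query will read:
EXPLAIN DEPENDENCY
SELECT *
FROM a.transactions
WHERE bus_date >= date_sub(CURRENT_DATE, 4)
  AND bus_date IN (SELECT MAX(bus_date)
                   FROM a.transactions maxtrans
                   WHERE bus_date >= date_sub(CURRENT_DATE, 4));
-- the input_partitions list in the JSON output should contain only the
-- last few bus_date partitions, not every partition of the table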
We have a table partitioned by day in BigQuery, which is updated by streaming inserts.
The doc says that: "when streaming to a partitioned table, data in the streaming buffer has a NULL value for the _PARTITIONTIME pseudo column"
But if I query SELECT COUNT(*) FROM table WHERE _PARTITIONTIME IS NULL, it always returns 0, even though bq show tells me that there are a lot of rows in the streaming buffer.
Does this mean that the pseudo column is not present at all for rows in streaming buffer? In any case, how can I query for the data ONLY in the streaming buffer without it becoming a full table scan?
Thanks in advance
Data in the streaming buffer has a NULL value for the _PARTITIONTIME column.
SELECT
fields
FROM
`dataset.partitioned_table_name`
WHERE
_PARTITIONTIME IS NULL
https://cloud.google.com/bigquery/docs/partitioned-tables#copying_to_partitioned_tables
When you stream data into BigQuery, there is usually a "warming-up" period: the time it takes for the streamed data to become available for operations such as querying, copying, and exporting.
The doc states toward the end that after a period of up to 90 minutes, the pseudo column _PARTITIONTIME receives a non-NULL value, which means your streamed data is fully ready for any type of operation you want to run on it (being able to run queries usually takes only a few seconds).
That means you don't query partitioned tables looking for rows where this field is NULL; instead, you do it like so:
SELECT
fields
FROM
`dataset.partitioned_table_name`
WHERE
_PARTITIONTIME = TIMESTAMP('2017-01-20')
In this example, you would be querying only data streamed into the Jan/20 date partition (which avoids a full table scan).
You can also select a range of dates; you would just have to change the WHERE clause to:
WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2017-01-20') AND TIMESTAMP('2017-01-22')
This would query three daily partitions in your table (BETWEEN is inclusive of both endpoints).