Context
I have an ETL process that keeps overwriting all rows of a table in BigQuery by deleting them all first and then inserting the new ones. I'm looking for a data backup design that can be triggered regularly on that table.
Issue
I'm concerned about the cost implications of using snapshots for this kind of table.
What exactly am I worried about?
On each drop and recreation of the base table, the new data contains many rows that are identical to the previous rows, some new rows, and some updated rows. However, the data gets inserted in a different sort order each time.
So when BigQuery creates a snapshot by looking for rows that have changed, will it recognize that some previous rows are still in the base table and have merely changed position, so as to avoid increased storage costs on the snapshot?
Have you thought about using MERGE statements?
These can deal with inserts, updates, and even deletes in one query.
An example here https://querystash.com/query/62cf51097d57d7579954c0d418afc063
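For reference, a minimal BigQuery MERGE sketch might look like the following; the table and column names (mydataset.target, mydataset.new_rows, id, col_a, col_b) are placeholders, not from the original post:

MERGE `mydataset.target` T
USING `mydataset.new_rows` S
ON T.id = S.id
WHEN MATCHED THEN
  UPDATE SET col_a = S.col_a, col_b = S.col_b                 -- updated rows
WHEN NOT MATCHED THEN
  INSERT (id, col_a, col_b) VALUES (S.id, S.col_a, S.col_b)   -- new rows
WHEN NOT MATCHED BY SOURCE THEN
  DELETE;                                                     -- rows absent from the new load

Because the match is on the key rather than on row position, the insert order of the new load does not matter.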
Related
I have created a table for reporting purposes where I store data in about 50 columns, and at some time interval my scheduler executes a service which processes other tables and fills in the data in my flat table.
Currently I am deleting and re-inserting the data in that table, but I want to know whether this is good practice, or whether I should check every column in every row, update any that have changed, and insert a new record if the data does not exist.
FYI, the total number of rows being reinserted is 100k+.
This is a very broad question that can only really be answered with access to your environment and a discussion of your specific requirements. Obviously this is not possible via Stack Overflow.
This means you will need to make this decision yourself.
The information you need in order to make this decision is the set of table-update strategies available and how you can achieve them, normally referred to as Slowly Changing Dimensions. There are several different types, each with its own advantages, disadvantages, and optimal use cases.
Once you understand the how of getting your data to incrementally update as required, you can then look at the why and whether the extra processing logic required to achieve this is actually worth it. Your dataset of a few hundred thousand rows is not large and therefore probably does not need this level of processing just yet, though that assessment will depend on how complex and time-consuming your current process is and how long you have in which to run it.
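For a concrete flavour, a Type 2 (history-keeping) change might be applied with two statements along these lines; dim_customer, staging_customer, payload, and is_current are hypothetical names used only for illustration:

-- Expire the current version of any row whose incoming value differs:
UPDATE dim_customer
SET is_current = 0
WHERE is_current = 1
  AND EXISTS (SELECT 1
              FROM staging_customer s
              WHERE s.id = dim_customer.id
                AND s.payload <> dim_customer.payload);

-- Insert a fresh current version for new and changed keys:
INSERT INTO dim_customer (id, payload, valid_from, is_current)
SELECT s.id, s.payload, CURRENT_DATE, 1
FROM staging_customer s
LEFT JOIN dim_customer d
  ON d.id = s.id AND d.is_current = 1
WHERE d.id IS NULL;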
It is probably faster to repopulate the table of 100k rows. To do an update, you still need to:
generate all the rows to insert
compare values in every row
update the values that have changed
The expense of updating rows lies heavily in the logging and data-movement operations at the data-page level. In addition, you still need to bring the old and new data together to compare them.
If the update touches a significant portion of the rows, perhaps even just a few percent of them, then it is likely that all data pages will be modified, so the I/O is pretty similar either way.
When you simply replace the table, you will start by either dropping the table or truncating it. Those are relatively cheap operations because they are not logged at the row level. Then you are inserting into the table. Inserting 100,000 rows from one table to another should be pretty fast.
The above is general guidance. Of course, if you are only changing 3 rows in the table each day, then update is going to be faster. Or, if you are adding a new layer of data each day, then just an insert, with a handful of changed historical values might be a fine approach.
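As a sketch, the replace-the-table path described above is just two statements; report_table and staging_rows are placeholder names:

-- Minimally logged wipe of the old contents:
TRUNCATE TABLE report_table;

-- Bulk reload of the freshly generated rows:
INSERT INTO report_table
SELECT * FROM staging_rows;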
I have a Spark job that gets data from multiple sources and aggregates them into one table. The job should update the table only if there is new data.
One approach I could think of is to fetch the data from the existing table, and compare with the new data that comes in. The comparison happens in the Spark layer.
I was wondering if there is any better way to compare that could improve the comparison performance.
Please let me know if anyone has a suggestion on this.
Thanks much in advance.
One approach I could think of is to fetch the data from the existing table, and compare with the new data that comes in
IMHO, comparing the entire dataset just to load the new data is not performant.
Option 1:
Instead, you can create a partitioned BigQuery table with a partition column used when loading the data; while loading new data, you can then check whether the new data uses the same partition value.
Hitting partition-level data in BigQuery is far more efficient than selecting the entire table and comparing it in Spark.
The same applies to Hive as well.
See Creating partitioned tables or Creating and using integer range partitioned tables.
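For illustration, a date-partitioned BigQuery table and a partition-scoped probe might look like this; the dataset, table, and column names are placeholders:

-- Date-partitioned table (BigQuery):
CREATE TABLE mydataset.daily_load
(
  load_date DATE,
  payload STRING
)
PARTITION BY load_date;

-- Probing one partition instead of scanning the whole table:
SELECT COUNT(*) AS existing_rows
FROM mydataset.daily_load
WHERE load_date = DATE '2021-01-01';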
Option 2:
Another alternative: Google BigQuery provides a MERGE statement. If your requirement is to merge the data without doing the comparison in Spark, you can go ahead with the MERGE statement; see the documentation quoted below.
A MERGE statement is a DML statement that can combine INSERT, UPDATE, and DELETE operations into a single statement and perform the operations atomically.
Using this, we get a performance improvement because all three operations (INSERT, UPDATE, and DELETE) are performed in one pass; we do not need to write an individual statement for each kind of change to the target table.
There are many ways this problem can be solved; one of the less expensive, more performant, and more scalable ways is to use a datastore on the file system to determine what data is truly new.
When data comes in for the first time, write it to two places: the database and a file (say, in S3). If data is already in the database, then you need to initialize the local/S3 file with the table's data.
From the second load onwards, check whether incoming data is new based on its presence in the local/S3 file.
Mark the delta data as new or updated, and export it to the database as inserts or updates.
As time goes by this file will get bigger and bigger. Define a date range beyond which updated data won't be coming, and regularly truncate this file to keep its data within that time range.
You can also bucket and partition this data, and you can use Delta Lake to maintain it.
One downside is that whenever the database is updated, this file may need to be updated too, depending on whether the relevant data has changed. You can maintain a marker column on the database table to signify the sync date; index that column too. Read changed records based on this column and update the file/Delta Lake.
This way your Spark app will be less dependent on the database. Database operations are not very scalable, so keeping them out of the critical path is better.
Shouldn't you have a last-update time in your DB? The approach you are using doesn't sound scalable, so if you had a way to set an update time on each row in the table, it would solve the problem.
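As a sketch, assuming a last_updated timestamp column maintained on every write (the column and table names here are hypothetical), the job would only pull rows changed since the previous run:

SELECT *
FROM aggregated_table
WHERE last_updated > TIMESTAMP '2021-01-01 00:00:00';  -- time of the previous sync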
Twice a day, I run a heavy query and save the results (40 MB worth of rows) to a table.
I truncate this results table before inserting the new results such that it only ever has the latest query's results in it.
The problem is that while the new results are being written, there is technically a window with no data and/or a lock on the table. When that is the case, anyone interacting with the site could experience an interruption. I haven't run into this yet, but I am looking to mitigate it in the future.
What is the best way to remedy this? Is it proper to write the new results to a table named results_pending, then drop the results table and rename results_pending to results?
Two methods come to mind. One is to swap partitions for the table. To be honest, I haven't done this in SQL Server, but it should work at a low level.
I would normally have all access go through a view. Then, I would create the new day's data in a separate table -- and change the view to point to the new table. The view change is close to "atomic". Well, actually, there is a small period of time when the view might not be available.
Then, at your leisure you can drop the old version of the table.
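A rough T-SQL sketch of that view swap, with all object names illustrative (in practice you would rotate the _new/_old names on each run):

-- 1. Build the new results in a fresh table:
SELECT *
INTO dbo.results_new
FROM dbo.heavy_query_source;
GO
-- 2. Repoint the view; readers keep querying dbo.results throughout:
ALTER VIEW dbo.results AS
SELECT * FROM dbo.results_new;
GO
-- 3. Drop the previous table at your leisure:
DROP TABLE dbo.results_old;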
TRUNCATE is a DDL operation, which is what causes problems like this. If you are using snapshot isolation with row versioning and want users to see either the old data or the new data, then use a single transaction to DELETE the old records and INSERT the new ones.
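A minimal sketch of that, reusing the results_pending name from the question; under snapshot isolation, readers see either the old rows or the new rows, never an empty table:

BEGIN TRANSACTION;
DELETE FROM dbo.results;            -- row-level, versioned deletes (not DDL)
INSERT INTO dbo.results
SELECT * FROM dbo.results_pending;
COMMIT;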
Another option if a lot of the data doesn't actually change is to UPDATE / INSERT / DELETE only those records that need it and leave unchanged records alone.
I have been trying to solve a problem where two concurrent updates on the same table are causing additional records to be created/inserted. I have never experienced this in any other relational database, nor would I expect to. So I believe it's potentially a quirk of Redshift's architecture of distributing queries across multiple nodes, but I cannot pinpoint it or provide a real-world example.
Before these two updates run, I insert new data into the table. The insert contains a daily snapshot that fills out one day of data; most columns have empty values, ready for the updates to populate them.
The updates, which are simple UPDATE SQL statements each populating its respective columns, run concurrently. If run individually, I see no additional records created and no duplication.
The updates operate across the entire table, over 200 million records; however, the duplication occurs only in the records that were populated recently (the new data for that day's period).
This is kind of a worry, as I would never assume an update could create new records in addition to the records created by the initial insert.
What is even more bizarre is that the duplicate records hold different data.
I have checked, by looking at Redshift's query logs (stl_query), to verify that no queries beyond the expected ones are running.
I find it really hard to believe that an update created new rows; are you really sure about this?
I've been through complicated situations when it comes to concurrent transactions on the same table, so what I suggest is that you explicitly lock your table with:
lock table <table> in exclusive mode;
before you manipulate it (exclusive mode will allow reads, but any write attempt will have to wait).
If you don't, and two transactions try to update the same table (inserts are fine, BTW), you are most likely to get "ERROR: 1023 - DETAIL: Serializable isolation violation on table" - or the behavior you are reporting.
I have a database that keeps record history. For each update to a record, the system "deactivates" the previous record (along with all its children) by setting the "Status" column to "0".
It's not a problem yet, but eventually this system is going to hold a lot of records, and history is more important than speed right now. However, the more records are inserted, the slower searches become.
What is the best approach to archiving the records? I've had suggestions to create a cloned archive database to hold the data. I've also had the idea of storing all previous records in an XML file that can be read/loaded later if we need to dig up archived records.
You could create a separate partition containing only the active records, if your DBMS supports it. You can also add an index on Status so that the select ... from tbl where status=1 isn't incredibly slow.
http://msdn.microsoft.com/en-us/library/ms187802.aspx
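A minimal index sketch in SQL Server syntax, using the tbl and Status names from above:

-- A plain index on the Status column:
CREATE NONCLUSTERED INDEX IX_tbl_Status ON tbl (Status);
-- ...or a filtered index that covers only the active rows:
-- CREATE NONCLUSTERED INDEX IX_tbl_Active ON tbl (Status) WHERE Status = 1;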