I want to find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database (a single table for now; eventually I want to do this for multiple tables).
I created a table and then updated one of its rows. Now I have the following query:
SELECT SCN_TO_TIMESTAMP(ora_rowscn) from test_table;
This query returns the timestamps of each row, and for each of them it gives the time when they were first created.
But the row that I've updated has the same timestamp as the others. Why? Shouldn't the timestamp be updated?
ORA_ROWSCN is not the right solution for this. It is not reliable at the row level: unless the table was created with ROWDEPENDENCIES, the SCN is tracked per block, not per row. Moreover, it's not going to be useful at all for deleted rows.
If you have a real need to know when DML changes were made to a table, you should look at Oracle's auditing feature.
An alternative is to use triggers to record when changes are made to the table. Since you say you only care about the time of the most recent change, you can just create a single-column table to record the time, and write a trigger that fires on any DML statement to maintain it. If you're doing this in a production environment or even just in one where more than one session might be modifying the table, you'd want to think about how it should work when concurrent changes are made. You could force the table to have at most one row, but that would serialize every change to the table. You could allow each session to insert a separate row and take the max value when querying it, but then you probably want to think about clearing out old rows from time to time.
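As a rough sketch of that idea (the tracking table and trigger names here are made up, and the single-row vs. MAX trade-off discussed above still applies), a statement-level trigger could look like this:

CREATE TABLE test_table_mod_log (changed_at TIMESTAMP);

CREATE OR REPLACE TRIGGER trg_test_table_mod
AFTER INSERT OR UPDATE OR DELETE ON test_table
BEGIN
  -- one row per DML statement; take MAX(changed_at) when querying the log
  INSERT INTO test_table_mod_log (changed_at) VALUES (SYSTIMESTAMP);
END;
/

-- most recent change to test_table
SELECT MAX(changed_at) FROM test_table_mod_log;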
Related
I have a large table whose rows get updated/inserted/merged periodically from a few different queries. I need a scheduled process to run (via API) to periodically check for which rows in that table were updated since the last check. So here are my issues...
When I run the merge query, I don't see a way for it to return which records were updated... otherwise, I could be copying those updated rows to a special updated_records table.
There are no triggers so I can't keep track of mutations that way.
I could add a last_updated timestamp column to keep track that way, but then repeatedly querying the entire table all day for that would be a huge amount of data billed (expensive).
I'm wondering if I'm overlooking something obvious or if maybe there's some kind of special BQ metadata that could help?
The reason I'm attempting this is that I want to extract and synchronize a smaller subset of this table into my PostgreSQL instance, because the latency for querying BQ is just too much for smaller queries.
Any ideas? Thanks!
One way is to periodically save an intermediate state of the table using the time travel feature, or to store only the diffs. I just want to leave this option here:
FOR SYSTEM_TIME AS OF references the historical versions of the table definition and rows that were current at timestamp_expression.
The value of timestamp_expression has to be within the last 7 days.
The following query returns a historical version of the table from one hour ago.
SELECT * FROM table
FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR);
The following query returns a historical version of the table at an absolute point in time.
SELECT * FROM table
FOR SYSTEM_TIME AS OF '2017-01-01 10:00:00-07:00';
An approach would be to have 3 tables:
a base table in "append only" mode: only inserts are added, and updates are stored as full rows, so this table holds every version of every record, like a versioning system.
a table to hold deletes (or deletes can be incorporated as a soft-delete column kept in the first table)
a live table where you hold the current data (this is the table you would MERGE into, most probably from the base table; a sketch follows below).
If you add partitioning and clustering, you can take advantage of the discounted long-term storage price and scan less data per query.
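A rough sketch of the base-to-live MERGE (all table and column names here - base_table, live_table, id, col1, is_deleted, change_ts - are placeholders, and the soft-delete column is optional as noted above):

MERGE mydataset.live_table T
USING (
  -- keep only the newest version of each record from the append-only base table
  SELECT * EXCEPT(rn)
  FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY change_ts DESC) AS rn
    FROM mydataset.base_table
  )
  WHERE rn = 1
) S
ON T.id = S.id
WHEN MATCHED AND S.is_deleted THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET col1 = S.col1, change_ts = S.change_ts
WHEN NOT MATCHED AND NOT S.is_deleted THEN
  INSERT (id, col1, change_ts) VALUES (S.id, S.col1, S.change_ts);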
If the table is large but the amount of data updated per day is modest, then you can partition and/or cluster the table on the last_updated_date column. There are some edge cases, like the first check of the day, which should filter for last_updated_date being either today or yesterday.
Depending on how modest the amount of data updated throughout a day is, even repeatedly querying the table all day could be affordable, because the BQ engine will scan only one daily partition.
P.S.
Detailed explanation
I could add a last_updated timestamp column to keep track that way
I inferred from that sentence that the last_updated column is not there yet (so the check-for-updates statement cannot currently distinguish updated rows from non-updated ones), but that you can modify the table's UPDATE statements so that this column is set on newly modified rows.
Therefore I assumed you can modify the updates further to also set an additional last_updated_date column, which will contain the date portion of the timestamp stored in the last_updated column.
but then repeatedly querying the entire table all day
From here I inferred there are multiple checks throughout the day.
but the data being updated can be for any time frame
Sure, but as soon as a row is updated, no matter how old it is, it will acquire values in the two new columns last_updated and last_updated_date - unless both columns were already populated by a previous update, in which case the two values will simply be updated. If there are several updates to the same row between the update checks, the latest update will still make the row discoverable by checks that use the logic described below.
The check-for-update statement will (conceptually, not literally):
filter rows to ensure last_updated_date = today AND last_updated > last_checked. The datetime of the previous update check is stored in last_checked; where this piece of data is held (a table, durable config) is implementation dependent.
determine whether the current check is the first check of the day. If so, additionally search for last_updated_date = yesterday AND last_updated > last_checked. (A conceptual sketch follows below.)
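A conceptual sketch of such a check (table and column names follow the explanation above; @last_checked is a query parameter holding the datetime of the previous check, and the yesterday branch would only be included on the first check of the day):

SELECT *
FROM mydataset.my_table
WHERE last_updated_date IN (CURRENT_DATE(), DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))
  AND last_updated > @last_checked;

With partitioning (or clustering) on last_updated_date, this should touch at most the two most recent daily partitions rather than the whole table.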
Note 1: If the table is partitioned and/or clustered on the last_updated_date column, then the above update checks will not cause a full table scan. Subject to the 'modest' assumption made at the very beginning of my answer, the checks will satisfy your 3rd bullet point.
Note 2: The downside of this approach is that the checks for updates will not find rows that were updated before the table UPDATE statements were modified to include the two extra columns. (Such rows will be in the __NULL__ partition with rows that were never updated.) But I assume that until the changes to the UPDATE statements are made, it is impossible to distinguish updated rows from non-updated ones anyway.
Note 3: This is an explanatory concept. In the real implementation you might need one extra column instead of two. And you will need to check which approach works better: partitioning, clustering (with partitioning on a fake column), or both.
The detailed explanation of the initial answer (i.e., the part above the P.S.) ends here.
Note 4
clustering only helps performance
From the point of view of table scan avoidance and achieving a reduction in the data usage/costs, clustering alone (with fake partitioning) could be as potent as partitioning.
Note 5
In the comment you mentioned there is already some partitioning in place. I'd suggest examining whether the existing partitioning is indispensable, or whether it can be replaced with clustering.
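For reference, a hypothetical DDL sketch of a clustered copy of the table (names are placeholders; older BigQuery required a partition column alongside clustering, which is where the 'fake partitioning' idea above comes from, while newer versions allow clustering on its own):

CREATE TABLE mydataset.my_table_clustered
CLUSTER BY last_updated_date
AS
SELECT * FROM mydataset.my_table;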
Some good ideas posted here. Thanks to those who responded. Essentially, there are multiple approaches to tackling this.
But anyway, here's how I solved my particular problem...
Suppose the data needs to ultimately end up in a table called MyData. I created two additional tables, MyDataStaging and MyDataUpdate. These two tables have a structure identical to MyData, except that MyDataStaging has an additional TIMESTAMP field, batch_timestamp. This timestamp allows me to determine which rows are the latest versions in case I end up with multiple versions before the table is processed.
Dataflow pushes data directly to MyDataStaging, along with a batch_timestamp value indicating when the process ran.
A scheduled process then upserts/merges MyDataStaging into MyDataUpdate (MyDataUpdate therefore always contains only a unique list of rows/values that have changed). The process then upserts/merges from MyDataUpdate into MyData, and that data is also exported and downloaded to be loaded into PostgreSQL. Finally, the staging/update tables are emptied appropriately.
Now I'm not constantly querying the massive table to check for changes.
NOTE: When merging to the main big table, I filter the update on unique dates from within the source table to limit the bytes processed.
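A rough sketch of the staging-to-update step (the key column id and value column val are placeholders for the real schema; batch_timestamp is used to keep only the latest version of each row):

MERGE mydataset.MyDataUpdate U
USING (
  -- latest version of each row pushed into staging since the last run
  SELECT id, val
  FROM (
    SELECT id, val,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY batch_timestamp DESC) AS rn
    FROM mydataset.MyDataStaging
  )
  WHERE rn = 1
) S
ON U.id = S.id
WHEN MATCHED THEN
  UPDATE SET val = S.val
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (S.id, S.val);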
I need to get the latest modified time of a table, so I came across
select relfilenode from pg_class where relname = 'test';
which gives me the relfilenode id, this seems to be a directory name in
L:\Databases\PostgresSQL\data\base\inodenumber
from which I later extract the latest modified time.
Is this the right way to do this, or are there better methods to do the same?
Testing the mtime of the table's relfilenode won't work well. As Eelke noted, VACUUM among other operations will modify the timestamp. Hint-bit setting will modify the table too, causing it to appear to be "modified" by a plain SELECT. Additionally, a table's on-disk relation can consist of more than one file (extra forks, plus 1GB segments), and you'd have to check all of them to find the most recent.
If you want to keep a last modified time for a table, add an AFTER INSERT OR UPDATE OR DELETE OR TRUNCATE ... FOR EACH STATEMENT trigger that updates a timestamp row in a table you use for tracking modification times.
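A minimal sketch of that trigger, assuming PostgreSQL 9.5+ for ON CONFLICT (older versions would need an update-then-insert pattern); the table, function, and trigger names are placeholders:

CREATE TABLE table_mod_times (
    table_name  text PRIMARY KEY,
    modified_at timestamptz NOT NULL
);

CREATE OR REPLACE FUNCTION note_table_modified() RETURNS trigger AS $$
BEGIN
    -- one tracking row per table, keyed by the table the trigger fired on
    INSERT INTO table_mod_times (table_name, modified_at)
    VALUES (TG_TABLE_NAME, current_timestamp)
    ON CONFLICT (table_name) DO UPDATE SET modified_at = EXCLUDED.modified_at;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_mod_time
AFTER INSERT OR UPDATE OR DELETE OR TRUNCATE ON my_table
FOR EACH STATEMENT EXECUTE PROCEDURE note_table_modified();

Note that this single tracking row is exactly the contention point the next paragraph warns about.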
The downside of the trigger is that it'll contend for a single row lock in the tracking table, so it'll serialize all your transactions. It'll also greatly increase the chance of getting deadlocks. What you really want is probably something nontransactional that doesn't have to roll back when the transaction does, where if multiple transactions update a counter the highest value wins. There's nothing like that built in, though it might not be too hard as a C extension.
A slightly more sophisticated option would be to create a trigger that uses dblink to do an update of the last-updated counter. That'll avoid most of the contention problems but it'll actually make deadlocking worse because PostgreSQL's deadlock detection won't be able to "see" the fact that the two sessions are deadlocked via an intermediary. You'd need a way to SELECT ... FOR UPDATE with a timeout to make it reliable without aborting transactions too often.
A trigger won't catch DDL, though. DDL triggers ("event triggers") are coming in Pg 9.3.
See also:
How do I find the last time that a PostgreSQL database has been updated?
How to get 'last modified time' of the table in postgres?
I don't think that would be completely reliable, as a VACUUM would also modify the file(s) containing the table, even though the logical content of the table does not change during a vacuum.
You could create triggers for INSERT, UPDATE and DELETE that maintain the last modified timestamp for each table in another table. This method has a slight performance impact but will provide accurate information.
I have a problem that I haven't been able to come up with a solution for yet. I have a database (actually thousands of them at customer sites) that I want to extract data from periodically. I'd like to do a full data extract one time (select * from table) then after that only get rows that have changed.
The challenge is that there aren't any updated date columns in most of the tables that could be used to constrain the SQL query. I can't use a trigger based approach nor change the application that writes to the database since it's another group that develops the app and they are way backed up already.
I may be able to write to the database tables when doing the data extract, but would prefer not to do that. Does anyone have any ideas for how we might be able to do this?
You will have to programmatically mark the records. I see suggestions of an auto-incrementing field, but that will only catch newly inserted records. How will you track updated or deleted records?
If you only want new inserts, then an auto-incrementing field will do the job: in subsequent data dumps, grab everything since the last value of the auto-increment field and then record the current value.
If you want updates, the minimum I can see is a last_update field and probably a trigger to populate it. If last_update is later than the last data dump, grab that record. This will get inserts and updates but not deletes.
You could try something like an 'instead of delete' trigger, if your RDBMS supports it, that NULLs the last_update field. On subsequent data dumps, grab all records where this field is NULL and then delete them. But there would be problems with this (e.g. how to stop the app from seeing them between the logical and physical delete).
The most foolproof method I can see is a set of history (audit) tables where each change gets written to them. Then you select your data dump from there.
By the way, do you only care about knowing that updates have happened? What if 2 (or more) updates have happened? The history table is the only way I can see of capturing this scenario.
This should isolate rows that have changed since your last backup, assuming DestinationTable is a copy of SourceTable, including the key fields; if not, you could list out the important fields.
SELECT * FROM SourceTable
EXCEPT
SELECT * FROM DestinationTable
In the past I have just added a field to each table and updated it with GETDATE() on every update/insert. The problem is that now I have to keep track of deletes too. I was thinking of just having a table that I would update when anything changed, and adding a trigger to all of the other tables. Ideas??? Thanks!
If you have a history table (A table with the same columns as the original table, plus an auto-increment ID column), you can track everything about changes to the original table. You can track inserts, deletes, and every change. Use triggers for insert, update, and deletes to put a row into the history table. If you don't need all these options, then use those that you do need.
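A sketch of one such trigger, assuming a hypothetical table dbo.MyTable(Id, Name); the history table mirrors it plus an identity column, a change-type flag, and a timestamp:

CREATE TABLE dbo.MyTableHistory (
    HistoryId  int IDENTITY(1,1) PRIMARY KEY,
    Id         int,
    Name       nvarchar(100),
    ChangeType char(1),                       -- 'I', 'U' or 'D'
    ChangedAt  datetime NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER trg_MyTable_History
ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- new/updated rows come from the inserted pseudo-table
    INSERT INTO dbo.MyTableHistory (Id, Name, ChangeType)
    SELECT i.Id, i.Name,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted i;

    -- pure deletes: rows in deleted with nothing in inserted
    INSERT INTO dbo.MyTableHistory (Id, Name, ChangeType)
    SELECT d.Id, d.Name, 'D'
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;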
If you choose to use an IsDeleted flag in the original table, it complicates every query, and leaves your active table with lots of unneeded rows. But that can work, depending on your needs.
I've seen tables designed with a bit field named IsDeleted whose default value is, of course, false. When an item is deleted, this value is set to true. All queries would then need to take this into account:
SELECT blah FROM myTable WHERE IsDeleted=0
This way if you "accidentally" deleted a row, you should be able to bring it back. You could also purge records on say a weekly / monthly / yearly basis.
That is just an idea for you.
If you are using SQL Server 2008, you can take advantage of the new auditing features.
Flag the records with deleted=1 and do not actually delete them. Use an 'instead of delete' trigger that performs an update instead...
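For example, a T-SQL sketch of that idea (table, column, and key names are placeholders):

CREATE TRIGGER trg_MyTable_SoftDelete
ON dbo.MyTable
INSTEAD OF DELETE
AS
BEGIN
    -- turn the DELETE into an UPDATE that flags the rows instead
    UPDATE t
    SET IsDeleted = 1
    FROM dbo.MyTable AS t
    JOIN deleted AS d ON d.Id = t.Id;
END;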
I've also seen a duplicate table with a standardized prefix added to the name. All the deleted rows are moved to the duplicate table. This removes the overhead of keeping but ignoring the rows in the original table.
All actions (insert - update - delete) should be logged in a journalling table. I always log the action, timestamp and user who triggered the action. Adding an Isdelete column to the original table is bad practice.
If you are using SQL 2008 then you can use CDC(Change Data Capture) for the tracking.
The link below gives the full details. If you enable CDC for a particular table, deleted data will be collected automatically.
http://www.simple-talk.com/sql/learn-sql-server/introduction-to-change-data-capture-%28cdc%29-in-sql-server-2008/
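Enabling CDC is roughly as follows (dbo.MyTable is a placeholder; the captured changes, including deletes, then appear in the change table generated for that capture instance):

EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'MyTable',
     @role_name     = NULL;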
I have a trigger on INSERT in MySQL 5.1. I want to know, how many times per second is it called. How can I do this?
Your best bet is to keep inserting into a table.
INSERT INTO trigger_log(query) VALUES(?)
This table has a datetime column that will be populated automatically; then you can run various queries to determine how many times per minute or per hour the trigger fired, which period had the highest number of calls, etc.
Otherwise just update a table that has a column for day, hour, min, counter and just increment the counter for the current day/hour/min.
I don't like the second one as much, because so much potential information is lost, but it would also do what you want.
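A sketch of the first option for MySQL 5.1 (trigger_log and my_table are placeholders, and the existing trigger body is represented by a comment); a TIMESTAMP column with DEFAULT CURRENT_TIMESTAMP records when each firing happened:

CREATE TABLE trigger_log (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    query    VARCHAR(255),
    fired_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER my_table_ai AFTER INSERT ON my_table
FOR EACH ROW
BEGIN
    -- ... existing trigger logic ...
    INSERT INTO trigger_log (query) VALUES ('my_table AFTER INSERT');
END//
DELIMITER ;

-- calls per second (TIMESTAMP has second precision)
SELECT fired_at, COUNT(*) AS calls
FROM trigger_log
GROUP BY fired_at
ORDER BY calls DESC;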
There is no way to directly count the number of times the trigger fires. You could analyse the log files, or you could alter your trigger (since it acts on insert) to write an entry into a log table with an auto_increment id and a datetime. You can then analyze this table for any statistics.