I'm currently reading up on implementing optimistic concurrency checks in DB2. I've mainly been reading http://www.ibm.com/developerworks/data/library/techarticle/dm-0801schuetz/ and http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/index.jsp?topic=%2Fcom.ibm.db2.luw.admin.dbobj.doc%2Fdoc%2Fc0051496.html (as well as some other IBM docs).
Is RID necessary when you already have an ID column? Both links always mention using RID together with the row change token, but RID is just a row ID, so I'm not clear why I need it when the row change token seems similar to SQL Server's rowversion (except that it's tracked per page rather than per row).
It seems that as long as I have a row-change-timestamp column, the row change token granularity will be good enough to prevent most false positives.
Thanks.
The way I read the first article is that you can use any of those features; you don't need to use all of them. In particular, it appears that the row-change timestamp is derived from RID() and ROW CHANGE TOKEN:
Time-based update detection:
This feature is added to SQL using the RID_BIT() and ROW CHANGE TOKEN. To support this feature, the table needs to have a new generated column defined to store the timestamp values. This can be added to existing tables using the ALTER TABLE statement, or the column can be defined when creating a new table. The column's existence also affects the behavior of optimistic locking in that the column is used to improve the granularity of the ROW CHANGE TOKEN from page level to row level, which could greatly benefit optimistic locking applications.
... among other things, the timestamp actually improves the granularity of the ROW CHANGE TOKEN, which makes it easier to deal with updates.
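As a minimal sketch of how the pieces fit together (table and column names are hypothetical, and :token stands in for a host variable your application would bind):

-- Add a generated row-change-timestamp column so the ROW CHANGE TOKEN
-- becomes row-level rather than page-level.
ALTER TABLE orders
  ADD COLUMN row_changed TIMESTAMP NOT NULL
  GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;

-- Read phase: fetch the token along with the data.
SELECT ROW CHANGE TOKEN FOR orders AS token, order_id, status
FROM orders
WHERE order_id = 42;

-- Write phase: update only if nobody changed the row in the meantime.
UPDATE orders
SET status = 'SHIPPED'
WHERE order_id = 42
  AND ROW CHANGE TOKEN FOR orders = :token;  -- 0 rows updated means a concurrent change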
For a number of reasons, please make sure to set the db time to UTC, as DB2 doesn't track timezone (so if you're somewhere that uses DST, the same timestamp can happen twice).
(As a side note, RID() isn't stable on all platforms. On the iSeries version, at least, it changes if somebody re-orgs the table, and you may not always get the results you expect when using it with joins. I'm also not sure about use with mirroring...)
Are you aware that if you update multiple rows in the same SQL statement execution, they will get the same timestamp (if the timestamp is updated in that statement)?
This means that a timestamp column is probably a bad choice for a unique row identifier.
Related
Typically to expose version data you'd have to add a column of type rowversion, but this operation would take quite a while on a large table. I did it anyway in a dev sandbox environment, and indeed it took a while, but I also noticed that the column was populated with some meaningful-looking initial value. I expected it to be all 0's or 1's to indicate that each row is in some sort of "initial" state (after all, there was no history before this), but what I saw were what looked like accurate values for each row (they were all different, non-default-looking values).
Where did they come from? It seems like the rowversion is being tracked behind the scenes anyway, regardless of whether you've exposed it in a column. If so, can I get at it directly without adding the column? Like maybe some kind of system function I can call directly? I really want to avoid downtime, and I also have a huge number of existing queries so migration to a different table/view/combo is not an option (as suggested in other related questions).
The rowversion value is generated when a table with a rowversion (a.k.a. timestamp) column is modified. The rowversion counter is database-scoped, and the last generated value can be retrieved via @@DBTS.
Since the value is incremented only when a rowversion table is modified, I don't think you'll be able to use @@DBTS to avoid the downtime.
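A quick sketch of that behaviour (table and column names are hypothetical):

-- Last rowversion value generated in the current database.
SELECT @@DBTS AS last_generated_rowversion;

-- Adding the column back-fills every existing row with a fresh value,
-- which is where those "meaningful-looking" initial values come from.
ALTER TABLE dbo.Widgets ADD RowVer rowversion;

-- Most recently modified rows first.
SELECT TOP (10) WidgetId, RowVer
FROM dbo.Widgets
ORDER BY RowVer DESC;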
I need to do a cursor update of a table (millions of rows). The script should resume from the last updated row if it is started again (e.g. after a server restart).
What is the best way to handle this? Create a new table that stores the last saved id? Use the table's extended properties to save this info?
I would add an "UpdateDate" or "LastProcessDate" or some similarly named datetime column to your table and use this. When running your update, simply process any number of records that aren't the max UpdateDate or are null:
where UpdateDate < (select max(UpdateDate) from MyTable) or UpdateDate is null
It's probably a good idea to grab the max UpdateDate (@maxUpdateDate?) at the beginning of your process/loop so it does not change during a batch, and similarly to get a new UpdateDate (@newUpdateDate?) at the start of the process to stamp each row as you go. A UTC date will work best to avoid DST time changes.
This data would now be a real attribute of your entity, not metadata or a temporary placeholder, and this would seem to be the best way to be transactionally consistent and otherwise fully ACID. It would also be more self-documenting than other methods, and can be indexed should the need arise. A date can also hold important temporal information about your data, whereas IDs and flags do not.
Doing it in this way would make storing data in other tables or extended properties redundant.
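As a rough sketch of that approach (T-SQL; the table, column names, and the per-row work are stand-ins, not your actual update):

DECLARE @maxUpdateDate datetime2 = (SELECT MAX(UpdateDate) FROM dbo.MyTable);
DECLARE @newUpdateDate datetime2 = SYSUTCDATETIME();  -- UTC avoids DST ambiguity

WHILE 1 = 1
BEGIN
    -- Stamp each processed row so a restart resumes where the last run stopped.
    UPDATE TOP (10000) dbo.MyTable
    SET SomeColumn = UPPER(SomeColumn),   -- placeholder for the real per-row work
        UpdateDate = @newUpdateDate
    WHERE UpdateDate < @maxUpdateDate OR UpdateDate IS NULL;

    IF @@ROWCOUNT = 0 BREAK;              -- nothing left to process
END;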
Some other thoughts:
Don't use a temp table that can disappear in many of the scenarios where you haven't processed all rows (connection loss, server restart, etc.).
Don't use an identity or other ID that can have gaps filled, be reseeded, truncated back to 0, etc.
The idea of having a max value stored in another table (essentially rolling your own sequence object) has generally been frowned upon and shown to be a dubious practice in SQL Server from what I've read, though I'm oddly having trouble locating a good article right now.
If at all possible, avoid cursors in favor of batches, and generally avoid batches in favor of full set-based updates.
sp_updateextendedproperty does seem to behave correctly with a rollback, though I'm not sure how locking works with that -- just FYI if you ultimately decide to go down that path.
Suppose you have a reasonably large (for local definitions of “large”), but relatively stable table.
Right now, I want to take a checksum of some kind (any kind) of the contents of the entire table.
The naïve approach might be to walk the entire table, taking a checksum (say, MD5) of the concatenation of every column in each row, then concatenating those row checksums and taking the MD5 of the result.
From the client side, that might be optimized a little by streaming each column's value into the MD5 routine and mutating a single running digest, rather than building up one huge string.
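In Postgres the naïve version can at least be pushed server-side; something like this sketch (assuming the table has an id column to give a stable ordering — the names are illustrative):

-- Hash the text form of every row in a stable order, then hash the aggregate.
SELECT md5(string_agg(md5(t::text), '' ORDER BY t.id)) AS table_checksum
FROM my_table AS t;

It still has to read every row, of course, which is exactly the cost I'd like to avoid.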
The reason for wanting this is that at some point in the future we want to reconnect to the database and verify that no other users have mutated the table; that includes INSERTs, UPDATEs, and DELETEs.
Is there a nicer way to determine if any change/s have occurred to a particular table? Or a more efficient/faster way?
Update/clarification:
We are not able/permitted to make any alterations to the table itself (e.g. adding a “last-updated-at” column, triggers, and so forth)
(This is for Postgres, if it helps. I'd prefer to avoid poking transaction journals or anything like that, but if there's a way to do so, I'm not against the idea.)
Adding columns and triggers is really quite safe
While I realise you've said it's a large table in a production DB that you can't modify, I want to explain how you can make a very low-impact change.
In PostgreSQL, an ALTER TABLE ... ADD COLUMN of a nullable column takes only moments and doesn't require a table re-write. It does require an exclusive lock, but the main consequence of that is that it can take a long time before the ALTER TABLE can actually proceed; it won't hold anything else up while it waits for a chance to get the lock.
The same is true of creating a trigger on the table.
This means that it's quite safe to add a modified_at or created_at column and an associated trigger function to maintain them to a live table that's in intensive real-world use. Rows added before the column was created will be null, which makes perfect sense since you don't know when they were added/modified. Your trigger will set the modified_at field whenever a row changes, so they'll get progressively filled in.
For your purposes it's probably more useful to have a trigger-maintained side-table that tracks the timestamp of the last change (insert/update/delete) anywhere in the table. That'll save you from storing a whole bunch of timestamps on disk and will let you discover when deletes have happened. A single-row side-table with a row you update on each change using a FOR EACH STATEMENT trigger will be quite low-cost. It's not a good idea for most tables because of contention - it essentially serializes all transactions that attempt to write to the table on the row update lock. In your case that might well be fine, since the table is large and rarely updated.
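A minimal sketch of that single-row side table and statement-level trigger (all names here are made up):

CREATE TABLE my_table_last_change (
    tablename  text PRIMARY KEY,
    changed_at timestamptz NOT NULL
);
INSERT INTO my_table_last_change VALUES ('my_table', now());

CREATE OR REPLACE FUNCTION note_my_table_change() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- One row per tracked table; every write statement bumps the timestamp.
    UPDATE my_table_last_change SET changed_at = now()
    WHERE tablename = 'my_table';
    RETURN NULL;  -- return value is ignored for statement-level AFTER triggers
END;
$$;

CREATE TRIGGER my_table_track_changes
AFTER INSERT OR UPDATE OR DELETE ON my_table
FOR EACH STATEMENT EXECUTE PROCEDURE note_my_table_change();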
A third alternative is to have the side table accumulate a running log of the timestamps of insert/update/delete statements, or even of the individual rows. This allows your client to read the change-log table instead of the main table and make small changes to its cached data rather than invalidating and re-reading the whole cache. The downside is that you have to have a way to periodically purge old and unwanted change-log records.
So... there's really no operational reason why you can't change the table. There may well be business policy reasons that prevent you from doing so even though you know it's quite safe, though.
... but if you really, really, really can't:
Another option is to use the existing "md5agg" extension: http://llg.cubic.org/pg-mdagg/ . Or, if you built PostgreSQL from source, to apply the patch currently circulating on pgsql-hackers that adds an "md5_agg" aimed at the next release.
Logical replication
The bi-directional replication for PostgreSQL project has produced functionality that allows you to listen for and replay logical changes (row inserts/updates/deletes) without requiring triggers on tables. The pg_receivellog tool would likely suit your purposes well when wrapped with a little scripting.
The downside is that you'd have to run a patched PostgreSQL 9.3, so I'm guessing if you can't change a table, running a bunch of experimental code that's likely to change incompatibly in future isn't going to be high on your priority list ;-) . It's included in the stock release of 9.4 though, see "changeset extraction".
Testing the relfilenode timestamp won't work
You might think you could look at the modified timestamp(s) of the file(s) that back the table on disk. This won't be very useful:
The table is split into extents, individual files that by default are 1GB each. So you'd have to find the most recent timestamp across them all.
Autovacuum activity will cause these timestamps to change, possibly quite a while after corresponding writes happened.
Autovacuum must periodically do an automatic 'freeze' of table contents to prevent transaction ID wrap-around. This involves progressively rewriting the table and will naturally change the timestamp. This happens even if nothing's been added for potentially quite a long time.
Hint-bit setting results in small writes during SELECT. These writes will also affect the file timestamps.
Examine the transaction logs
In theory you could attempt to decode the transaction logs with pg_xlogreader and find records that affect the table of interest. You'd have to try to exclude activity caused by vacuum, full page writes after hint bit setting, and of course the huge amount of activity from every other table in the entire database cluster.
The performance impact of this is likely to be huge, since every change to every database on the entire system must be examined.
All in all, adding a trigger on a table is trivial in comparison.
What about creating a trigger on insert/update/delete events on the table? The trigger could call a function that inserts a timestamp into another table which would mark the time for any table-changing event.
The only concern would be an UPDATE that writes the same data that's already in the table. The trigger would fire even though the table didn't really change. If you're concerned about that case, you could have the trigger call a function that generates a checksum of just the updated rows and compares it against a previously generated checksum, which would usually be more efficient than scanning and checksumming the whole table.
Postgres documentation on triggers here: http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html
If you just want to know when a table last changed without doing anything to it, you can look at the timestamps of the actual file(s) on your database server.
SELECT relfilenode FROM pg_class WHERE relname = 'your_table_name';
If you need more detail on exactly where it's located, you can use:
select t.relname,
t.relfilenode,
current_setting('data_directory')||'/'||pg_relation_filepath(t.oid)
from pg_class t
join pg_namespace ns on ns.oid = t.relnamespace
where relname = 'your_table_name';
Since you did mention that it's quite a big table, it will definitely be broken into segments (and TOAST tables), but you can use the relfilenode as your base point and do an ls -ltr relfilenode.* or relfilenode_*, where relfilenode is the actual value from the query above.
These files get updated at every checkpoint if something occurred on that table, so depending on how often your checkpoints occur, that's when you'll see the timestamps update; if you haven't changed the default checkpoint interval, that's within a few minutes.
Another trivial, but imperfect way to check if INSERTS or DELETES have occurred is to check the table size:
SELECT pg_total_relation_size('your_table_name');
I'm not entirely sure why a trigger is out of the question, though, since you don't have to make it retroactive. If your goal is to ensure nothing changes, a trivial trigger that just catches an insert, update, or delete event could write a timestamp to another table to record the attempt without causing any activity on the actual table. Mind you, simply knowing that something changed doesn't by itself ensure that nothing changes.
Anyway, hope this helps you in this whacky problem you have...
A common practice would be to add a modified column. If it were MySQL, I'd use TIMESTAMP as the datatype for the field (it updates to the current date on each update). Postgres must have something similar.
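Postgres doesn't have MySQL's ON UPDATE CURRENT_TIMESTAMP shortcut, but the usual equivalent is a small BEFORE UPDATE trigger; a sketch with made-up names:

ALTER TABLE my_table ADD COLUMN modified timestamptz;

CREATE OR REPLACE FUNCTION set_modified() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    NEW.modified := now();  -- stamp the row on every update
    RETURN NEW;
END;
$$;

CREATE TRIGGER my_table_set_modified
BEFORE UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE set_modified();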
I have got a table which has an id (primary key with auto increment), a uid (a key referring to a user's id, for example), and something else which for my question won't matter.
I want to have, let's call it, a separate auto-increment sequence on id for each uid value.
So, I will add an entry with uid 10, and the id field for this entry will be 1 because there were no previous entries with a value of 10 in uid. Then I will add a new one with uid 4 and its id will be 3 because there were already two entries with uid 4.
...A very obvious explanation, but I am trying to be as explicit and clear as I can to demonstrate the idea... clearly.
What SQL engine can provide such a functionality natively? (non Microsoft/Oracle based)
If there is none, how could I best replicate it? Triggers perhaps?
Does this functionality have a more suitable name?
In case you know of a non-SQL database engine providing such functionality, name it anyway; I am curious.
Thanks.
MySQL's MyISAM engine can do this. See their manual, in section Using AUTO_INCREMENT:
For MyISAM tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
The docs go on after that paragraph, showing an example.
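The manual's example is along these lines (column names are illustrative); id restarts from 1 for each distinct grp value:

CREATE TABLE animals (
    grp  ENUM('fish','mammal','bird') NOT NULL,
    id   MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp, id)
) ENGINE=MyISAM;

INSERT INTO animals (grp, name) VALUES
    ('mammal','dog'), ('mammal','cat'),
    ('bird','penguin'), ('fish','lax'),
    ('mammal','whale'), ('bird','ostrich');

-- id is 1,2,3 within 'mammal', 1,2 within 'bird', and 1 within 'fish'.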
The InnoDB engine in MySQL does not support this feature, which is unfortunate because it's better to use InnoDB in almost all cases.
You can't emulate this behavior using triggers (or any SQL statements limited to transaction scope) without locking tables on INSERT. Consider this sequence of actions:
Mario starts transaction and inserts a new row for user 4.
Bill starts transaction and inserts a new row for user 4.
Mario's session fires a trigger to compute MAX(id)+1 for user 4. It gets 3.
Bill's session fires a trigger to compute MAX(id)+1 for user 4. It also gets 3.
Bill's session finishes his INSERT and commits.
Mario's session tries to finish his INSERT, but the row with (userid=4, id=3) now exists, so Mario gets a primary key conflict.
In general, you can't control the order of execution of these steps without some kind of synchronization.
The solutions to this are either:
Get an exclusive table lock. Before trying an INSERT, lock the table. This is necessary to prevent concurrent INSERTs from creating a race condition like the one above. It's necessary to lock the whole table: since you're trying to restrict INSERTs, there's no specific row to lock (if you were governing access to a given row with UPDATE, you could lock just that row). But locking the table makes all access to it serial, which limits your throughput (see the sketch after this list).
Do it outside transaction scope. Generate the id number in a way that won't be hidden from two concurrent transactions. By the way, this is what AUTO_INCREMENT does. Two concurrent sessions will each get a unique id value, regardless of their order of execution or order of commit. But tracking the last generated id per userid requires access to the database, or a duplicate data store. For example, a memcached key per userid, which can be incremented atomically.
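A rough sketch of the table-lock route (MySQL syntax, hypothetical table "items"); note that MySQL requires a separate lock for the aliased read of the same table:

-- Serialize inserts so MAX(id)+1 can't race with another session.
LOCK TABLES items WRITE, items AS i READ;

INSERT INTO items (uid, id, payload)
SELECT 4, COALESCE(MAX(i.id), 0) + 1, 'new row'
FROM items AS i
WHERE i.uid = 4;

UNLOCK TABLES;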
It's relatively easy to ensure that inserts get unique values. But it's hard to ensure they will get consecutive ordinal values. Also consider:
What happens if you INSERT in a transaction but then roll back? You've allocated id value 3 in that transaction, and then I allocated value 4, so if you roll back and I commit, now there's a gap.
What happens if an INSERT fails because of other constraints on the table (e.g. another column is NOT NULL)? You could get gaps this way too.
If you ever DELETE a row, do you need to renumber all the following rows for the same userid? What does that do to your memcached entries if you use that solution?
SQL Server should allow you to do this. If you can't implement this using a computed column (probably not - there are some restrictions), surely you can implement it in a trigger.
MySQL also would allow you to implement this via triggers.
In a comment you ask the question about efficiency. Unless you are dealing with extreme volumes, storing an 8 byte DATETIME isn't much of an overhead compared to using, for example, a 4 byte INT.
It also massively simplifies your data inserts, and it copes with records being deleted without creating 'holes' in your sequence.
If you DO need this, be careful with the field names. If you have uid and id in a table, I'd expect id to be unique in that table, and uid to refer to something else. Perhaps, instead, use the field names property_id and amendment_id.
In terms of implementation, there are generally two options.
1). A trigger
Implementations vary, but the logic remains the same. As you don't specify an RDBMS (other than not MS/Oracle), the general logic is simple (a sketch follows the caveats below)...
Start a transaction (often this is implicitly already started inside triggers)
Find the MAX(amendment_id) for the property_id being inserted
Update the newly inserted value with MAX(amendment_id) + 1
Commit the transaction
Things to be aware of are...
- multiple records being inserted at the same time
- records being inserted with amendment_id being already populated
- updates altering existing records
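A rough MySQL sketch of that trigger logic (all names are hypothetical; the race condition described in the other answer still applies unless inserts are serialized):

CREATE TRIGGER property_amendment_seq
BEFORE INSERT ON property_amendments
FOR EACH ROW
SET NEW.amendment_id = COALESCE(
        NEW.amendment_id,  -- respect an amendment_id that's already populated
        (SELECT COALESCE(MAX(a.amendment_id), 0) + 1
         FROM property_amendments AS a
         WHERE a.property_id = NEW.property_id));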
2). A Stored Procedure
If you use a stored procedure to control writes to the table, you gain a lot more control.
Implicitly, you know you're only dealing with one record.
You simply don't provide a parameter for DEFAULT fields.
You know what updates / deletes can and can't happen.
You can implement all the business logic you like without hidden triggers
I personally recommend the Stored Procedure route, but triggers do work.
It is important to get your data types right.
What you are describing is a multi-part key. So use a multi-part key. Don't try to encode everything into a magic integer; you will poison the rest of your code.
If a record is identified by (entity_id,version_number) then embrace that description and use it directly instead of mangling the meaning of your keys. You will have to write queries which constrain the version number but that's OK. Databases are good at this sort of thing.
version_number could be a timestamp, as a_horse_with_no_name suggests. This is quite a good idea. There is no meaningful performance disadvantage to using timestamps instead of plain integers. What you gain is meaning, which is more important.
You could maintain a "latest version" table which contains, for each entity_id, only the record with the most-recent version_number. This will be more work for you, so only do it if you really need the performance.
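A sketch of that composite-key shape and the "latest version" query (generic SQL, hypothetical names):

CREATE TABLE amendments (
    entity_id      integer   NOT NULL,
    version_number timestamp NOT NULL,
    payload        text,
    PRIMARY KEY (entity_id, version_number)
);

-- Latest version of each entity; the composite primary key supports this well.
SELECT a.*
FROM amendments AS a
WHERE a.version_number = (
    SELECT MAX(b.version_number)
    FROM amendments AS b
    WHERE b.entity_id = a.entity_id
);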
As a bit of background, I'm working with an SQLite database that is being consumed by a closed-source UI that doesn't order the results by the handy timestamp column (gee, thanks Nokia!) - it just uses the default ordering, which corresponds to the primary key, a vanilla auto-incrementing 'id' column.
I easily have a map of the current and desired id values, but applying the mapping is my current problem. It seems I cannot swap the values as an update processes rows one at a time, which would temporarily result in a duplicate value. I've tried using an update statement with case clauses using a temporary out-of-sequence value, but as each row is only processed once this obviously doesn't work. Thus I've reached the point of needing 3 update statements to swap a pair of values, which is far from ideal as I want this to scale well.
Compounding this, a number of triggers are set up, which makes adding/deleting rows via a new table a complex problem unless I can disable them for the duration of any additions/deletions resulting from table duplication & deletion, which is why I haven't pursued that avenue yet.
I'm thinking my next line of enquiry will be a new column with the new ids, then finding a way to move the primary key to it before removing the old column, but I'm throwing this out there in case anyone can offer a better solution that will save me some time :)
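For reference, the three-statement swap mentioned above looks like this for a single pair of rows (SQLite; the table name and id values are made up): park one row on a temporary out-of-range id, then rotate.

UPDATE tracks SET id = -1 WHERE id = 10;  -- park row 10 out of the way
UPDATE tracks SET id = 10 WHERE id = 20;
UPDATE tracks SET id = 20 WHERE id = -1;  -- complete the swap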