difference before and after trigger in oracle - sql

Can somebody explain the difference between "before" and "after" triggers in Oracle 10g, with an example?

First, I'll start my answer by defining a trigger: a trigger is a stored procedure that is run when a row is added, modified, or deleted.
Triggers can run BEFORE the action is taken or AFTER the action is taken.
BEFORE triggers are usually used when validation needs to take place before accepting the change. They run before any change is made to the database. Let's say you run a database for a bank. You have a table accounts and a table transactions. If a user makes a withdrawal from his account, you would want to make sure that the user has enough funds in his account for the withdrawal. The BEFORE trigger will allow you to do that and prevent the row from being inserted into transactions if the balance in accounts is not sufficient.
AFTER triggers are usually used when information needs to be updated in a separate table due to a change. They run after changes have been made to the database (not necessarily committed). Let's go back to our back example. After a successful transaction, you would want balance to be updated in the accounts table. An AFTER trigger will allow you to do exactly that.
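A minimal sketch of both triggers for that bank scenario, assuming hypothetical tables accounts(account_id, balance) and transactions(account_id, amount):
CREATE OR REPLACE TRIGGER trg_check_balance
BEFORE INSERT ON transactions
FOR EACH ROW
DECLARE
  v_balance accounts.balance%TYPE;
BEGIN
  SELECT balance INTO v_balance
  FROM accounts
  WHERE account_id = :new.account_id;
  -- Reject the withdrawal before the row ever reaches transactions.
  IF v_balance < :new.amount THEN
    RAISE_APPLICATION_ERROR(-20001, 'Insufficient funds');
  END IF;
END;
/

CREATE OR REPLACE TRIGGER trg_update_balance
AFTER INSERT ON transactions
FOR EACH ROW
BEGIN
  -- The transaction row now exists; keep the running balance in sync.
  UPDATE accounts
  SET balance = balance - :new.amount
  WHERE account_id = :new.account_id;
END;
/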

I'm not completely sure what you're interested in knowing, so I'll keep this fundamental.
Before Triggers
As the name suggests, these triggers fire before the row is created in the table. Consequently, since the row has not yet been created, you have full access to modify the :new.column_name values. This allows for data cleansing and uniformity when unwanted/malformed data is being inserted or updated. This is just a basic example, but you need to use a BEFORE trigger any time you need to modify the ":new" data.
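For instance, a small sketch of such cleansing, assuming a hypothetical customers table with an email column:
CREATE OR REPLACE TRIGGER trg_customers_clean
BEFORE INSERT OR UPDATE ON customers
FOR EACH ROW
BEGIN
  -- Normalize the incoming value before it is written;
  -- assigning to :new is only legal in a BEFORE row trigger.
  :new.email := LOWER(TRIM(:new.email));
END;
/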
After Triggers
Since an AFTER trigger fires once the row has already been created, these triggers are typically used when you want some logic to run in response to the row. For example, if you have an address table and a user updates his/her address, you may want to update the address reference IDs in an xref table upon creation (if you happen to be retaining all old addresses as well). Also, unlike the BEFORE trigger, you do not have access to modify any of the column values, since the row already exists in the table.
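A sketch of that xref idea, with assumed names addresses(address_id, user_id) and address_xref(user_id, current_address_id):
CREATE OR REPLACE TRIGGER trg_address_xref
AFTER INSERT ON addresses
FOR EACH ROW
BEGIN
  -- Point the xref at the newly created address; old address rows are kept.
  UPDATE address_xref
  SET current_address_id = :new.address_id
  WHERE user_id = :new.user_id;
END;
/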

BEFORE triggers are used when the trigger action should determine whether or not the triggering statement should be allowed to complete. By using BEFORE triggers, the user can eliminate unnecessary processing of the triggering statement.
AFTER triggers, on the other hand, are used when the triggering statement should complete before the trigger action executes.

Related

SQL Server - get last updated time for table

I need to know when the data in a table was last modified (data inserted, updated, or deleted). The common answer is to extract the data from sys.dm_db_index_usage_stats, but I don't have access to that view, and as I understand it, access to it is a server-level permission; since this is on a corporate shared server, the odds of the IT department granting me access are up there with pigs flying.
Is there a way to get this information, that does not require greatly elevated privileges?
Update #1
Some additional information: What I am doing is caching the contents of this table locally, and I need to know when to update my local cache. This means:
I don't need an actual timestamp, just any value that I can compare against a locally-cached value that tells me "things have changed, update the cache"
If this value changes too often (e.g. it gets reset every time they restart the server, which is extremely rare), that's OK; it just means I do an extra cache update that I didn't actually need to do
Update #2
I should have done this early on, but I just assumed I would be able to create tables as needed ... but I'm not. Permission not granted. So neither of the proposed methods will work :-(
Here's my new thought: I can call
select checksum_agg(binary_checksum(*))
and get a checksum on the entire remote table. Then, if the checksum changes, I know the table has changed and I can update my local cache. I've seen the problems with checksum, but using HASHBYTES sounds like it would be much more complicated and much slower.
This is working; the problem is that when the checksum changes, I have to reload the entire cache. My table has so many rows that returning a checksum per row takes an unacceptably long time. Is there a way to use the "OVER" clause to get maybe 10 checksums: the first checksum for the first tenth of the rows, and so on?
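One possible approach, sketched here under the assumption that the table has a unique key column (UniqueID and the table name below are placeholders): use NTILE to split the rows into 10 roughly equal buckets in key order and aggregate a checksum per bucket, then reload only the buckets whose checksum changed.
SELECT bucket,
       CHECKSUM_AGG(row_checksum) AS bucket_checksum
FROM (
    SELECT BINARY_CHECKSUM(*) AS row_checksum,
           NTILE(10) OVER (ORDER BY UniqueID) AS bucket
    FROM dbo.TheirTable
) AS t
GROUP BY bucket;
-- Caveat: inserts/deletes shift rows between buckets, so one change can
-- invalidate more than one bucket; for cache refresh that just means a
-- little extra reloading, which matches the stated tolerance.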
If you can modify their schema, then you should be able to add a trigger.
Once you add a lastmodified column, you can use this trigger to set the time whenever a record changes:
CREATE TRIGGER [dbo].[TR_TheirTable_Timestamp]
ON [dbo].[TheirTable] FOR UPDATE
AS
BEGIN
    -- Inserted holds the post-update values of the modified rows;
    -- stamp each corresponding row with the current time.
    UPDATE dbo.TheirTable
    SET lastmodified = GETDATE()
    FROM Inserted
    WHERE dbo.TheirTable.UniqueID = Inserted.UniqueID
END
The reason I do it only for UPDATE, and not INSERT, is that I can see that a record is new without a timestamp, but for a modified record I need something to compare against the time I last cached it. If you want the trigger to fire on insert as well, then
on [dbo].[TheirTable] for insert,update
would work
If you just wanted to know when the table was updated, the trigger could write the table name and date to another table, and you wouldn't have to modify their schema.
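A sketch of that variant, assuming you are allowed to create a small log table elsewhere (names are placeholders):
CREATE TABLE dbo.TableChangeLog (
    TableName  sysname   NOT NULL,
    ChangedAt  datetime  NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER dbo.TR_TheirTable_Log
ON dbo.TheirTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- One row per modifying statement; poll MAX(ChangedAt) to detect changes.
    INSERT INTO dbo.TableChangeLog (TableName) VALUES ('TheirTable');
END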

SQL - When was my table last changed?

I want to find when the last INSERT, UPDATE or DELETE statement was performed on a table (for now, in the future I want to do this in multiple tables) in an Oracle database.
I created a table and then I updated one of its rows. Now I have the following query:
SELECT SCN_TO_TIMESTAMP(ora_rowscn) from test_table;
This query returns the timestamps of each row, and for each of them it gives the time when they were first created.
But the row that I've updated has the same timestamp as the others. Why? Shouldn't the timestamp have been updated?
ORA_ROWSCN is not the right solution for this. Unless the table was created with ROWDEPENDENCIES, the SCN is tracked at the data-block level rather than per row, so it is not necessarily reliable at row granularity. Moreover, it's not going to be useful at all for deleted rows.
If you have a real need to know when DML changes were made to a table, you should look at Oracle's auditing feature.
An alternative is to use triggers to record when changes are made to the table. Since you say you only care about the time of the most recent change, you can just create a single-column table to record the time, and write a trigger that fires on any DML statement to maintain it. If you're doing this in a production environment or even just in one where more than one session might be modifying the table, you'd want to think about how it should work when concurrent changes are made. You could force the table to have at most one row, but that would serialize every change to the table. You could allow each session to insert a separate row and take the max value when querying it, but then you probably want to think about clearing out old rows from time to time.
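A minimal sketch of that trigger approach for the asker's test_table, with an assumed helper table table_changes:
CREATE TABLE table_changes (
  table_name  VARCHAR2(30)  NOT NULL,
  changed_at  TIMESTAMP     NOT NULL
);

CREATE OR REPLACE TRIGGER trg_test_table_changed
AFTER INSERT OR UPDATE OR DELETE ON test_table
BEGIN
  -- Statement-level trigger: fires once per DML statement, one row each time.
  INSERT INTO table_changes (table_name, changed_at)
  VALUES ('TEST_TABLE', SYSTIMESTAMP);
END;
/

-- Most recent change:
SELECT MAX(changed_at) FROM table_changes WHERE table_name = 'TEST_TABLE';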

basic trigger lock issue

I have one question regarding triggers.
The scenario is like this:
Create Procedure dbo.MyProc   -- procedure name and parameter assumed for illustration
    @b int
as
begin
    Insert into XYZ (a) values (@b)
end
Now I have placed an AFTER INSERT trigger on table XYZ.
In that trigger there is business logic which takes 2-3 seconds to execute; the business logic is performed against other database tables, not on the XYZ table.
So what I need to confirm here is: once the INSERT is done, will table XYZ be ready to accept inserts for other records, or will it be locked until the trigger has completed?
EDIT
I have done some more research on this issue, which I explain below.
In the INSERT trigger, I have put my business logic and also the line below:
WAITFOR DELAY '00:01'
Now when I try to execute the above SP, the SP does not complete for 1 minute (as I have specified a delay of 1 minute in the trigger), and table XYZ is also locked during this period.
So this brings me to the conclusion that the trigger does lock the table even if you are not using the same table in the trigger. Am I right? Does anyone have a different opinion here?
The question and answer linked to by @Hallainzil show one approach:
Wrap all table INSERTs and UPDATEs into Stored Procedures
The SP can then complete the additional Business Logic without maintaining a lock
There is also another approach which is slightly messier in several ways, but also more flexible in many ways:
Keep a record of which fields have been INSERTED or UPDATED
Have an agent job fire repeatedly or overnight to process those changes
You may use a trigger to keep that record, maybe with a LastModifiedTime field, a hasBeenProcessed field, or even a separate tracking table. It can be done in many ways, and is relatively lightweight to maintain (none of the business logic happens yet).
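A sketch of that tracking-table flavor, with hypothetical names (XYZ_Pending, and a key column Id on XYZ):
CREATE TABLE dbo.XYZ_Pending (
    XYZId     int       NOT NULL,
    QueuedAt  datetime  NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER dbo.TR_XYZ_Track
ON dbo.XYZ
AFTER INSERT
AS
BEGIN
    -- Record only which rows arrived, so the lock on XYZ is released
    -- almost immediately; the agent job does the 2-3 second work later.
    INSERT INTO dbo.XYZ_Pending (XYZId)
    SELECT Id FROM Inserted;
END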
This releases your table from any locks as quickly as possible. It also means that you are able to deal with logins that have the ability to write directly to your table, circumventing your Stored Procedures.
The downside is that your INSERTs/UPDATEs and your business logic are processed asynchronously. Your other SQL code may need to check whether or not the business logic has been completed yet, rather than just assuming that both the INSERT and the business logic always happen atomically.
So, yes, there are ways of avoiding this locking. But you introduce additional constraints and/or complexity to your model. This is by no means a bad thing, but it needs to be considered within your overall design.

SQL Server 2008 Express locking

OK so I have read a fair amount about SQL Server's locking stuff, but I'm struggling to understand it all.
What I want to achieve is this:
I need to be able to lock a row when user A SELECTs it
If user B then tries to SELECT it, my winforms .net app needs to set all the controls on the relevant form to be disabled, so the user can't try and update. Also it would be nice if I could throw up a messagebox for user B, stating that user A is the person that is using that row.
So basically User B needs to be able to SELECT the data, but when they do so, they should also get a) whether the record is locked and b) who has it locked.
I know people are gonna say I should just let SQL Server deal with the locking, but I need User B to know that the record is in use as soon as they SELECT it, rather than finding out when they UPDATE - by which time they may have entered data into the form, giving me inconsistency.
Also any locks need to allow SELECTs to still happen - so when user B does his SELECT, rather than just being thrown an exception and receiving no/incomplete data, he should still get the data, and be able to view it, but just not be able to update it.
I'm guessing this is pretty basic stuff, but there's so much terminology involved with SQL Server's locking that I'm not familiar with that it makes reading about it pretty difficult at the moment.
Thanks
To create this type of 'application lock', you may want to use a table called Locks and insert key, userid, and table names into it.
When your select comes along, join to the Locks table and use the presence of a matching row to indicate that the record is locked.
I would also recommend adding a 'RowVersion' column to your table you wish to protect. This field will assist in identifying if you are updating or querying a row that has changed since you last selected it.
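A sketch of that Locks table and the join; MyTable, its id column, and the key types are assumptions:
CREATE TABLE dbo.Locks (
    TableName  sysname      NOT NULL,
    RecordKey  int          NOT NULL,
    LockedBy   varchar(50)  NOT NULL,
    LockedAt   datetime     NOT NULL DEFAULT GETDATE(),
    PRIMARY KEY (TableName, RecordKey)   -- at most one lock per record
);

-- User B's SELECT still returns the data, plus who (if anyone) holds the lock:
-- (@id: key of the row being opened)
SELECT t.*, l.LockedBy
FROM dbo.MyTable AS t
LEFT JOIN dbo.Locks AS l
    ON l.TableName = 'MyTable' AND l.RecordKey = t.id
WHERE t.id = @id;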
This isn't really what SQL Server locking is for - ideally you should only be keeping a transaction (and therefore a lock) open for the absolute minimum needed to complete an atomic operation against that database - you certainly shouldn't be holding locks while waiting for user input.
You would be better served keeping track of these sorts of locks yourself by (for example) adding a locked bit column to the table in question along with a locked_by varchar column to keep track of who has the row locked.
The first user should UPDATE the row to indicate that the row is locked and who has it locked:
UPDATE MyTable
SET locked = 1,              -- assignments are separated by commas, not AND
    locked_by = @me          -- @me: the current user's identifier
WHERE locked = 0
  AND id = @row_id           -- key column assumed; targets the one row being locked
The locked = 0 check is there to protect against potential race conditions and make sure that you don't update a record that someone else has already locked.
This first user then does a SELECT to return the data and ensure that they did really manage to lock the row.
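One way to combine the lock attempt and the read, sketched with the same assumed columns (id is a placeholder key):
DECLARE @me varchar(50) = SUSER_SNAME();  -- who we are (placeholder scheme)
DECLARE @row_id int = 42;                 -- key of the row being opened

UPDATE MyTable
SET locked = 1, locked_by = @me
WHERE locked = 0 AND id = @row_id;

-- @@ROWCOUNT = 1 means we won the lock; 0 means someone else holds it.
SELECT *,
       CASE WHEN locked_by = @me THEN 1 ELSE 0 END AS i_hold_the_lock
FROM MyTable
WHERE id = @row_id;
The app can enable or disable the form controls from i_hold_the_lock, and show locked_by in the message box when it is someone else.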

Postgresql Concurrency

In a project that I'm working on, there's a table with an "on update" trigger that checks whether a boolean column has changed (e.g. false -> true = do some action). But this action may only be performed once per row.
There will be multiple clients accessing the database, so I can assume that eventually multiple clients will try to update the same column of the same row in parallel.
Does the "update" trigger itself handle the concurrency itself, or I need to do it in a transaction and manually lock the table?
Triggers don't handle concurrency, and PostgreSQL should do the right thing whether or not you use explicit transactions.
PostgreSQL uses MVCC with row-level write locks, which means the first person to actually update the row gets a lock on that row. If a second person tries to update the row, their UPDATE statement waits to see if the first commits their change or rolls back.
If the first person commits, what happens depends on the isolation level. Under REPEATABLE READ or SERIALIZABLE, the second person gets a serialization error rather than their change going through and obliterating a change that might have been interesting to them. Under the default READ COMMITTED level, the second update re-reads the committed row and proceeds, but the trigger then sees the already-changed value in OLD, so a guard like "OLD.flag = false AND NEW.flag = true" will not fire the action a second time.
If the first person rolls back, the second person's update un-blocks and goes through normally, because now it's not going to overwrite anything.
The second person can also take the row lock up front with SELECT ... FOR UPDATE NOWAIT, which makes the error happen immediately instead of blocking when the row is held by an unresolved change.
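A minimal sketch of such a trigger, assuming a hypothetical table items(id, flag boolean) and a placeholder action function do_some_action:
CREATE OR REPLACE FUNCTION run_once_on_flag() RETURNS trigger AS $$
BEGIN
    -- Only the transition false -> true fires the action; a concurrent
    -- second update that sees flag already true skips this branch.
    IF OLD.flag = false AND NEW.flag = true THEN
        PERFORM do_some_action(NEW.id);  -- do_some_action is a placeholder
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_flag_once
BEFORE UPDATE OF flag ON items
FOR EACH ROW
EXECUTE PROCEDURE run_once_on_flag();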