SQL Server - determining statement starting time from within trigger

I'm trying to find a good solution for a task and haven't found one yet. So my question is: is it possible to get the 'statement starting time' inside the trigger context? Basically, the starting time of the UPDATE (INSERT or DELETE) statement which caused the trigger to fire?
I've tried a few dynamic management views, like sys.dm_tran_active_transactions, sys.dm_exec_requests and a couple of others.
I can get the starting time of the full SQL batch, or the starting time of the transaction, from the views I mentioned (using the current @@SPID), but I can't find the starting time of the statement that fired the trigger.
Do you know if it's even possible in SQL Server?

You can use the GETDATE() function in your trigger, like:
select getdate()
from inserted i
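Note that GETDATE() is evaluated when the trigger body runs, i.e. after the firing statement has already done its work, so it only approximates the statement's start. A minimal sketch of capturing it once, plus reading the request start time for the session (the table and trigger names are hypothetical):
CREATE TRIGGER trg_SomeTable_Times
ON dbo.SomeTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    DECLARE @fired_at DATETIME2 = SYSDATETIME();  -- captured once, as early as possible

    -- start of the current batch/RPC for this session (not the individual statement)
    DECLARE @request_started DATETIME;
    SELECT @request_started = start_time
    FROM sys.dm_exec_requests
    WHERE session_id = @@SPID;
END;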

Related

What will happen if I execute a rollback statement on a simple select query

I executed a simple SQL performance query in Oracle SQL Developer, to retrieve the sessions currently running in the database.
But I accidentally clicked the rollback icon, and the transaction was rolled back.
Could you please tell me what happens to the entire database after this?
Rolling back a SELECT does nothing, as long as you didn't make uncommitted changes to the database.
In Oracle clients, when you run a query that modifies the database, the query is first run; a separate step is then needed to commit the transaction.
MS SQL Server has a similar concept, where you can run a safe transaction like:
begin tran
delete from table where val > 5
rollback -- or commit
This allows you to look at the number of records your statement affected before committing the transaction. You can then choose to roll back or commit.
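A minimal sketch of that pattern (the table and column names are placeholders):
BEGIN TRAN;
DELETE FROM dbo.SomeTable WHERE val > 5;
SELECT @@ROWCOUNT AS rows_deleted;  -- check how many rows were affected
-- happy with the count? run COMMIT; otherwise:
ROLLBACK;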
What exactly did you roll back? That "simple SQL performance query", or "sessions running currently in the database in Oracle SQL Developer"?
If the former, nothing happened; a SELECT doesn't make any changes anyway. If it were a SELECT ... FOR UPDATE, the locks would have been released.
If the latter, then any changes you made in the database since the previous COMMIT were rolled back (as you didn't roll back to a savepoint), so again the database is left as if nothing happened.
What happens to the entire database after this?
It returned to the state it was in earlier, as if you hadn't touched anything at all.

Getting newest not handled, updated rows using txid

I have a table in my PostgreSQL database (actually it's multiple tables, but for the sake of simplicity let's assume it's just one) and multiple clients that periodically need to query the table to find changed items. These are updated or inserted items (deleted items are handled by first marking them for deletion and then actually deleting them after a grace period).
Now the obvious solution would be to keep a "modified" timestamp column for each row, remember it for each client, and then simply fetch the changed rows:
SELECT * FROM the_table WHERE modified > saved_modified_timestamp;
The modified column would then be kept up to date using triggers like:
CREATE FUNCTION update_timestamp()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    NEW.modified = NOW();
    RETURN NEW;
END;
$$;
CREATE TRIGGER update_timestamp_update
BEFORE UPDATE ON the_table
FOR EACH ROW EXECUTE PROCEDURE update_timestamp();
CREATE TRIGGER update_timestamp_insert
BEFORE INSERT ON the_table
FOR EACH ROW EXECUTE PROCEDURE update_timestamp();
The obvious problem here is that NOW() is the time the transaction started. So it might happen that a transaction is not yet committed while fetching the updated rows, and when it's committed, the timestamp is lower than saved_modified_timestamp, so the update is never registered.
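This is easy to verify: inside a transaction, now() stays frozen at the transaction's start, while clock_timestamp() keeps advancing:
BEGIN;
SELECT now(), clock_timestamp();
SELECT pg_sleep(2);
SELECT now(), clock_timestamp();  -- now() is unchanged, clock_timestamp() has moved on
COMMIT;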
I think I found a solution that would work and I wanted to see if you can find any flaws with this approach.
The basic idea is to use xmin (or rather txid_current()) instead of the timestamp and then when fetching the changes, wrap them in an explicit transaction with REPEATABLE READ and read txid_snapshot() (or rather the three values it contains txid_snapshot_xmin(), txid_snapshot_xmax(), txid_snapshot_xip()) from the transaction.
If I read the Postgres documentation correctly, then all changes made by transactions that are < txid_snapshot_xmax() and not in txid_snapshot_xip() are visible in that fetch transaction. This information should then be all that is required to get all the updated rows on the next fetch. The SELECT would then look like this, with xmin_version replacing the modified column:
SELECT * FROM the_table
WHERE xmin_version >= last_fetch_txid_snapshot_xmax
   OR xmin_version = ANY (last_fetch_txid_snapshot_xip);
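Concretely, each fetch would be wrapped in a REPEATABLE READ transaction that also records the snapshot to use next time. A sketch, where the :last_snapshot_* placeholders stand for the values saved from the previous fetch:
BEGIN ISOLATION LEVEL REPEATABLE READ;
-- record the snapshot bounds for the NEXT fetch
SELECT txid_snapshot_xmax(txid_current_snapshot());
SELECT txid_snapshot_xip(txid_current_snapshot());  -- txids still in progress
-- fetch everything changed since the PREVIOUS fetch
SELECT * FROM the_table
WHERE xmin_version >= :last_snapshot_xmax
   OR xmin_version = ANY (:last_snapshot_xip);
COMMIT;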
The triggers would then be simply like this:
CREATE FUNCTION update_xmin_version()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    NEW.xmin_version = txid_current();
    RETURN NEW;
END;
$$;
CREATE TRIGGER update_xmin_version_update
BEFORE UPDATE ON the_table
FOR EACH ROW EXECUTE PROCEDURE update_xmin_version();
CREATE TRIGGER update_xmin_version_insert
BEFORE INSERT ON the_table
FOR EACH ROW EXECUTE PROCEDURE update_xmin_version();
Would this work? Or am I missing something?
Thank you for the clarification about the 64-bit return from txid_current() and how the epoch rolls over. I am sorry I confused that epoch counter with the time epoch.
I cannot see any flaw in your approach but would verify through experimentation that having multiple client sessions concurrently in repeatable read transactions taking the txid_snapshot_xip() snapshot does not cause any problems.
I would not use this method in practice, because I assume the client code will need to handle duplicate reads of the same change (insert/update/delete) anyway, as well as periodic reconciliation between the database contents and the client's working set to handle drift due to communication failures or client crashes. Once that code is written, using now() in the client tracking table, clock_timestamp() in the triggers, and a grace-interval overlap when the client pulls changesets would work for the use cases I have encountered.
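A sketch of that simpler pull, assuming a modified column filled by clock_timestamp() in the triggers and a hypothetical five-minute grace interval (clients must deduplicate rows they have already seen):
SELECT * FROM the_table
WHERE modified > :last_fetch_time - interval '5 minutes';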
If requirements called for stronger real-time integrity than that, then I would recommend a distributed commit strategy.
OK, so I've tested it in depth now and haven't found any flaws so far. I had around 30 clients writing to and reading from the database at the same time, and all of them got consistent updates. So I guess this approach works.

Atomicity of a job execution in SQL Server

I would like to find the proper documentation to confirm my thoughts about a SQL Server job I recently wrote. My fear is that data could be inconsistent for a few milliseconds (the window between the start of the job execution and its end).
Let's say the job is setup to run every 30 minutes. It will only have one step with the following SQL statement:
DELETE FROM myTable
INSERT INTO myTable
SELECT *
FROM myTableTemp
Could it happen that a SELECT query is executed exactly in between the DELETE statement and the INSERT statement, and thus returns empty results?
And what if I had created two steps in my job, one for the DELETE query and another for the INSERT INTO? Is atomicity protected by SQL Server between the steps of one job?
Thanks for your help on this one.
No, there is no automatic atomic handling of jobs, whether they consist of multiple statements or multiple steps.
Use this:
begin transaction
delete...
insert....
... anything else you need to be atomic
commit work
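Applied to the statements from the question, a minimal sketch; SET XACT_ABORT ON makes any runtime error roll back the whole transaction:
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    DELETE FROM myTable;
    INSERT INTO myTable
    SELECT * FROM myTableTemp;
COMMIT TRANSACTION;
A concurrent SELECT under the default READ COMMITTED isolation level will then block until the commit (or, with READ_COMMITTED_SNAPSHOT enabled, see the previous contents) rather than observe a half-empty table.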

Dynamically update "Status" column after "X" amount of time

I'm rather new to SQL Server, but I am working on an app where a record is added to a table, and is given a DateTime stamp.
I want to be able to dynamically update the Status column of this row, 1 hour after the row was added.
Is this possible without running some server-side script or stored procedure every couple of minutes? Is there an efficient way to accomplish this?
In SQL Server you can have time-dependent or action-dependent code execution.
Time Dependent
Time-dependent code execution is handled via SQL Server Agent jobs. You can execute a stored procedure or ad-hoc T-SQL code at a certain time of day, and it can be scheduled to run on a regular basis.
Action Dependent
Action-dependent code execution is handled via triggers (AFTER / INSTEAD OF triggers): a piece of code that is executed in response to a DML action (INSERT, UPDATE or DELETE).
Solution
In your case you are trying to execute code in response to an action (an insert) after a certain period of time. I don't think there is an efficient way of doing that; I would rather do the following.
You can have a column called Created of DATETIME datatype in your table, with a default value of GETDATE().
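For example (TABLE_NAME is a placeholder):
ALTER TABLE TABLE_NAME
ADD Created DATETIME NOT NULL
    CONSTRAINT DF_Created DEFAULT (GETDATE());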
Now you don't need the Status column at all. All you need is a query/view which checks at runtime whether the row was added more than an hour ago, and returns its Status accordingly.
Something like this:
CREATE VIEW dbo.vw_Current_Status
AS
SELECT *,
       CASE WHEN DATEDIFF(MINUTE, Created, GETDATE()) >= 60
            THEN 'Old'
            ELSE 'New'
       END AS [Status]
FROM TABLE_NAME;
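Rows that have crossed the one-hour mark can then be picked up with a plain query:
SELECT * FROM dbo.vw_Current_Status WHERE [Status] = 'Old';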

SQL Server Profiler 2005: How to measure execution time of insert statement with trigger?

I want to measure the execution time (using, I guess, the Duration column from SQL Server Profiler) of an INSERT statement that has an INSTEAD OF INSERT trigger on it. How do I measure the complete time of this statement, including the trigger time?
The execution time (duration) that you see in SQL Server Profiler for a query is the time it took to execute that query, including evaluating any triggers or other constraints.
Because triggers are intended to be used as an alternative way to check data integrity, a SQL statement is not considered to have completed until any triggers have also finished.
Update: an overview of some commonly used SQL Server Profiler events:
SQL:BatchCompleted - occurs when a SQL Server batch (a group of statements) has completed execution; the duration is the total time to execute the batch.
SQL:StmtCompleted - occurs when a SQL statement executed as part of a batch completes execution; again, the duration is the time to execute that single statement.
SP:Completed - occurs when a stored procedure has completed execution; the duration shown is the time to complete execution of the stored procedure.
SP:StmtCompleted - occurs when a SQL statement executed as part of a stored procedure completes.
A batch is a set of SQL statements separated by a GO statement; however, to understand the above you should also know that all SQL Server commands are executed in the context of a batch*.
Also, each of the above events has a corresponding Starting event - SP:Starting, SQL:BatchStarting, SQL:StmtStarting and SP:StmtStarting. These don't list a duration (we can't know the duration yet, as the statement hasn't completed), but they do help show where duration recording starts from.
To better understand the relationship between these events, I recommend that you experiment with capturing some traces of some simple examples (from within SQL Server Management Studio), for example:
SELECT * FROM SomeTable
GO
SELECT * FROM SomeTable
SELECT * FROM OtherTable
GO
SELECT * FROM SomeTable
exec SomeProc
GO
As you should see, for each of the 3 examples above you always get a SQL:BatchStarting and a SQL:BatchCompleted; the other event types, however, provide more detail on the individual commands run.
For this reason I generally tend to use the SQL:BatchCompleted event the most; however, if the statement you are attempting to measure is executed as part of a larger batch (or in a stored procedure), then you may find one of the other event classes helpful.
See TSQL Event Category (MSDN) for more information on the various SQL Server Profiling events - there are lots!
Finally, if you are executing this command from within SQL Server Management Studio, be aware that the simplest way to record the execution time is to use the client-side statistics feature (Query > Include Client Statistics).
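If you prefer a T-SQL-only measurement, SET STATISTICS TIME reports CPU and elapsed time per statement, with the statements run inside the trigger included in the output (the table and column names are hypothetical):
SET STATISTICS TIME ON;
INSERT INTO SomeTable (SomeColumn) VALUES ('x');  -- the INSTEAD OF trigger runs as part of this statement
SET STATISTICS TIME OFF;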
(*) I'm pretty sure that everything is executed as part of a batch, although I've not managed to find any evidence on the internet to confirm this.