SQL bulk copy: trigger not getting fired

I am copying data from Table1 to Table2 using SqlBulkCopy. I have applied a trigger on Table2, but the trigger is not firing for every row. Here are my trigger and my SqlBulkCopy code.
SqlConnection dstConn = new SqlConnection(ConfigurationManager.ConnectionStrings["Destination"].ConnectionString);
string destination = dstConn.ConnectionString;

// Get data from the source (in our case T1)
DataTable dataTable = new Utility().GetTableData("Select * From [db_sfp_ems].[dbo].[tbl_current_data_new] where [start_date]>'" + calculate_daily_Time + "' and status=0", source);

SqlBulkCopy bulkCopy = new SqlBulkCopy(destination, SqlBulkCopyOptions.FireTriggers)
{
    // Table name in the destination
    DestinationTableName = "tbl_current_data",
    BatchSize = 100000,
    BulkCopyTimeout = 360
};
bulkCopy.WriteToServer(dataTable);
//MessageBox.Show("Data Transfer Successful.");
dstConn.Close();
------Trigger-----
ALTER TRIGGER [dbo].[trgAfterInsert] ON [dbo].[tbl_current_data]
AFTER INSERT
AS
BEGIN
    DECLARE @intime datetime
    DECLARE @sdp_id numeric
    DECLARE @value numeric(9,2)

    SELECT @intime = DATEADD(SECOND, -DATEPART(SECOND, start_date), start_date) FROM INSERTED
    SELECT @sdp_id = sdp_id FROM INSERTED
    SELECT @value = value FROM INSERTED

    INSERT INTO Table3 (sdp_id, value, start_date)
    VALUES (@sdp_id, @value, @intime)
END

A trigger is fired once per insert statement; whether that statement inserts 0, 1 or many records makes no difference. So even though you are inserting a whole bunch of records, the trigger fires only once. This is by design and is not specific to bulk copy; it is true for every kind of insert. It also means the inserted pseudo-table can hold 0, 1 or many records, which is a common pitfall. Be sure to write your trigger so it can handle multiple records. For example: SELECT @sdp_id = sdp_id FROM INSERTED won't work as expected if inserted holds multiple records. The variable will be set, but you cannot know which inserted record's value it will hold.
This is all part of the set-based philosophy of SQL; it is best not to try to break that philosophy with loops or other RBAR methods. Stay in the set mindset.
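To see the once-per-statement behaviour in action, here is a minimal sketch (table and trigger names are made up for illustration):
CREATE TABLE dbo.Demo (id int);
CREATE TABLE dbo.FireLog (fired_at datetime DEFAULT GETDATE(), rows_seen int);
GO
CREATE TRIGGER trgDemo ON dbo.Demo AFTER INSERT
AS
    -- log how many rows the inserted pseudo table holds for this firing
    INSERT INTO dbo.FireLog (rows_seen)
    SELECT COUNT(*) FROM inserted;
GO
INSERT INTO dbo.Demo VALUES (1), (2), (3); -- one statement, three rows
SELECT * FROM dbo.FireLog;                 -- a single log row with rows_seen = 3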

Your trigger is simply broken. In SQL Server, triggers handle multiple rows at a time. Assuming that inserted has one row is a fatal error -- and I wish it caused a syntax error.
I think this is the code you want:
ALTER TRIGGER [dbo].[trgAfterInsert] ON [dbo].[tbl_current_data]
AFTER INSERT
AS
BEGIN
    INSERT INTO Table3 (sdp_id, value, start_date)
    SELECT sdp_id, value,
           DATEADD(SECOND, -DATEPART(SECOND, start_date), start_date)
    FROM inserted i;
END;
Apart from being correct, another advantage is that the code is simpler to write.
Note: You are setting the "seconds" part to 0. However, depending on the type, start_date could have fractional seconds that remain. If that is an issue, ask another question.
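One possible way (a sketch, not part of the answer above) to zero out both seconds and fractional seconds is to round down to the minute:
-- truncates start_date to the minute; works for datetime and datetime2 alike
SELECT sdp_id, value,
       DATEADD(MINUTE, DATEDIFF(MINUTE, 0, start_date), 0)
FROM inserted;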

Related

How to read all "last_changed" records from Firebird DB?

My question is a bit tricky, because it's mostly a logical problem.
I've tried to optimize my app's speed by reading everything into memory, but only those records which changed since the "last read" = the greatest timestamp among the records loaded last time.
The FirebirdSQL database engine does not allow updating a field in an AFTER trigger directly, so the obvious approach is a "before update or insert" trigger that sets new.last_changed = current_timestamp;
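That pattern looks roughly like this (a sketch; the trigger and table names are just examples):
CREATE TRIGGER orders_set_changed FOR orders
ACTIVE BEFORE INSERT OR UPDATE POSITION 0
AS
BEGIN
  -- stamp the row at write time
  NEW.last_changed = CURRENT_TIMESTAMP;
END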
The problem:
As it turns out, this is a totally WRONG method, because those triggers fire at transaction start!
So if one transaction takes more time than another, the saved "last changed time" will be lower than that of a short-burst transaction fired and finished in between.
1st tr.: 13:00:01.400 .............................Commit << this record will be skipped!
2nd tr.: 13:00:01.500......Commit << reading of data will happen here.
The next read will be >= 13:00:01.500
I've tried:
to rewrite all triggers so they fire AFTER and call an UPDATE orders SET ... << but this causes circular, self-calling endless events.
Would a SET_CONTEXT lock interfere with multi-row updates and nested triggers?
(I do not see how this method could work well when running multiple updates in the same transaction.)
What is the common solution for all this?
Edit1:
What I want is to read only those records from the DB that have actually changed since the last read. For that to happen, I need the engine to update records AFTER COMMIT. (Not during it, "in the middle".)
This trigger is NOT good, because it fires at the moment of the change (not after commit):
alter trigger SYNC_ORDERS active after insert or update position 999 AS
declare variable N timestamp;
begin
  N = cast('NOW' as timestamp);
  if (new.last_changed <> :N) then
    update ORDERS set last_changed = :N where ID = new.ID;
end
And from the application I do:
Query1.SQL.Text := 'SELECT * FROM orders WHERE last_changed >= ' + DateTimeToStr( latest_record );
Query1.Open;
latest_record := Query1.FieldByName('last_changed').asDateTime;
.. this code will list only the record committed in the second transaction (committed earlier) and never the one from the first, longer-running transaction (committed later).
Edit2:
It seems I have the same question as here..., but specifically for FirebirdSQL.
There are not really any good solutions there, but it gave me an idea:
- What if I create an extra table and log changes from the last few minutes there, per table?
- Before each SQL query, I first ask that table for any changes, sequenced via a growing ID!
- Delete lines older than 23 hours
ID  TableID  Changed
===========================
1   5        2019.11.27 19:36:21
2   5        2019.11.27 19:31:19
Edit3:
As Arioch already suggested, one solution is to:
create a "logger table" filled by the BEFORE INSERT OR UPDATE
trigger of every table,
and update its "last_changed" sequence
in the ON TRANSACTION COMMIT trigger
But would it not be a better approach to:
add a last_sequence INT64 DEFAULT NULL column to every table,
create a global generator LAST_GEN,
update every table's NULL rows with gen_id(LAST_GEN,1) inside the ON TRANSACTION COMMIT trigger,
and SET it to NULL again in every BEFORE INSERT OR UPDATE trigger?
So basically the last_sequence column of a record switches to:
NULL > 1 > NULL > 34 ... every time it gets modified.
This way I:
do not have to fill the DB with log data,
and can query the tables directly with WHERE last_sequence > 1.
No need to pre-query the "logger table" first.
I'm just afraid: WHAT happens if the ON TRANSACTION COMMIT trigger tries to update a last_sequence field while a second transaction's ON BEFORE trigger is locking the record (of another table)?
Can this happen at all?
The final solution is based on the idea that:
Each table's BEFORE INSERT OR UPDATE trigger can push the time of the transaction: RDB$SET_CONTEXT('USER_TRANSACTION', 'table31', current_timestamp);
The global ON TRANSACTION COMMIT trigger can insert a sequence + time into a "logging table" if it receives such a context.
It can even take care of "daylight saving changes" and "intervals", by logging only big time differences (>= 1 minute) to reduce the number of records.
A stored procedure can ease and speed up the calculation of the 'LAST_QUERY_TIME' for each query.
Example:
1.)
create trigger ORDERS_BI active before insert or update position 0 AS
BEGIN
  IF (NEW.ID IS NULL) THEN
    NEW.ID = GEN_ID(GEN_ORDERS, 1);
  RDB$SET_CONTEXT('USER_TRANSACTION', 'orders_table', current_timestamp);
END
2, 3.)
create trigger TRG_SYNC_AFTER_COMMIT active on transaction commit position 1 as
declare variable N TIMESTAMP;
declare variable T VARCHAR(255);
begin
  N = cast('NOW' as timestamp);
  T = RDB$GET_CONTEXT('USER_TRANSACTION', 'orders_table');
  if (:T is not null) then
  begin
    if (:N < :T) then T = :N; -- system time changed, e.g. daylight saving -1 hour
    if (datediff(second from :T to :N) > 60) then -- more than 1 min. passed
      insert into "SYNC_PAST_TIMES" (ID, TABLE_NUMBER, TRG_START, SYNC_TIME, C_USER)
      values (GEN_ID(GEN_SYNC_PAST_TIMES, 1), 31, cast(:T as timestamp), :N, CURRENT_USER);
  end
  -- other tables too:
  T = RDB$GET_CONTEXT('USER_TRANSACTION', 'details_table');
  -- ...
  when any do EXIT;
end
Edit1:
It is possible to speed up reading the "last-time-changed" value from our SYNC_PAST_TIMES table with the help of a stored procedure. Logically, you have to keep both the ID (PT_ID) and the time (PT_TM) in memory in your program, and call the procedure for each table.
CREATE PROCEDURE SP_LAST_MODIF_TIME (
    TABLE_NUMBER SMALLINT,
    LAST_PASTTIME_ID BIGINT,
    LAST_PASTTIME TIMESTAMP)
RETURNS (
    PT_ID BIGINT,
    PT_TM TIMESTAMP)
AS
declare variable TEMP_TIME TIMESTAMP;
declare variable TBL SMALLINT;
begin
  PT_TM = :LAST_PASTTIME;
  FOR SELECT p.ID, p.SYNC_TIME, p.TABLE_NUMBER FROM SYNC_PAST_TIMES p
      WHERE (p.ID > :LAST_PASTTIME_ID)
      ORDER by p.ID ASC
      INTO PT_ID, TEMP_TIME, TBL DO -- PT_ID immediately gets an increasing value
  begin
    if (:TBL = :TABLE_NUMBER) then
      if (:TEMP_TIME < :PT_TM) then
        PT_TM = :TEMP_TIME; -- searching for the smallest
  end
  if (:PT_ID IS NULL) then
  begin
    PT_ID = :LAST_PASTTIME_ID;
    PT_TM = :LAST_PASTTIME;
  end
  suspend;
END
You can use this procedure by including it in your SELECT, using the WITH .. AS syntax:
with UTLS as (select first 1 PT_ID, PT_TM from SP_LAST_MODIF_TIME (55, -- TABLE_NUMBER
0, '1899.12.30 00:00:06.000') ) -- last PT_ID, PT_TM from your APP
select first 1000 u.PT_ID, current_timestamp as NOWWW, r.*
from UTLS u, "Orders" r
where (r.SYNC_TIME >= u.PT_TM);
Using FIRST 1000 is a must to prevent reading the whole table if all values change at once.
Upgrading the database, adding a new column, etc. changes SYNC_TIME to NOW on every row of the table at the same time.
You may adjust it per table individually, just like the interval of seconds used to monitor changes. Add a check to your app for how to handle the case where the new data reaches 1000 lines at once ...

How to execute a trigger only when one column changes

I have the following trigger, and I need it to execute only when one specific column's value changes. Is that possible?
ALTER TRIGGER [dbo].[TR_HISTORICO]
ON [dbo].[Tbl_Contactos]
AFTER UPDATE
AS
BEGIN
IF UPDATE (primerNombre) -- only if primerNombre is updated
BEGIN
INSERT INTO [dbo].[Tbl_Historico] ([fecha],[idUsuario],[valorNuevo], [idContacto],[tipoHistorico] )
SELECT getdate(), 1, [dbo].[Encrypt]([dbo].[Decrypt](primerNombre)), [idContacto], 1
FROM INSERTED
END
END
The problem is that the trigger runs even when a different column changes.
The problem is probably the way you are doing updates in your code: it may be updating every column, not only the one that changed. UPDATE(primerNombre) only tests whether the column appears in the UPDATE statement's SET list, not whether its value actually changed.
In this case you need to check whether the values differ between the inserted and deleted pseudo-tables, or fix your code so that it only updates what needs to be updated.
Comparing the value of primerNombre from the inserted and deleted tables:
ALTER TRIGGER [dbo].[TR_HISTORICO] ON [dbo].[Tbl_Contactos]
AFTER UPDATE AS
BEGIN
    INSERT INTO [dbo].[Tbl_Historico] ([fecha],[idUsuario],[valorNuevo],[idContacto],[tipoHistorico])
    SELECT getdate(), 1, [dbo].[Encrypt]([dbo].[Decrypt](i.primerNombre)), i.[idContacto], 1
    FROM INSERTED i
    INNER JOIN DELETED d
        ON i.idContacto = d.idContacto
    WHERE i.primerNombre <> d.primerNombre
END
If primerNombre is nullable, the where will need to handle null comparisons as well.
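For instance, a null-safe version of the comparison might look like this (a sketch):
WHERE i.primerNombre <> d.primerNombre
   OR (i.primerNombre IS NULL AND d.primerNombre IS NOT NULL)
   OR (i.primerNombre IS NOT NULL AND d.primerNombre IS NULL)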

UPDATE and INSERT should fire trigger only once

Is there any way to combine an update and an insert statement so that they fire a trigger only once?
I have one particular table that has (and currently needs) an AFTER INSERT, UPDATE, DELETE trigger. Now I want to update one row and insert another row, and have the trigger fire only once for that.
Is this at all possible?
I already tried a MERGE statement, without success: the trigger fires once for the update part and once for the insert part.
Well, problem solved for me. I did NOT find a way to combine the statements into one firing of the trigger, but the trigger behaves in an interesting way that was good enough for me: both calls to the trigger already have access to the fully updated data.
Just execute the following statements and you will see what I mean.
CREATE TABLE Foo (V INT)
GO
CREATE TRIGGER tFoo ON Foo AFTER INSERT, UPDATE, DELETE
AS
SELECT 'inserted' AS Type, * FROM inserted
UNION ALL
SELECT 'deleted', * FROM deleted
UNION ALL
SELECT 'actual', * FROM Foo
GO
DELETE FROM Foo
INSERT Foo VALUES (1)
;MERGE INTO Foo
USING (SELECT 2 AS V) AS Source ON 1 = 0
WHEN NOT MATCHED BY SOURCE THEN DELETE
WHEN NOT MATCHED BY TARGET THEN INSERT (V) VALUES (Source.V);
As a result, the trigger is called twice for the MERGE. But both times, "SELECT * FROM Foo" already delivers the fully updated data: there is one row with the value 2, and the value 1 is already deleted.
This really surprised me: the insert trigger is called first, and the deleted row is gone from the data before the call to the delete trigger happens.
Only the contents of "inserted" and "deleted" correspond to the individual delete or insert statement.
You could try something like this:
The trigger checks for the existence of a #temp table.
If it doesn't exist, it creates it with dummy data. It then checks whether the most recent entry was written by the same user (SPID) that is running now, and whether the last firing was within 20 seconds.
If both are true, it PRINTs 'Do Nothing' and drops the table; otherwise it runs your trigger statement.
At the end of the trigger statement it inserts the SPID and current datetime into the table.
This temp table lasts as long as the SPID connection; if you want it to last longer, make it a ##temp or a real table.
IF OBJECT_ID('tempdb..#temp') IS NULL
BEGIN
    CREATE TABLE #temp (SPID int, dt datetime)
    INSERT INTO #temp VALUES (0, '2000-01-01')
END

IF @@SPID = (SELECT TOP 1 SPID FROM #temp ORDER BY dt DESC)
   AND CONVERT(datetime, CONVERT(varchar(19), GETDATE(), 121)) BETWEEN
       CONVERT(datetime, CONVERT(varchar(19), (SELECT TOP 1 dt FROM #temp ORDER BY dt DESC), 121)) AND
       CONVERT(datetime, CONVERT(varchar(19), DATEADD(second, 20, (SELECT TOP 1 dt FROM #temp ORDER BY dt DESC)), 121))
BEGIN
    PRINT 'Do Nothing'
    DROP TABLE #temp
END
ELSE
BEGIN
    -- trigger statement
    INSERT INTO #temp VALUES (@@SPID, GETDATE())
END

Insert trigger preventing duplicates

I have a table with an AutoIdentity column as its PK and an nvarchar column called "IdentificationCode". All I want is: when inserting a new row, search the table for any preexisting IdentificationCode, and if one is found, roll back the transaction.
I have written the following trigger:
ALTER TRIGGER [dbo].[Disallow_Duplicate_Ids]
ON [dbo].[tbl1]
FOR INSERT
AS
IF ((SELECT COUNT(*) FROM dbo.tbl1 e, inserted i WHERE e.IdentificationNo = i.IdentificationNo) > 0)
BEGIN
    RAISERROR('Multiple Ids detected', 16, 1)
    ROLLBACK TRANSACTION
END
But when inserting new rows, it always triggers the rollback, even if there is no such IdentificationCode.
Can anyone help me, please?
Thanks
As @Qpirate mentions, you should probably put some sort of UNIQUE constraint on the column. This is 'stronger' than using a trigger, as there are ways to disable triggers.
Also, the implicit-join syntax (comma-separated FROM clause) is considered an SQL anti-pattern; if possible, always declare your joins explicitly.
I suspect your error occurs because the trigger is an AFTER trigger: by the time it runs, the new rows are already in the table, so the check finds the rows the INSERT itself just added and 'fails' the statement. Changing it to a BEFORE trigger (INSTEAD OF in SQL Server), or changing the count check to >= 2, may solve the problem.
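For example, keeping the AFTER trigger but counting duplicates instead might look like this (a sketch with explicit joins, untested against the original schema):
ALTER TRIGGER [dbo].[Disallow_Duplicate_Ids]
ON [dbo].[tbl1]
FOR INSERT
AS
-- The new rows are already in tbl1, so a duplicate shows up as a count
-- of 2 or more for a just-inserted IdentificationNo.
IF EXISTS (SELECT e.IdentificationNo
           FROM dbo.tbl1 e
           INNER JOIN inserted i ON e.IdentificationNo = i.IdentificationNo
           GROUP BY e.IdentificationNo
           HAVING COUNT(*) >= 2)
BEGIN
    RAISERROR('Multiple Ids detected', 16, 1)
    ROLLBACK TRANSACTION
END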
Without seeing your insert statement it's impossible to know for sure, but (especially if you're using a stored procedure) you may be able to check for existence in the INSERT statement itself and throw an error (or do something else) if the row isn't inserted.
For example, the following (written as INSERT ... SELECT, since a VALUES clause cannot take a WHERE):
INSERT INTO tbl1 (identificationCode /*, other columns */)
SELECT @identificationCode /*, other values */
WHERE NOT EXISTS (SELECT '1'
                  FROM tbl1
                  WHERE identificationCode = @identificationCode)
will return a code indicating 'row not found' (on pretty much every system this is SQLCODE = 100) if identificationCode is already present.
Use EXISTS to check whether the IdentificationCode already exists:
IF EXISTS (SELECT * FROM tbl1 WHERE IdentificationCode = @IdentificationCode)
BEGIN
    -- do something
END
ELSE
BEGIN
    -- do something else
END

Possible to implement a manual increment with just simple SQL INSERT?

I have a primary key that I don't want to auto-increment (for various reasons), so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors with:
"Subqueries are not allowed in this context. Only scalar expressions are allowed."
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like this, and it might cause deadlocks, so be very sure of what you are trying to accomplish here:
DECLARE @id int
BEGIN TRAN
SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
INSERT INTO Table1 (id, data_field)
VALUES (@id, '[blob of data]')
COMMIT TRAN
To explain the collision issue, I have provided some code.
First, create this table and insert one row:
CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
Insert Table1 values(1,'[blob of data]')
Go
Now open up two query windows and run this at the same time
declare @i int
set @i = 1
while @i < 10000
begin
    BEGIN TRAN
    INSERT INTO Table1 (id, data_field)
    SELECT MAX(id) + 1, '[blob of data]' FROM Table1
    COMMIT TRAN;
    set @i = @i + 1
end
You will see a bunch of these
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl
I wouldn't recommend doing it that way for any number of reasons though (performance, transaction safety, etc)
It could be because there are no records, so the subquery returns NULL... try:
INSERT INTO tblTest (RecordID, Text)
SELECT ISNULL(MAX(RecordID), 0) + 1, 'asdf' FROM tblTest
I don't know if somebody is still looking for an answer, but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()
-- Real insert
BEGIN TRAN LockTran
-- Lock an object to block simultaneous calls.
UPDATE Lock WITH(TABLOCK)
SET LockID = LockID
INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps.
DECLARE @number int
BEGIN TRAN

-- increment the number
UPDATE dbo.NumberTable
SET number = number + 1

-- find out what the incremented number is
SELECT @number = number
FROM dbo.NumberTable

-- use the number
INSERT INTO dbo.MyTable ... -- using @number

COMMIT -- or ROLLBACK
This causes simultaneous transactions to process in single file, as each concurrent transaction waits because the NumberTable is locked. As soon as the waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used, and if a transaction is rolled back, the NumberTable update is also rolled back, so there are no gaps.
Hope that helps.
Another way to force single-file execution is to use a SQL application lock. We have used that approach for longer-running processes, like synchronizing data between systems, so that only one synchronizing process can run at a time.
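A sketch of that approach using SQL Server's sp_getapplock (the resource name here is made up):
BEGIN TRAN

DECLARE @rc int
EXEC @rc = sp_getapplock @Resource = 'MyTableCounter',
                         @LockMode = 'Exclusive',
                         @LockOwner = 'Transaction',
                         @LockTimeout = 10000
IF @rc >= 0
BEGIN
    -- protected section: read and increment the counter, do the insert, etc.
    PRINT 'Lock acquired'
END

COMMIT TRAN -- a transaction-owned applock is released with the commit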
If you're doing it in a trigger, you should make sure it's an INSTEAD OF trigger and do it in a couple of statements:
DECLARE @next INT
SET @next = (SELECT MAX(id) + 1 FROM Table1)
INSERT INTO Table1 (id, datablob)
SELECT @next, datablob FROM inserted
The only thing you'd have to be careful about is concurrency: if two rows are inserted at the same time, they could attempt to use the same value for @next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
you can use the idea above, but have a table whose purpose is to store the last, max value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used (see the sketch after this list). This is only beneficial if you might need to do database merging.
Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
Sorry about rambling. Hope that helps.
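To illustrate the sequential-GUID option from the list above (a sketch; table and constraint names are hypothetical):
-- NEWSEQUENTIALID() may only be used in a DEFAULT constraint; it generates
-- ever-increasing GUIDs, avoiding the page splits that random NEWID() causes.
CREATE TABLE dbo.Table1_Guid (
    id uniqueidentifier NOT NULL
        CONSTRAINT DF_Table1_Guid_id DEFAULT NEWSEQUENTIALID()
        PRIMARY KEY,
    data_field varchar(100)
);

INSERT INTO dbo.Table1_Guid (data_field) VALUES ('[blob of data]');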
How about creating a separate table to maintain the counter? It has better performance than MAX(id), as it is O(1); MAX(id) is at best O(log n), depending on the implementation.
Then, when you need to insert, simply lock the counter table, read the counter, and increment it. Then you can release the lock and insert into your table with the incremented counter value.
Have a separate table where you keep your latest ID and for every transaction get a new one.
It may be a bit slower but it should work.
DECLARE @NEWID INT
BEGIN TRAN
UPDATE [TABLE] SET ID = ID + 1
SELECT @NEWID = ID FROM [TABLE]
COMMIT TRAN
PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);
-- Real insert
DECLARE @newIncrement INT;
UPDATE increment
SET @newIncrement = val,
    val = val + 1;
INSERT INTO Table1 (id, data_field)
SELECT @newIncrement, 'some data';
declare @nextId int
set @nextId = (select MAX(id) + 1 from Table1)
insert into Table1 (id, data_field) values (@nextId, '[blob of data]')
commit;
But perhaps a better approach would be using a scalar function getNextId('table1')
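Note that a scalar function cannot modify state in SQL Server, so such a helper would typically be a stored procedure backed by a counter table. A hypothetical sketch:
-- assumes a counter table: CREATE TABLE dbo.IdCounters (TableName sysname PRIMARY KEY, LastId int)
CREATE PROCEDURE dbo.getNextId
    @tableName sysname,
    @nextId    int OUTPUT
AS
BEGIN
    -- increment and read the counter atomically in one statement
    UPDATE dbo.IdCounters
    SET @nextId = LastId = LastId + 1
    WHERE TableName = @tableName;
END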
Any critiques of this? It works for me.
DECLARE @m_NewRequestID INT
      , @m_IsError BIT = 1
      , @m_CatchEndless INT = 0

WHILE @m_IsError = 1
BEGIN TRY
    SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)

    INSERT INTO Requests ( RequestID
                         , RequestName
                         , Customer
                         , Comment
                         , CreatedFromApplication)
    SELECT RequestID = @m_NewRequestID
         , RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
         , Customer = @Customer
         , Comment = [Description]
         , CreatedFromApplication = @CreatedFromApplication
    FROM RequestPatterns
    WHERE PatternID = @PatternID

    SET @m_IsError = 0
END TRY
BEGIN CATCH
    SET @m_IsError = 1
    SET @m_CatchEndless = @m_CatchEndless + 1;

    IF @m_CatchEndless > 1000
        THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
END CATCH
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT on other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
    id + 1, '[blob of data]'
FROM
    Table1
ORDER BY
    [id] DESC;