I want to set a date/time stamp on deleted records but I'm getting strange results. When this trigger fires it sets the date to 1906!
I've tried using the GETDATE() function in an UPDATE query and also as a default value in the table properties. It works fine for records I input manually into the DELETED_RECORDS table, but any record that's moved in by an actual deletion gets a 1906 date. I'm not sure what's going on.
ALTER TRIGGER [dbo].[Backup_TT_Deleted_Records]
ON [dbo].[tblLLS_TT]
FOR DELETE
AS
BEGIN
    INSERT INTO dbo.tblLLS_TT_DELETED_RECORDS
    SELECT *
    FROM Deleted

    UPDATE dbo.tblLLS_TT_DELETED_RECORDS
    SET Deleted_DTTM = GETDATE()
    WHERE dbo.tblLLS_TT_DELETED_RECORDS.Deleted_DTTM IS NULL
END
Also, with the above trigger: if I enter a record in the DELETED_RECORDS table manually, leaving Deleted_DTTM null, and then delete a record so the trigger fires, it fills in that manual record correctly, but the record(s) coming into the DELETED_RECORDS table from the actual deletion again have a 1906 date. I also noticed the date increments by one day for each record.
Any idea what might be going on here?
Any other test you want me to perform let me know and I'll post the results. This one is strange. Thanks!
Your INSERT statement is as follows:
INSERT INTO dbo.tblLLS_TT_DELETED_RECORDS
SELECT *
FROM Deleted
You are not using an explicit column list in either the INSERT or the SELECT.
You say
I also noted the date increments one day per each record.
My guess is that the columns in the source aren't being inserted into the columns in the destination that you think they are.
If you are deleting a batch of rows with an IDENTITY column value between 2191 and 2555, and inserting that int value into a datetime column, you will get an implicit cast to a date in 1906 (the integer is treated as the number of days since 1st Jan 1900).
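You can verify the implicit conversion directly; an int converted to datetime is read as days after 1st Jan 1900:

SELECT CAST(0 AS DATETIME);    -- 1900-01-01 00:00:00.000
SELECT CAST(2191 AS DATETIME); -- 1906-01-01 00:00:00.000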
I'm assuming the source column is likely an IDENTITY, as that would explain the ascending nature of the dates. To resolve this, try specifying both column lists explicitly. Also, it is important to note that the equivalent source and destination columns must be in the same ordinal positions; SQL Server won't match them up on the basis of column name. For example:
INSERT INTO dbo.tblLLS_TT_DELETED_RECORDS
(col1,
col2,
col3)
SELECT col1,
col2,
col3
FROM Deleted
Also, you don't need the separate UPDATE statement; you can just do this in the INSERT:
INSERT INTO dbo.tblLLS_TT_DELETED_RECORDS
(col1,
col2,
col3,
Deleted_DTTM)
SELECT col1,
col2,
col3,
getdate()
FROM Deleted
What value do you get if you execute the following?

SELECT GETDATE();
Perhaps the server time is misconfigured. You could also make the default value of the column "Deleted_DTTM" the system date; then you can skip the last UPDATE statement.
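For example (a sketch using the table from the question; the constraint name is arbitrary):

ALTER TABLE dbo.tblLLS_TT_DELETED_RECORDS
ADD CONSTRAINT DF_DeletedRecords_DeletedDTTM
DEFAULT (GETDATE()) FOR Deleted_DTTM;

Note that a default only fires when the INSERT doesn't supply a value for that column, so the trigger would still need an explicit column list that omits Deleted_DTTM.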
Related
I have a SQL Server table with just 3 columns, one of which is of type varbinary. The data in this column is actually a Json document which among other properties contains information about when the data was last modified. Unfortunately the SQL table itself does not contain information about when its rows were modified.
Now, when doing sorting and filtering of the data, I of course don't want to fetch all rows in order to find e.g. the latest 100 entries.
So my question is: does SQL Server somehow remember when a row was added/modified? I have tried adding a timestamp column; it is applied to all existing rows, but seemingly at random, because sorting on it doesn't work. I don't need a datetime or anything; I just want to be able to sort the records based on when they were last modified.
Thanks
For those looking to add a timestamp column of type DATETIME to an existing DB table, you can do it like so:
ALTER TABLE TestTable
ADD DateInserted DATETIME NOT NULL DEFAULT (GETDATE());
The existing records will automatically get a value equal to the date/time of the moment the column was added.
New records will get the current date/time upon insertion.
SQL Server will not track historically when a row was inserted or modified, so you need to rely on the JSON data to figure that out yourself. You are going to need a new column to make this efficient to query. Once you have your new column, you have some options:
Loop through all your records populating the new column with the relevant value from the JSON data.
If your version of SQL Server is recent enough (2016 or later), you can query the JSON data directly with JSON_VALUE. Populate this column using a query like this:
UPDATE MyTable
SET MyNewColumn = JSON_VALUE(JsonDataColumn, '$.Customer.DateCreated')
The downside of this method is that you need to maintain this column yourself whenever the JSON data changes.
Make SQL Server compute the value from the JSON automatically, for example:
ALTER TABLE MyTable
ADD MyNewColumn AS JSON_VALUE(JsonDataColumn, '$.Customer.DateCreated')
And, create an index to make it efficient:
CREATE INDEX IX_MyTable_MyNewColumn
ON MyTable(MyNewColumn)
Use a new column CreatedDate and store the datetime every time you make an INSERT.
You could use GETDATE() to fill in the column.
An UpdatedDate column can be used for updates.
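A sketch of that, with hypothetical table/column names:

ALTER TABLE MyTable ADD CreatedDate DATETIME NOT NULL DEFAULT (GETDATE());
ALTER TABLE MyTable ADD UpdatedDate DATETIME NULL;

UpdatedDate is not maintained automatically; you have to set it to GETDATE() in every UPDATE statement (or from a trigger).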
in order to find e.g. the latest 100 entries.
Timestamp is indeed what you need. (Despite the name, timestamp, also known as rowversion, has nothing to do with dates or times.)
It's an ever-increasing value that is updated automatically, so you are always able to find the most recently modified/inserted rows.
Here is an example:
create table dbo.test1 (id int);
insert into dbo.test1 values(1), (2), (3);
alter table dbo.test1 add ts timestamp;
update dbo.test1
set id = 10
where id = 2
select top 1 *
from dbo.test1
order by ts desc;
--id ts
--10 0x000000001FCFABD2
insert into dbo.test1 (id)
values (100);
select top 1 *
from dbo.test1
order by ts desc;
--id ts
--100 0x000000001FCFABD3
As you see, you always get the last modified/inserted row.
For your purpose just use
select top 100 *
...
order by ts desc;
Thanks. Apparently I didn't look hard enough before I posted this question. The question has been asked a couple of times before and the answer is: Nope! There is no easy solution to this.
SQL Server does not keep track of when a record was created or modified, which is what I was looking for. So I will go for the next best solution, which is probably to create a datetime column, retrieve the modified date from the Json document, and then update the record. Or rather, the 1.4 million records :-(
I want to delete a new record if the same record was created before.
My columns are date, time and MsgLog. If date and time are the same, I want to delete the new one.
I need help.
You can check whether that value already exists in the table using a query. If it exists, you can show a message that the record already exists.
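For example (a sketch, assuming a MyTable log table as in the answer below and parameters @Date, @Time, @MsgLog):

IF EXISTS (SELECT 1 FROM MyTable
           WHERE ([date] = @Date) AND ([time] = @Time))
    PRINT 'A record with this date and time already exists';
ELSE
    INSERT INTO MyTable ([date], [time], MsgLog)
    VALUES (@Date, @Time, @MsgLog);

Be aware that check-then-insert is racy under concurrent writers; a unique constraint (see below) is the only watertight guard.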
To prevent this kind of erroneous addition you can add a restriction to your table to ensure unique (Date, Time) pairs; if you don't want to change the data structure (e.g. you only want to add records with such a restriction once or twice) you can use an INSERT ... SELECT construction.
-- MS SQL version, check your DBMS
insert into MyTable(
    Date,
    Time,
    MsgLog)
select @Date,
    @Time,
    @MsgLog
where not exists(
    select 1
    from MyTable
    where (@Date = Date) and
        (@Time = Time)
    )
P.S. Wanting to delete the new one is equivalent to not inserting the new one in the first place.
You should create a unique constraint in the DB level to avoid invalid data no matter who writes to your DB.
It's always important to have your schema well defined. That way you're safe that no matter how many apps are using your DB or even in case someone just writes some inserts manually.
I don't know which DB you are using, but in MySQL you can use the following DDL:
alter table MY_TABLE add unique index(date, time);
And in Oracle you can:
alter table MY_TABLE ADD CONSTRAINT constraint_name UNIQUE (date, time);
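And in SQL Server, which several answers in this thread assume, a sketch (the constraint name is arbitrary):

ALTER TABLE MY_TABLE ADD CONSTRAINT UQ_MyTable_Date_Time UNIQUE ([date], [time]);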
That said, you can also (not instead of) do some checks before inserting new values, to avoid dealing with exceptions or to improve performance by avoiding unnecessary round-trips to your DB (length/null checks, for example, could easily be handled at the application level).
You can avoid deleting by checking for duplicates while inserting.
Just modify your insert procedure like this, so no duplicates will be entered:
declare @intCount as int;
select @intCount = count(MsgLog)
from MyTable -- your log table
where (date = @date) and (time = @time);
if @intCount = 0
begin
    -- insert procedure here
end
Edited:
Since what you want is to delete the duplicate entries after your bulk insert, think about this logic:
create a temporary table
insert LogId, date, time from your table into the temp table, ordered by date, time
now declare four variables: @preTime, @preDate, @currTime, @currDate
loop over each row in the temp table, like this:
while ... -- for each row in the temp table
begin
    select @pkLogID = ... -- get the LogID for the current row
    select @currTime = time, @currDate = date from tblTemp where pkLogId = @pkLogID -- assign current values

    -- delete condition check
    if (@currDate = @preDate) and (@currTime = @preTime)
    begin
        delete from MAINTABLE where pkLogId = @pkLogID
    end

    select @preDate = @currDate, @preTime = @currTime -- current values become previous values for the next row
end
The strategy above: we sort all entries by date and time so duplicates end up next to each other, then compare each entry with its predecessor; when a match is found, we delete the duplicate entry.
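On SQL Server 2005 or later you can skip the loop entirely; a set-based sketch using ROW_NUMBER(), assuming the pkLogId key and MAINTABLE name from the loop above:

WITH dupes AS
(
    SELECT pkLogId,
           ROW_NUMBER() OVER (PARTITION BY [date], [time]
                              ORDER BY pkLogId) AS rn
    FROM MAINTABLE
)
DELETE FROM dupes
WHERE rn > 1;

Every row after the first within each (date, time) pair is deleted in a single statement; ordering by pkLogId keeps the oldest row, matching "delete the new one".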
I have one table, test; it has 10 columns and 20 rows.
I need to move this data to an archive_test table, which has 11 columns (the same 10 as the test table, plus one archive-date column).
When I tried to insert like below, it showed an error because the number of columns doesn't match.
insert into archive_test
select * from test;
Please suggest a better way to do this. Thanks!
Well, obviously you need to supply values for all the columns, and although you can avoid doing so, you should also explicitly state which value is going to be inserted into which column. If you have an extra column in the target table you either:
Do not mention it
Specify a default value as part of its column definition in the table
Have a trigger to populate it
Specify a value for that column.
eg.
insert into archive_test (col1, col2, col3 ... col11)
select col1,
col2,
col3,
...
sysdate
from test;
assuming that archive_date is the last column:
INSERT INTO archive_test
SELECT test.*, sysdate
FROM test
We have a status table. When the status changes we currently delete the old record and insert a new.
We are wondering if it would be faster to do a select to check if it exists followed by an insert or update.
Although similar to the following question, it is not the same, since we are changing individual records and the other question was doing a total table refresh.
DELETE, INSERT vs UPDATE || INSERT
Since you're talking SQL Server 2008, have you considered MERGE? It's a single statement that allows you to do an update or insert:
create table T1 (
ID int not null,
Val1 varchar(10) not null
)
go
insert into T1 (ID,Val1)
select 1,'abc'
go
merge into T1
using (select 1 as ID,'def' as Val1) upd on T1.ID = upd.ID --<-- These identify the row you want to update/insert and the new value you want to set. They could be #parameters
when matched then update set Val1 = upd.Val1
when not matched then insert (ID,Val1) values (upd.ID,upd.Val1);
What about INSERT ... ON DUPLICATE KEY UPDATE? Doing a SELECT first to check whether a record exists and then acting on the result in your program creates a race condition. That might not matter in your case if there is only a single instance of the program, however. (Note that this syntax is MySQL; the SQL Server equivalent would be MERGE, as in the answer above.)
INSERT INTO users (username, email) VALUES ('Jo', 'jo@email.com')
ON DUPLICATE KEY UPDATE email = 'jo@email.com'
You can perform the UPDATE and check @@ROWCOUNT. If 0 rows were affected, then perform an INSERT afterwards; otherwise do nothing.
Your suggestion would mean two statements for every status change. The usual way is to do an UPDATE and then check whether the operation changed any rows (most databases have a variable like @@ROWCOUNT, which will be greater than 0 if something changed). If it didn't, do an INSERT.
Search for UPSERT to find patterns for your specific DBMS.
Personally, I think the UPDATE method is best. Instead of doing a SELECT first to check whether a record already exists, you can attempt an UPDATE and, if no rows are affected (using @@ROWCOUNT), do an INSERT.
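A sketch of that pattern (table and column names are made up; assume a procedure that receives @ID and @NewStatus):

UPDATE dbo.StatusTable
SET StatusValue = @NewStatus
WHERE ID = @ID;

IF @@ROWCOUNT = 0
    INSERT INTO dbo.StatusTable (ID, StatusValue)
    VALUES (@ID, @NewStatus);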
The reason for preferring this approach is that sooner or later you might want to track status changes, and the best way to do that is to keep an audit trail of all changes using a trigger on the status table.
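A minimal sketch of such an audit trigger (the StatusHistory table and column names are hypothetical):

CREATE TRIGGER trg_StatusTable_Audit ON dbo.StatusTable
AFTER UPDATE
AS
-- deleted holds the old rows, inserted the new ones
INSERT INTO dbo.StatusHistory (ID, OldStatus, NewStatus, ChangedAt)
SELECT d.ID, d.StatusValue, i.StatusValue, GETDATE()
FROM deleted AS d
JOIN inserted AS i ON i.ID = d.ID;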
I have a web application that creates printable forms; these forms have a unique number on them. The problem is that I have 2 forms for which separate number ranges need to be created, i.e.:
Form1 - numbered 2000000-2999999
Form2 - numbered 3000000-3999999
dbo.test2 - is my form information table
Tsel - is my autoinc table for the 3000000 series numbers
Tadv - is my autoinc table for the 2000000 series numbers
What I have done is create 2 tables with just an autoinc column (one for the 2000000-series numbers and one for the 3000000-series numbers). I then created a trigger that adds a record to the corresponding table, reads back the autoinc number, and puts it in the table that stores the form information, so each form gets the just-created autoinc number for the right series.
Although it does work, I'm concerned that the numbers will get messed up under load.
I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates, and I need to use the numbering scheme shown above.)
See code below.
**** TRIGGER ****
CREATE TRIGGER MAKEANID2 ON dbo.test2
AFTER INSERT
AS
SET NOCOUNT ON

declare @someid int
declare @someid2 int
declare @startfrom int
declare @test1 varchar(10)

select @someid = @@IDENTITY
select @test1 = (select name1 from test2 where sysid = @someid)

if @test1 = 'select'
begin
    insert into Tsel Default values
    select @someid2 = @@IDENTITY
end

if @test1 = 'adv'
begin
    insert into Tadv Default values
    select @someid2 = @@IDENTITY
end

update test2
set name2 = (@someid2) where sysid = @someid

SET NOCOUNT OFF
The best way to keep the two IDs in sync is to create a persisted computed column based on the actual identity column: below, Col1 is the identity column and Col2 is the persisted computed column, the result of a formula based on Col1. You can then even create indexes on computed columns.
test this out:
CREATE TABLE YourTable
(Col1 int not null identity(2000000,1)
,Col2 AS (Col1-2000000+3000000) PERSISTED
,Col3 varchar(5)
)
GO
insert into YourTable (col3) values ('a')
insert into YourTable (col3) SELECT 'b' UNION SELECT 'c'
SELECT * FROM YourTable
OUTPUT:
Col1 Col2 Col3
----------- ----------- -----
2000000 3000000 a
2000001 3000001 b
2000002 3000002 c
(3 row(s) affected)
EDIT: After the OP's comments, I'm still not 100% sure what you are after.
I never used SQL Server 2000 (we skipped that version), and I don't really want to look up how to do everything in that version; it is so limited without the OUTPUT clause, ROW_NUMBER(), CTEs, etc.
I can think of three methods:
1) You could just create a sequence table with 2 rows, one for A and one for B. Each time you need to insert a row, look up, increment, and save the value of the sequence type you need, then insert with that value. For example, if you are inserting a type "A" row, do this:
INSERT INTO test2
(col1, col2, col3,...)
SELECT
ISNULL(MAX(NextSeq),0)+1, col2, col3,...
FROM YourSequenceTable WITH (UPDLOCK, HOLDLOCK)
WHERE SequenceType='A'
UPDATE YourSequenceTable
SET NextSeq=ISNULL(NextSeq,0)+1
WHERE SequenceType='A'
2) Change your table structure to just save the data in Tsel or Tadv, and have a trigger insert into a third common table where you can have your additional "common" identity. The common table would be like:
CommonTable
ID     int not null identity(1,1) primary key
TselID int null FK to Tsel.PK
TadvID int null FK to Tadv.PK
3) If you need a single table, try this, which is a real hack. Change your Tsel and Tadv tables to contain all the necessary columns. From the application, INSERT INTO Tsel when the value is 'select'; a trigger grabs that identity value, INSERTs it into test2, and then removes the row from Tsel. Likewise, when the value is 'adv', INSERT INTO Tadv and have a trigger on that table insert the data into test2 and remove the row from Tadv. You need all the data columns in Tsel and Tadv so the triggers can copy the values to test2, but the triggers remove the rows afterwards (the identity will stay sequential even though the original rows are removed).
your Tsel trigger would look like:
CREATE Trigger MAKEANID2_Tsel ON dbo.Tsel
AFTER INSERT
AS
--copy data from Tsel into test2; test2 can still have its own identity value
INSERT INTO test2
(PK, col1, col2, col3,...)
SELECT
col0, col1, col2, col3,....
FROM INSERTED
--remove rows from Tsel, which were just copied and not needed anymore.
DELETE Tsel
WHERE PK IN (SELECT PK FROM INSERTED)
GO
You are right to worry about @@IDENTITY; it is not recommended. If someone else adds a different trigger that inserts into a table with an identity column, and that trigger fires first, that is the value you will get. (SCOPE_IDENTITY() avoids this particular problem.)
But you have much bigger problems. Your trigger is designed to work on only one record at a time. This is a very, very bad thing to do with a trigger. Triggers operate on sets of data and must ALWAYS (even if you think there will never be more than one record inserted at a time) be set up to handle sets of data, not one record. Further, you don't need to ask for the identity: you have the identities of all records inserted in the batch in a pseudo-table, available in triggers, called inserted.
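To illustrate just the set-based pattern (a sketch only; it deliberately sidesteps the per-series numbering, which is the hard part on SQL Server 2000):

CREATE TRIGGER MAKEANID2_SetBased ON dbo.test2
AFTER INSERT
AS
-- every row of the inserted batch is visible at once in the inserted
-- pseudo-table; no @@IDENTITY, no single-row assumption
UPDATE t
SET name2 = t.sysid -- placeholder only: the real series number must come from a sequence mechanism
FROM test2 AS t
JOIN inserted AS i ON i.sysid = t.sysid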
Now, reading one of your comments, you say you can't have any missing values at all. In that case you cannot under any circumstances use an identity column, as it will have gaps if any transaction is rolled back. You will have to write your own process to create the numbers based on the last number used, and look out for race conditions.
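A sketch of such a process (the NumberTable and its columns are made up; the pattern works on SQL Server 2000 as well):

DECLARE @next int

BEGIN TRAN

-- increment and read the series counter in one statement;
-- UPDLOCK/HOLDLOCK serialize concurrent callers on this row
UPDATE NumberTable WITH (UPDLOCK, HOLDLOCK)
SET @next = LastNumber = LastNumber + 1
WHERE SeriesType = 'adv'

-- insert the form row here using @next, inside the same transaction,
-- so a rollback releases the number without leaving a gap

COMMIT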