Check for changes after update - sql

I want to check whether values in a table were changed by an update and, if so, store SYSDATE in a column. For the update I'm using a stored procedure with more than 60 parameters (one per column of my table). The stored procedure is called every time a user hits the Save button in the UI. Does anyone know how I can achieve this?
I should mention that I can't use triggers because of our performance policy.
Thank You!
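Since triggers are ruled out, one option is to do the change detection inside the stored procedure's own UPDATE, stamping the date only when at least one value actually differs. A minimal sketch with invented names (my_table, col1, col2, last_changed) and only two of the 60 parameters shown; DECODE is used for the comparison because, unlike `=`, it treats two NULLs as equal:

```sql
-- Hedged sketch: trigger-free change detection inside the UPDATE itself.
-- Table, column, and parameter names are illustrative, not from the question.
UPDATE my_table
SET col1 = :p_col1,
    col2 = :p_col2,
    last_changed = CASE
                     WHEN DECODE(col1, :p_col1, 0, 1) = 1
                       OR DECODE(col2, :p_col2, 0, 1) = 1
                     THEN SYSDATE
                     ELSE last_changed
                   END
WHERE id = :p_id;
```

The CASE leaves last_changed untouched when every submitted value matches the stored one, so saving an unmodified form does not move the timestamp.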

Related

SQL Server : detect column changes

I am trying to detect any changes in a column. Let me describe my problem exactly.
Let's say I have 400 stored procedures, and 20 of them change a column named ModDate in a table Users. But the ModDate column alone is not enough for my purposes; I also need a dirty-bit column, say IsChanged.
My solution would be to find the procedures that update Users.ModDate and change them to update IsChanged as well. But this is time-consuming, and I may miss some procedures, which would cause problems.
So, my question: is it possible to alter a table/column to create an "on change" trigger? When any procedure changes the value of Users.ModDate, SQL Server would automatically set the IsChanged column to 1.
Thanks for your answers.
You can create a trigger on the table for the different CRUD events, and inside it test whether a particular column was touched.
See https://www.sqlservertutorial.net/sql-server-triggers/sql-server-create-trigger/
CREATE TRIGGER [schema_name.]trigger_name
ON table_name
AFTER {[INSERT],[UPDATE],[DELETE]}
[NOT FOR REPLICATION]
AS {sql_statements}
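Applied to the question above, a concrete trigger might look like the sketch below. Note that UPDATE(ModDate) is true whenever ModDate appears in the SET list, even if the new value equals the old one; compare the inserted and deleted pseudo-tables if you need an actual value change. This assumes Users has a UserId key column, which is an assumption on my part:

```sql
-- Hedged sketch: set IsChanged = 1 whenever an UPDATE touches ModDate.
-- UserId as the key column is assumed, not taken from the question.
CREATE TRIGGER trg_Users_ModDate
ON Users
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(ModDate)
    BEGIN
        UPDATE u
        SET u.IsChanged = 1
        FROM Users AS u
        INNER JOIN inserted AS i ON i.UserId = u.UserId;
    END
END
```

Joining on inserted keeps the trigger set-based, so it behaves correctly when one statement updates many rows at once.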

Problem importing a MariaDB table with a BEFORE INSERT trigger

I am struggling to find a simple solution for importing a table that has a BEFORE INSERT trigger. During the import, the trigger fires for each row that is inserted, wreaking havoc in the table.
The trigger is supposed to create an incremental unique ID each time we create a new row (it follows the pattern "yy_mm_dd".incremental_integer_for_the_day, and I have found no other way to get MariaDB to create it).
Is there a better way than deleting the trigger in both the exporting and receiving databases, doing the import, and then recreating the trigger manually?
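If the export can be generated with mysqldump instead of a GUI client, it may help that mysqldump writes trigger definitions after the table's rows by default, and can also omit them entirely. A sketch, assuming command-line access; database and table names are placeholders:

```shell
# Hedged sketch: dump the table without any trigger definitions,
# so the import script never recreates the trigger at all.
mysqldump --skip-triggers mydb mytable > mytable_no_triggers.sql
```

The trigger on the receiving database would still fire during the import, so this only replaces the export-side editing, not the drop/recreate on the target.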
Thanks!
E.
Many thanks!
Yes, it does work if I manually edit the export file to move the trigger creation after the insertion of the rows, leaving the DROP TRIGGER where it is (i.e. before the rows to import). So that is a solution, although it requires some manual editing; my SQL client (Querious) generates an export script with the triggers at the beginning.
And your suggestion to create two columns, one with the date and the other with the increment, may be worth exploring. Although, since I have to reset the increment each period (each day in my example), it would still require another trigger, or some PHP code.
What I wanted to avoid with the trigger was the SQL request to retrieve MAX(ID) and increment it (or, in your second solution, get the MAX(date) and the MAX(id) within the period, check whether it is the first row of the day, then reset to 1 or increment), plus another request to save the ID value...
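For reference, the kind of BEFORE INSERT trigger being discussed could avoid the MAX(ID) scan by keeping the daily counter in a small helper table, using the classic LAST_INSERT_ID(expr) sequence idiom. A sketch with invented table and column names (orders, day_counter):

```sql
-- Hedged sketch: per-day counter without scanning MAX(ID); names invented.
CREATE TABLE day_counter (
    counter_date DATE PRIMARY KEY,
    last_value   INT NOT NULL
);

DELIMITER //
CREATE TRIGGER orders_bi BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
    -- First insert of the day seeds the counter at 1;
    -- later inserts bump it. LAST_INSERT_ID(expr) remembers the value.
    INSERT INTO day_counter (counter_date, last_value)
    VALUES (CURDATE(), LAST_INSERT_ID(1))
    ON DUPLICATE KEY UPDATE last_value = LAST_INSERT_ID(last_value + 1);

    SET NEW.id = CONCAT(DATE_FORMAT(CURDATE(), '%y_%m_%d'), '.',
                        LAST_INSERT_ID());
END//
DELIMITER ;
```

Old rows in day_counter can be pruned at leisure, since only today's row is ever read.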
There does not seem to be a way to disable triggers during the import, as there is for the foreign key checks.
E.

Trigger when a select occurs

I have a customer with an ERP that I don't have programming access to, and I want to fire a special trigger when an item is selected. The problem is that the ItemID is not stored anywhere at the moment the item is chosen, only when the whole sale is saved, and the trigger should fire before that.
This is a novice question for sure, but this value must be kept somewhere, right?
When I run an audit to see what happens when the item is chosen inside the ERP, it only issues SELECT statements. Can I create a trigger based on a SELECT?
Thank you.
It is not possible to create a trigger that fires on execution of a SELECT query in PL/SQL. DML triggers can be created only for INSERT, UPDATE, or DELETE.
References:
https://community.oracle.com/thread/1556647?tstart=0
http://www.geekinterview.com/question_details/18571
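As an aside, while DML triggers cannot fire on SELECT, Oracle's fine-grained auditing can record SELECTs against a table (and optionally invoke a handler procedure). A hedged sketch; the schema, table, and policy names are invented for illustration:

```sql
-- Hedged sketch: record SELECTs on a table via fine-grained auditing.
-- Schema, table, and policy names are placeholders, not from the question.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'ERP',
    object_name     => 'ITEMS',
    policy_name     => 'audit_item_selects',
    statement_types => 'SELECT');
END;
/
```

Audited statements then show up in DBA_FGA_AUDIT_TRAIL, which may be enough to capture the ItemID at selection time without touching the ERP code.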

SQL Server how to get last inserted data?

I ran a large query (~30 MB) which inserts data into ~20 tables. Accidentally, I selected the wrong database. There are only 2 tables with the same name but with different columns. Now I want to make sure that no data was inserted into this database; I just don't know how.
If your table has a timestamp or datetime column you can test for that.
Also, SQL Server keeps a log of all transactions.
See: https://web.archive.org/web/20080215075500/http://sqlserver2000.databases.aspfaq.com/how-do-i-recover-data-from-sql-server-s-log-files.html
This will show you how to examine the log to see if any inserts happened.
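On recent versions, the undocumented fn_dblog function offers a rough way to look for insert operations directly from SQL. A sketch; fn_dblog is unsupported and its output format can vary, so treat the results with care:

```sql
-- Hedged sketch: list insert operations recorded in the active log.
-- fn_dblog is undocumented; columns below are as commonly observed.
SELECT [Current LSN], [Transaction ID], AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_INSERT_ROWS';
```

AllocUnitName shows which table/index each insert hit, so an empty result (or no rows for the two suspect tables) suggests nothing was written.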
Best option: go for a trigger. Use a trigger to record the database name, table name, and the history of the manipulated records.

CREATE TRIGGER is taking more than 30 minutes on SQL Server 2005

On our live/production database I'm trying to add a trigger to a table, but have been unsuccessful. I have tried a few times, but it has taken more than 30 minutes for the create trigger statement to complete and I've cancelled it.
The table is one that gets read/written to often by a couple different processes. I have disabled the scheduled jobs that update the table and attempted at times when there is less activity on the table, but I'm not able to stop everything that accesses the table.
I do not believe there is a problem with the create trigger statement itself. The create trigger statement was successful and quick in a test environment, and the trigger works correctly when rows are inserted/updated in the table. However, when I created the trigger on the test database there was no load on the table, and it had considerably fewer rows than the live/production database (100 vs. 13,000,000+).
Here is the create trigger statement that I'm trying to run
CREATE TRIGGER [OnItem_Updated]
ON [Item]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF update(State)
BEGIN
/* do some stuff including for each row updated call a stored
procedure that increments a value in table based on the
UserId of the updated row */
END
END
Can there be issues with creating a trigger on a table while rows are being updated or if it has many rows?
In SQL Server, triggers are created enabled by default. Is it possible to create the trigger disabled instead?
Any other ideas?
The problem may not be in the table itself, but in the system tables that have to be updated in order to create the trigger. If you're doing any other kind of DDL as part of your normal processes they could be holding it up.
Use sp_who to find out where the block is coming from then investigate from there.
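To make that concrete, the blocker can be found with sp_who (or sp_who2), or on SQL Server 2005+ with the dynamic management views. A small sketch:

```sql
-- Hedged sketch: find who is blocking the CREATE TRIGGER session.
EXEC sp_who2;  -- check the BlkBy column for the blocking spid

-- Or, on SQL Server 2005 and later, via the DMVs:
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```

Once you have the blocking session id, sys.dm_exec_sessions (or sp_who2 again) tells you which process to pause or kill before retrying the DDL.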
I believe the CREATE TRIGGER will attempt to put a lock on the entire table.
If you have a lot of activity on that table, the statement might have to wait a long time, and you could be creating a deadlock.
For any schema changes you should really get everyone off the database.
That said, it is tempting to put in "small" changes with active connections. You should take a look at the locks/connections to see where the lock contention is.
That's odd. An AFTER UPDATE trigger shouldn't need to check existing rows in the table. I suppose it's possible that you aren't able to obtain a lock on the table to add the trigger.
You might try creating a trigger that basically does nothing. If you can't create that, then it's a locking issue. If you can, then you could disable that trigger, add your intended code to the body, and enable it. (I do not believe you can disable a trigger during creation.)
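The steps above can be sketched as follows, reusing the trigger name from the question; the no-op body only tests whether the schema lock can be taken at all:

```sql
-- Hedged sketch: create a no-op trigger, disable it, fill in the body,
-- then re-enable it.
CREATE TRIGGER [OnItem_Updated] ON [Item] AFTER UPDATE
AS RETURN;  -- does nothing; just tests whether the DDL can get its lock

DISABLE TRIGGER [OnItem_Updated] ON [Item];

ALTER TRIGGER [OnItem_Updated] ON [Item] AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- intended trigger body goes here
END;

ENABLE TRIGGER [OnItem_Updated] ON [Item];
```

If even the no-op CREATE TRIGGER hangs, the problem is purely lock contention, not the trigger logic.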
Part of the problem may also be the trigger itself. Could your trigger accidentally be updating all rows of the table? There is a big difference between 100 rows in a test database and 13,000,000. It is a very bad idea to develop code against such a small data set when you have such a large one, because you have no way to predict performance. SQL that works fine for 100 records can completely lock up a system with millions of them for hours. You really want to find that out in dev, not when you promote to prod.
Calling a stored proc from a trigger is usually a very bad choice. It also means that you have to loop through records, which is an even worse choice in a trigger. Triggers must always account for multi-record inserts/updates/deletes. If someone inserts 100,000 rows (not unlikely if you have 13,000,000 records), then looping through a row-by-row stored proc could take hours, lock the entire table, and cause all users to want to hunt down the developer and kill (or at least maim) him because they cannot get their work done.
I would not even consider putting this trigger on prod until you have tested against a record set similar in size to prod's.
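The per-row stored procedure call can usually be replaced with one set-based statement against the inserted pseudo-table. A sketch; the UserCounts table and its columns are invented, since the real increment logic isn't shown in the question:

```sql
-- Hedged sketch: set-based replacement for looping over a stored proc.
-- UserCounts and its columns are invented for illustration.
CREATE TRIGGER [OnItem_Updated] ON [Item] AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(State)
    BEGIN
        UPDATE uc
        SET uc.Counter = uc.Counter + t.Cnt
        FROM UserCounts AS uc
        INNER JOIN (SELECT UserId, COUNT(*) AS Cnt
                    FROM inserted
                    GROUP BY UserId) AS t
            ON t.UserId = uc.UserId;
    END
END;
```

One grouped UPDATE handles a 100,000-row statement in a single pass, where a per-row loop would hold locks for the duration of 100,000 procedure calls.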
My friend Dennis wrote this article, which illustrates why testing against a small volume of data when production has a large volume can create difficulties on prod that you didn't notice in dev:
http://blogs.lessthandot.com/index.php/DataMgmt/?blog=3&title=your-testbed-has-to-have-the-same-volume&disp=single&more=1&c=1&tb=1&pb=1#c1210
Run DISABLE TRIGGER triggername ON tablename before altering the trigger, then re-enable it with ENABLE TRIGGER triggername ON tablename.