SqlBulkInsert - How to set Fire Triggers, Check Constraints? - sql-server-2005

I'm performing a bulk insert with an ADO.NET 2.0 SqlBulkCopy object from a C# method into a MS SQL 2005 database, using a database user with limited permissions. When I try to run the operation, I get the error message:
Bulk copy failed. User does not have ALTER TABLE permission on table 'theTable'. ALTER TABLE permission is required on the target table of a bulk copy operation if the table has triggers or check constraints, but 'FIRE_TRIGGERS' or 'CHECK_CONSTRAINTS' bulk hints are not specified as options to the bulk copy command.
I read some documentation and created the bulk copy object with the constructor that lets me specify such things:
SqlBulkCopy bc = new SqlBulkCopy(
System.Configuration.ConfigurationSettings.AppSettings["ConnectionString"],
SqlBulkCopyOptions.FireTriggers & SqlBulkCopyOptions.CheckConstraints);
But this doesn't change anything; I get the same error message as before. I tried fiddling with some of the other SqlBulkCopyOptions values, but no luck. I really thought this would fix the problem. Am I missing something?
I tested the procedure after granting ALTER on the table to my user, and the operation succeeded. However, granting that permission is not an option in my situation.

Solved it! Looks like I need a refresher on flags enums. I was bitwise ANDing the enum values when I should have been ORing them.
SqlBulkCopyOptions.FireTriggers & SqlBulkCopyOptions.CheckConstraints
evaluates to zero (which is equivalent to SqlBulkCopyOptions.Default).
SqlBulkCopyOptions.FireTriggers | SqlBulkCopyOptions.CheckConstraints
works correctly and allowed the bulk insert to complete.

Possibilities only, I'm sorry
SQL documentation for BULK INSERT specifies 3 cases where ALTER TABLE is needed. You listed 2 of them. Is the KeepIdentity option being set, even if not needed?
Another possibility is that a trigger on the table is already disabled, which confuses the issue. Use ALTER TABLE dbo.SomeTable ENABLE TRIGGER ALL to make sure all triggers are enabled.
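A quick way to check for that (a sketch, assuming SQL Server 2005 or later and a target table named dbo.theTable):
-- list the triggers on the table and whether they are currently disabled
SELECT t.name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.theTable');
-- re-enable everything if anything shows is_disabled = 1
ALTER TABLE dbo.theTable ENABLE TRIGGER ALL;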

Related

Getting error when trying to Rename multiple tables in SPROC in DB2

I've created a DB2 SQL script that populates a static table and then does a rename to swap out the live table with the newly updated one. It's a fairly large SQL script, so I'm only including the areas that I'm having an error on.
I'm getting the error: "[IBM][CLI Driver][DB2/NT64] SQL0104N An unexpected token "RENAME" was found following "D_HOLIDAY_LOG_OLD; ". Expected tokens may include: "TRUNCATE". LINE NUMBER=382. SQLSTATE=42601".
I suspect it's a syntax issue with the RENAME commands. If I need to add the whole query, I can. Thanks in advance.
CREATE OR REPLACE PROCEDURE NSD_HOLIDAY_LOG_SPROC()
LANGUAGE SQL
SPECIFIC SP_NSD_HOLIDAY_LOG_SPROC
DYNAMIC RESULT SETS 1
BEGIN
COMMIT;
TRUNCATE TABLE TMWIN.NSD_HOLIDAY_LOG immediate;
DROP TABLE NSD_HOLIDAY_LOG_OLD;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG TO NSD_HOLIDAY_LOG_LIVE;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_OLD TO NSD_HOLIDAY_LOG;
END#
This is frequently asked.
As you are using static SQL in an SQL PL stored procedure, you must follow the documented rules for blocks of Compound SQL (Compiled) statements.
One of those rules is that static SQL has a restricted set of statements that can appear in such a block of code.
For example, with current versions of Db2-LUW, you cannot use any of the following statically (including RENAME TABLE):
ALTER, CONNECT, CREATE, DESCRIBE, DISCONNECT, DROP, FLUSH EVENT MONITOR, FREE LOCATOR, GRANT, REFRESH TABLE, RELEASE (connection only), RENAME TABLE, RENAME TABLESPACE, REVOKE, SET CONNECTION, SET INTEGRITY, SET PASSTHRU, SET SERVER OPTION, TRANSFER OWNERSHIP
Other Db2 platforms (z/OS, IBM i) might have different restrictions, but the same principle applies.
To achieve what you need you can use dynamic SQL instead of Static-SQL (as long as you understand the implications).
In other words, instead of writing:
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD;
you could instead use:
execute immediate('RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD' );
or equivalent.
You can also use two statements, one to PREPARE and the other to EXECUTE, whichever suits the design. Refer to the documentation for EXECUTE IMMEDIATE.
The same is true for other statements that your version of Db2 disallows in static compound-SQL (compiled) blocks (for example, DROP, or CREATE etc.).
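Put together, a sketch of the procedure with the restricted statements moved into dynamic SQL might look like this (assuming Db2-LUW; names are taken from the question, and error handling is omitted):
CREATE OR REPLACE PROCEDURE NSD_HOLIDAY_LOG_SPROC()
LANGUAGE SQL
SPECIFIC SP_NSD_HOLIDAY_LOG_SPROC
BEGIN
COMMIT;
TRUNCATE TABLE TMWIN.NSD_HOLIDAY_LOG IMMEDIATE;
-- DROP TABLE and RENAME TABLE are not allowed statically, so issue them dynamically
EXECUTE IMMEDIATE 'DROP TABLE TMWIN.NSD_HOLIDAY_LOG_OLD';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG TO NSD_HOLIDAY_LOG_LIVE';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_OLD TO NSD_HOLIDAY_LOG';
END#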

postgresql: \copy method enter valid entries and discard exceptions

When entering the following command:
\copy mmcompany from '<path>/mmcompany.txt' delimiter ',' csv;
I get the following error:
ERROR: duplicate key value violates unique constraint "mmcompany_phonenumber_key"
I understand why it's happening, but how do I execute the command in a way that valid entries will be inserted and ones that create an error will be discarded?
The reason PostgreSQL doesn't do this is related to how it implements constraints and validation. When a constraint fails it causes a transaction abort. The transaction is in an unclean state and cannot be resumed.
It is possible to create a new subtransaction for each row but this is very slow and defeats the purpose of using COPY in the first place, so it isn't supported by PostgreSQL in COPY at this time. You can do it yourself in PL/PgSQL with a BEGIN ... EXCEPTION block inside a LOOP over a select from the data copied into a temporary table. This works fairly well but can be slow.
It's better, if possible, to use SQL to check the constraints before doing any insert that violates them. That way you can just:
CREATE TEMPORARY TABLE stagingtable(...);
\copy stagingtable FROM 'somefile.csv'
INSERT INTO realtable
SELECT * FROM stagingtable
WHERE check_constraints_here;
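For the specific unique constraint in the question, the filter could be an anti-join on the phone number column. A sketch (the column name phonenumber is inferred from the constraint name, and the staging table is assumed to match mmcompany's layout):
CREATE TEMPORARY TABLE mmcompany_staging (LIKE mmcompany INCLUDING DEFAULTS);
\copy mmcompany_staging from '<path>/mmcompany.txt' delimiter ',' csv
-- keep one row per phone number from the file, and skip numbers already in the target table
INSERT INTO mmcompany
SELECT DISTINCT ON (phonenumber) s.*
FROM mmcompany_staging s
WHERE NOT EXISTS (
  SELECT 1 FROM mmcompany c WHERE c.phonenumber = s.phonenumber
);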
Do keep concurrency issues in mind though. If you're trying to do a merge/upsert via COPY you must LOCK TABLE realtable; at the start of your transaction or you will still have the potential for errors. It looks like that's what you're trying to do - a copy if not exists. If so, skipping errors is absolutely the wrong approach. See:
How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
Insert, on duplicate update in PostgreSQL?
Postgresql - Clean way to insert records if they don't exist, update if they do
Can COPY be used with a function?
Postgresql csv importation that skips rows
... this is a much-discussed issue.
One way to handle the constraint violations is to define triggers on the target table to handle the errors. This is not ideal as there can still be race conditions (if concurrently loading), and triggers have pretty high overhead.
Another method: COPY into a staging table and load the data into the target table using SQL with some handling to skip existing entries.
Another useful method is to use pgloader.

Should I use the template from MS SQL Management Studio to create new triggers?

If you create a new trigger in MS SQL Management Studio by using the GUI, it gives you this template:
--====================================
-- Create database trigger template
--====================================
USE <database_name, sysname, AdventureWorks>
GO
IF EXISTS (
    SELECT *
    FROM sys.triggers
    WHERE name = N'<trigger_name, sysname, table_alter_drop_safety>'
    AND parent_class_desc = N'DATABASE'
)
DROP TRIGGER <trigger_name, sysname, table_alter_drop_safety> ON DATABASE
GO
CREATE TRIGGER <trigger_name, sysname, table_alter_drop_safety> ON DATABASE
FOR <data_definition_statements, , DROP_TABLE, ALTER_TABLE>
AS
IF IS_MEMBER ('db_owner') = 0
BEGIN
PRINT 'You must ask your DBA to drop or alter tables!'
ROLLBACK TRANSACTION
END
GO
Should I use this template?
I don't know anything about triggers, but I think I need to use them. The purpose in this case is that on an insert to the table, I need to update one of the fields.
Please help me get started!
OK, to begin with, that is the wrong template if you want an ordinary trigger: that one is a DDL trigger that fires on structural changes to the database (dropping or altering tables), not on changes to the data.
If you decide to write a trigger that affects data (as opposed to structure), there are several things you need to know. First, and by far the most critical: triggers operate on sets of data, not one row at a time. You must write any trigger to handle multiple-row inserts, updates, or deletes. If you end up with any code setting a value from inserted or deleted into a variable, there is a 99% chance it will not work properly when multiple records are involved.
What are inserted and deleted, you ask? That is the next thing you need to know about triggers: there are two pseudotables (inserted and deleted) that are only available in a trigger (or an OUTPUT clause). They contain the new values being inserted or the updated values (in the inserted table) and the old values being deleted or replaced by an update (in the deleted table). So an insert has values in inserted, a delete has values in deleted, and an update has values in both. Use these in your trigger to pull the values you need to change.
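As a sketch of the kind of data trigger the question is actually after (the table and column names here are made up, since the real schema isn't shown), note how it joins to inserted instead of assuming a single row:
CREATE TRIGGER trg_Orders_SetLastModified ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- set-based update: touches every row that was just inserted, not just the "last" one
    UPDATE o
    SET LastModified = GETDATE()
    FROM dbo.Orders AS o
    INNER JOIN inserted AS i ON i.OrderID = o.OrderID;
END
GO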
Since you don't know anything about triggers, I would say no, don't use the template.
Read the Books Online page for CREATE TRIGGER and write the trigger by hand.
There is probably more in that template code than you actually need. Read the manual and keep it simple.
If you don't know anything about triggers then I would strongly suggest that you read up on them before implementing them. Get Triggers right and they can make your life a lot easier; get it wrong and Triggers will cause you a lot of trouble.
I would suggest starting off with this tutorial
http://www.sqlteam.com/article/an-introduction-to-triggers-part-i
You can use the above SQL as a template or you can simply write your own. I would suggest you write your own, as you'll understand what you are doing. Obviously, only do this after you have done some serious reading on triggers. Check out MSDN too.

Watch a Database column to determine what is modifying

How do I find out what application or stored procedure is modifying the values in a config table? I thought I had isolated the app that was responsible, but these particular values keep changing back to true when I keep modifying them to be false.
First, create a logging table:
CREATE TABLE modlog(
datestamp smalldatetime,
username varchar(255) NOT NULL DEFAULT SYSTEM_USER
);
Then create an UPDATE trigger on your table:
CREATE TRIGGER mytable_mods ON mytable FOR UPDATE AS
INSERT INTO modlog(datestamp) VALUES (GETDATE());
Just peek into the modlog table to figure out which user is updating the table, and when. You could get fancy and also log particular fields being updated.
Another approach would be to set up a trace in SQL Server Profiler, filter the heck out of it so it only returns updates on that table, and keep it open until something happens.
If your applications include the ApplicationName parameter in their connection strings, you can use App_Name() instead of SYSTEM_USER, which will log the application name, removing the extra detective work. Knowing the user might still be useful so you can figure out what they are doing to trigger the update.
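A sketch of that variation (the table and column names are just illustrative):
CREATE TABLE modlog(
datestamp smalldatetime NOT NULL DEFAULT GETDATE(),
username varchar(255) NOT NULL DEFAULT SYSTEM_USER,
appname varchar(255) NOT NULL DEFAULT APP_NAME()
);
CREATE TRIGGER mytable_mods ON mytable FOR UPDATE AS
-- every column has a default, so the trigger just records one row per UPDATE statement
INSERT INTO modlog DEFAULT VALUES;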
Create a trigger to roll back the update. Wait for the app to error out. It can be a very simple trigger:
CREATE TRIGGER BugOffRogueProgram
ON MyConfigTable
FOR UPDATE
AS
BEGIN
ROLLBACK TRAN
END
The answers provided so far are absolutely on the spot - that's the way to do it in SQL Server 2005.
Just as a brief teaser: in SQL Server 2008, there's a new feature called Change Data Capture to support this exact scenario "out of the box" without the need to write triggers and update tables yourself. Quite handy!
Marc

MSSQL: Disable triggers for one INSERT

This question is very similar to SQL Server 2005: T-SQL to temporarily disable a trigger
However, I do not want to disable all triggers, nor even disable them for a batch of commands, but just for one single INSERT.
I have to deal with a shop system where the original author put some application logic into a trigger (bad idea!). That application logic works fine as long as you don't try to insert data in any way other than the original "administration frontend". My job is to write an "import from staging system" tool, so I have all the data ready. When I try to insert it, the trigger overwrites the existing Product Code (not the IDENTITY numeric ID!) with a generated one. To generate the code it uses the autogenerated ID of an insert into another table, so I can't even work with @@IDENTITY to find my just-inserted row and UPDATE it with the actual Product Code.
Is there any way to avoid extremely awkward code (INSERT some random characters into the product name and then try to find the row with the random characters to update it)?
So: Is there a way to disable triggers (even just one) for just one INSERT?
You may find this helpful:
Disabling a Trigger for a Specific SQL Statement or Session
But there is another problem that you may face as well.
If I understand your situation correctly, your system by default inserts the product code automatically (by generating the value).
Now you need to insert a product that was created by some staging system; its product code was created by the staging system, and you want to insert it into the live system manually.
If you really have to do it, you need to make sure that the codes generated by your live application in the future are not going to conflict with the codes that you inserted manually - I assume they must be unique.
Another approach is to allow the system to generate the new code and overwrite any corresponding data if needed.
You can disable triggers on a table using:
ALTER TABLE MyTable DISABLE TRIGGER ALL
But that would do it for all sessions, not just your current connection, which is obviously a very bad thing to do :-)
The best way would be to alter the trigger itself so it decides whether it needs to run, whether that be with an "insert type" flag on the table or some other means if you are already storing a type of some sort.
Rather than disabling triggers, can you not change the behaviour of the trigger? Add a new nullable column to the table in question called "insertedFromImport".
In the trigger, change the code so that the offending bit only runs on rows where "insertedFromImport" is null. When you insert your records, set "insertedFromImport" to something non-null (see the sketch below).
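A sketch of that idea; the trigger name, table, and code-generation logic below are made up, since the shop system's real trigger isn't shown:
ALTER TABLE dbo.Product ADD insertedFromImport bit NULL;
GO
ALTER TRIGGER trg_Product_GenerateCode ON dbo.Product
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- only generate codes for normal inserts; imported rows keep the code they came with
    UPDATE p
    SET ProductCode = 'P' + CAST(p.ProductID AS varchar(10))  -- stand-in for the real generation logic
    FROM dbo.Product AS p
    INNER JOIN inserted AS i ON i.ProductID = p.ProductID
    WHERE i.insertedFromImport IS NULL;
END
GO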
Disable the trigger, insert, commit.
SET IDENTITY_INSERT Test ON
GO
BEGIN TRAN
DISABLE TRIGGER trg_Test ON Test
INSERT INTO Test (MyId, MyField)
VALUES (999, 'foo')
ENABLE TRIGGER trg_Test ON Test
COMMIT TRAN
SET IDENTITY_INSERT Test OFF
GO
Can you check SUSER_SNAME() in the trigger and only run the logic when it is running in the context of the administration frontend?
I see many things that could create a problem. First, change the trigger to handle multiple-record imports. That will probably fix your problem. Do not turn off the trigger, as that turns it off for everyone, not just you. If you must, then put the database into single-user mode before you do it, and do your task during off hours.
Next, do not under any circumstances ever use @@IDENTITY to get the value just inserted! Use SCOPE_IDENTITY() instead. @@IDENTITY will return the wrong value if there are triggers on the table that also do inserts into other tables with identity fields. If you are using @@IDENTITY right now throughout your system (since we know your system has triggers), your absolute first priority must be to immediately find and change all instances of @@IDENTITY in your code. You can have serious data integrity issues if you do not. This is a "stop all work until this is fixed" kind of problem.
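A minimal illustration of the difference (the table and column names are hypothetical):
-- Suppose a trigger on dbo.Product inserts into an audit table that has its own identity column.
INSERT INTO dbo.Product (ProductName) VALUES ('Widget');
SELECT SCOPE_IDENTITY() AS NewProductID,      -- identity generated in this scope (dbo.Product)
       @@IDENTITY AS LastIdentityInSession;   -- could be the audit table's identity, set by the trigger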
As far as getting back the information you just inserted, consider creating a batch id as part of your insert and then adding a column called batchid (which is nullable, so it won't affect other inserts) to the table. Then you can call back what you inserted by batchid.
If you insert using BULK INSERT, you can keep triggers from firing for just that insert.
I'm pretty sure BULK INSERT requires a data file on the file system to import from, so you can't just use T-SQL with data you already have in memory.
To use BULK INSERT you need INSERT and ADMINISTER BULK OPERATIONS permissions.
If you disable triggers or constraints, you'll also need ALTER TABLE permission.
If you are using Windows authentication, your Windows user needs read access to the file; if connecting with a SQL Server login, the SQL Server service account needs read access to the file.
When importing using BULK INSERT, triggers are not fired by default; you have to specify the FIRE_TRIGGERS option to run them.
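A sketch of such a load (the file path, table name, and field layout are made up):
-- Triggers on dbo.Product do NOT fire here because FIRE_TRIGGERS is omitted.
BULK INSERT dbo.Product
FROM 'C:\import\products.dat'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
    -- add FIRE_TRIGGERS here if you do want the triggers to run
);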
More information: http://msdn.microsoft.com/en-us/library/ms188365.aspx