I've been asked to implement some code that will update a row in a MS SQL Server database and then use a stored proc to insert the update in a history table. We can't add a stored proc to do this since we don't control the database. I know in stored procs you can do the update and then call execute on another stored proc. Can I set it up to do this in code using one SQL command?
Either run them both in the same batch (separate the commands with a semicolon), or use a transaction so you can roll back the first statement if the second fails.
You don't really need a stored proc for this. The question really boils down to whether or not you have control over all the inserts. If you do have access to all the inserts, you can simply wrap an insert into datatable and an insert into historytable in a single transaction. This ensures that both must complete for the operation to 'succeed'. However, when accessing two tables in sequence within a transaction, make sure every code path locks them in the same order (for example, datatable then historytable, never the reverse), or else you could have a deadlock situation.
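For illustration, here is a minimal sketch of that pattern; the table and column names are invented:

BEGIN TRY
    BEGIN TRANSACTION;

    -- both inserts succeed together or not at all
    INSERT INTO datatable (col1, col2)
    VALUES (@col1, @col2);

    INSERT INTO historytable (col1, col2, changed_at)
    VALUES (@col1, @col2, GETDATE());

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- surface the failure to the caller
    RAISERROR('insert into datatable/historytable failed; transaction rolled back', 16, 1);
END CATCH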
However, if you do not have control over the inserts, most database systems let you add a trigger that gives you access to the rows that were inserted, updated, or deleted. It may or may not give you all the data you need, such as who did the insert, update, or delete, but it will tell you what changed.
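As a rough sketch (the table, column, and trigger names are hypothetical), an AFTER UPDATE trigger on SQL Server could copy every changed row into the history table, along with who ran the statement:

CREATE TRIGGER trg_datatable_history ON dbo.datatable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- "inserted" holds the new values for every row touched by the UPDATE
    INSERT INTO dbo.historytable (id, col1, col2, changed_by, changed_at)
    SELECT i.id, i.col1, i.col2, SUSER_SNAME(), GETDATE()
    FROM inserted i;
END;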
You can also create SQL triggers.
Insufficient information -- what SQL server? Why have a history table?
Triggers will do this sort of thing. MySQL's binlog might be more useful for you.
You say you don't control the database. Do you control the code that accesses it? Add logging there and keep it out of the SQL server entirely.
Thanks all for your replies; below is a synopsis of what I ended up doing. Now to test whether the transaction actually rolls back in the event of a failure.
sSQL = "BEGIN TRANSACTION;" & _
" Update table set col1 = #col1, col2 = #col2" & _
" where col3 = #col3 and " & _
" EXECUTE addcontacthistoryentry #parm1, #parm2, #parm3, #parm4, #parm5, #parm6; " & _
"COMMIT TRANSACTION;"
Depending on your library, you can usually just put both queries in one Command String, separated by a semi-colon.
I've successfully created many connections in an Excel file to a database, so the issue is specific to this particular scenario. The server is SQL Server 2012 and Excel is 2013.
I have the following SP:
IF OBJECT_ID('usp_LockUnlockCustomer', 'P') IS NOT NULL
    DROP PROCEDURE usp_LockUnlockCustomer
GO
CREATE PROCEDURE usp_LockUnlockCustomer
    @Lock AS CHAR(10), @Id AS nvarchar(50), @LockedBy AS nvarchar(50) = NULL
AS BEGIN
    SET NOCOUNT ON;

    IF @Lock = 'Lock'
        INSERT INTO IO_Call_DB..IdLock (Id, LockedBy, LockedDtTm)
        VALUES (@Id, @LockedBy, GETDATE())
        ;

    IF @Lock = 'Unlock'
        DELETE FROM IO_Call_DB..IdLock
        WHERE Id = @Id
        ;
END;
--EXEC usp_LockUnlockCustomer 'Lock','123456789', 'Test User'
The above SP is called via some VBA as follows:
With ActiveWorkbook.Connections("usp_LockUnlockCustomer").OLEDBConnection
.CommandText = "EXECUTE dbo.usp_LockUnlockCustomer '" & bLock & "','" & Id & "','" & LockedBy & "'"
End With
I have tested the string; it is formatted correctly and contains all the required data.
The connection between Excel and SQL is created via "Data > From Other Sources > From SQL Server", it's a fairly standard process which has worked for all other SP's and general queries.
I think the issue may be that this connection is not returning data to Excel (I only set up the connection, rather than specifying that Excel should return data to a cell).
Has anyone experienced this issue before?
EDIT1: I have resolved the issue but it's not a particularly great outcome. Some help would be appreciated.
To resolve the issue, you have to include a "select * from" step at the end of the stored procedure and also tell Excel to output the data to a range within the workbook. This allows the .Refresh portion of the VBA to do whatever it does and submit the SP to SQL.
Essentially, you're being forced to create a data table - but I don't want any data, I just want to submit a command.
So, how do you submit a command without Excel requiring that you 1) explicitly state where the data should be put and 2) include a SELECT statement within the stored procedure, when no data needs to be returned?
My fix was to "select top 0" from the table; at least that way the data table being output to Excel won't grow.
In my experience, if you generate the database connection in VBA (there are multiple previous questions about that) rather than relying on an existing workbook connection, your stored procedure will execute regardless of what it returns.
The problem I have is that by merely creating the connection without specifying a cell to return data to, nothing happens.
In addition, if I specify a cell to return data to, nothing happens unless I use my 'fix' which is to create an empty table at the end of the SP.
I'm using SQL Server 2012, and I'm debugging a stored procedure that does an INSERT INTO a #temporal table from a SELECT.
Is there any way to view the data selected by the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temp table that the insert changed?
It doesn't matter if it is all of the rows at once rather than one by one.
UPDATE:
Compliance requirements and company policy restrict what modifications can be made during the testing process, and this will probably be managed by another team. Is there any way to do this without making any changes to the script?
The main idea is that the AT user checks the outputs on their desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected
by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be
returned to the processing application for use in such things as
confirmation messages, archiving, and other such application
requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows that the INSERT statement inserted into the temp table, which were selected from the other table.
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer, is that you get a bit more control here. And, if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change your current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF #debug = 1
BEGIN
SELECT * FROM #temp
END
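Putting it together, a skeleton of the pattern might look like this (the procedure, table, and column names are made up):

CREATE PROCEDURE usp_DoComplexWork
    @SomeParam int,
    @debug bit = 0   -- last parameter, defaults to 0 so existing callers need no change
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #temp (col1 int);

    INSERT INTO #temp (col1)
    SELECT col1
    FROM dbo.SourceTable
    WHERE KeyCol = @SomeParam;

    IF @debug = 1
    BEGIN
        SELECT * FROM #temp;   -- intermediate results, only returned in debug mode
    END

    -- ...remaining steps, with more IF @debug = 1 blocks wherever they are useful...
END;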
I have an application that inserts into a SQL Server 2008 database in groups of N tuples, and all the tuples have to be inserted for the insert to succeed. My question is how to insert these tuples so that, if any of them fails, I can roll back and remove the tuples that were already inserted correctly.
Thanks
On SQL Server you might consider doing a bulk insert.
From .NET, you can use SqlBulkCopy.
Table-valued parameters (TVPs) are a second route. In your insert statement, use WITH (TABLOCK) on the target table for minimal logging, e.g.:
INSERT Table1 WITH (TABLOCK) (Col1, Col2....)
SELECT Col1, Col2, .... FROM @tvp
Wrap it in a stored procedure that exposes @tvp as a parameter, add some transaction handling, and call this procedure from your app.
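A rough sketch of that route (the type, procedure, and column names are invented for illustration):

-- one-time setup: a table type matching the shape of the rows to insert
CREATE TYPE dbo.Table1Type AS TABLE (Col1 int, Col2 nvarchar(50));
GO
CREATE PROCEDURE dbo.usp_InsertTable1
    @tvp dbo.Table1Type READONLY
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        INSERT Table1 WITH (TABLOCK) (Col1, Col2)
        SELECT Col1, Col2 FROM @tvp;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        RAISERROR('bulk insert failed; transaction rolled back', 16, 1);
    END CATCH
END;

From .NET, the TVP can be passed as a DataTable, so the whole group of rows arrives in one round trip and is committed or rolled back as a unit.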
You might even try passing the data as XML if it has a nested structure, and shredding it to tables on the database side.
You should look into transactions. This is a good intro article that discusses rolling back and such.
If you are inserting the data directly from the program, it seems like what you need are transactions. You can start a transaction directly in a stored procedure or from a data adapter written in whatever language you are using (for instance, in C# you might be using ADO.NET).
Once all the data has been inserted, you can commit the transaction or do a rollback if there was an error.
See Scott Mitchell's "Managing Transactions in SQL Server Stored Procedures" for some details on creating, committing, and rolling back transactions.
For MySQL, look into LOAD DATA INFILE which lets you insert from a disk file.
Also see the general MySQL discussion on Speed of INSERT Statements.
For a more detailed answer please provide some hints as to the software stack you are using, and perhaps some source code.
You have two competing interests: doing a large transaction (which will have poor performance and a high risk of failure), or doing a rapid import (which is best not done all in one transaction).
If you are adding rows to a table, then don't run in a transaction. You should be able to identify which rows are new and delete them should you not like how they look on the first round.
If the transaction is complicated (each row affects dozens of tables, etc) then run them in transactions in small batches.
If you absolutely have to run a huge data import in one transaction, consider doing it when the database is in single user mode and consider using the checkpoint keyword.
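If you do go that route, the mechanics are roughly as follows (the database name is a placeholder):

ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- run the large import here
CHECKPOINT;   -- force dirty pages to disk before letting users back in
ALTER DATABASE MyDb SET MULTI_USER;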
I have this code in a trigger.
if isnull(@d_email,'') <> isnull(@i_email,'')
begin
    update server2.database2.dbo.Table2
    set email = @i_email
    where user_id = (select user_id from server2.database2.dbo.Table1 where login = @login)
end
I would like to update a table on another DB server; both are MS SQL. The query above works for me, but it is taking over 10 seconds to complete. Table2 has over 200k records. When I run the execution plan, it says that the remote scan has a 99% cost.
Any help would be appreciated.
First, the obvious. Check the indexes on the linked server. If I saw this problem without the linked server issue, that would be the first thing I would check.
Suggestion:
Instead of embedding the UPDATE in the server 1 trigger, create a stored procedure on the linked server and update the records by calling the stored procedure.
Try to remove the sub-query from the UPDATE:
if isnull(@d_email,'') <> isnull(@i_email,'')
begin
    update t2
    set email = @i_email
    from server2.database2.dbo.Table2 t2
    inner join server2.database2.dbo.Table1 t1
        on (t1.user_id = t2.user_id)
    where t1.login = @login
end
Whoa, bad trigger! Never, and I mean never, write a trigger assuming only one record will be inserted, updated, or deleted. You SHOULD NOT use variables this way in a trigger. Triggers operate on batches of data; if you assume one record, you will create integrity problems in your database.
What you need to do is join to the inserted table rather than using a variable for the value.
Also, updating a remote server directly from a trigger may not be such a dandy idea. If the remote server goes down, you can't insert anything into the original table. If the data can be somewhat less than real time, the normal technique is to have the trigger write to a table on the same server and then have a job pick up the new info every 5-10 minutes. That way, if the remote server is down, the records can still be inserted, and they are stored until the job can pick them up and send them to the remote server.
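A sketch of that pattern, using the inserted/deleted pseudo-tables instead of single-row variables (the local table dbo.Users and the staging table dbo.EmailChangeQueue are assumptions):

CREATE TRIGGER trg_Users_EmailChange ON dbo.Users
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- set-based: handles any number of rows changed by the UPDATE
    INSERT INTO dbo.EmailChangeQueue (login, new_email, changed_at)
    SELECT i.login, i.email, GETDATE()
    FROM inserted i
    INNER JOIN deleted d ON d.user_id = i.user_id
    WHERE ISNULL(d.email, '') <> ISNULL(i.email, '');
END;

A scheduled job then reads dbo.EmailChangeQueue every few minutes and applies the changes to server2, so an outage on the remote server never blocks the original update.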
When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) and randomly get this error:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new stats on the tables and recreate the indexes, views, stored procedures, and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
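In Sybase ASE the switch itself is roughly one statement per table (the table name is a placeholder):

alter table mytable lock datarows
go
update statistics mytable
go
sp_recompile mytable   -- make dependent procs/triggers recompile against the new scheme
go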
I have a set of long-running apps which occasionally overlap table access, and Sybase will throw this error. If you check the Sybase server log, it will give you the complete info on why it happened, such as the SQL that was involved and the two processes trying to get a lock. Usually one is trying to read and the other is doing something like a delete. In my case the apps are running in separate JVMs, so I can't synchronize; I just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes - always worth checking via the query plan) you could try breaking the component parts of the SP down and wrapping them in separate transactions so that each unit of work is completed before the next one starts.
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go