In the code below, is the use of GO, transactions and semi-colons correct?
I appreciate these questions have been asked separately many times but I am struggling when using them in combination and would be thankful for any guidance.
I am unsure whether it is necessary to use transactions in this circumstance.
USE TestingDB
GO
DECLARE @CustomerContactID int = 278800
BEGIN TRANSACTION
DELETE
FROM dbo.CustomerContact
WHERE CustomerContact_CustomerContactID = #CustomerContactID;
DELETE
FROM dbo.CustomerContactComs
WHERE CustomerContactComs_CustomerContactID = @CustomerContactID;
DELETE
FROM dbo.CustomerContactAddress
WHERE CustomerContactAddress_CustomerContactID = #CustomerContactID;
COMMIT TRANSACTION;
The semicolons are not required in T-SQL (except before Common Table Expressions and Service Broker statements when those are not the first statements in a batch), so it's a matter of taste whether you use them. But Microsoft recommends always using them.
The order of your DELETE statements seems wrong. You probably want to delete the detail data from CustomerContactComs and CustomerContactAddress first, and then delete the CustomerContact row.
The transaction might be necessary if you want to avoid situations where you only delete a part of the information, e.g. only CustomerContactComs but not the rest.
That leads directly to the GO. You should not insert any GO statements between the statements in the transaction. GO is not part of T-SQL but is used by tools such as Management Studio to indicate separate batches.
So if there is a GO, the previous statements are sent to the server as one batch. If one statement in that batch raises an error, the remaining statements of that batch will not be executed.
But if you insert a GO here, the following statements would be a new batch and so the transaction could be committed although a previous statement failed, which is probably not what you want.
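Putting those points together, a corrected sketch of the script could look like this (deletes reordered, everything in one batch and one transaction; SET XACT_ABORT ON is an extra I'd add so a failed DELETE rolls back the whole transaction instead of letting the batch carry on):
USE TestingDB
GO
DECLARE @CustomerContactID int = 278800;

SET XACT_ABORT ON;
BEGIN TRANSACTION;

-- detail rows first
DELETE FROM dbo.CustomerContactComs
WHERE CustomerContactComs_CustomerContactID = @CustomerContactID;

DELETE FROM dbo.CustomerContactAddress
WHERE CustomerContactAddress_CustomerContactID = @CustomerContactID;

-- then the parent row
DELETE FROM dbo.CustomerContact
WHERE CustomerContact_CustomerContactID = @CustomerContactID;

COMMIT TRANSACTION;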
Good day,
Two questions:
A) If I have something like this:
COMPLEX QUERY
WAIT FOR LOG TO FREE UP (DELAY)
COMPLEX QUERY
Would this actually work? Or would the log segment of tempdb remain just as full, due to still holding on to the log of the first query?
B) In the situation above, is it possible to have the middle query perform a dump tran with truncate_only?
(It's a very long chain of various queries that are run together. They don't change anything in the databases and I don't care to even keep the logs if I don't have to.)
The reason for the chain is that I need the same two temp tables, and a whole bunch of variables, for various queries in the chain (some of them for all of the queries). To simplify the usage of the query chain for a user with VERY limited SQL knowledge, I collect very simple information at the beginning of the long script, retrieve the rest automatically, and then use it throughout the script.
I doubt either of these would work, but I thought I may as well ask.
Sybase versions 15.7 and 12 (12.? I don't remember)
Thanks,
Ziv.
Per my understanding of @michael-gardner's answer, this is what I plan:
FIRST TEMP TABLES CREATION
MODIFYING OPERATIONS ON FIRST TABLES
COMMIT
QUERY1: CREATE TEMP TABLE OF THIS QUERY
QUERY1: MODIFYING OPERATIONS ON TABLE
QUERY1: SELECT
COMMIT
(REPEAT)
DROP FIRST TABLES (end of script)
I read that 'select into' is not written to the log, so I'm creating the table with a create (I have to do it this way for other reasons) and using 'select into existing table' for the initial population (temp tables).
Once done with the table, I drop it, then 'commit'.
At various points in the chain I check the log segment of tempdb; if it's <70% (normally it's at >98%), I use a goto to reach the end of the script, where I drop the last temp tables and the script ends (so no need for a manual 'commit' here).
I misunderstood the whole "on commit preserve rows" thing, that's solely on IQ, and I'm on ASE.
Dumping the log mid-transaction won't have any effect on the amount of log space used. The Sybase log marker will only move if there is a commit (or rollback), AND if there isn't an older open transaction (which can be found in syslogshold).
There are a couple of different ways you can approach solving the issue:
Add log space to tempdb.
This would require no changes to your code and is not very difficult. It's even possible that tempdb is not properly sized for the system, and the extra log space would be useful to other applications utilizing tempdb.
Rework your script to commit at the beginning, so that the later transactions contain only queries.
This would accomplish a couple of things. The commit at the beginning would move the log marker forward, which would allow the log dump to reclaim space. Then, since the rest of your queries are only reads, there shouldn't be any transaction log space associated with them. Remember the transaction log only stores information on Insert/Update/Delete, not reads.
In the example you listed above, the user's details could be stored and committed to the database, then the rest of the queries would just be select statements using those details for the variables, and a final transaction would clean up the table. In this scenario the log is only held for the first transaction and the last transaction, but the queries in the middle would not fill the log.
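A rough sketch of that shape in ASE (the table and column names here are just placeholders):
/* Setup batch: create and populate the temp table, then commit so the
   tempdb log marker can move past this work. */
create table #contact_ids (id int null)
begin transaction
insert #contact_ids select id from dbo.SourceTable
commit transaction

/* Middle of the chain: read-only queries, which don't add to the
   transaction log. */
select count(*) from #contact_ids

/* End of script: clean up. */
drop table #contact_ids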
Without knowing more about the DB configuration or query details it's hard to get much more detailed.
I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal table SELECT statements.
Is there any way to view the data selected in the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temporal table where the insert made the changes?
It doesn't matter if it is the total set of rows rather than one by one.
UPDATE:
Requirements from AT Compliance and Company Policy mean that any modification has to go through the test process, and it's probable this will be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs on their desktop and copies and pastes them, without making any change to the environment or product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be returned to the processing application for use in such things as confirmation messages, archiving, and other such application requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows from the INSERT statement that were inserted into the temporal table, i.e. the rows selected from the other table.
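If you also want to keep those rows around to inspect after the insert, rather than just returning them to the client, OUTPUT ... INTO can capture them into a table variable. A sketch, with made-up column and source names:
DECLARE @captured TABLE (Col1 int, Col2 nvarchar(50));

INSERT INTO #temporaltable (Col1, Col2)
OUTPUT inserted.Col1, inserted.Col2 INTO @captured
SELECT Col1, Col2
FROM dbo.SourceTable;

SELECT * FROM @captured; -- inspect what was inserted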
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer, is that you get a bit more control here. And, if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now that I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change your current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
    SELECT * FROM #temp
END
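As a sketch of what that looks like in the procedure header (the procedure and parameter names here are invented):
CREATE PROCEDURE dbo.MyComplexProc
    @SomeParam int,
    @debug bit = 0 -- last parameter with a default of 0, so existing callers need no change
AS
BEGIN
    CREATE TABLE #temp (SomeValue int);

    -- ... the real work goes here; as an illustration, stash an intermediate result
    INSERT #temp (SomeValue) VALUES (@SomeParam);

    IF @debug = 1
    BEGIN
        SELECT * FROM #temp; -- intermediate results, only returned in debug runs
    END
END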
We have an SSIS package that ran in production on a SQL 2008 box with a 2005 compatibility setting. The package contains a SQL Task and it appears as though the SQL at the end of the script did not run.
The person who worked on that package noted before leaving the company that the package needed "GOs" between the individual SQL commands to correct the issue. However, when testing in development on SQL Server 2008 with 2008 compatibility, the package worked fine.
From what I know, GOs place commands in batches, where commands are sent to the database provider in a batch for efficiency's sake. I am thinking that the only way GO should affect the outcome is if there was an error in that script somewhere above it. I can imagine GO affecting the outcome in that case, and only that case. However, we have seen no evidence of any errors logged.
Can someone suggest to me whether or not GO is even likely related to the problem? Assuming no error was encountered, my understanding of the "GO" command suggests that its use or lack of use is most likely unrelated to the problem.
The GO keyword is, as you say, a batch separator that is used by the SQL Server management tools. It's important to note, though, that the keyword itself is parsed by the client, not the server.
Depending on the version of SQL Server in question, some things do need to be placed into distinct batches, such as creating a database and then using it. There are also some statements that must be the first statement in a batch (such as CREATE PROCEDURE or CREATE VIEW), so using those means you'll have to break the script up into batches.
A couple of things to keep in mind about breaking a script up into multiple batches:
When an error is encountered within a batch, execution of that batch stops. However, if your script has multiple batches, an error in one batch will only stop that batch from executing; subsequent batches will still execute
Variables declared within a batch are available to that batch only; they cannot be used in other batches
If the script is performing nothing but CRUD operations, then there's no need to break it up into multiple batches unless any of the above behavioral differences is desired.
All of your assumptions are correct.
One thing that I've experienced is that if you have a batch of statements that is a pre-requisite for another batch, you may need to separate them with a GO. One example may be if you add a column to a table and then update that column (I think...). But if it's just a series of DML queries, then the absence or presence of GO shouldn't matter.
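For example (a sketch with made-up names):
ALTER TABLE dbo.MyTable ADD NewCol int;
GO
-- Without the GO above, the UPDATE below would fail to compile ("Invalid column name 'NewCol'")
-- because the whole batch is parsed before the ALTER TABLE runs.
UPDATE dbo.MyTable SET NewCol = 0;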
I've noticed that if you set up any variables in the script, their state (and maybe the variables themselves) is wiped after a 'GO' statement, so they can't be reused. This was certainly the case on SQL Server 2000 and I presume it will be the case on 2005 and 2008 as well.
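A quick illustration of that:
DECLARE @i int
SET @i = 1
SELECT @i -- returns 1
GO
SELECT @i -- fails: Must declare the scalar variable "@i".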
Yes, GO can affect outcome.
GO between statements will allow execution to continue if there is an error in between. For example, compare the output of these two scripts:
SELECT * FROM table_does_not_exist;
SELECT * FROM sys.objects;
...
SELECT * FROM table_does_not_exist;
GO
SELECT * FROM sys.objects;
As others identified, you may need to issue GO if you need changes applied before you work on them (e.g. a new column), but you can't persist local variables or table variables across GO...
Finally, note that GO is not a T-SQL keyword, it is a batch separator. This is why you can't put GO in the middle of a stored procedure, for example ... SQL Server itself has no idea what GO means.
EDIT: however, one answer stated that transactions cannot span batches, which I disagree with:
CREATE TABLE #foo(id INT);
GO
BEGIN TRANSACTION;
GO
INSERT #foo(id) SELECT 1;
GO
SELECT @@TRANCOUNT; -- 1
GO
COMMIT TRANSACTION;
GO
DROP TABLE #foo;
GO
SELECT @@TRANCOUNT; -- 0
I have an application in which I have to insert into a SQL Server 2008 database in groups of N tuples, and all of the tuples have to be inserted for the insert to succeed. My question is how I insert these tuples so that, if any of them fails, I can do a rollback to eliminate all the tuples that were inserted correctly.
Thanks
On SQL Server you might consider doing a bulk insert.
From .NET, you can use SqlBulkCopy.
Table-valued parameters (TVPs) are a second route. In your insert statement, use WITH (TABLOCK) on the target table for minimal logging, e.g.:
INSERT Table1 WITH (TABLOCK) (Col1, Col2....)
SELECT Col1, Col2, .... FROM @tvp
Wrap it in a stored procedure that exposes @tvp as a parameter, add some transaction handling, and call this procedure from your app.
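A minimal sketch of that shape (the type name and columns are made up; Table1 is from the snippet above):
CREATE TYPE dbo.TupleType AS TABLE (Col1 int, Col2 nvarchar(50));
GO
CREATE PROCEDURE dbo.InsertTuples
    @tvp dbo.TupleType READONLY
AS
BEGIN
    SET XACT_ABORT ON; -- any error rolls back the whole transaction
    BEGIN TRANSACTION;

    INSERT Table1 WITH (TABLOCK) (Col1, Col2)
    SELECT Col1, Col2 FROM @tvp;

    COMMIT TRANSACTION;
END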
You might even try passing the data as XML if it has a nested structure, and shredding it to tables on the database side.
You should look into transactions. This is a good intro article that discusses rolling back and such.
If you are inserting the data directly from the program, it seems like what you need are transactions. You can start a transaction directly in a stored procedure or from a data adapter written in whatever language you are using (for instance, in C# you might be using ADO.NET).
Once all the data has been inserted, you can commit the transaction or do a rollback if there was an error.
See Scott Mitchell's "Managing Transactions in SQL Server Stored Procedures" for some details on creating, committing, and rolling back transactions.
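A minimal sketch of that pattern in T-SQL that works on SQL Server 2008 (table and column names are placeholders):
BEGIN TRY
    BEGIN TRANSACTION;

    INSERT dbo.Table1 (Col1) VALUES (1);
    INSERT dbo.Table1 (Col1) VALUES (2);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION; -- undo every row inserted in this group

    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);   -- report the failure to the caller
END CATCH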
For MySQL, look into LOAD DATA INFILE which lets you insert from a disk file.
Also see the general MySQL discussion on Speed of INSERT Statements.
For a more detailed answer please provide some hints as to the software stack you are using, and perhaps some source code.
You have two competing interests: doing one large transaction (which will have poor performance and a high risk of failure), or doing a rapid import (which is best not done all in one transaction).
If you are adding rows to a table, then don't run in a transaction. You should be able to identify which rows are new and delete them should you not like how they look on the first round.
If the transaction is complicated (each row affects dozens of tables, etc) then run them in transactions in small batches.
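A rough sketch of the batching idea, with made-up staging and target tables; each batch commits on its own, so a failure only affects the current batch:
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    INSERT dbo.Target (Id, Col1)
    SELECT TOP (1000) s.Id, s.Col1
    FROM dbo.Staging AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;
    COMMIT TRANSACTION;
END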
If you absolutely have to run a huge data import in one transaction, consider doing it when the database is in single user mode and consider using the checkpoint keyword.
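For that last case, a hedged sketch (MyDb is a placeholder database name):
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ... run the big import in its single transaction here ...

CHECKPOINT; -- flush dirty pages; in simple recovery this lets the log truncate
ALTER DATABASE MyDb SET MULTI_USER;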
I want that when I execute a query, for example DELETE FROM Contact, and an error is raised during the transaction, it should delete the rows that can be deleted, raising all the relevant errors for the rows that cannot be deleted.
For SQL Server you are not going to break the atomicity of the DELETE command within a single statement - even issued outside of an explicit transaction, you are going to be acting within an implicit one - i.e. all or nothing, as you have seen.
Within the realms of an explicit transaction, an error can either roll back just the single statement that errored or the entire transaction (of multiple statements); the setting that controls this is SET XACT_ABORT.
Since your delete is a single statement, XACT_ABORT cannot help you - the statement will error and the whole delete will be rolled back.
If you know the error condition you are going to face (such as an FK constraint violation), then you could ensure your delete has a suitable WHERE clause so that it does not attempt to delete rows that you know will generate an error.
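For instance, a sketch of that idea with hypothetical parent and child tables:
-- Delete only the contacts that have no dependent rows, so the FK
-- constraint is never violated.
DELETE c
FROM dbo.Contact AS c
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.ContactAddress AS a
    WHERE a.ContactID = c.ContactID
);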
If you're using MySQL you can take advantage of the DELETE IGNORE syntax.
This is a feature which will depend entirely on which flavour of database you are using. Some will have it and some won't.
For instance, Oracle offers us the ability to log DML errors in bulk. The example in the documentation uses an INSERT statement but the same principle applies to any DML statement.
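A hedged sketch of what that can look like (names are illustrative; the err$_contact table would be created beforehand with DBMS_ERRLOG.CREATE_ERROR_LOG('CONTACT')):
-- Rows that cannot be deleted go to ERR$_CONTACT instead of failing the
-- whole statement.
DELETE FROM contact
LOG ERRORS INTO err$_contact ('bulk delete') REJECT LIMIT UNLIMITED;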