Atomicity of a job execution in SQL Server

I would like to find the proper documentation to confirm my thinking about a SQL Server job I recently wrote. My fear is that data could be inconsistent for a few milliseconds (the window between the start of the job execution and its end).
Let's say the job is set up to run every 30 minutes. It will have only one step, with the following SQL statements:
DELETE FROM myTable;
INSERT INTO myTable
SELECT *
FROM myTableTemp;
Could it happen that a SELECT query is executed exactly in between the DELETE statement and the INSERT statement, and thus returns empty results?
And what if I had created two steps in my job, one for the DELETE query and another for the INSERT INTO? Is atomicity protected by SQL Server between the steps of one job?
Thanks for your help on this one

No, there is no automatic atomic handling of jobs, whether they consist of multiple statements or multiple steps.
Use this:
BEGIN TRANSACTION;
DELETE FROM myTable;
INSERT INTO myTable
SELECT * FROM myTableTemp;
-- ... anything else you need to be atomic
COMMIT TRANSACTION;
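If the job must never leave myTable empty after a failure, it is worth pairing the transaction with error handling. A minimal sketch, reusing the table names from the question; SET XACT_ABORT ON and the TRY/CATCH block are standard T-SQL, not something this answer originally specified (THROW requires SQL Server 2012 or later; use RAISERROR on older versions):
SET XACT_ABORT ON;  -- roll back the whole transaction on most runtime errors

BEGIN TRY
    BEGIN TRANSACTION;

    DELETE FROM myTable;

    INSERT INTO myTable
    SELECT *
    FROM myTableTemp;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- the DELETE is undone, so the table is never left empty
    THROW;  -- re-raise so the job step reports failure
END CATCH;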

Related

SQL INSERT - how to execute a list of queries automatically

I've never done this, so apologies if I'm being quite vague.
Scenario
I need to run a long series of INSERT SQL queries. The data is inserted into a table to be processed by a web service's client; i.e., the data is uploaded to a different server, and the table gets cleared as the process progresses.
What I've tried
I have tried adding a delay before each INSERT statement, like so:
WAITFOR DELAY '00:30:00'
INSERT INTO TargetTable (TableName1, Id, Type) SELECT 'tablename1', ID1, 1 FROM tablename1
WAITFOR DELAY '00:30:00'
INSERT INTO TargetTable (TableName2, Id, Type) SELECT 'tablename2', ID2, 1 FROM tablename2
But this has the disadvantage of assuming that a query will finish executing within 30 minutes, which may not be the case.
Question
I have run the queries manually in the past, but that's excruciatingly tedious. So I would like to write a program that does that for me.
The program should:
Run each query in the order given
Wait to run the next query until the previous one has been processed, i.e. until the target table is clear.
I'm thinking of a script that I can paste into the command prompt, into SQL itself, or whatever, and run.
How do I go about this? A Windows service application? A PowerShell function?
I would appreciate any pointers to get me started.
You need to schedule a job in SQL Server:
http://www.c-sharpcorner.com/UploadFile/raj1979/create-and-schedule-a-job-in-sql-server-2008/
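If the real requirement is to wait until the target table is clear before running the next INSERT (rather than waiting a fixed 30 minutes), a simple polling loop inside the job step can do that. A minimal sketch, reusing the table and column names from the question; the 30-second poll interval is an arbitrary choice:
INSERT INTO TargetTable (TableName1, Id, Type)
SELECT 'tablename1', ID1, 1 FROM tablename1;

-- Poll until the web service's client has drained the table
WHILE EXISTS (SELECT 1 FROM TargetTable)
    WAITFOR DELAY '00:00:30';

INSERT INTO TargetTable (TableName2, Id, Type)
SELECT 'tablename2', ID2, 1 FROM tablename2;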

How to test your query first before running it in SQL Server

I once made a silly mistake at work on one of our in-house test databases. I was updating a record I had just added because I made a typo, but it resulted in many records being updated, because in the WHERE clause I used the foreign key instead of the unique ID of the particular record I had just added.
One of our senior developers told me to do a SELECT first, to test which rows it will affect, before actually editing them. Besides this, is there a way to execute your query and see the results, but not have it committed to the DB until I tell it to do so? Next time I might not be so lucky. It's a good job only senior developers can do live updates!
It seems to me that you just need to get into the habit of opening a transaction:
BEGIN TRANSACTION;
UPDATE [TABLENAME]
SET [Col1] = 'something', [Col2] = '..'
OUTPUT DELETED.*, INSERTED.* -- So you can see what your update did
WHERE ....;
ROLLBACK;
Then you just run it again after seeing the results, changing ROLLBACK to COMMIT, and you are done!
If you are using Microsoft SQL Server Management Studio, you can go to Tools > Options... > Query Execution > ANSI > SET IMPLICIT_TRANSACTIONS, and SSMS will open the transaction automatically for you. Just don't forget to commit when you must, and be aware that you may be blocking other connections until you commit, roll back, or close the connection.
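The same behaviour can also be switched on per session from T-SQL. A minimal sketch, using a hypothetical dbo.Employees table for illustration:
SET IMPLICIT_TRANSACTIONS ON;

-- The first DML statement implicitly opens a transaction
UPDATE dbo.Employees
SET Salary = Salary * 1.10
WHERE EmployeeId = 42;

SELECT @@TRANCOUNT;  -- returns 1: an implicit transaction is now open

ROLLBACK TRANSACTION;  -- or COMMIT TRANSACTION once you are happy with the result

SET IMPLICIT_TRANSACTIONS OFF;  -- restore the default when done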
First, assume you will make a mistake when updating a DB, so never do it unless you know how to recover; if you don't, don't run the code until you do.
The most important idea: it is a dev database, so expect it to get messed up, and make sure you have a quick way to reload it.
Doing a SELECT first is always a good idea, to see which rows are affected.
However, for a quicker way back to a good state of the database (which I would do anyway):
For a simple update etc., use transactions.
Do a BEGIN TRANSACTION, then do all the updates, and then SELECT to check the data.
As far as others can see, the database will not be affected until you do the final COMMIT, which you only do when you are sure everything is correct; otherwise do a ROLLBACK to get back to the state at the beginning.
If you must test in a production database and you have the requisite permissions, then write your queries to create and use temporary tables whose names are similar to the production tables and whose schema, other than index names, is identical. Index names are unique across a database, at least on Informix.
Then run your queries and look at the data.
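A minimal sketch of that approach, using a hypothetical dbo.Customers table (note that SELECT ... INTO copies the data and basic column definitions, but not indexes or constraints):
-- Take a scratch copy of the production data
SELECT *
INTO #Customers_test
FROM dbo.Customers;

-- Rehearse the risky statement against the copy
UPDATE #Customers_test
SET Status = 'inactive'
WHERE RegionId = 7;

-- Inspect the outcome before touching the real table
SELECT * FROM #Customers_test WHERE RegionId = 7;

DROP TABLE #Customers_test;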
Other than that, IMHO you need a development database, and perhaps even a development server with a development instance. That's paranoid advice, but you'd have to be very careful, even if you were allowed -- MS SQLSERVER lingo here -- a second instance on the same server.
I can reload our test database at will, and that's why we have a test system. Our production system contains citizens' tax payments and other information that cannot be harmed, "or else".
For our production data changes, we always ensure that we use a BEGIN TRAN and a ROLLBACK TRAN, and that all statements have an OUTPUT clause. This way we can run the script first (usually in a copy of the PRODUCTION DB) and see what is affected, before changing the ROLLBACK TRAN to COMMIT TRAN.
Have you considered EXPLAIN?
If there is a mistake in the command, it will report it as with usual commands.
But if there are no mistakes it will not run the command, it will just explain it.
Example of a "passed" test:
testdb=# explain select * from sometable ;
QUERY PLAN
------------------------------------------------------------
Seq Scan on sometable (cost=0.00..12.60 rows=260 width=278)
(1 row)
Example of a "failed" test:
testdb=# explain select * from sometaaable ;
ERROR: relation "sometaaable" does not exist
LINE 1: explain select * from sometaaable ;
It also works with INSERT, UPDATE and DELETE (i.e. the "dangerous" ones).
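Note that the examples above use PostgreSQL syntax; T-SQL has no EXPLAIN statement. The closest SQL Server equivalent is SHOWPLAN, which compiles statements and returns the estimated plan without executing them. A minimal sketch, reusing myTable from the first question (SET SHOWPLAN_ALL must be the only statement in its batch, hence the GO separators):
SET SHOWPLAN_ALL ON;
GO

-- Parsed, compiled and planned, but NOT executed: no rows are deleted
DELETE FROM myTable;
GO

SET SHOWPLAN_ALL OFF;
GO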

SQL Server query dry run

I run a lot of queries that perform INSERTs, INSERT ... SELECTs, UPDATEs and ALTERs on tables, and when developing these queries, the intermediate steps I run to test that various parts of the query work can change the table or the data within it.
Is it possible to do a dry run of a query and have SQL Management Studio give you what the results would be, without actually modifying the data or the table structure?
At the moment I have to back up the database and run the query. If it works, good; if it doesn't, I have to restore the database (which can take around an hour), and I'm trying to avoid wasting all this time restoring databases.
Use an SQL transaction to make your changes then back them out.
Before you execute your script:
BEGIN TRANSACTION;
After you execute your script and have done your checking:
ROLLBACK TRANSACTION;
Every change in your script will then be undone.
Note: Make sure you don't have a COMMIT in your script!
Begin the transaction, perform the table operations, and rollback as shown below:
BEGIN TRAN
UPDATE C
SET column1 = 'XXX'
FROM table1 C
SELECT *
FROM table1
WHERE column1 = 'XXX'
ROLLBACK TRAN
This rolls back all the operations performed since the beginning of the transaction.

SQL Server Profiler 2005: How to measure execution time of insert statement with trigger?

I want to measure the execution time (using, I guess, the Duration column from SQL Server Profiler) of an INSERT statement that has an INSTEAD OF INSERT trigger on it. How do I measure the complete time of this statement, including the trigger time?
The execution time (duration) that you see in SQL server profiler for a query is the time it took to execute that query including evaluating any triggers or other constraints.
Because triggers are intended to be used as an alternative way to check data integrity, an SQL statement is not considered to have completed until any triggers have also finished.
Update: An overview of some commonly used SQL Server profiler events:
SQL:BatchCompleted - occurs when a SQL Server batch (a group of statements) has completed execution; the duration is the total time to execute the batch.
SQL:StmtCompleted - occurs when a SQL statement executed as part of a batch completes execution; again, the duration is the time to execute that single statement.
SP:Completed - occurs when a stored procedure has completed execution; the duration shown is the time to complete execution of the stored procedure.
SP:StmtCompleted - occurs when a SQL statement executed as part of a stored procedure completes.
A batch is a set of SQL statements separated by GO; however, to understand the above you should also know that all SQL Server commands are executed in the context of a batch*.
Also, each of the above events has a corresponding Starting event - SP:Starting, SQL:BatchStarting, SQL:StmtStarting and SP:StmtStarting. These don't list durations (we don't know the duration yet, because the statement hasn't completed), but they do help show where the duration recording starts.
To better understand the relationship between these events, I recommend that you experiment with capturing some traces of some simple examples (from within SQL Server Management Studio), for example:
SELECT * FROM SomeTable
GO
SELECT * FROM SomeTable
SELECT * FROM OtherTable
GO
SELECT * FROM SomeTable
exec SomeProc
GO
As you should see, for each of the 3 examples above you always get a SQL:BatchStarting and a SQL:BatchCompleted; the other event types, however, provide more detail on the individual commands run.
For this reason I generally tend to use the SQL:BatchCompleted event the most, however if the statement you are attempting to measure is executed as part of a larger batch (or in a stored procedure) then you may find one of the other event classes helpful.
See TSQL Event Category (MSDN) for more information on the various SQL Server Profiling events - there are lots!
Finally, if you are executing this command from within SQL Server Management Studio, be aware that the simplest way to record the execution time is to use the client statistics feature (Query > Include Client Statistics).
(*) I'm pretty sure that everything is executed as part of a batch, although I've not managed to find any evidence on the internet to confirm this.
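As an alternative to Profiler, the duration can also be measured from the session itself with SET STATISTICS TIME. A minimal sketch, using a hypothetical dbo.OrdersAudit table that has an INSTEAD OF INSERT trigger; the elapsed time reported for the INSERT includes the trigger's work, since the statement does not complete until the trigger has finished:
SET STATISTICS TIME ON;

-- The Messages pane will report CPU and elapsed time for this statement
INSERT INTO dbo.OrdersAudit (OrderId, Amount)
VALUES (1, 99.50);

SET STATISTICS TIME OFF;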

SQL Server 2005: Why would a delete from a temp table hang forever?

DELETE FROM #tItem_ID
WHERE #tItem_ID.Item_ID NOT IN (SELECT DISTINCT Item_ID
                                FROM Item_Keyword
                                JOIN Keyword ON Item_Keyword.Keyword_ID = Keyword.Record_ID
                                WHERE Keyword LIKE @tmpKW)
The keyword is %macaroni%. Both temp tables are 3300 rows long. The inner SELECT executes in under a second. All strings are nvarchar(249); all IDs are int.
Any ideas? I executed it (it's in a stored proc) for over 12 minutes without it finishing.
Anything that has a lock on the table could prevent the query from proceeding. Do you have any other sessions open in SSMS where you have done something to one of the tables?
You can also use the system sproc sp_who2 to see if there are any open locks. Look in the BlkBy column to see if anything is hanging on a lock held by another process.
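A minimal sketch of both checks; sp_who2 is the system procedure mentioned above, and the DMV query is an alternative that is available from SQL Server 2005 onwards:
EXEC sp_who2;  -- check the BlkBy column for the SPID holding the blocking lock

-- Or query the DMVs directly for requests that are currently blocked
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;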
The classic case where SQL Server "hangs" is when you open a transaction but don't commit or rollback. Don't get so wrapped up in your actual delete (unless you are working with a truly huge dataset) that you fail to consider this possibility.
When I run into issues like this, I fire up SQL Heartbeat, and it can show me what is causing the conflict. This also shows deadlocks related to transactions that were not closed correctly, as Mark states above.
Sounds like you might want to read up on deadlocking...
** Ignore my answer - I'm leaving it up so that Dr. Zim's comment is preserved, but it is incorrect **