Is there any way to safely run SELECT INTO? - sql

I have a script that runs a SELECT INTO to populate a table. To my knowledge, there are no other procedures that might be concurrently referencing or modifying this table. Once in a while, however, I get the following error:
Schema changed after the target table was created. Rerun the Select
Into query.
What can cause this error and how do I avoid it?
I did some googling, and this link suggests that SELECT INTO cannot be used safely without some crazy try-catch-retry logic. Is this really the case?
I'm using SQL Server 2012.

Unless you genuinely don't know the fields and data types in advance, I'd recommend first creating the table and then adding the data with an INSERT statement. In your link, David Moutray suggests the same thing; here's his example code verbatim:
CREATE TABLE #TempTableY (ParticipantID INT NOT NULL);
INSERT #TempTableY (ParticipantID)
SELECT ParticipantID
FROM TableX;
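If you must stick with SELECT INTO, the try-catch-retry logic the question mentions would look roughly like this (a sketch only, reusing the hypothetical #TempTableY/TableX names; THROW requires SQL Server 2012 or later):
DECLARE @tries INT = 0;
WHILE @tries < 3
BEGIN
    BEGIN TRY
        -- Attempt the SELECT INTO; a concurrent schema change raises the error
        SELECT ParticipantID
        INTO #TempTableY
        FROM TableX;
        BREAK; -- success, leave the retry loop
    END TRY
    BEGIN CATCH
        SET @tries += 1;
        -- Drop any partially created table before retrying
        IF OBJECT_ID('tempdb..#TempTableY') IS NOT NULL
            DROP TABLE #TempTableY;
        IF @tries >= 3
            THROW; -- give up and surface the original error
    END CATCH
END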

Related

SQL - Table not found after backup

I saved a SQL table before deleting some information from it, using the SQL statement:
select * into x_table from y_table
After doing some operations, I want to get back some information from the table I saved with the query above. Unfortunately, SQL Server Management Studio (SSMS) shows an error saying that the table does not exist.
However, when I write a DROP statement, the table is recognized and its name is not underlined.
Any idea why this table is recognized by the DROP TABLE statement but not by the SELECT statement? This seems strange to me.
It may be that the table isn't underlined in your drop table command because its name is still in your IntelliSense cache. Select Edit -> IntelliSense -> Refresh Local Cache in SSMS (or just press Ctrl+Shift+R) and see if the table name is underlined then.
Edit:
Another possibility is that your drop table command might be in the same batch as another statement that creates the table, in which case SSMS won't underline it because it knows that even though the table doesn't exist now, it will exist by the time that command is executed. For instance:
None of the tables one, two, or three existed in my database when I took the screenshot (the batch is reconstructed below). If I highlight line 6 and try to run it by itself, it will fail. Yet two is not underlined on line 6, because SSMS can see that if I run the whole script, the table will be created on line 5. On the other hand, three is underlined on line 9 because I commented out the code that would have created it on line 8.
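Since the screenshot isn't reproduced here, this is roughly the batch it showed (the line numbers from the description are marked in comments):
CREATE TABLE one (id int);       -- line 4
CREATE TABLE two (id int);       -- line 5
SELECT * FROM two;               -- line 6: two is not underlined
-- CREATE TABLE three (id int);  -- line 8: commented out
SELECT * FROM three;             -- line 9: three is underlined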
All of that said, I think we might be making too much of this problem. If you try to select from a table and SQL Server tells you it doesn't exist, then it doesn't exist. You can't rely on IntelliSense to tell you that it does; the two examples above are probably not the only ways that IntelliSense might mislead you about the current status of a table.
If you want the simplest way to know whether an object with a given name (like x_table) exists, just use:
select object_id('x_table');
If this query returns null, x_table doesn't exist, regardless of what IntelliSense is telling you. If it returns non-null, then there is some object out there with that name, and then the real question is why your select statement is failing. And to answer that, I'd need to see the statement.
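For instance (schema-qualifying the name avoids surprises when your default schema differs; dbo here is an assumption):
IF OBJECT_ID('dbo.x_table') IS NULL
    PRINT 'x_table does not exist';
ELSE
    PRINT 'x_table exists (as some kind of object)';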
A lot of posts like this suggest copying the table in two statements. Note that CREATE TABLE ... LIKE is MySQL syntax; in SQL Server, copy the structure with SELECT TOP 0 ... INTO instead:
SELECT TOP 0 * INTO newtable FROM oldtable;
INSERT INTO newtable SELECT * FROM oldtable;

Cannot drop table created within same script

I have a script in which I create a temporary table that I subsequently want to drop.
I simply create the table and fill it using an INSERT INTO statement, but when it comes to dropping it, the script fails, stating that the table is in use.
From reading around, this would seem to be caused by transaction management, but I'm a little confused.
Here is a little script that reproduces the issue:
CREATE TABLE SCRIPT_TEMP (
NAME VARCHAR(100) NOT NULL,
USERNAME VARCHAR(150) NOT NULL);
COMMIT WORK;
INSERT INTO SCRIPT_TEMP (NAME, USERNAME)
SELECT NAME, COALESCE(USERNAME, 'empty')
FROM SALESREPS;
COMMIT WORK;
DROP TABLE SCRIPT_TEMP;
COMMIT WORK;
Or, to make it easily testable for anyone without a SALESREPS table, use this INSERT statement :o)
INSERT INTO SCRIPT_TEMP (NAME, USERNAME)
SELECT 'Name 1', 'Username 1'
FROM RDB$DATABASE;
COMMIT WORK;
I fail to see what still holds a reference to the SCRIPT_TEMP table by the time the drop call is made. Why would the script's own transaction block it even after the second COMMIT?
If I split the execution into two scripts, everything is fine.
What am I missing??
Thanks!!
PS: using Firebird 2.5.2, in case that matters
PPS: my script is a bit more involved than this. The temp table is filled with table names and related constraints that need to be manipulated, but that part is working well and isn't the issue. The issue I want to resolve is easily reproducible with the code sample above, which I arrived at while debugging. This SO question seems to be about the exact same problem, but its only answer doesn't help here.
Thanks to Val Marinov for pointing me in the right direction.
It appears my issue is due to my SQL Manager - there must be a setting I'm not aware of.
Script runs as expected in FlameRobin.

How to figure out why rows are disappearing from my SQL Server?

I'm working with a SQL Server 2008 installation that was maintained for years by another team of programmers.
I'm having a problem where rows of data seem to be mysteriously disappearing from a specific table on my server.
I would like to be able to set up some sort of monitoring system that would tell me when the table is modified, and a summary of the modification.
I think that "triggers" might be what I'm looking for, but I've never used them before. Are triggers what I want to use, and if so, what is a good resource for learning to use them? Is there a better solution?
I should mention that the table I'm referring to is not updated very frequently, so adding a little overhead shouldn't be a big deal, but I would prefer a solution that I can remove once the problem is resolved.
A FOR DELETE trigger could help you capture the rows that are being deleted. You could create an audit table (a copy of the table you'd like to monitor) and then add this code to your trigger:
INSERT INTO [Your Audit Table]
SELECT * FROM deleted
I've also seen some "more advanced" scenarios involving FOR XML.
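One hypothetical shape for that: store all rows deleted by a statement as a single XML document (the table and column names here are made up):
CREATE TABLE MyWatchedTable (col1 int, col2 varchar(10));
GO
CREATE TABLE MyXmlAudit (LogDate datetime, DeletedRows xml);
GO
-- Capture every deleted row as one XML document per DELETE statement
CREATE TRIGGER tr_MyWatchedTable_Del ON MyWatchedTable AFTER DELETE
AS
INSERT MyXmlAudit (LogDate, DeletedRows)
SELECT GETDATE(),
       (SELECT * FROM deleted FOR XML PATH('row'), ROOT('deleted'), TYPE);
GO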
I don't know that the trigger would help determine who is deleting the records, but you might be able to PROVE that the records are being deleted, and perhaps what time, etc. That could help you troubleshoot further.
The following sample should be a basic idea of what you're looking for.
-- Table to monitor, plus a log table that captures the old row values
CREATE TABLE MyTestTable (col1 int, col2 varchar(10));
GO
CREATE TABLE MyLogTable (col1 int, col2 varchar(10), ModDate datetime, ModBy varchar(50));
GO
-- Log the prior version of each row, along with when and by whom it was changed
CREATE TRIGGER tr_MyTestTable_IO_UD ON MyTestTable AFTER UPDATE, DELETE
AS
INSERT MyLogTable
SELECT col1, col2, GETDATE(), SUSER_SNAME()
FROM deleted;
GO
INSERT MyTestTable VALUES (1, 'aaaaa');
INSERT MyTestTable VALUES (2, 'bbbbb');
UPDATE MyTestTable SET col2 = 'bbbcc' WHERE col1 = 2;
DELETE MyTestTable; -- fires the trigger for both rows
GO
SELECT * FROM MyLogTable;
GO
However, keep in mind that there are still ways of deleting records that won't be caught by a trigger. (TRUNCATE TABLE and various bulk update commands.)
Another solution would be to attach SQL Server Profiler to the database with specific conditions. This will log every query run, for your inspection.
I like to stay away from triggers, but as Draghon said, they could help with your problem.
I think you have it figured out. A trigger is likely your best bet, as it's as close to the data as you can get. Inspecting the code (application code or even a stored procedure) would not give you as much assurance as a trigger would; a DELETE trigger in this case.
Check out this article: http://www.go4expert.com/forums/showthread.php?t=15510

Debugging sub-queries in a T-SQL stored procedure

How do I debug a complex query with multiple nested sub-queries in SQL Server 2005?
I'm debugging a stored procedure and trigger in Visual Studio 2005. I'd like to be able to see what the results of these sub-queries are, as I feel that this is where the bug is coming from. An example query (slightly redacted) is below:
UPDATE
foo
SET
DateUpdated = ( SELECT TOP 1 inserted.DateUpdated FROM inserted )
...
FROM
tblEP ep
JOIN tblED ed ON ep.EnrollmentID = ed.EnrollmentID
WHERE
ProgramPhaseID = ( SELECT ...)
Visual Studio doesn't seem to offer a way for me to watch the result of the subquery. Also, if I use a temporary table to store the results (temporary tables are used elsewhere too), I can't view the values stored in that table.
Is there any way that I can add a watch or in some other way view these sub-queries? I would love it if there were some way to "Step Into" the query itself, but I imagine that isn't possible.
OK, first I would be leery of using subqueries in a trigger. Triggers should be as fast as possible, so get rid of any correlated subqueries, which might run row by row instead of in a set-based fashion; rewrite them as joins. If you only want to update records based on what was in the inserted table, then join to it, and also join to the table you are updating. Exactly what are you trying to accomplish with this trigger? It would be easier to give advice if we understood the business rule you are trying to implement.
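As an illustration, the UPDATE from the question could be rewritten to join to inserted instead of using the TOP 1 subquery (this assumes inserted exposes EnrollmentID, which the redacted query doesn't show):
-- Set DateUpdated from the matching inserted row rather than an arbitrary TOP 1
UPDATE ep
SET DateUpdated = i.DateUpdated
FROM tblEP ep
JOIN tblED ed ON ep.EnrollmentID = ed.EnrollmentID
JOIN inserted i ON i.EnrollmentID = ed.EnrollmentID;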
To debug a trigger, this is what I do. I write a script to:
1. Do the actual insert to the table without the trigger on it.
2. Create a temp table named #inserted (and/or one named #deleted).
3. Populate that table as I would expect the inserted table in the trigger to be populated by the insert you do.
4. Add the trigger code (minus the CREATE or ALTER TRIGGER parts), substituting #inserted every time it references inserted. (If you plan to run it multiple times until you are ready to use it in a trigger, wrap it in an explicit transaction and roll back after checking your results.)
5. Add a query to check the table(s) the trigger changes for the values you wanted to change.
6. If you need to add debug statements to see what is happening between steps, you can do so.
7. Run it, making changes until you get the results you want.
8. Once you have the query working as you expect it to, take the # signs off inserted and use the query to create the body of the trigger. (A sketch of these steps follows below.)
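A minimal sketch of those steps, reusing the tblEP/tblED names from the question (the column names are assumptions):
-- Step 2: a temp table shaped like the trigger's inserted table
CREATE TABLE #inserted (EnrollmentID INT, DateUpdated DATETIME);

BEGIN TRANSACTION;

-- Step 1: the insert the trigger would normally fire on
INSERT INTO tblED (EnrollmentID, DateUpdated)
VALUES (42, GETDATE());

-- Step 3: populate #inserted the way the real inserted table would look
INSERT INTO #inserted (EnrollmentID, DateUpdated)
SELECT EnrollmentID, DateUpdated FROM tblED WHERE EnrollmentID = 42;

-- Step 4: the trigger body, with inserted replaced by #inserted
UPDATE ep
SET DateUpdated = (SELECT TOP 1 DateUpdated FROM #inserted)
FROM tblEP ep
JOIN tblED ed ON ep.EnrollmentID = ed.EnrollmentID;

-- Step 5: check the result, then roll back so the script can be rerun
SELECT * FROM tblEP;

ROLLBACK TRANSACTION;
DROP TABLE #inserted;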
This is what I usually do in this type of scenario:
Print out the exact SQL generated by each subquery, then run each of them in Management Studio as suggested above.
Check that each part is giving you the data you expect.

How can I remove a lock from a table in SQL Server 2005?

I am using a function in a stored procedure. The procedure contains a transaction that updates and inserts values into a table, while the function called by the procedure also fetches data from the same table.
The procedure hangs when it calls the function.
Is there any solution for this?
If I'm hearing you right, you're talking about an insert BLOCKING ITSELF, not two separate queries blocking each other.
We had a similar problem, an SSIS package was trying to insert a bunch of data into a table, but was trying to make sure those rows didn't already exist. The existing code was something like (vastly simplified):
INSERT INTO bigtable
SELECT r.customerid, r.productid, ...
FROM rawtable r
WHERE NOT EXISTS (SELECT 1 FROM bigtable b
                  WHERE b.customerid = r.customerid
                    AND b.productid = r.productid)
AND ... (other conditions)
This ended up blocking itself because the select on the WHERE NOT EXISTS was preventing the INSERT from occurring.
We considered a few different options, I'll let you decide which approach works for you:
Change the transaction isolation level (see this MSDN article). Our SSIS package was defaulted to SERIALIZABLE, which is the most restrictive. (note, be aware of issues with READ UNCOMMITTED or NOLOCK before you choose this option)
Create a UNIQUE index with IGNORE_DUP_KEY = ON. This means we can insert ALL rows (and remove the "WHERE NOT EXISTS" clause altogether). Duplicates will be rejected, but the batch won't fail completely, and all other valid rows will still insert.
Change your query logic to do something like put all candidate rows into a temp table, then delete all rows that are already in the destination, then insert the rest.
In our case, we already had the data in a temp table, so we simply deleted the rows we didn't want inserted, and did a simple insert on the rest.
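For illustration, option 2 might look something like this (the index name and key columns are assumptions):
CREATE UNIQUE INDEX IX_bigtable_customer_product
    ON bigtable (customerid, productid)
    WITH (IGNORE_DUP_KEY = ON);

-- The insert can now run without the NOT EXISTS check; duplicate key rows
-- are discarded with a warning instead of failing the whole batch.
INSERT INTO bigtable (customerid, productid)
SELECT customerid, productid
FROM rawtable;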
This can be difficult to diagnose. Microsoft has provided some information here:
INF: Understanding and resolving SQL Server blocking problems
A brute force way to kill the connection(s) causing the lock is documented here:
http://shujaatsiddiqi.blogspot.com/2009/01/killing-sql-server-process-with-x-lock.html
Some more Microsoft info here: http://support.microsoft.com/kb/323630
How big is the table? Do you have the problem if you call the procedure from separate windows? Maybe the problem is related to the amount of data the procedure is working with and a lack of indexes.