On my SQL Server, I have an INSERT whose SELECT does not produce any rows, and that SELECT statement on its own runs for about a minute. What I don't understand is why the INSERT never finishes. The destination table has only 48k rows. I don't have rights to run any kind of tracing or other diagnostic queries that could help with this. What else can I try?
Turn your INSERT into a SELECT so you can see what you're trying to insert. Then take parts away from your SQL statement (joins, subqueries, etc.) until it starts running quickly. The last thing you removed was the cause of the slowness. Without an example we can't give you more specific help than that. The process of writing a https://stackoverflow.com/help/mcve will probably help you answer this yourself.
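A minimal sketch of that process, with hypothetical table and column names:
-- comment out the INSERT line and run only the SELECT
-- INSERT INTO dbo.Destination (Col1, Col2)
SELECT s.Col1, l.Col2
FROM dbo.Source AS s
JOIN dbo.Lookup AS l ON l.Id = s.LookupId  -- then remove joins/subqueries one at a time
WHERE s.Col1 IS NOT NULL;                  -- ...until the SELECT runs quickly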
Related
In SQL Server Management Studio, when running a query that produces a very large result set, SSMS sometimes appears to display the rows as it loads them, rather than showing them all at once.
My normal assumption would be that it's simply SSMS populating the grid(s) with the results of the finished query, and that the SQL query itself is done.
However, if I run the following query:
SELECT 1
SELECT * FROM EnormousTable
INSERT INTO SomeOtherTable([Column1]) SELECT 'Test3'
That last INSERT does not occur until after the results from the larger resultset have been fully returned.
I have two main questions:
1. What is happening here? Is SSMS breaking down the query into separate batches even without GO statements? Please note that I'm not a DBA, so if there's some fundamental reason for this behaviour that 'any DBA would know', there's a good chance I don't know it.
2. Is there a way to attain similar functionality in .NET? That is, when running a set of queries that produce multiple result sets, is it possible to have a DataSet populated with the results of each successive query as it finishes (without waiting for all the queries to finish), and without me having to break the batch apart manually (unless that's what SSMS is actually doing under the hood)?
How do I make SQL Server commit inserts in chunks? I need to copy a large number of rows from an old database into a new table, and this has several problems:
it takes ages to finish, and I don't see any rows in my table until the entire transaction is finished.
my log file is growing like crazy and it will probably run out of space.
if something breaks in the middle, I have to repeat everything.
If I add SET ROWCOUNT 500, I can limit the number of rows, but I don't know how to continue from the last inserted ID. I might query the new table to see what got inserted last, but I am not sure that's the right thing to do. It's also a bit difficult because my WHERE clause does not use the ID column, so I am not sure how to know exactly where to continue.
What's the best approach for this? Is there a "for loop" or something which would allow me to commit every once in a while?
I am using SSMS for SQL Server 2008 R2.
Even if TomTom's answer is sarcastic, it contains two basic options, which may help you:
You can write a loop in T-SQL (see for example WHILE) and use TOP to select chunks; you need an ORDER BY. (OFFSET ... FETCH would also work, but only from SQL Server 2012 onwards.) You can minimize logging, according to Microsoft. And if you just worry about restarting without redoing everything, this should be fine, though I don't expect it to be fast (a sketch follows below these options).
You can export your selection to a file and use BULK INSERT to load it.
Some more options you may find here (About Bulk Import and Bulk Export Operations) and here (INSERT Section Best Practices).
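A minimal sketch of the loop option; the table, column, and key names are hypothetical, and since OFFSET ... FETCH is unavailable on 2008 R2, the loop keys off the last copied ID instead:
DECLARE @BatchSize int = 500;
DECLARE @LastId int = 0;     -- resumes from here after a restart
DECLARE @Rows int = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.NewTable (Id, Payload)
    SELECT TOP (@BatchSize) o.Id, o.Payload
    FROM dbo.OldTable AS o
    WHERE o.Id > @LastId
    ORDER BY o.Id;

    SET @Rows = @@ROWCOUNT;  -- 0 once everything has been copied

    SELECT @LastId = ISNULL(MAX(Id), 0) FROM dbo.NewTable;
END
Each INSERT commits on its own (autocommit), so rows become visible chunk by chunk and, under the simple recovery model, the log can be truncated between batches.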
How do I make SQL Server commit inserts in chunks?
You write code that inserts it in chunks; quite easy.
1: Yes, that is how transactions work, you know.
2: Yes, that is how transactions work, you know.
3: Yes, guess what - THAT IS HOW TRANSACTIONS WORK, you know ;)
but I don't know how to continue from the last inserted ID
Called programming. Generate IDs on the client side. Remember the last one you generated.
In general, I would advise not doing the INSERT in code - depending on how your code works, this sounds like a data copy operation, and there are other mechanisms for that (the bulk copy interface).
I am using SSMS for SQL Server 2008 R2
Basically it behaves as you program it. You can easily put some loop on the SSMS side, or export to a file and then bulk insert the file. That won't help you with item 2 though... unless you switch to the simple recovery model and do not care about restores.
Also, item 3 is complex then - how do you properly restart? Lots of additional code.
The best approach depends on the circumstances, and sadly you are way too vague to make more than a blind guess.
I have been working on migrating some of our data from Microsoft SQL Server 2000 to 2008. Among the usual hiccups and whatnot, I’ve run across something strange. Linked below is a SQL query that returns very quickly under 2000, but takes 20 minutes under 2008. I have read quite a bit on upgrading SQL Server and went down the usual paths of checking indexes, statistics, etc. before coming to the conclusion that the following statement, found in the WHERE clause, causes the execution plan for the steps that follow this statement to change dramatically:
And (
@bOnlyUnmatched = 0 -- offending line
Or Not Exists(
The SQL statements and execution plans are linked below.
A coworker was able to rewrite a portion of the WHERE clause using a CASE statement, which seems to “trick” the optimizer into using a better execution plan. The version with the CASE statement is also contained in the linked archive.
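For readers without the archive, the rewrite pattern looks roughly like this; the table and column names here are hypothetical, not the actual ones from the zip:
And 1 = Case
            When @bOnlyUnmatched = 0 Then 1
            When Not Exists(Select 1
                            From dbo.Matched m
                            Where m.KeyCol = t.KeyCol) Then 1
            Else 0
        End
-- CASE evaluates its WHEN branches in order, which steers the optimizer
-- away from expanding the NOT EXISTS when the variable test already passes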
I’d like to see if someone has an explanation as to why this is happening and if there may be a more elegant solution than using a CASE statement. While we can work around this specific issue, I’d like to have a broader understanding of what is happening to ensure the rest of the migration is as painless as possible.
Zip file with SQL statements and XML execution plans
Thanks in advance!
We experienced similar problems a few years back in our migration from 2000 to 2005. The error we were seeing was actually an invalid cast error. I think I found the thread here.
The query optimiser has much more freedom in SQL Server >=2005. The CASE solution is probably the best route.
I've an SSIS package that runs a stored proc for exporting to an Excel file. Everything worked like a champ until I needed to do a bit of rewriting on the stored proc. The proc now takes about 1 minute to run and the exported columns are different, so my problems are the following:
1) SSIS complains when I hit the preview button: "No column information returned by command".
2) It times out after about 30 seconds.
What I've done.
Tried to clean up/optimize the query. That helped a bit, but it still is doing some major calculations and it runs just fine in SSMS.
Changed the timeout values to 90 seconds. Didn't seem to help. Maybe someone here can?
Thanks,
Found this little tidbit which helped immensely.
No Column Names
Basically all you need to do is add the following to your SQL query text in SSIS.
SET FMTONLY OFF
SET NOCOUNT ON
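So the SQL command text in the SSIS source ends up looking something like this (the procedure name here is hypothetical):
SET FMTONLY OFF
SET NOCOUNT ON
EXEC dbo.usp_ExportToExcel  -- the rewritten proc the package calls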
Only problem now is it runs slow as molasses :-(
EDIT: It's running just too damn slow.
Changed from using #tempTable to a permanent tempTable, adding in the appropriate DROP statements. argh...
Although it appears you may have answered part of your own question, you are probably getting the "No column information returned by command" error because the table doesn't exist at the time it tries to validate the metadata. Creating the tables as non-temporary tables resolves this issue.
If you insist on using temporary tables, you can create the temporary tables in the step preceding the data flow. You would need to create it as a ## table and turn off connection sharing for the connection for this to work, but it is an alternative to creating permanent tables.
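A sketch of what that preceding step might run (the table and column names are hypothetical):
IF OBJECT_ID('tempdb..##ExportStaging') IS NOT NULL
    DROP TABLE ##ExportStaging;

CREATE TABLE ##ExportStaging
(
    Column1 nvarchar(100) NULL
);
-- the data flow can now see ##ExportStaging when it validates its metadata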
A shot in the dark based on something obscure I hit years ago: When you modified the procedure, did you add a call to a second procedure? This might mess up SSIS's ability to determine the returned data set.
As for (2), does the procedure take 30+ or 90+ seconds to run in SSMS? If not, do you know that the query is actually getting into SQL from SSIS? Might be worth firing up SQL Profiler to see what's actually being sent to SQL Server. [Which was the way I found out my obscure factoid.]
When the SQL Server (2000/2005/2008) is running sluggish, what is the first command that you run to see where the problem is?
The purpose of this question is that, when all the answers are compiled, other users can benefit by running your command of choice to narrow down where the problem might be.
There are other troubleshooting posts regarding SQL Server performance but they can be useful only for specific cases.
If you roll out and run your own custom SQL script, then would you let others know:
- what the purpose of the script is
- what it returns (return value)
- what to do with the output to figure out where the problem is
If you could provide the source for the script, please post it.
In my case:
sp_lock
I run it to figure out whether there are any locks (purpose); it returns SQL Server lock information (return value). Since the result set displays object IDs (thus not very human-readable), I usually skim through the results to see if there are abnormally many locks.
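A sketch of that workflow; the object ID below is a placeholder taken from the sp_lock output:
EXEC sp_lock;                  -- one row per lock: spid, dbid, ObjId, type, mode, status

-- translate an ObjId from the output into a readable name
-- (run this in the database identified by the dbid column)
SELECT OBJECT_NAME(12345678);  -- 12345678 is a placeholder ObjId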
Why run a single query when a picture is worth a thousand words!
I prefer to run the freely available Performance Dashboard Reports.
They provide a complete snapshot overview of your server's performance in seconds. You can then choose a specific area to investigate (locking, currently running queries, wait requests, etc.) simply by clicking the appropriate area on the Dashboard.
http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en
One slight caveat: I believe these are only available in SQL 2005 and above.
sp_who
http://msdn.microsoft.com/en-us/library/aa260384(SQL.80).aspx
I want to see "who", what machines/users are running what queries, length of time, etc. I can also easily scan for blocks.
If something is blocking a bunch of other transactions I can use the spid to issue a kill command if necessary.
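For example (the spid below is a placeholder taken from the sp_who output):
EXEC sp_who;  -- who is connected, from where, and which spid is blocked by which

-- if one spid is blocking a bunch of others, it can be terminated (use with care)
KILL 53;      -- 53 is a placeholder spid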
sp_who3 - Provides a lot of information available elsewhere, but in one nice output. Also has several parameters to allow customized output.
A custom query which combines what you would expect in sp_who with DBCC INPUTBUFFER(spid) to get the last query text on each spid ordered by the blocked/blocking graph.
Process data is available via master..sysprocesses.
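A rough sketch of that kind of query; the spid passed to DBCC INPUTBUFFER is a placeholder:
-- spids that are blocked, and who is blocking them
SELECT spid, blocked, waittime, hostname, program_name, loginame
FROM master..sysprocesses
WHERE blocked <> 0
ORDER BY waittime DESC;

-- last batch submitted on a suspicious spid from the output above
DBCC INPUTBUFFER(53);  -- 53 is a placeholder spid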
sp_who3 returns standard sp_who2 output until you specify a specific spid; it then gives 6 different recordsets about that spid, including locks, blocks, what it's currently doing, the T-SQL it's running, and the statement within that T-SQL which is currently executing.
Ian Stirk has a great script I like to use as detailed in this article: http://msdn2.microsoft.com/en-ca/magazine/cc135978.aspx
In particular, I like the missing indexes one:
SELECT
DatabaseName = DB_NAME(database_id)
,[Number Indexes Missing] = count(*)
FROM sys.dm_db_missing_index_details
GROUP BY DB_NAME(database_id)
ORDER BY 2 DESC;
DBCC OPENTRAN to see what the oldest active transaction is
Displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions, if any, within the specified database. Results are displayed only if there is an active transaction or if the database contains replication information. An informational message is displayed if there are no active transactions.
followed by sp_who2
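Run in the database you suspect, that looks like:
DBCC OPENTRAN;  -- reports the SPID and start time of the oldest active transaction
EXEC sp_who2;   -- cross-reference that SPID to see the login, host, and blocking chain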
I use queries like these:
Number of open/active connections in ms sql server 2005
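For instance, a common query along those lines for 2000-2008, counting connections per database via sysprocesses:
SELECT DB_NAME(dbid) AS DatabaseName,
       COUNT(dbid)   AS Connections
FROM master..sysprocesses
WHERE dbid > 0
GROUP BY dbid;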