I have a procedure that fills up a bunch of regular and temp tables with millions of records and takes hours to complete. It's no problem when I run it alone.
However, I've been trying to improve performance and am trying a SQL Profiler tuning trace. Now the procedure bombs every time with:
Could not continue scan with NOLOCK due to data movement.
If I turn off SQL Profiler it works again. My system is SQL Server 2008 R2 64-bit SP2 on a Dell Precision T5400 quad Xeon with 8 GB RAM and plenty of storage capacity (3 TB), running Windows Server 2008 64-bit (latest SP).
The solution was to reduce the complexity of the operation: it was inserting millions of records in one big load, so I broke it into several smaller loads and the error went away.
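For anyone hitting the same thing, here is a minimal sketch of that kind of batched load; the table and column names are made up and the real procedure will differ:

-- Hypothetical batched load: replace one huge INSERT with a loop over
-- fixed-size key ranges so each statement moves a manageable chunk.
DECLARE @BatchSize int = 500000;
DECLARE @MinId bigint, @MaxId bigint;

SELECT @MinId = MIN(Id), @MaxId = MAX(Id) FROM dbo.SourceTable;

WHILE @MinId <= @MaxId
BEGIN
    INSERT INTO dbo.TargetTable (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM dbo.SourceTable
    WHERE Id >= @MinId AND Id < @MinId + @BatchSize;

    SET @MinId = @MinId + @BatchSize;
END;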
Related
We had a SQL Server 2005 server running FOR XML EXPLICIT queries quite happily with no performance issues. The machine (a Windows 2003 server) has unfortunately died, so I've had to do an emergency provision of a Windows 2012 box. The database files have been reattached to a SQL Server 2008 R2 instance and they "work". However, the queries are horrendously slow: 5 seconds per query when previously they ran in fractions of a second. This makes the websites that they power unusable.
I've rebuilt all the indexes and I've run DBCC FREEPROCCACHE on all machines, but this has had no noticeable effect. What else can I look at? I can't run the queries on the 2016 SQL instance on the box because some of them use non-ANSI *= joins (I said it was old!).
If your query was running fine before, consider what else has changed - comparing the query plans, and in particular the actual execution plans, on the old and new servers might help to pinpoint this.
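If it helps, this is one generic way to capture the timing figures and the actual plan from Management Studio (put your own query in the middle):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SET STATISTICS XML ON;   -- returns the actual execution plan as an XML column

-- ... run the slow FOR XML EXPLICIT query here ...

SET STATISTICS XML OFF;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;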
When you say you are joining, have you considered how much you join? If the new machine has more data in the database, a join can quickly become prohibitively expensive. Reducing the data you need to handle means less workload.
Is there something you can pre-calculate before you run your query, or otherwise change to make it run faster?
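One common form of pre-calculation (purely illustrative table names) is to materialise the expensive part into a temp table once and join against that:

-- Hypothetical example: aggregate once up front instead of inside the main query.
SELECT CustomerId, SUM(Amount) AS TotalAmount
INTO #CustomerTotals
FROM dbo.Orders
GROUP BY CustomerId;

CREATE CLUSTERED INDEX IX_CustomerTotals ON #CustomerTotals (CustomerId);

SELECT c.Name, t.TotalAmount
FROM dbo.Customers AS c
JOIN #CustomerTotals AS t ON t.CustomerId = c.CustomerId;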
I assume you do a SELECT, but if you UPDATE or DELETE data, the indexes also need to be maintained, which takes a long time (in this case, disable the index, insert all the needed data and then rebuild the index).
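Something along these lines - the index and table names are only placeholders, and don't disable the clustered index or the table becomes unreadable until it is rebuilt:

ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable DISABLE;

INSERT INTO dbo.BigTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM dbo.StagingTable;

ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable REBUILD;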
You don't mention any XML handling, but you have tagged the question for-xml. If your join is performed on XML data, using XQuery to get the data might also give a boost to performance.
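For example (made-up table and XML structure), pulling values straight out of an xml column rather than joining on pre-shredded data:

SELECT d.OrderId,
       d.Payload.value('(/Order/Customer/@Id)[1]', 'int') AS CustomerId
FROM dbo.OrderDocs AS d
WHERE d.Payload.exist('/Order[@Status="Open"]') = 1;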
I have a 64-bit SQL Server 2008 R2 instance. Recently I have been observing a huge performance issue on the server. The physical memory of the server is 28 GB. This server is the data warehouse and contains tables with more than 4 million records.
Whenever a job runs, memory consumption goes to 98% and the execution rate is extremely slow, reaching one record per minute. The job is run through an SSIS package.
I want to know what is causing the issue; any help on this is appreciated.
Good day.
I need to get records from an Oracle database into a SQL Server database. The data source (ODBC) is read using a SQL command, in which I make use of every index I can according to my requirements. The process runs fine; the problem is that it takes a long time and I need it to be quick. The process is not done with a Lookup or a Merge/Merge Join; it simply loads a table from Oracle into SQL Server under certain conditions.
Thank you for your help
Check what your limiting factor is. Generally there are three points to check:

1. The source DB: the remote server is slow. The source DB can run low on memory, read speed or free CPU. Substitute your query with a straight SELECT statement with no WHERE clause or JOINs and see if your SSIS package runs faster (sketched below).
2. The target DB: you may have indexes enabled, high write latency on the disks, or not enough CPU. Run a plain INSERT into your target table and see how long it takes (also sketched below).
3. The problem may be in the middle: the transfer between the two servers. The network is usually the main bottleneck. Is SSIS hosted on the same server as SQL Server? If it is not, you have two network connections (Oracle to SSIS, then SSIS to SQL Server) plus a possible hardware bottleneck on the dedicated SSIS machine.
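A minimal version of the two isolation tests, with placeholder object names:

-- 1. Source read speed: point the SSIS source at a plain extract with no
--    WHERE clause or JOINs and time it (SRC_TABLE is just an example name).
SELECT * FROM SRC_TABLE;

-- 2. Target write speed: time a plain INSERT into the target table, e.g.
--    copying rows that are already staged on the SQL Server side.
INSERT INTO dbo.TargetTable (Id, Payload)
SELECT Id, Payload
FROM dbo.StagingTable;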
Depending on the bottleneck there are different solutions.
If you have network capacity and the bottleneck is one CPU per query on Oracle, then you can partition your data horizontally (IDs 1 to 100, 101 to 200, etc.), establish multiple connections to Oracle and load the data in several streams. The number of streams is one less than the number of CPUs on Oracle, SSIS or SQL Server (whichever is smaller).
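Roughly like this, assuming a numeric key column ID on the source table; each SSIS data flow gets its own connection and one of these range queries:

SELECT * FROM SRC_TABLE WHERE ID BETWEEN 1 AND 100;     -- stream 1
SELECT * FROM SRC_TABLE WHERE ID BETWEEN 101 AND 200;   -- stream 2
SELECT * FROM SRC_TABLE WHERE ID BETWEEN 201 AND 300;   -- stream 3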
[Cross posted from the Database Administrators site, in the hope that it may gain better traction here. I'll update either site as appropriate.]
I have a stored procedure in SQL Server 2005 (SP2) which contains a single query like the following (simplified for clarity):
SELECT * FROM OPENQUERY(MYODBC, 'SELECT * FROM MyTable WHERE Option = ''Y'' ')
OPTION (MAXDOP 1)
When this proc is run (and run only once) I can see the plan appear in sys.dm_exec_query_stats with a high total_worker_time value (e.g. 34762.196 ms). This is close to the elapsed time. However, in SQL Server Management Studio the statistics show a much lower CPU time, as I'd expect (e.g. 6828 ms). The query takes some time to return because of the slowness of the server it is talking to, but it doesn't return many rows.
I'm aware of the issue that parallel queries in SQL Server 2005 can report odd CPU times, which is why I've tried to turn off any parallelism with the query hint (though I really don't think there was any in any case).
I don't know how to account for the fact that the two ways of looking at CPU usage can differ, nor which of the two might be accurate (I have other reasons for thinking that the CPU usage may be the higher number, but it's tricky to measure). Can anyone give me a steer?
UPDATE: I was assuming that the problem was with the OPENQUERY, so I tried looking at times for a long-running query which doesn't use OPENQUERY. In this case the statistics (obtained by setting STATISTICS TIME ON) reported the CPU time as 3315 ms, whereas the DMV gave it as 0.511 ms. The total elapsed times in each case agreed.
total_worker_time in sys.dm_exec_query_stats is cumulative - it is the total CPU (worker) time for all executions of the currently compiled version of the query; see execution_count for the number of executions this represents.
See last_worker_time, min_worker_time or max_worker_time for the timing of individual executions.
reference: http://msdn.microsoft.com/en-us/library/ms189741.aspx
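For example, something like this gives per-execution figures rather than the running total (note that the worker-time columns are reported in microseconds):

SELECT qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_worker_time,
       qs.last_worker_time,
       qs.min_worker_time,
       qs.max_worker_time,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;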
I have two servers I'm doing development on, and I'm not a DBA, but we don't have one, so I'm trying to figure out some performance issues I'm having. Locally I have SQL Server 2008 R2 installed, and when an ORM that I'm using runs a query it returns the results in less than a second. When I run that exact same query on our development server, which is SQL Server 2005, it takes over a minute. I've looked at the execution plan on both of them, and the main thing that sticks out is that the last two lines of the query have an ORDER BY statement. On the 2005 server this is 100% of the cost; on the 2008 server it's 0% of the cost. Is there some sort of setting I'm overlooking? Both servers have approximately the same data in them and the same indexes/keys/etc., since the local copy is just a restore from a backup.
My best guess is that the 2005 server is sorting all the tables and then giving me the results (200 rows), whereas the 2008 server is getting all the results and then sorting them (200 rows as well).
Link to slow execution plan: http://pastebin.com/sUCiVk8j
Link to fast execution plan: http://pastebin.com/EdR7zFAn
I would post the query, but it is obnoxiously long because I have a bunch of Includes and it's Entity Framework that is generating the query.
Thank you in advance.
Edit: I opened Task Manager on the SQL server this is running on, and the CPU goes to 100% during the execution of this query.
Edit: Added XML versions to jsfiddle.net; pastebin wouldn't allow me to because of the size. I just used the CSS window for the XML.
Actual 2008R2: http://jsfiddle.net/wgsv6/2/
Actual 2005: http://jsfiddle.net/wgsv6/3/
Hard to tell without seeing the query, but is it possible you are missing an INDEX on the slow server?
The statistics could be out of date on the dev server.
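If so, refreshing them on the dev server is a cheap thing to try (the table name below is only an example; sp_updatestats covers the whole database):

UPDATE STATISTICS dbo.SomeLargeTable WITH FULLSCAN;
-- or, for everything in the database:
EXEC sp_updatestats;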