I'm running the following query which takes around 1 whole second to execute against 25k records
DECLARE @p GEOGRAPHY = GEOGRAPHY::Point(54.8175053, -2.03567480000003, 4326)
SELECT *
FROM mytable
WHERE geoData.STDistance(@p) < 10000
On my remote server, this takes 6-9 seconds.
The remote server's I/O is faster than my local machine's, no resources are being maxed out, and this is the only query running.
I'm using SQL Server 2008 R2 on both machines.
Could this be a configuration issue? I've noticed that re-running the query doesn't seem to cache anything (it takes exactly the same amount of time).
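For reference, a spatial index on the geoData column would look something like this (the index name is an assumption, and mytable needs a clustered primary key for it):

```sql
-- Sketch only: index name assumed; mytable must have a clustered primary key.
CREATE SPATIAL INDEX SIX_mytable_geoData
ON mytable (geoData)
USING GEOGRAPHY_GRID;
```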
Thanks
Related
I am using odbc.eval in KDB to run a SQL stored procedure that generates tens of millions of rows of data. We have two different RDBs (SQL Server 2012 and SQL Server 2016) set up with the same data, allocated memory, etc. The KDB code works against one of them but not the other: against the newer server, KDB crashes midway through the query. The stored procedure runs fine in SQL Server Management Studio 2016, though it takes a long time to fully execute, around an hour. Could this be a time-out error? Any suggestions for running a SQL query that returns this much data from KDB without running into memory or timeout issues?
I have a 64-bit SQL Server 2008 R2 instance. Recently I have observed a huge performance issue on the server. The server has 28 GB of physical memory; it hosts the data warehouse and contains tables with more than 4 million records.
Whenever a job runs, memory consumption goes to 98% and the execution rate becomes extremely slow, dropping to one record per minute. The job is run through an SSIS package.
I want to know what is causing this issue; any help would be appreciated.
I've created a linked server in MSSQL for an Access database, and it takes a very long time to return rows. I'm using SQL Server 2005 and the Access database is Access 2003. A SELECT of 8 columns over 80,000 rows takes 60 seconds to run in SSMS.
I supplied SQL-authenticated credentials (the 'sa' user) when creating the linked server, so I don't think it's a permissions issue, but I'm unsure what the problem is. I get exactly the same running time with OPENQUERY as without it. Any guidance as to why such a simple query takes so long to run would be much appreciated.
My query:
SELECT * FROM AccessDB12...AP
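The OPENQUERY variant I tried was along these lines (pass-through, so the inner query text runs on the Access side):

```sql
-- Pass-through form: the inner query is executed by the Access provider.
SELECT * FROM OPENQUERY(AccessDB12, 'SELECT * FROM AP')
```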
[Cross posted from the Database Administrators site, in the hope that it may gain better traction here. I'll update either site as appropriate.]
I have a stored procedure in SQL Server 2005 (SP2) which contains a single query like the following (simplified for clarity):
SELECT * FROM OPENQUERY(MYODBC, 'SELECT * FROM MyTable WHERE Option = ''Y'' ')
OPTION (MAXDOP 1)
When this proc is run (and run only once), I can see the plan appear in sys.dm_exec_query_stats with a high total_worker_time value (e.g. 34762.196 ms), which is close to the elapsed time. However, in SQL Server Management Studio the statistics show a much lower CPU time, as I'd expect (e.g. 6828 ms). The query takes some time to return because of the slowness of the server it is talking to, but it doesn't return many rows.
I'm aware of the issue that parallel queries in SQL Server 2005 can report odd CPU times, which is why I've tried to turn off any parallelism with the query hint (though I really don't think there was any in this case).
I don't know how to account for the fact that the two ways of looking at CPU usage can differ, nor which of the two might be accurate (I have other reasons for thinking that the CPU usage may be the higher number, but it's tricky to measure). Can anyone give me a steer?
UPDATE: I assumed the problem was with OPENQUERY, so I tried looking at timings for a long-running query which doesn't use it. In this case the statistics (gathered with SET STATISTICS TIME ON) reported the CPU time as 3315 ms, whereas the DMV gave it as 0.511 ms. The total elapsed times agreed in each case.
total_worker_time in sys.dm_exec_query_stats is cumulative: it is the total CPU time, reported in microseconds, across all executions of the currently compiled version of the query; see execution_count for the number of executions it covers.
See last_worker_time, min_worker_time or max_worker_time for the timing of individual executions.
Reference: http://msdn.microsoft.com/en-us/library/ms189741.aspx
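A sketch of pulling per-execution timings out of the DMV (remember the *_worker_time columns are in microseconds):

```sql
-- Average and per-execution CPU times for cached query plans.
-- Worker-time columns are in microseconds.
SELECT
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_worker_time_us,
    qs.last_worker_time,
    qs.min_worker_time,
    qs.max_worker_time,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC
```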
I have two servers I'm doing development on. I'm not a DBA, but we don't have one, so I'm trying to figure out some performance issues I'm having. Locally I have SQL Server 2008 R2 installed, and when the ORM I'm using runs a query it returns the results in less than a second. When I run that exact same query on our development server, which is SQL Server 2005, it takes over a minute. I've looked at the execution plans on both, and the main thing that sticks out is that the last two lines of the query have an ORDER BY. On the 2005 server this accounts for 100% of the cost; on the 2008 server it's 0% of the cost. Is there some sort of setting I'm overlooking? Both servers have approximately the same data and the same indexes/keys/etc., since the local copy is just a restore from a backup.
My best guess is that the 2005 server is sorting all the tables and then giving me the results (200 rows), whereas the 2008 server is getting all the results first and then sorting them (also 200 rows).
Link to slow execution plan: http://pastebin.com/sUCiVk8j
Link to fast execution plan: http://pastebin.com/EdR7zFAn
I would post the query, but it is obnoxiously long because I have a bunch of Includes and it's Entity Framework that is generating the query.
Thank you in advance.
Edit: I opened Task Manager on the SQL Server machine, and the CPU goes to 100% during the execution of this query.
Edit: Added XML versions on jsfiddle.net; pastebin wouldn't allow them because of the size. I just used the CSS window for the XML.
Actual 2008R2: http://jsfiddle.net/wgsv6/2/
Actual 2005: http://jsfiddle.net/wgsv6/3/
Hard to tell without seeing the query, but is it possible you are missing an INDEX on the slow server?
The statistics could be out of date on the dev server.
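If stale statistics are the culprit, refreshing them is a cheap thing to try (dbo.MyTable is a placeholder; run it against each table involved in the slow query):

```sql
-- Placeholder table name: repeat for each table the query touches.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN
```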