SQL Server Execution Plan Question - sql

I have two servers I'm doing development on, and I'm not a DBA, but we don't have one, so I'm trying to figure out some performance issues I'm having. Locally I have SQL Server 2008 R2 installed, and when an ORM that I'm using runs a query it returns the results in less than a second. When I run that exact same query on our development server, which is SQL Server 2005, it takes over a minute. I've looked at the execution plan on both of them, and the main thing that sticks out is that the last two lines of the query have an ORDER BY statement. On the 2005 server this is 100% of the cost; on the 2008 server it's 0% of the cost. Is there some sort of setting I'm overlooking? Both servers have approximately the same data in them and the same indexes/keys/etc., since the local copy is just a restore from a backup.
My best guess is that the 2005 server is sorting all the tables and then giving me the results (200 rows), whereas the 2008 server is getting all the results and then sorting them (also 200 rows).
Link to slow execution plan: http://pastebin.com/sUCiVk8j
Link to fast execution plan: http://pastebin.com/EdR7zFAn
I would post the query, but it is obnoxiously long because I have a bunch of includes and it's Entity Framework that is generating the query.
Thank you in advance.
Edit: I opened Task Manager on the SQL Server box this is running on, and the CPU goes to 100% during the execution of this query.
Edit: Added the XML versions to jsfiddle.net; Pastebin wouldn't allow me to because of the size. I just used the CSS window for the XML.
Actual 2008R2: http://jsfiddle.net/wgsv6/2/
Actual 2005: http://jsfiddle.net/wgsv6/3/

Hard to tell without seeing the query, but is it possible you are missing an INDEX on the slow server?
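If you want to check that quickly, the missing-index DMVs (available on both 2005 and 2008) will show anything the optimizer flagged while compiling plans; a minimal sketch:
-- Lists indexes the optimizer has identified as missing since the last restart
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns
FROM sys.dm_db_missing_index_details AS d;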

The statistics could be out of date on the dev server.
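If so, refreshing them is cheap to try; a minimal sketch (dbo.YourTable is a placeholder for the tables involved in the query):
-- Refresh statistics across the whole database...
EXEC sp_updatestats;
-- ...or rebuild them for a specific table with a full scan for a more accurate histogram
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;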

Related

SSIS performance vs OpenQuery with Linked Server from SQL Server to Oracle

We have a linked server (OraOLEDB.Oracle) defined in the SQL Server environment. Oracle 12c, SQL Server 2016. There is also an Oracle client (64 bit) installed on SQL Server.
When retrieving data from Oracle (a simple query, getting all columns from a 3M row, fairly narrow table, with varchars, dates and integers), we are seeing the following performance numbers:
- sqlplus: select from Oracle > OS file on the SQL Server itself: less than 2k rows/sec
- SSMS: insert into a SQL Server table, select from Oracle using OpenQuery (pass-through to Oracle, so remote execution): less than 2k rows/sec
- SQL Export/Import tool (in essence, SSIS): insert into a SQL Server table, using the OLEDB Oracle source and the OLEDB SQL Server target: over 30k rows/sec
I'm looking for ways to improve throughput using OpenQuery/OpenResultSet to match the SSIS throughput. There is probably some buffer/flag somewhere that would allow us to achieve the same?
Please advise...
Thank you!
--Alex
There is probably some buffer/flag somewhere that would allow us to achieve the same?
You're probably looking for the FetchSize parameter:
FetchSize - specifies the number of rows the provider will fetch at a time (fetch array). It must be set on the basis of data size and the response time of the network. If the value is set too high, then this could result in more wait time during the execution of the query. If the value is set too low, then this could result in many more round trips to the database. Valid values are 1 to 4,294,967,296. The default is 100.
eg
exec sp_addlinkedserver N'MyOracle', 'Oracle', 'ORAOLEDB.Oracle', N'//172.16.8.119/xe', N'FetchSize=2000', ''
See, eg https://blogs.msdn.microsoft.com/dbrowne/2013/10/02/creating-a-linked-server-for-oracle-in-64bit-sql-server/
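Once the linked server is recreated with a larger FetchSize, the pass-through insert can be retested; a minimal sketch, with hypothetical table and column names standing in for the real ones:
-- Remote execution on Oracle via the linked server defined above
INSERT INTO dbo.StagingTable (id, name, created_date)
SELECT id, name, created_date
FROM OPENQUERY(MyOracle, 'SELECT id, name, created_date FROM some_schema.some_table');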
I think there are many ways to enhance the performance of the INSERT query. I suggest reading the following article to get more information about data loading performance.
The Data Loading Performance Guide
There is one method you can try, which is minimizing the logging by using a clustered index. Check the link below for more information:
New update on minimal logging for SQL Server 2008
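As a rough illustration of what that article describes (the table names here are hypothetical, and this assumes the database is in SIMPLE or BULK_LOGGED recovery and the target is empty):
-- With a TABLOCK hint, and under the conditions described in the linked article,
-- SQL Server 2008 can minimally log the insert, cutting transaction log overhead on large loads
INSERT INTO dbo.TargetTable WITH (TABLOCK) (id, name, created_date)
SELECT id, name, created_date
FROM dbo.SourceStagingTable;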

SQL Select Query Timing Out

I have a table in my SQL Server 2008 database called dbo.app_additional_info which contains approximately 130,000 records. Below is the structure of the table.
When I run a query like the one below in SQL Server Management Studio 2008:
select app_additional_text
from app_additional_info
where application_id = 2665 --Could be any ID here
My query takes a long time to execute (up to 5 minutes) and sometimes it times out. This database is also connected to a web application, and when it runs the above query, I always get a timeout error.
Is there anything I can do to speed up the performance of my query?
Your help with this would be greatly appreciated as this is grinding my web application to a halt.
Thanks.
Update
Below is my execution plan from SSMS (I apologise for the poor quality).
Based on the limited info in the question, it looks like you are doing a table scan because there is no index on application_id. So, try this:
CREATE INDEX IX_app_additional_info_application_id
ON app_additional_info (application_id);
Your query should run much faster now.
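If it is still slow after that and the plan shows key lookups for app_additional_text, a covering variant is worth considering; a sketch, assuming app_additional_text is not a legacy text/ntext column:
CREATE INDEX IX_app_additional_info_application_id_covering
ON app_additional_info (application_id)
INCLUDE (app_additional_text);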

Performance discrepancy in SQL Server Profiler between web query and the same query ran in SSMS

I have a query that takes ~20x as long for SQL Server to execute when it comes from a web request as it does when the exact same query is run via SQL Server Management Studio.
The following screenshot is from SQL Server Profiler. The first two records relate to the receipt and execution of the query that came in via the web request, whilst the third record is the exact same query run from SSMS. Why would there be such a huge difference between the two?
A point: the query is generated by LINQ. I took a copy of the generated SQL and ran it in SSMS to get these results.
ARITHABORT is ON by default in SSMS and OFF by default for a SqlClient connection.
See if this solves your issue:
new SqlCommand("SET ARITHABORT ON", connection).ExecuteNonQuery();
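To confirm the two connections really do differ, you can compare their session-level SET options while both are connected; a small diagnostic sketch:
-- arithabort is exposed per session in sys.dm_exec_sessions
SELECT session_id, program_name, arithabort
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;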

MySQL to SQL Server: is there something similar to query cache?

I'm trying to optimize a SQL Server instance. I have some experience with MySQL, and one of the things that usually helps is to enable the query cache, which will basically cache query results as long as you are running the same query.
Is there something similar to this in SQL Server? Could you please point me to the name of this feature?
Thanks!
SQL Server doesn't cache result sets per se, but it does cache data pages which have been read, in addition to caching query execution plans. This means that if it has to read the same data pages again to answer a query, it will be faster since there are fewer physical reads (from disk) but you should still see the same amount of logical reads.
Here is an article with more details.
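You can see the buffer pool at work by comparing reads across consecutive runs; a minimal sketch against a hypothetical table:
SET STATISTICS IO ON;
-- First run: expect physical reads while pages are pulled from disk
SELECT COUNT(*) FROM dbo.SomeLargeTable;
-- Second run: the same number of logical reads, but physical reads should drop to (near) zero
-- because the pages are now cached in memory
SELECT COUNT(*) FROM dbo.SomeLargeTable;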
SQL Server will cache query results, but it's a little more complicated than in MySQL's case (since SQL Server provides ACID guarantees that MySQL does not - at least, not with MyISAM). But you'll definitely find that the second time you execute a query on SQL Server, it'll be faster than the first time (as long as no other changes have happened).
There's no specific name for this feature, that I'm aware of. It's more a combination of caches...

SQL Server 2008 - Bit Param Evaluation alters Execution Plan

I have been working on migrating some of our data from Microsoft SQL Server 2000 to 2008. Among the usual hiccups and whatnot, I’ve run across something strange. Linked below is a SQL query that returns very quickly under 2000, but takes 20 minutes under 2008. I have read quite a bit on upgrading SQL Server and went down the usual paths of checking indexes, statistics, etc. before coming to the conclusion that the following statement, found in the WHERE clause, causes the execution plan for the steps that follow this statement to change dramatically:
And (
    @bOnlyUnmatched = 0 -- offending line
    Or Not Exists(
The SQL statements and execution plans are linked below.
A coworker was able to rewrite a portion of the WHERE clause using a CASE statement, which seems to “trick” the optimizer into using a better execution plan. The version with the CASE statement is also contained in the linked archive.
I’d like to see if someone has an explanation as to why this is happening and if there may be a more elegant solution than using a CASE statement. While we can work around this specific issue, I’d like to have a broader understanding of what is happening to ensure the rest of the migration is as painless as possible.
Zip file with SQL statements and XML execution plans
Thanks in advance!
We experienced similar problems a few years back in our migration from 2000 to 2005. The error we were seeing was actually an invalid cast error. I think I found the thread here
The query optimiser has much more freedom in SQL Server >=2005. The CASE solution is probably the best route.
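For reference, the kind of CASE rewrite being described usually has this shape (the table and column names here are hypothetical; the real version is in the linked archive):
DECLARE @bOnlyUnmatched bit;
SET @bOnlyUnmatched = 0;

SELECT t.RecordId
FROM dbo.Records AS t
WHERE 1 = CASE
              WHEN @bOnlyUnmatched = 0 THEN 1
              WHEN NOT EXISTS (SELECT 1
                               FROM dbo.MatchedRecords AS mr
                               WHERE mr.RecordId = t.RecordId) THEN 1
              ELSE 0
          END;
The idea, roughly, is that wrapping the logic in a CASE keeps the optimizer from building a single plan that has to satisfy both the @bOnlyUnmatched = 0 shortcut and the NOT EXISTS branch at once, which is what the original OR predicate pushed it toward.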