I have two databases on the same Microsoft SQL Server 2008 R2 (SP1) instance - 10.50.2550.0 (X64).
The CIKK table is merge replicated. All of the indexes, the full-text indexes included, are the same on both.
I run the same query on both:
select cikkszam
from cikk
where delstatus=1
and webmegjel=1
and contains(*,'"spi*"')
On the master database it takes 120 seconds; on the slave database it takes 0 seconds.
The results are the same.
The problem is the query plan.
The slow one first uses the regular indexes (delstatus, webmegjel) and then checks the result set against the full-text index. The fast one does the opposite.
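As a sketch, one rewrite I could try forces the full-text search to run first by joining against CONTAINSTABLE (here id stands in for the table's full-text unique key column, whatever it actually is):
-- the join on ft.[KEY] makes the full-text result set drive the plan
select c.cikkszam
from containstable(cikk, *, '"spi*"') ft
join cikk c on c.id = ft.[KEY]
where c.delstatus = 1
  and c.webmegjel = 1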
I saved the query plans and the statistics as well, but Stack Overflow only allows me 2 links, so here are the .sqlplan files:
https://www.dropbox.com/s/zvcizijn1yxlrvj/plan1.sqlplan
https://www.dropbox.com/s/4yi0c1q2ly8spsk/plan2.sqlplan
What can I do to fix this?
Related
A program generates and sends queries to SQL Server (a high-load production box). I want to capture the plan of one concrete query against one concrete table. I started Profiler with the "Showplan XML" event and set filters on TextData (LIKE %MyTable%) and DatabaseName. It shows rows with XML in TextData that describe execution plans, for all queries against my table. But I know there are 5 different SQL queries for this table.
How can I match a concrete query with its corresponding plan, without using statistics?
Is there a reason this has to be done on the production environment? Most really bad execution plans (missing indexes causing table scans etc.) will be obvious enough on a dev environment where you can use all the diagnostics you want.
Otherwise, querying the plan cache (as in the question someone else linked) will probably have the lowest impact, as it just reads system views rather than adding diagnostics to every query.
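As a sketch, the plan-cache lookup can filter on the table name just like your Profiler filter did:
-- text is the full statement, query_plan is the XML plan;
-- match each of your 5 queries to its plan by the text column
select st.text, qp.query_plan
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) st
cross apply sys.dm_exec_query_plan(cp.plan_handle) qp
where st.text like '%MyTable%'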
We had a SQL 2005 server running FOR XML EXPLICIT queries quite happily with no performance issues. The machine (a Windows 2003 server) has unfortunately died, so I've had to do an emergency provision of a Windows 2012 box. The database files have been reattached to a 2008 R2 instance and "work". However, the queries are horrendously slow: 5 seconds per query where previously they ran in 0.x seconds. This makes the websites that they power unusable.
I've rebuilt all the indexes and I've run DBCC FREEPROCCACHE on all machines, but this has had no noticeable effect. What else can I look at? I can't run them on the 2016 SQL instance on the box because some of the queries use non-ANSI *= joins (I said it was old!).
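For reference, the old syntax and its ANSI rewrite look roughly like this (table names are made up):
-- non-ANSI left outer join; only parses under compatibility level 80
select o.id, d.qty
from orders o, details d
where o.id *= d.order_id
-- ANSI equivalent that runs on any modern instance
select o.id, d.qty
from orders o
left join details d on d.order_id = o.id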
If your query was running fine before, consider what else has changed; the query plan and actual execution plan might help to pinpoint this.
When you say you are joining, have you considered how much you join? If the new machine has more data in the database, a join can quickly become prohibitively expensive. Reduce the data you need first, since less data handling means less workload.
Is there something you can pre-calculate before you run your query, or otherwise change to make it run faster?
I assume you do a SELECT, but if you UPDATE or DELETE data, the indexes also need to be recalculated, which takes a long time. In this case, disable the index, insert all the needed data, and then rebuild the index, as sketched below.
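For a bulk load, that pattern looks like this (index and table names are placeholders):
-- only disable nonclustered indexes; disabling the clustered index
-- makes the table unreadable until it is rebuilt
alter index IX_MyIndex on dbo.MyTable disable
-- ... insert the data here ...
alter index IX_MyIndex on dbo.MyTable rebuild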
You don't mention any XML handling, but you marked the question with the for-xml tag. If your join is performed on XML data, using XQuery to extract the data might also give a boost to performance.
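For example, a minimal sketch assuming an XML column called OrderXml:
-- shred the XML once with nodes()/value() instead of joining on raw XML
select x.n.value('(@id)[1]', 'int') as item_id
from dbo.Orders o
cross apply o.OrderXml.nodes('/Order/Item') as x(n)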
I have a summary I need to return to the end-user application.
It should accept 3 parameters: DateType, StartDate, EndDate.
DateType determines the date field I use to filter the data.
The way I accomplished this was to put all the IDs of the records for a date type into a temp table and then join my summary to that list of IDs.
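Roughly like this sketch (table and column names are stand-ins for mine):
create table #ids (id int primary key)
insert into #ids (id)
select RecordId
from dbo.Records
where (@DateType = 'Created' and CreatedDate between @StartDate and @EndDate)
   or (@DateType = 'Closed' and ClosedDate between @StartDate and @EndDate)
select s.*
from dbo.Summary s
join #ids i on i.id = s.RecordId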
This worked fine when running the query on the SQL server that houses the data.
However, that is a replicated server, so when I compiled it into a stored proc on the server with the rest of the application data, the query slowed down: 2 seconds vs. 50 seconds.
I think the join from the temp table, which is created on one SQL server, to the tables on the replication server is causing the slowdown.
Are there any methods or techniques that I can use to get around this and build this all in one stored procedure?
If I create 3 stored procedures with their own date range, then they are fast again. However, this means maintaining multiple stored procs for the same thing.
First off, if you are running a version of SQL Server older than 2012 SP1, one problem is that users who aren't allowed to run DBCC SHOW_STATISTICS (which is most users who aren't sysadmins, see the "Permissions" section in the documentation) don't get access to statistics on remote tables. This can severely cripple the optimizer's ability to generate a good execution plan. Upgrading SQL Server or granting more permissions can help there.
If your query involves filtering or joining on a character column, make sure the remote server is flagged in the linked server options as "collation compatible". If this option is off, SQL Server can't assume strings can be compared across the servers and it will start pumping entire tables up and down just to make sure the data ends up where the comparison has to be made.
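Turning that option on looks like this (only do it if the collations really are identical on both servers):
exec sp_serveroption @server = 'REMOTESRV',
    @optname = 'collation compatible',
    @optvalue = 'true'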
If the execution plan is as good as it gets and it's still not good enough, one general (lame) technique is to transfer all data locally first (SELECT * INTO #localtable FROM remote.db.schema.table), then run the query as a non-distributed query. Obviously, in order for this to work, the remote table cannot be "too big" and in some cases this actually has worse performance, depending on how many rows are involved. But it's always worth considering, because the optimizer does a better job with local tables.
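As a sketch, with placeholder names:
-- materialize the remote table locally, then join as a plain local query
select *
into #localtable
from remotesrv.remotedb.dbo.remotetable
select l.id, o.name
from #localtable l
join dbo.othertable o on o.id = l.id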
Another approach that avoids pulling tables together across servers is packing up data in parameters to remote stored procedure calls. Entire tables can be passed as XML through an NVARCHAR(MAX), since neither XML columns nor table-valued parameters are supported in distributed queries. The basic idea is the same: avoid the need for the optimizer to figure out an efficient distributed query. The best approach greatly depends on your data and your query, obviously.
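A sketch of that idea, with made-up names:
-- build the XML locally and pass it as NVARCHAR(MAX)
declare @keys nvarchar(max) = (
    select id from dbo.LocalKeys for xml path('row'), root('rows')
)
exec remotesrv.targetdb.dbo.usp_ProcessKeys @keys = @keys
-- inside usp_ProcessKeys, shred it back into a rowset:
-- declare @x xml = @keys
-- select r.n.value('(id)[1]', 'int') as id
-- from @x.nodes('/rows/row') as r(n)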
I have a table in my SQL Server 2008 database called dbo.app_additional_info which contains approximately 130,000 records. Below shows the structure of the table.
When I run a query like the one below in SQL Server Management Studio 2008:
select app_additional_text
from app_additional_info
where application_id = 2665 --Could be any ID here
My query takes a long time to execute (up to 5 minutes) and sometimes it times out. This database is also connected to a web application, and when it runs the above query, I always get a timeout error.
Is there anything I can do to speed up the performance of my query?
Your help with this would be greatly appreciated as this is grinding my web application to a halt.
Thanks.
Update
Below shows my execution plan from SSMS (I apologise for poor quality)
Based on the limited info in the question, it looks like you are doing a table scan because there is no index on application_id. So, try this:
CREATE INDEX IX_app_additional_info_application_id
ON app_additional_info (application_id)
Your query should run much faster now.
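If app_additional_text is (n)varchar(max) rather than legacy text/ntext, you could even make the index covering, so the query never has to touch the base table at all:
CREATE INDEX IX_app_additional_info_application_id_covering
ON app_additional_info (application_id)
INCLUDE (app_additional_text)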
I have two servers I'm doing development on. I'm not a DBA, but we don't have one, so I'm trying to figure out some performance issues I'm having. Locally I have SQL Server 2008 R2 installed, and when an ORM that I'm using runs a query, it returns the results in less than a second. When I run that exact same query on our development server, which is SQL Server 2005, it takes over a minute. I've looked at the execution plan on both of them, and the main thing that sticks out is that the last two lines of the query have an ORDER BY clause. On the 2005 server this is 100% of the cost; on the 2008 server it's 0% of the cost. Is there some sort of setting I'm overlooking? Both servers have approximately the same data in them and the same indexes/keys/etc., since the local copy is just a restore from a backup.
My best guess is the 2005 server is sorting all the tables and then giving me the results (200 rows), whereas the 2008 server is getting all the results and then sorting them (200 results also).
Link to slow execution plan: http://pastebin.com/sUCiVk8j
Link to fast execution plan: http://pastebin.com/EdR7zFAn
I would post the query, but it is obnoxiously long because I have a bunch of includes, and it's Entity Framework that is generating the query.
Thank you in advance.
Edit: I opened Task Manager on the SQL server this is running on, and the CPU goes to 100% during the execution of this query.
Edit: Added the XML versions to jsfiddle.net; pastebin wouldn't allow them because of the size. I just used the CSS window for the XML.
Actual 2008R2: http://jsfiddle.net/wgsv6/2/
Actual 2005: http://jsfiddle.net/wgsv6/3/
Hard to tell without seeing the query, but is it possible you are missing an INDEX on the slow server?
The statistics could be out of date on the dev server.
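Refreshing them is cheap to try (the table name is a placeholder):
-- refresh statistics for one table...
update statistics dbo.MyTable with fullscan
-- ...or for every table in the database
exec sp_updatestats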