I have an MS SQL table with about 8 million records. There is a primary key (with a clustered index at only 0.8% fragmentation) on the column "ID". When I run seemingly any query referencing the ID column, the query takes a very long time (and in fact ultimately crashes my application). This includes simple queries like "SELECT * FROM table WHERE ID=2020". By contrast, queries that do not reference ID (such as "SELECT TOP 100 * FROM table") are just fine.
Any ideas?
I'd check statistics, since you've already checked fragmentation.
Have they been disabled or not updated?
The quick way to check is to use STATS_DATE.
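For example, something like this (a sketch; substitute your own table name for dbo.MyTable):
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyTable');
Old or missing dates here would point to stale statistics.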
If the query is taking 10 minutes(!?!), you've got something seriously wrong. Even a table scan of 8 million records should take only a second or two. I would check the event log for any indication of imminent hardware failure, or try moving the database to a different server to see if there's some other hardware fault.
I'd suspect that the query is doing a table scan (or, at least, an index scan on a really large or statistics-out-of-date index). Generate an estimated execution plan (Ctrl+L in SSMS), or have SQL Server return the execution plan it actually used, since it's sometimes different (Ctrl+M to enable it, then run your query normally; it will create a new tab next to your results).
Once you have the execution plan, search for a table scan or an index scan; that is the most likely source of your slowness. The estimated execution plan may even recommend an index to help the query return more quickly - newer versions of SQL Server/SSMS include this feature.
Though I suspect you won't find anything interesting - your query is just a single step - here's a quick intro on reading execution plans: http://sqlserverpedia.com/wiki/Examining_Query_Execution_Plans
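If you prefer T-SQL over the SSMS shortcuts, something roughly like this should produce the estimated plan (a sketch; dbo.MyTable stands in for your table):
SET SHOWPLAN_XML ON;   -- estimated plan only; the query is not executed
GO
SELECT * FROM dbo.MyTable WHERE ID = 2020;
GO
SET SHOWPLAN_XML OFF;
GO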
Is ID a GUID by any chance? SQL Server tends to perform badly with GUID keys in indexes, since random GUID values scatter inserts across the index and fragment it.
Take your exact query and grab the "estimated execution plan" from SQL Server Management Studio. If your query is triggering a table scan (which it shouldn't be), that would explain the timing. The plan might also show you what else is happening.
I assume that you have already rebuilt the index (based on your 0.8% fragmentation comment).
I guess if you are using the default isolation level and you have a long-running update, you may be hitting blocking (or a deadlock) as well? That could definitely cause this too.
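If blocking is a possibility, a quick (hedged) way to check while the query is stuck is to look at sys.dm_exec_requests:
-- Sessions that are currently blocked and what they are waiting on
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;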
Related
A simple SQL SELECT query is running forever for a particular ID in SQL Server 2012.
This query is running forever; it should return 10000 rows:
select *
from employees
where company_id = 34
If I change the query to
select *
from employees
where company_id = 12
it returns 7000 rows very quickly.
Employees is a view created by joining different tables.
Could there be a problem in the view?
One possibility is that you have a very large table. Such a query is probably scanning the entire table and returning matching rows as they are encountered.
My guess is that rows for company 12 are encountered before rows for company 34.
If this is the case, then an index on (company_id) should help.
There may be other causes as well. Here are two other possibilities:
Contention for rows with company_id 34 that are causing delays on reading the data (this would depend on isolation level that you are using and the nature of concurrent updates).
An unlimited-size column (e.g. VARCHAR(MAX)) that is populated with very big values for company_id 34 and is empty or very small for company_id 12.
There may be other possibilities as well.
One thing you can do to speed up the query is to index the company_id column; a b-tree index would speed up the search.
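As a rough sketch (the actual table behind the view is unknown, so dbo.EmployeeBase is a placeholder for whichever underlying table holds company_id):
CREATE INDEX ix_employeebase_company_id
    ON dbo.EmployeeBase (company_id);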
Without looking at the structure of the table and execution plan, here are a few things that can be suggested apart from what Gordon has already covered:
Create indexes on the underlying tables that can cover this query. That means indexing the "searched" and "sorted" columns (joins, WHERE clause, ORDER BY, GROUP BY, DISTINCT) and putting the selected columns in the INCLUDE part of the index (for a nonclustered rowstore index). The aim is to see an "index seek" in the execution plan (see the sketch after this list).
Update statistics on the underlying tables. (As a side note, I'd suggest keeping AUTO CREATE and AUTO UPDATE statistics ON unless you have a reason to manage them manually in your application.)
I'd also like to know when index defragmentation was last performed on the server. Long-overdue defragmentation can be a very good reason for this kind of issue on certain values, especially on a table that sees a lot of write operations.
Execute the query again. Even if you do not have information about item 3 above, you can try running the query again and skip that step.
While running the query, check the wait stats on the server by querying the DMVs sys.dm_os_wait_stats and sys.dm_tran_locks. Check whether the wait is due to CXPACKET (waits caused by other parallel processes), PAGEIOLATCH (reading from disk rather than RAM), or locks. This is the starting point of the investigation; it will point you at the root cause so you can take appropriate measures.
An additional quick check: look at available RAM in the server's Task Manager and make sure SQL Server's memory is not being consumed by other unnecessary applications/sessions.
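A rough sketch of the index, statistics, and wait-stats suggestions above; the table and column names (dbo.EmployeeBase, company_id, hire_date) are placeholders, since the real underlying tables are not shown:
-- A covering nonclustered index: seek on the searched column,
-- with the selected columns in the INCLUDE list.
CREATE NONCLUSTERED INDEX ix_employeebase_company
    ON dbo.EmployeeBase (company_id)
    INCLUDE (hire_date);
-- Refresh statistics on the underlying table.
UPDATE STATISTICS dbo.EmployeeBase;
-- While the query runs, see what the server is waiting on.
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;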
I have two SQL Server Azure instances on Standard S2 (50 DTUs). When I run simple select statements on the two instances, one of them takes much more time than the other, or times out. The slower one has more records in its tables.
Both instances have the same table schema. In the slower instance, the LogEvidence table has 1,324,928 records and the LogItem table has 649,391. In the faster instance, the LogEvidence table has 89,504 records and the LogItem table has 89,496.
Below is the simple select statement
select count(*) from logitem
The simple select statement above takes 0 s on the faster instance, while on the slower instance it takes 138 s. And if I execute any stored procedure, the slower instance takes more time or times out.
The time taken by both instances should be almost the same.
Those simple queries perform big scans on the table and involve reading all rows. If the table has a clustered index, you don't have to run SELECT COUNT(*) to know how many records the table has. The following query should do that faster:
SELECT OBJECT_NAME(ps.object_id), i.name, row_count
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i
    ON ps.index_id = i.index_id AND ps.object_id = i.object_id
WHERE i.name LIKE '%logitem%'
If the table does not have an ID column, add an identity (auto-id) column and make it the clustered index.
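A minimal sketch of that (assuming the table has no suitable key column yet; names are illustrative):
ALTER TABLE dbo.logitem ADD id INT IDENTITY(1,1) NOT NULL;
CREATE UNIQUE CLUSTERED INDEX cix_logitem_id ON dbo.logitem (id);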
You can also try adding a "useless" WHERE clause like the one below to the query, and you may get better performance.
SELECT count(*)
FROM logitem
WHERE id > 0
Where Id is the autoid column.
I have some experience with Azure, and from your description I think there are a few things you can do:
Since you are only using COUNT, indexes play no real role here. I understand the other answer says to use WHERE id > 0, but Azure should be able to count 1M rows without a 30-second timeout. For other queries, though, you do need indexes, or queries on Azure will time out.
Check whether your server is under maintenance. The chance is low, but it does happen to us: we are on S4, and occasionally our server just gets slow, then after 10-30 minutes it works fine again. Maybe the underlying hardware gets busy with some process that slows it down.
This is the most important reason for slow execution, especially if a lot of writes and deletes happen on your server: check the database size. Azure databases get fragmented quickly; we have to defragment ours every 10 days. If your bacpac is 100 MB but your database size in Azure shows 5-6 GB, it definitely needs optimization, because a lot of fragments have accumulated. MSDN provides queries to rebuild indexes and remove fragmentation; I don't remember the URL, but a simple Google search will find it. It should speed things up (see the sketch after this list).
Azure has a feature that auto-generates indexes. Check whether both tables share the same indexes; maybe your faster instance has an index that Azure created by itself.
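For the fragmentation point, a hedged example of what the maintenance could look like (index and table names are illustrative; REORGANIZE for light fragmentation, online REBUILD for heavy fragmentation):
ALTER INDEX ALL ON dbo.LogItem REORGANIZE;
-- or, for heavier fragmentation:
ALTER INDEX ALL ON dbo.LogItem REBUILD WITH (ONLINE = ON);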
You should step back and ponder your assumption:
1. "performance should be about the same" - you have more data in one case vs. the other. In the limit, you should expect the performance of the second one to potentially be somewhat slower than the original one.
Now, let's go into the "why" it can be slower and how you can investigate each case:
Step 1: Look at the query plans for each case and see what you have. Likely, you will have something like:
StreamAgg <- Clustered Index Scan
(if you have other b-tree indexes, you might scan one of them and it might be faster since the index would not be as wide and thus the index will have fewer pages to scan)
Step 2: You can look at the actual execution times and resource use for each query to see why they are different. One way to do this is to run "set statistics time on", then "set statistics io on", and then run your query. It will dump extra information into SSMS when you run the query from there. (You can read about this here: https://learn.microsoft.com/en-us/sql/t-sql/statements/set-statistics-io-transact-sql?view=sql-server-2017)
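For example (using the logitem table from your question; this is just a sketch):
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
SELECT COUNT(*) FROM logitem;
-- The Messages tab in SSMS will then show logical reads, physical reads,
-- and CPU/elapsed time for the statement, which you can compare across the two instances.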
If you review the output from each one, you may find reasons for why the performance is different. One possible explanation is that the amount of memory is limited in an S2 and you are just at the boundary for where all the pages fit in memory vs. not for the two examples. In that case, doing a count(*) query would need to cycle through all the pages and do much more IO than in the smaller case where they might all be in memory already.
Step 3: You can also potentially examine the query store to get insight into why one case is fast and one case is not. An overview of how to use it is here:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
Note: it is on-by-default in SQL Azure so you can just go look at the time window when you ran the queries to get insight into what was happening at that time in your database.
Finally, you might consider ways to make the query go faster if you need it to be faster.
* creating a narrow b-tree index on the table may help for that one query (count(*) doesn't return any columns and just needs a count of rows from some non-filtered index).
* you could use a Columnstore (which requires an S3 or above for memory reasons). This kind of column-oriented index is optimized for this kind of query and would be much faster as the size of the table increases in the future.
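Hedged sketches of both options (index names are illustrative, and the id column is assumed from the earlier answer):
-- A narrow nonclustered b-tree index the optimizer can use for COUNT(*):
CREATE NONCLUSTERED INDEX ix_logitem_narrow ON dbo.logitem (id);
-- A nonclustered columnstore index (needs a tier with enough memory, e.g. S3+):
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_logitem ON dbo.logitem (id);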
Hope that helps!
One of my projects has a very large database on which I can't edit indexes etc.; I have to work with it as it is.
When testing some queries that I will be running against their database via a .NET service I'm writing, I noticed that they are quite slow the first time they run.
Here's what they do: they have 2 main (large) tables that are used most of the time. They showed me that they open SQL Server Management Studio and run
SELECT *
FROM table1
JOIN table2
a query that takes around 5 minutes to run the first time, but only about 30 seconds if you run it again without closing SQL Server Management Studio. So they keep SQL Server Management Studio open 24/7: when one of their programs executes queries involving these 2 tables (which seems to be almost all of the queries the program runs), they get the 30-second run time instead of the 5 minutes.
I assume this happens because the 2 tables get cached, so there are no (or close to no) disk reads.
Is it a good idea to have a service that runs a query to cache these 2 tables every now and then? Or is there a better solution, given that I can't edit indexes or split the tables, etc.?
Edit:
Sorry, I was possibly unclear: the DB hopefully has indexes already, I'm just not allowed to edit them or anything.
Edit 2:
Query plan
This could be a candidate for an indexed view (if you can persuade your DBA to create it!), something like:
CREATE VIEW transhead_transdata
WITH SCHEMABINDING
AS
SELECT
    <columns of interest>
FROM
    dbo.transhead th
    JOIN dbo.transdata td
        ON th.GID = td.HeadGID;
GO
CREATE UNIQUE CLUSTERED INDEX transjoined_uci ON transhead_transdata (<something unique>);
This will "precompute" the JOIN (and keep it in sync as transhead and transdata change).
You can't create indexes? That is your biggest problem regarding performance. A better solution would be to create the proper indexes and address any performance issues by checking wait stats, resource contention, etc. I'd start with Brent Ozar's blog and open-source tools, and move forward from there.
Keeping SSMS open doesn't prevent the plan cache from being cleared. I would start with a few links.
Understanding the query plan cache
Check your current plan cache
Understanding why the cache would clear (memory constraints, too many plans to hold them all, an index rebuild operation, etc.) - Brent talks about this in this answer
How to clear it manually
Aside from that... that query is suspect. I wouldn't expect your application to use those results; that is, I wouldn't expect you to load every row and column from two tables into your application every time it was called. Understand that a different query on those same tables (selecting fewer columns, adding a predicate, etc.) could, and likely would, cause SQL Server to generate a new, more optimized query plan. The current query, with no predicates, selecting every column, and no indexes as you stated, would simply do two table scans. Any increase in performance going forward wouldn't be because the plan was cached, but because the data was stored in memory and subsequent reads wouldn't incur physical reads, i.e. it is reading from memory versus disk.
There's a lot more that could be said, but I'll stop here.
You might also consider putting this query into a stored procedure which can then be scheduled to run at a regular interval through SQL Agent that will keep the required pages cached.
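A hypothetical sketch of such a procedure (table1/table2 come from the question; everything else is illustrative), which a SQL Agent job could call on a schedule:
CREATE PROCEDURE dbo.WarmCache
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @dummy BIGINT;
    -- Reading the tables pulls their pages into the buffer pool; note that
    -- COUNT may scan only the narrowest index if one exists.
    SELECT @dummy = COUNT_BIG(*) FROM table1;
    SELECT @dummy = COUNT_BIG(*) FROM table2;
END;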
Thanks to both @scsimon and @Branko Dimitrijevic for their answers; I think they were really useful and guided me in the right direction.
In the end, it turns out that the 2 biggest issues were hardware resources (RAM, no SSD) and the Auto Close option being set to True.
Other fixes that I have made (writing it here for anyone else that tries to improve):
A helper service tool will reorganize (defragment) indexes once a week and rebuild them once a month.
Create a view which has all the columns from the 2 tables in question - to eliminate JOIN cost.
Advised that a DBA can probably help with better tables/indexes
Advised to improve server hardware...
Will accept @Branko Dimitrijevic's answer as I can't accept both.
While tracking index usage and analyzing which tables need indexes added, we ran into some situations:
Some of our tables have an index, but when I execute a query with a WHERE clause on the indexed field, it is not counted in the corresponding idx_scan field. The relname and schemaname are the same, so I can't be looking at the wrong row.
Testing further, I dropped and recreated the table; after that, the query was counted in idx_scan again.
That happened with other tables too: we executed queries that should use indexes, but they were only counted in seq_scan, not idx_scan. Even if I create another indexed field in the same table, queries on that new field don't count toward idx_scan either.
What's the problem with these tables? What are we doing wrong? Only newly created tables with indexes are counted in idx_scan; it's just the old tables that behave wrongly.
We have migrated this database a few times; could that be the problem? It happens both on localhost and on the online server.
Another thing we saw: some indexes had been counted (idx_scan > 0), but when we execute a SELECT query, idx_scan does not increase again; the number stays fixed and only seq_scan increases.
I believe these problems may be related.
I'd appreciate some help; it's a big mystery lurking in our DB and we have no idea what the problem might be.
A couple suggestions (and what to add to your question).
The first is that index scans are not always favored over sequential scans. For example, if your table is small or the planner estimates that most pages will need to be fetched, an index scan will be skipped in favor of a sequential scan.
Remember: no plan beats retrieving a single page off disk and sequentially running through it.
Similarly, if you have to retrieve, say, 50% of the pages of a relation, doing an index scan is going to trade somewhat less total disk I/O for a great deal more random disk I/O. It might be a win if you use SSDs, but certainly not with conventional hard drives; after all, you don't really want to be waiting for platters to turn. If you are using SSDs, you can tweak planner settings accordingly.
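For example, on an SSD-backed server you might lower random_page_cost (a sketch, not a recommendation for every setup; 1.1 is a common starting point):
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
-- or just for the current session:
SET random_page_cost = 1.1;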
So index vs sequential scan is not the end of the story. The question is how many rows are retrieved, how big the tables are, what percentage of disk pages are retrieved, etc.
If it really is picking a bad plan (rather than a good plan that you didn't consider!), then the question becomes why. There are ways of setting statistics targets, but these may not be all that helpful.
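If you do want to try raising a statistics target, a minimal sketch (my_table/my_column are placeholders) looks like:
ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 1000;
ANALYZE my_table;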
Finally the planner really can't choose an index in some cases where you might like it to. For example, suppose I have a 10 million row table with records spanning 5 years (approx 2 million rows per year on average). I would like to get the distinct years. I can't do this with a standard query and index, but I can build a WITH RECURSIVE CTE to essentially execute the same query once for each year and that will use an index. Of course you had better have an index in that case or WITH RECURSIVE will do a sequential scan for each year which is certainly not what you want!
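A sketch of that recursive trick, assuming an illustrative table events(year int) with a b-tree index on year:
WITH RECURSIVE distinct_years AS (
    (SELECT year FROM events ORDER BY year LIMIT 1)
    UNION ALL
    SELECT (SELECT e.year FROM events e
            WHERE e.year > dy.year
            ORDER BY e.year LIMIT 1)
    FROM distinct_years dy
    WHERE dy.year IS NOT NULL
)
SELECT year FROM distinct_years WHERE year IS NOT NULL;
Each iteration uses the index to jump to the next year, so the query touches only a handful of rows instead of all 10 million.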
tl;dr: It's complicated. You want to make sure this is really a bad plan before jumping to conclusions and then if it is a bad plan see what you can do about it depending on your configuration.
I am currently working with a large wikipedia-dump-derived PostgreSQL database; it contains about 40 GB of data. The database is running on an HP ProLiant ML370 G5 server with SUSE Linux Enterprise Server 10; I am querying it from my laptop over a private network managed by a simple D-Link router. I assigned static DHCP (private) IPs to both the laptop and the server.
Anyway, from my laptop, using pgAdmin III, I send off some SQL commands/queries; some of these are CREATE INDEX, DROP INDEX, DELETE, SELECT, etc. Sometimes I send a command (like CREATE INDEX), and it returns, telling me that the query was executed perfectly, etc. However, the postmaster process assigned to that command seems to remain sleeping on the server. Now, I do not really mind this, for I tell myself that PostgreSQL maintains a pool of postmasters ready to process queries. Yet, when this process eats up 6 GB of its 9.4 GB of assigned RAM, I worry (and it does so for the moment). Maybe this is a cache of data kept in [shared] memory in case another query happens to need that same data, but I do not know.
Another thing is bothering me.
I have 2 tables. One is the page table; I have an index on its page_id column. The other is the pagelinks table, which has a pl_from column that references either nothing or a value in the page.page_id column; unlike page_id, the pl_from column has no index (yet). To give you an idea of the scale of the tables and the necessity of finding a viable solution: the page table has 13.4 million rows (after I deleted those I do not need), while the pagelinks table has 293 million.
I need to execute the following command to clean the pagelinks table of some of its useless rows:
DELETE FROM pagelinks USING page WHERE pl_from NOT IN (page_id);
So basically, I wish to rid the pagelinks table of all links coming from a page that is not in the page table. Even after disabling nested loops and/or sequential scans, the query optimizer always gives me the following "solution":
Nested Loop (cost=494640.60..112115531252189.59 rows=3953377028232000 width=6)
Join Filter: ("outer".pl_from <> "inner".page_id)
-> Seq Scan on pagelinks (cost=0.00..5889791.00 rows=293392800 width=17)
-> Materialize (cost=494640.60..708341.51 rows=13474691 width=11)
-> Seq Scan on page (cost=0.00..402211.91 rows=13474691 width=11)
It seems that such a task would take weeks to complete; obviously, this is unacceptable. It seems to me that I would much rather it use the page_id index to do its thing... but it is a stubborn optimizer and I might be wrong.
To your second question: you could try creating a new table with just the records you need, using a CREATE TABLE AS statement; if the new table is sufficiently small, it might be faster - but it might not help either.
Indeed, I decided to CREATE a temporary table to speed up query execution:
CREATE TABLE temp_to_delete AS(
(SELECT DISTINCT pl_from FROM pagelinks)
EXCEPT
(SELECT page_id FROM page));
DELETE FROM pagelinks USING temp_to_delete
WHERE pagelinks.pl_from IN (temp_to_delete.pl_from);
Surprisingly, this query completed in about 4 hours, while the initial query had remained active for about 14 hours before I decided to kill it. More specifically, the DELETE returned:
Query returned successfully: 31340904 rows affected, 4415166 ms execution time.
As for the first part of my question, it seems that the postmaster process does indeed keep some information in cache; when another query requires information not in the cache and needs memory (RAM), the cache is emptied. And the postmasters are indeed just a pool of processes.
It has also occurred to me that gnome-system-monitor is misleading, for it gives incomplete information and is of little informational value. It is mostly because of this application that I have been so confused lately; for example, it does not consider the memory usage of other users (like the postgres user!) and even tells me that I have 12 GB of RAM left, which is simply not true. Hence, I tried out a couple of system monitors, since I like to know how PostgreSQL is using its resources, and it seems that xosview is indeed a valid tool.
Hope this helps!
Your postmaster process will stay there as long as the connection to the client is open.
Does pgAdmin close the connection? I don't know.
The memory used could be shared_buffers (check your config settings) or not.
Now, the query. For big maintenance operations like this, feel free to set work_mem to something large, like a few GB. You look like you have lots of RAM, so use it.
set work_mem to '4GB';
EXPLAIN DELETE FROM pagelinks WHERE pl_from NOT IN (SELECT page_id FROM page);
It should seq scan page, hash it, and seq scan pagelinks, probing the hash to check for page_ids. It should be quite fast (much faster than 4 hours!), but you need a large work_mem for the hash.
But since you are deleting a significant portion of your table, it might be faster to do it like this:
CREATE TABLE pagelinks2 AS SELECT a.* FROM pagelinks a JOIN page b ON a.pl_from = b.page_id;
(you could use a simple JOIN instead of IN)
You can also add an ORDER BY on this query, and your new table will be nicely ordered on disk for optimal access later.