What makes one of these queries faster? (SQL)

I have a SQL query (below) that took 10 seconds to run; since it was on a production environment, I stopped it just to be sure there was no SQL locking going on:
SELECT TOP 1000000 *
FROM Table T
Where CONVERT(nvarchar(max), T.Data) like '%SearchPhrase%' --T.Data is initially XML
Now if I add an ORDER BY on creation time (a column which I do not believe is indexed), it takes 2 seconds and is done.
SELECT TOP 1000000 *
FROM Table T
Where CONVERT(nvarchar(max), T.Data) like '%SearchPhrase%' --T.Data is initially XML
order by T.CreatedOn asc
Now the kicker is that only about 3000 rows are returned, which tells me that even with the TOP 1000000 it isn't stopping short: it's still going through all the rows.
I have a basic understanding of how SQL Server works and how query parsing works, but I'm just confused as to why the ORDER BY makes it so much faster in this situation.
The server is SQL Server 2008 R2.

The additional sort operation is apparently enough in this case for SQL Server to use a parallel plan.
The slower one (without ORDER BY) gets a serial plan, whereas the faster one has a DegreeOfParallelism of 24, meaning that the work is being done by 24 threads rather than just one.
This explains the much reduced elapsed time despite the additional work required for the sort.
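If you want to verify that parallelism is the difference, a minimal sketch (not from the original answer) is to force the faster query down to one thread with a MAXDOP hint and see whether its timing falls back toward the slower query's:
-- Hypothetical test: force the ORDER BY variant to run serially.
-- If parallelism explains the speedup, this should be roughly as
-- slow as the query without the ORDER BY.
SELECT TOP 1000000 *
FROM Table T
WHERE CONVERT(nvarchar(max), T.Data) LIKE '%SearchPhrase%'
ORDER BY T.CreatedOn ASC
OPTION (MAXDOP 1);
You can also confirm the degree of parallelism by inspecting the actual execution plan's DegreeOfParallelism attribute.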

Related

How Can an Always-False PostgreSQL Query Run for Hours?

I've been trying to debug a slow query in another part of our system, and saw this query is active:
SELECT * FROM xdmdf.hematoma AS "zzz4" WHERE (0 = 1)
It has apparently been active for > 8 hours. With that WHERE clause, logically, this query should return zero rows. Why would a SQL engine even bother to evaluate it? Would a query like this be useful for anything, and if so, what could it be?
(xdmdf.hematoma is a view, and I would expect SELECT * on it to take ~30 minutes under normal, lock-free conditions.)
This statement:
explain select 1 from xdmdf.hematoma limit 1
(no analyze) has been running for about 10 minutes now.
There are two possibilities:
It takes forever to plan the statement, because you changed some planner settings and the view definition is very complicated (a partitioned table?).
This is the unlikely explanation.
A concurrent transaction is holding an ACCESS EXCLUSIVE lock on a table involved in the view definition.
Terminate any such concurrent transactions.
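A hedged sketch of how to find such a blocker (standard catalog queries, assuming PostgreSQL 9.6 or later for pg_blocking_pids):
-- List sessions that are blocked and which PIDs are blocking them.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
-- Then, for an offending blocker:
-- SELECT pg_terminate_backend(<blocking_pid>);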

SQL Azure query runs slower than on-premises SQL Server; what can I do to improve it?

I need to retrieve a range of records using skip and take. I get results both on the local SQL Server and on SQL Azure, but the timings are hugely different. Both databases have the same indexes.
For example, I have a table with 7 million records and a query like this:
SELECT TOP(100) a.Time, a.SiteID
FROM (
    SELECT a.Time, a.SiteID,
           row_number() OVER (ORDER BY a.Time DESC) AS [row_number]
    FROM [Table] AS a
    WHERE a.SiteID = 1111
) AS a
WHERE row_number > 632900
In SQL Azure: it gives results in 30 seconds to 1 minute.
In SQL Server on-premises: it gives results almost instantly.
What can I do to improve the execution time on SQL Azure?
Depending on the plan, this query requires reading at least 632900 records. If there is no suitable index, it might require reading and sorting the entire table.
SQL Azure is extremely memory-limited. This often pushes workloads out of an in-memory state into requiring disk IO. IO is easily 100x slower than memory, especially with the severely throttled IO on Azure.
Optimize the query to require less buffer pool memory. Most likely, you should create an appropriate index. Also consider a more efficient paging strategy: for example, instead of seeking by row number, you could seek by the last a.Time value processed, as sketched below. That way the required buffer pool memory is tiny, because the table access starts at just the right position.
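A hedged sketch of that keyset-style seek (the @LastTime parameter and the supporting index are assumptions, not from the original answer):
-- Assumes an index on (SiteID, Time DESC) and that Time values are
-- unique enough to serve as a paging key; otherwise add a tiebreaker.
SELECT TOP (100) a.Time, a.SiteID
FROM [Table] AS a
WHERE a.SiteID = 1111
  AND a.Time < @LastTime  -- last Time value seen on the previous page
ORDER BY a.Time DESC;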
You can try to rewrite your query using OFFSET FETCH, as sketched below. Make sure you have an index in place that matches the columns in the ORDER BY. SQL Server will then use an optimized TOP operator to do the pagination. Check this post for some more considerations on OFFSET FETCH.
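A minimal sketch of that rewrite (OFFSET FETCH requires SQL Server 2012 or later and is available in SQL Azure; the supporting index is an assumption):
-- Supporting index assumed, matching the WHERE and ORDER BY:
-- CREATE INDEX IX_Table_SiteID_Time ON [Table] (SiteID, Time DESC);
SELECT a.Time, a.SiteID
FROM [Table] AS a
WHERE a.SiteID = 1111
ORDER BY a.Time DESC
OFFSET 632900 ROWS FETCH NEXT 100 ROWS ONLY;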

Simple Oracle query started running slow when returning more than 251 results

As the title implies, I have a very simple Oracle query that starts taking 5 seconds to return once I go beyond 251 results. I am using SQL Developer, connecting with the built-in connection utility (there is no facility for an ODBC connection in this application).
The query below is fast (fast enough); pa_stu holds roughly 40k rows:
Select * From pa_stu Where rownum < 252;
Oracle returns the data to me in 0.521 seconds, according to SQL Developer.
The following query, and ones that pull larger sets of data, are the culprit:
Select * From pa_stu Where rownum < 253;
Oracle returns the data to me for that last one in 5.327 seconds, according to SQL Developer.
All queries used for testing have the same explain plan: a filter predicate of ROWNUM<251 (change the 251 to whatever number is being used) and a TABLE ACCESS of FULL.
The results above are consistent; plus, bumping the evaluated number up to about 1000 doubles the result time to roughly 10 seconds (consistently). It is as if some buffering is going on somewhere and that buffer is too small. Additionally, this is happening on only one of our Oracle servers. The other, more heavily used one (which holds different data as well) has no problem returning hundreds of thousands of records using similar statements.
The databases are managed by a DBA, and I have run all of this by her. She does not have a solution. This actually started happening a month or so back and was not the case many months ago, if that is meaningful. It was just not as noticeable as it is now.
Thank you for any help.

TOP 100 causing SQL Server 2008 hang?

I have inherited a VERY poorly designed and maintained database and have been using my knowledge of SQL Server, and a little luck, to keep this HIGH-availability server up and not completely coming down in flames (the previous developer, who quit, basically just kept the system up for 4 years).
I have come across a very strange problem today. I hope someone can explain this to me so if this happens again there is a way to fix it.
Anyway, there is a stored proc that is pretty simple. It joins two tables over a SHORT date/time range (a 5-minute window) and passes back the results (this query runs every 5 minutes via a Windows service). The largest table has 100k rows; the smallest table has 10k rows. The stored proc is very simple and does:
NOTE: The table and column names have been changed to protect the innocent.
SELECT TOP 100 m.*
FROM dbo.mytable1 m WITH (nolock)
INNER JOIN dbo.mytable2 s WITH (nolock) ON m.Table2ID = s.Table2ID
WHERE m.RowActive = 1
AND s.DateStarted <= DATEADD(minute, -5, getdate())
ORDER BY m.DateStarted
Now, if I keep "TOP 100" in the query, the query hangs until I stop it (whether run in SSMS or via the stored proc). If I remove the TOP 100, the query works as planned and returns 50-ish rows, as it should (we don't want it to return more than 100 rows if we can help it).
So, I did some investigating, using sp_who and sp_who2, looked at master..sysprocesses, and used DBCC INPUTBUFFER to look for any SPIDs that might be locking or blocking. No blocks and no locking.
This JUST STARTED today, with no changes to these two tables' designs; from what I gather, the last time this query and these tables were touched was 3 years ago, and they had been running without error since.
Now, a side note, and I don't know if this has anything to do with it, but I reindexed both of these tables about 24 hours beforehand because they were 99% fragmented (remember, I said this was a poorly designed and poorly maintained server).
Can anyone explain why SQL Server 2008 would do this?
The ORDER BY is the killer. It has to read all rows, sort by the ORDER BY column, and then give you the first 100 rows.
The absolute first thing I would do is a side-by-side comparison of the query plans for the full and TOP 100 queries, to see whether the TOP 100 plan is not performant. You might need to update stats, or you may even have missing indexes.
I'd presume there's no index on mytable1.DateStarted. I think something might be deciding to perform the sorting earlier on in the query process when you do SELECT TOP 100.
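If that presumption holds, a hedged sketch of an index that could help (the index name is illustrative, and selecting m.* would still require lookups back into the base table):
-- Lets TOP 100 ... ORDER BY m.DateStarted read the first qualifying
-- rows in index order instead of scanning and sorting everything.
CREATE NONCLUSTERED INDEX IX_mytable1_DateStarted
    ON dbo.mytable1 (DateStarted)
    INCLUDE (RowActive, Table2ID);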

Count(*) vs Count(1) - SQL Server

Just wondering if any of you use COUNT(1) over COUNT(*), whether there is a noticeable difference in performance, or if this is just a legacy habit brought forward from days gone by?
The specific database is SQL Server 2005.
There is no difference.
Reason:
Books Online says "COUNT ( { [ [ ALL | DISTINCT ] expression ] | * } )".
"1" is a non-null expression: so it's the same as COUNT(*).
The optimizer recognizes it for what it is: trivial.
It is the same as EXISTS (SELECT * ...) or EXISTS (SELECT 1 ...).
Example:
SELECT COUNT(1) FROM dbo.tab800krows
SELECT COUNT(1),FKID FROM dbo.tab800krows GROUP BY FKID
SELECT COUNT(*) FROM dbo.tab800krows
SELECT COUNT(*),FKID FROM dbo.tab800krows GROUP BY FKID
Same IO, same plan, the works
Edit, Aug 2011
Similar question on DBA.SE.
Edit, Dec 2011
COUNT(*) is mentioned specifically in ANSI-92 (look for "Scalar expressions 125")
Case:
a) If COUNT(*) is specified, then the result is the cardinality of T.
That is, the ANSI standard recognizes it as bleeding obvious what you mean. COUNT(1) has been optimized out by RDBMS vendors because of this superstition. Otherwise it would be evaluated as per ANSI:
b) Otherwise, let TX be the single-column table that is the result of applying the <value expression> to each row of T and eliminating null values. If one or more null values are eliminated, then a completion condition is raised: warning-
In SQL Server, these statements yield the same plans.
Contrary to popular opinion, in Oracle they do too.
SYS_GUID() in Oracle is quite a computation-intensive function.
In my test database, t_even is a table with 1,000,000 rows.
This query:
SELECT COUNT(SYS_GUID())
FROM t_even
runs for 48 seconds, since the function needs to evaluate each SYS_GUID() returned to make sure it's not a NULL.
However, this query:
SELECT COUNT(*)
FROM (
    SELECT SYS_GUID()
    FROM t_even
)
runs for just 2 seconds, since it doesn't even try to evaluate SYS_GUID() (despite * being the argument to COUNT(*)).
I work on the SQL Server team and I can hopefully clarify a few points in this thread (I had not seen it previously, so I am sorry the engineering team has not done so previously).
First, there is no semantic difference between select count(1) from table vs. select count(*) from table. They return the same results in all cases (and it is a bug if not). As noted in the other answers, select count(column) from table is semantically different and does not always return the same results as count(*).
Second, with respect to performance, there are two aspects that matter in SQL Server (and SQL Azure): compilation-time work and execution-time work. The compilation-time work is a trivially small amount of extra work in the current implementation. There is an expansion of the * to all columns in some cases, followed by a reduction back to 1 column being output, due to how some of the internal operations work in binding and optimization. I doubt it would show up in any measurable test, and it would likely get lost in the noise of all the other things that happen under the covers (such as auto-stats, xevent sessions, query store overhead, triggers, etc.). It is maybe a few thousand extra CPU instructions. So, COUNT(1) does a tiny bit less work during compilation (which will usually happen once, with the plan cached across multiple subsequent executions). For execution time, assuming the plans are the same, there should be no measurable difference. (One of the earlier examples shows a difference; if the plan is the same, it is most likely due to other factors on the machine.)
As to how the plan can potentially be different: such cases are extremely unlikely, but potentially possible given the architecture of the current optimizer. SQL Server's optimizer works as a search program (think: a computer program playing chess, searching through various alternatives for different parts of the query and costing out the alternatives to find the cheapest plan in reasonable time). This search has a few limits on how it operates to keep query compilation finishing in reasonable time. For queries beyond the most trivial, there are phases of the search, and they deal with tranches of queries based on how costly the optimizer thinks the query would potentially be to execute. There are 3 main search phases, and each phase can run more aggressive (expensive) heuristics trying to find a cheaper plan than any prior solution. Ultimately, there is a decision process at the end of each phase that tries to determine whether it should return the plan found so far or keep searching. This process uses the total time taken so far vs. the estimated cost of the best plan found so far. So, on different machines with different CPU speeds it is possible (albeit rare) to get different plans due to timing out in an earlier phase with a plan vs. continuing into the next search phase. There are also a few similar scenarios related to timing out of the last phase and potentially running out of memory on very, very expensive queries that consume all the memory on the machine (not usually a problem on 64-bit, but it was a larger concern back on 32-bit servers). Ultimately, if you get a different plan, the performance at runtime will differ. I don't think it is remotely likely that the difference in compilation time would EVER lead to any of these conditions happening.
Net-net: Please use whichever of the two you want as none of this matters in any practical form. (There are far, far larger factors that impact performance in SQL beyond this topic, honestly).
I hope this helps. I did write a book chapter about how the optimizer works, but I don't know if it's appropriate to post it here (as I believe I still get tiny royalties from it). So, instead of posting that, I'll post a link to a talk I gave at SQLBits in the UK about how the optimizer works at a high level, so you can see the different main phases of the search in a bit more detail if you want to learn about that. Here's the video link: https://sqlbits.com/Sessions/Event6/inside_the_sql_server_query_optimizer
Clearly, COUNT(*) and COUNT(1) will always return the same result. Therefore, if one were slower than the other it would effectively be due to an optimiser bug. Since both forms are used very frequently in queries, it would make no sense for a DBMS to allow such a bug to remain unfixed. Hence you will find that the performance of both forms is (probably) identical in all major SQL DBMSs.
In the SQL-92 Standard, COUNT(*) specifically means "the cardinality of the table expression" (which could be a base table, VIEW, derived table, CTE, etc.).
I guess the idea was that COUNT(*) is easy to parse. Using any other expression requires the parser to ensure it doesn't reference any columns (COUNT('a'), where 'a' is a literal, and COUNT(a), where a is a column, can yield different results).
In the same vein, COUNT(*) can be easily picked out by a human coder familiar with the SQL Standards, a useful skill when working with more than one vendor's SQL offering.
Also, in the special case SELECT COUNT(*) FROM MyPersistedTable;, the thinking is the DBMS is likely to hold statistics for the cardinality of the table.
Therefore, because COUNT(1) and COUNT(*) are semantically equivalent, I use COUNT(*).
COUNT(*) and COUNT(1) are the same in terms of both result and performance.
I would expect the optimiser to ensure there is no real difference outside weird edge cases.
As with anything, the only real way to tell is to measure your specific cases.
That said, I've always used COUNT(*).
As this question comes up again and again, here is one more answer. I hope to add something for beginners wondering about "best practice" here.
SELECT COUNT(*) FROM something counts records which is an easy task.
SELECT COUNT(1) FROM something retrieves a 1 per record and then counts the 1s that are not null, which is essentially counting records, only more complicated.
Having said this: a good DBMS will notice that the second statement yields the same count as the first and reinterpret it accordingly, so as not to do unnecessary work. So usually both statements will result in the same execution plan and take the same amount of time.
From the point of readability, however, you should use the first statement. You want to count records, so count records, not expressions. Use COUNT(expression) only when you want to count non-null occurrences of something, as in the minimal illustration below.
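A minimal illustration of that last point (the table and column names are hypothetical):
-- COUNT(*) counts all rows; COUNT(email) counts only rows where
-- email IS NOT NULL (table/column names illustrative).
SELECT COUNT(*)     AS all_rows,
       COUNT(email) AS rows_with_email
FROM users;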
I ran a quick test on SQL Server 2012 on an 8 GB RAM Hyper-V box. You can see the results for yourself. I was not running any other windowed application apart from SQL Server Management Studio while running these tests.
My table schema:
CREATE TABLE [dbo].[employee](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_employee] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
Total number of records in Employee table: 178090131 (~ 178 million rows)
First Query:
Set Statistics Time On
Go
Select Count(*) From Employee
Go
Set Statistics Time Off
Go
Result of First Query:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 35 ms.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 10766 ms, elapsed time = 70265 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Second Query:
Set Statistics Time On
Go
Select Count(1) From Employee
Go
Set Statistics Time Off
Go
Result of Second Query:
SQL Server parse and compile time:
CPU time = 14 ms, elapsed time = 14 ms.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 11031 ms, elapsed time = 70182 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
You can see there is a difference of 83 ms (= 70265 - 70182), which can easily be attributed to the exact system conditions at the time the queries were run. I also did only a single run, so the comparison would become more accurate with several runs and some averaging. If, for such a huge data set, the difference comes in at less than 100 milliseconds, then we can easily conclude that the two queries do not exhibit any performance difference in the SQL Server engine.
Note: RAM usage hit close to 100% in both runs. I restarted the SQL Server service before starting each run.
SET STATISTICS TIME ON
select count(1) from MyTable (nolock) -- table containing 1 million records.
SQL Server Execution Times:
CPU time = 31 ms, elapsed time = 36 ms.
select count(*) from MyTable (nolock) -- table containing 1 million records.
SQL Server Execution Times:
CPU time = 46 ms, elapsed time = 37 ms.
I've run this hundreds of times, clearing the cache every time. The results vary from time to time as the server load varies, but almost always count(*) has the higher CPU time.
There is an article showing that COUNT(1) on Oracle is just an alias for COUNT(*), with proof.
I will quote some parts:
There is a part of the database software that is called “The Optimizer”, which is defined in the official documentation as “Built-in database software that determines the most efficient way to execute a SQL statement”.
One of the components of the optimizer is called “the transformer”, whose role is to determine whether it is advantageous to rewrite the original SQL statement into a semantically equivalent SQL statement that could be more efficient.
Would you like to see what the optimizer does when you write a query using COUNT(1)?
With a user that has the ALTER SESSION privilege, you can set a tracefile_identifier, enable optimizer tracing, and run the COUNT(1) select, like: SELECT /* test-1 */ COUNT(1) FROM employees;.
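A hedged sketch of those session settings (the 10053 optimizer trace event is the usual mechanism; the identifier value is illustrative):
-- Tag the trace file so it is easy to find, then enable the
-- optimizer trace, run the query, and switch the trace off.
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'count1_test';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
SELECT /* test-1 */ COUNT(1) FROM employees;
ALTER SESSION SET EVENTS '10053 trace name context off';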
After that, you need to locate the trace files, which can be done with SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';. Later, in the file, you will find:
SELECT COUNT(*) "COUNT(1)" FROM "COURSE"."EMPLOYEES" "EMPLOYEES"
As you can see, it's just an alias for COUNT(*).
Another important comment: COUNT(*) really was faster two decades ago on Oracle, before Oracle 7.3:
Count(1) has been rewritten in count(*) since 7.3, because Oracle likes to auto-tune mythic statements. In earlier Oracle7, Oracle had to evaluate (1) for each row, as a function, before DETERMINISTIC and NON-DETERMINISTIC existed.
So two decades ago, count(*) was faster.
For other databases, such as SQL Server, this should be researched individually.
I know that this question is specific to SQL Server, but the other questions on SO about the same subject (without mentioning a specific database) were closed and marked as duplicates of this one.
In all RDBMSs, the two ways of counting are equivalent in terms of the result they produce. Regarding performance, I have not observed any difference in SQL Server, but it may be worth pointing out that some RDBMSs, e.g. PostgreSQL 11, have less optimal implementations for COUNT(1), as they check the argument expression's nullability, as can be seen in this post.
I've found a 10% performance difference for 1M rows when running:
-- Faster
SELECT COUNT(*) FROM t;
-- 10% slower
SELECT COUNT(1) FROM t;
COUNT(1) is not substantially different from COUNT(*), if at all. As to the question of COUNTing NULLable columns, it is straightforward to demo the difference between COUNT(*) and COUNT(<some col>):
USE tempdb;
GO
IF OBJECT_ID( N'dbo.Blitzen', N'U') IS NOT NULL DROP TABLE dbo.Blitzen;
GO
CREATE TABLE dbo.Blitzen (ID INT NULL, Somelala CHAR(1) NULL);
INSERT dbo.Blitzen SELECT 1, 'A';
INSERT dbo.Blitzen SELECT NULL, NULL;
INSERT dbo.Blitzen SELECT NULL, 'A';
INSERT dbo.Blitzen SELECT 1, NULL;
SELECT COUNT(*), COUNT(1), COUNT(ID), COUNT(Somelala) FROM dbo.Blitzen;
GO
DROP TABLE dbo.Blitzen;
GO
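For reference, with the four rows inserted above, that SELECT returns COUNT(*) = 4, COUNT(1) = 4, COUNT(ID) = 2, and COUNT(Somelala) = 2: COUNT(<col>) skips NULLs, while COUNT(*) and COUNT(1) count every row.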