Count(*) vs Count(1) - SQL Server

Just wondering if any of you people use Count(1) over Count(*), and whether there is a noticeable difference in performance, or if this is just a legacy habit carried forward from days gone by?
The specific database is SQL Server 2005.

There is no difference.
Reason:
Books Online says "COUNT ( { [ [ ALL | DISTINCT ] expression ] | * } )"
"1" is a non-null expression: so it's the same as COUNT(*).
The optimizer recognizes it for what it is: trivial.
The same as EXISTS (SELECT * ... or EXISTS (SELECT 1 ...
Example:
SELECT COUNT(1) FROM dbo.tab800krows
SELECT COUNT(1),FKID FROM dbo.tab800krows GROUP BY FKID
SELECT COUNT(*) FROM dbo.tab800krows
SELECT COUNT(*),FKID FROM dbo.tab800krows GROUP BY FKID
Same IO, same plan, the works
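If you want to check this yourself, a quick sketch (using the same table as above) is to compare the IO statistics and the actual plans:
SET STATISTICS IO ON;
SELECT COUNT(1) FROM dbo.tab800krows;
SELECT COUNT(*) FROM dbo.tab800krows;
-- Compare the logical reads in the Messages tab; in SSMS, Ctrl+M includes the
-- actual execution plan so you can confirm both statements produce the same plan.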
Edit, Aug 2011
Similar question on DBA.SE.
Edit, Dec 2011
COUNT(*) is mentioned specifically in ANSI-92 (look for "Scalar expressions 125")
Case:
a) If COUNT(*) is specified, then the result is the cardinality of T.
That is, the ANSI standard recognizes it as bleeding obvious what you mean. COUNT(1) has been optimized out by RDBMS vendors because of this superstition. Otherwise it would be evaluated as per the ANSI rule below:
b) Otherwise, let TX be the single-column table that is the
result of applying the <value expression> to each row of T
and eliminating null values. If one or more null values are
eliminated, then a completion condition is raised: warning - null value eliminated in set function.

In SQL Server, these statements yield the same plans.
Contrary to popular opinion, in Oracle they do too.
SYS_GUID() in Oracle is quite a computation-intensive function.
In my test database, t_even is a table with 1,000,000 rows
This query:
SELECT COUNT(SYS_GUID())
FROM t_even
runs for 48 seconds, since the function needs to evaluate each SYS_GUID() returned to make sure it's not a NULL.
However, this query:
SELECT COUNT(*)
FROM (
SELECT SYS_GUID()
FROM t_even
)
runs for just 2 seconds, since it doesn't even try to evaluate SYS_GUID() (despite * being the argument to COUNT(*)).

I work on the SQL Server team and I can hopefully clarify a few points in this thread (I had not seen it before, so I am sorry the engineering team has not done so previously).
First, there is no semantic difference between select count(1) from table vs. select count(*) from table. They return the same results in all cases (and it is a bug if not). As noted in the other answers, select count(column) from table is semantically different and does not always return the same results as count(*).
Second, with respect to performance, there are two aspects that matter in SQL Server (and SQL Azure): compilation-time work and execution-time work. The compilation-time work is a trivially small amount of extra work in the current implementation. There is an expansion of the * to all columns in some cases, followed by a reduction back to one output column, due to how some of the internal operations work in binding and optimization. I doubt it would show up in any measurable test, and it would likely get lost in the noise of all the other things that happen under the covers (such as auto-stats, xevent sessions, query store overhead, triggers, etc.). It is maybe a few thousand extra CPU instructions. So, count(1) does a tiny bit less work during compilation (which will usually happen once, with the plan cached across multiple subsequent executions). For execution time, assuming the plans are the same, there should be no measurable difference. (One of the earlier examples shows a difference - it is most likely due to other factors on the machine, if the plan is the same.)
As to how the plans could potentially differ: this is extremely unlikely to happen, but it is potentially possible given the architecture of the current optimizer. SQL Server's optimizer works as a search program (think: a computer program playing chess, searching through various alternatives for different parts of the query and costing out the alternatives to find the cheapest plan in reasonable time). This search has a few limits on how it operates to keep query compilation finishing in reasonable time. For queries beyond the most trivial, there are phases of the search, and they deal with tranches of queries based on how costly the optimizer thinks the query could be to execute. There are 3 main search phases, and each phase can run more aggressive (expensive) heuristics trying to find a cheaper plan than any prior solution. Ultimately, there is a decision process at the end of each phase that tries to determine whether it should return the plan it found so far or keep searching. This process uses the total time taken so far vs. the estimated cost of the best plan found so far. So, on different machines with different speeds of CPUs it is possible (albeit rare) to get different plans due to timing out in an earlier phase with a plan vs. continuing into the next search phase. There are also a few similar scenarios related to timing out of the last phase and potentially running out of memory on very, very expensive queries that consume all the memory on the machine (not usually a problem on 64-bit, but it was a larger concern back on 32-bit servers). Ultimately, if you get a different plan, the performance at runtime would differ. I don't think it is remotely likely that the difference in compilation time would EVER lead to any of these conditions happening.
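(If you want to observe those phases, the server-wide optimizer counters in sys.dm_exec_query_optimizer_info show, among other things, how many optimizations finished in each search phase; a quick sketch, which requires the VIEW SERVER STATE permission:)
SELECT counter, occurrence, value
FROM sys.dm_exec_query_optimizer_info
WHERE counter IN ('search 0', 'search 1', 'search 2', 'timeout');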
Net-net: Please use whichever of the two you want as none of this matters in any practical form. (There are far, far larger factors that impact performance in SQL beyond this topic, honestly).
I hope this helps. I did write a book chapter about how the optimizer works, but I don't know if it's appropriate to post it here (as I believe I still get tiny royalties from it). So, instead of posting that, I'll post a link to a talk I gave at SQLBits in the UK about how the optimizer works at a high level, so you can see the different main phases of the search in a bit more detail if you want to learn about that. Here's the video link: https://sqlbits.com/Sessions/Event6/inside_the_sql_server_query_optimizer

Clearly, COUNT(*) and COUNT(1) will always return the same result. Therefore, if one were slower than the other it would effectively be due to an optimiser bug. Since both forms are used very frequently in queries, it would make no sense for a DBMS to allow such a bug to remain unfixed. Hence you will find that the performance of both forms is (probably) identical in all major SQL DBMSs.

In the SQL-92 Standard, COUNT(*) specifically means "the cardinality of the table expression" (could be a base table, VIEW, derived table, CTE, etc.).
I guess the idea was that COUNT(*) is easy to parse. Using any other expression requires the parser to ensure it doesn't reference any columns (COUNT('a') where a is a literal and COUNT(a) where a is a column can yield different results).
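A minimal sketch of that parenthetical, assuming a table t with a nullable column a:
SELECT COUNT('a') AS literal_count, -- counts every row; the literal 'a' is never NULL
COUNT(a) AS column_count -- counts only rows where the column a is not NULL
FROM t;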
In the same vein, COUNT(*) can be easily picked out by a human coder familiar with the SQL Standards, a useful skill when working with more than one vendor's SQL offering.
Also, in the special case SELECT COUNT(*) FROM MyPersistedTable;, the thinking is the DBMS is likely to hold statistics for the cardinality of the table.
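(As an aside: when an approximate answer is acceptable, that cardinality metadata can be read directly; a sketch, noting that the row counts in sys.partitions are documented as approximate:)
SELECT SUM(p.rows)
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID(N'dbo.MyPersistedTable')
AND p.index_id IN (0, 1); -- 0 = heap, 1 = clustered index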
Therefore, because COUNT(1) and COUNT(*) are semantically equivalent, I use COUNT(*).

COUNT(*) and COUNT(1) are the same in both result and performance.

I would expect the optimiser to ensure there is no real difference outside weird edge cases.
As with anything, the only real way to tell is to measure your specific cases.
That said, I've always used COUNT(*).

As this question comes up again and again, here is one more answer. I hope to add something for beginners wondering about "best practice" here.
SELECT COUNT(*) FROM something counts records which is an easy task.
SELECT COUNT(1) FROM something retrieves a 1 per record and then counts the 1s that are not null, which is essentially counting records, only more complicated.
Having said this: a good DBMS notices that the second statement will result in the same count as the first statement and reinterprets it accordingly, so as not to do unnecessary work. So usually both statements will result in the same execution plan and take the same amount of time.
However, from the point of view of readability, you should use the first statement. You want to count records, so count records, not expressions. Use COUNT(expression) only when you want to count non-null occurrences of something.
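For example (hypothetical table and column names):
SELECT COUNT(phone) FROM customers; -- counts only rows where phone IS NOT NULL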

I ran a quick test on SQL Server 2012 on an 8 GB RAM Hyper-V box. You can see the results for yourself. I was not running any other windowed application apart from SQL Server Management Studio while running these tests.
My table schema:
CREATE TABLE [dbo].[employee](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_employee] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
Total number of records in Employee table: 178090131 (~ 178 million rows)
First Query:
Set Statistics Time On
Go
Select Count(*) From Employee
Go
Set Statistics Time Off
Go
Result of First Query:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 35 ms.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 10766 ms, elapsed time = 70265 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Second Query:
Set Statistics Time On
Go
Select Count(1) From Employee
Go
Set Statistics Time Off
Go
Result of Second Query:
SQL Server parse and compile time:
CPU time = 14 ms, elapsed time = 14 ms.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 11031 ms, elapsed time = 70182 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
You can see there is a difference of only 83 milliseconds (= 70265 - 70182), which can easily be attributed to the exact system conditions at the time the queries were run. Also, I did a single run, so the comparison would become more accurate with several runs and some averaging. If, for such a huge data set, the difference is less than 100 milliseconds, then we can easily conclude that the two queries do not exhibit any performance difference in the SQL Server engine.
Note: RAM usage was close to 100% in both runs. I restarted the SQL Server service before each run.

SET STATISTICS TIME ON
select count(1) from MyTable (nolock) -- table containing 1 million records.
SQL Server Execution Times:
CPU time = 31 ms, elapsed time = 36 ms.
select count(*) from MyTable (nolock) -- table containing 1 million records.
SQL Server Execution Times:
CPU time = 46 ms, elapsed time = 37 ms.
I've run this hundreds of times, clearing the cache every time. The results vary from time to time as server load varies, but almost always count(*) has the higher CPU time.
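For reference, "clearing the cache" on a test box is usually done with something like the following (never run these on production):
CHECKPOINT;
DBCC DROPCLEANBUFFERS; -- empty the buffer pool
DBCC FREEPROCCACHE; -- empty the plan cache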

There is an article showing that COUNT(1) on Oracle is just an alias for COUNT(*), with proof.
I will quote some parts:
There is a part of the database software that is called “The
Optimizer”, which is defined in the official documentation as
“Built-in database software that determines the most efficient way to
execute a SQL statement“.
One of the components of the optimizer is called “the transformer”,
whose role is to determine whether it is advantageous to rewrite the
original SQL statement into a semantically equivalent SQL statement
that could be more efficient.
Would you like to see what the optimizer does when you write a query
using COUNT(1)?
With a user that has the ALTER SESSION privilege, you can set a tracefile_identifier, enable optimizer tracing, and run the COUNT(1) select, like: SELECT /* test-1 */ COUNT(1) FROM employees;.
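Assuming the standard 10053 optimizer trace event, those steps look roughly like this (a sketch; exact syntax and privileges can vary by version):
ALTER SESSION SET tracefile_identifier = 'test_1';
ALTER SESSION SET events '10053 trace name context forever, level 1';
SELECT /* test-1 */ COUNT(1) FROM employees;
ALTER SESSION SET events '10053 trace name context off';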
After that, you need to locate the trace files, which can be done with SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';. Later, in the file, you will find:
SELECT COUNT(*) "COUNT(1)" FROM "COURSE"."EMPLOYEES" "EMPLOYEES"
As you can see, it's just an alias for COUNT(*).
Another important comment: the COUNT(*) was really faster two decades ago on Oracle, before Oracle 7.3:
Count(1) has been rewritten as count(*) since 7.3, because Oracle likes to
auto-tune mythic statements. In earlier Oracle 7, Oracle had to evaluate (1)
for each row, as a function, before DETERMINISTIC and NON-DETERMINISTIC
existed.
So two decades ago, count(*) was faster
For other databases, such as SQL Server, this should be researched individually for each one.
I know that this question is specific to SQL Server, but the other questions on SO about the same subject (without mentioning a specific database) were closed and marked as duplicates of this one.

In all RDBMSs, the two ways of counting are equivalent in terms of the result they produce. Regarding performance, I have not observed any performance difference in SQL Server, but it may be worth pointing out that some RDBMSs, e.g. PostgreSQL 11, have less optimal implementations for COUNT(1), as they check the argument expression's nullability, as can be seen in this post.
I've found a 10% performance difference for 1M rows when running:
-- Faster
SELECT COUNT(*) FROM t;
-- 10% slower
SELECT COUNT(1) FROM t;
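If you want to reproduce this on PostgreSQL, a minimal sketch is to compare the actual runtimes that EXPLAIN ANALYZE reports for the two statements:
EXPLAIN ANALYZE SELECT COUNT(*) FROM t;
EXPLAIN ANALYZE SELECT COUNT(1) FROM t;
-- compare the actual total time reported for the Aggregate node in each plan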

COUNT(1) is not substantially different from COUNT(*), if at all. As to the question of COUNTing NULLable COLUMNs, it is straightforward to demo the differences between COUNT(*) and COUNT(<some col>):
USE tempdb;
GO
IF OBJECT_ID( N'dbo.Blitzen', N'U') IS NOT NULL DROP TABLE dbo.Blitzen;
GO
CREATE TABLE dbo.Blitzen (ID INT NULL, Somelala CHAR(1) NULL);
INSERT dbo.Blitzen SELECT 1, 'A';
INSERT dbo.Blitzen SELECT NULL, NULL;
INSERT dbo.Blitzen SELECT NULL, 'A';
INSERT dbo.Blitzen SELECT 1, NULL;
SELECT COUNT(*), COUNT(1), COUNT(ID), COUNT(Somelala) FROM dbo.Blitzen;
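-- Expected result: 4, 4, 2, 2 (COUNT(ID) and COUNT(Somelala) skip the two NULLs in each column)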
GO
DROP TABLE dbo.Blitzen;
GO

Related

Simple select query running forever

A simple SQL SELECT query is running forever for a particular ID in SQL Server 2012.
This query is running forever; it should return 10000 rows:
select *
from employees
where company_id = 34
If I change the query to
select *
from employees
where company_id = 12
it returns 7000 rows very quickly.
Employees is a view created by joining different tables.
Could there be a problem in the view?
One possibility is that you have a very large table. Such a query probably scans the entire table and returns rows that match as they are encountered.
My guess is that rows for company 12 are encountered before rows for company 34.
If this is the case, then an index on (company_id) should help.
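For instance, something like this (a sketch; since employees is a view, the index belongs on whichever underlying table holds company_id, named hypothetically here):
CREATE NONCLUSTERED INDEX IX_employees_company_id
ON dbo.employees_base (company_id); -- hypothetical underlying table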
There may be other causes as well. Here are two other possibilities:
Contention for the rows with company_id 34, causing delays in reading the data (this would depend on the isolation level you are using and the nature of concurrent updates).
An unlimited-size column that is populated with very big values for company_id 34 and empty or very small ones for 12.
There may be other possibilities as well.
One thing you can do to speed up the process is to index company_id, as a B-tree index would speed up the search.
Without looking at the structure of the table and execution plan, here are a few things that can be suggested apart from what Gordon has already covered:
Could you create indexes on the underlying tables that cover this query? That would include an index on the 'searched' and 'sorted' columns (joins, where clause, order by, group by, distinct), with the SELECTed columns in the INCLUDE part of the index (in the case of a nonclustered rowstore index). The aim is to see an 'index seek' in the execution plan. (See the sketch after this list.)
Update statistics on the underlying tables. (As a side note, I would suggest keeping 'AUTO CREATE' and 'AUTO UPDATE' statistics ON unless you have a reason not to update them automatically in your application.)
I would also like to know when defragmentation was last performed on the server. Long-overdue defragmentation could very well explain this kind of issue for certain values, especially on a table that gets a lot of write operations.
Execute the query again. Even if you do not have the information for #3 above, you can try executing the query and skip that step.
While running the query, check the wait stats on the server by querying the DMVs sys.dm_os_wait_stats and sys.dm_tran_locks. Check whether the wait is due to CXPACKET (waits on other parallel processes), PAGEIOLATCH (reading from disk rather than RAM), or locks. This is the starting point of the investigation: it will give you the root cause, and you can then take appropriate measures.
An additional quick check: look at the available RAM in the server's Task Manager, and make sure SQL Server's RAM is not being used up by other unnecessary applications/sessions.
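Here is the sketch referenced in the first suggestion above (all object names are hypothetical, since the view's underlying tables aren't shown):
-- A covering index on the searched column, with the selected columns in INCLUDE:
CREATE NONCLUSTERED INDEX IX_emp_company
ON dbo.employees_base (company_id)
INCLUDE (first_name, last_name); -- whichever columns the SELECT actually needs
-- Refresh statistics on the underlying table:
UPDATE STATISTICS dbo.employees_base;
-- While the query runs, see what the server is waiting on:
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;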

SQL Server chooses wrong execution plan

When this query is executed, SQL Server chooses the wrong execution plan. Why?
SELECT top 10 AccountNumber , AVERAGE
FROM [M].[dbo].[Account]
WHERE [Code] = 9201
Go
SELECT top 10 AccountNumber , AVERAGE
FROM [M].[dbo].[Account] with (index(IX_Account))
WHERE [Code] = 9201
SQL Server chooses the clustered PK index for this query, with an elapsed time of 78254 ms, but if I force SQL Server to use the non-clustered index, the elapsed time is 2 ms. Statistics on the Account table are up to date.
It's usually down to having bad statistics on the various indexes. Even with correct stats, an index's statistics can only hold so many samples, and occasionally, when there is a massive skew in the values, the optimiser can think that it won't find a sufficiently small number of rows.
Also, you can sometimes have a massive number of [almost] empty blocks to read through, with data values only at "the end". This can mean that, of a couple of otherwise close plan variations, one will require drastically more IO to burn through the holes. Likewise, if you don't actually have 10 rows for 9201, it will have to do an entire table scan if it chooses the PK/CI rather than a more fitting index. This is more prevalent when you've done plenty of deletes.
Try updating the stats on the various indexes and things like that, and see if it changes anything. 78 seconds is a lot of IO on a single table scan.
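For example, a full-scan statistics refresh on the table and index from the question (a sketch):
UPDATE STATISTICS [M].[dbo].[Account] WITH FULLSCAN;
UPDATE STATISTICS [M].[dbo].[Account] IX_Account WITH FULLSCAN;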

What makes one of these queries faster?

I have a SQL query (below) that took 10 seconds to run; since it was on a production environment, I stopped it just to be sure there was no SQL locking going on.
SELECT TOP 1000000 *
FROM Table T
Where CONVERT(nvarchar(max), T.Data) like '%SearchPhrase%' --T.Data is initially XML
Now, if I add an order by on creation time (which I do not believe is indexed), it takes 2 seconds and is done.
SELECT TOP 1000000 *
FROM Table T
Where CONVERT(nvarchar(max), T.Data) like '%SearchPhrase%' --T.Data is initially XML
order by T.CreatedOn asc
Now the kicker is that only about 3000 rows are returned, which tells me that even with the TOP 1000000 it isn't stopping short; it's still going through all the rows.
I have a basic understanding of how SQL server works and how the query parsing works, but I'm just confused as to why the order by makes it so much faster in this situation.
The server being run is SQL server 2008 R2
The additional sort operation is apparently enough in this case for SQL Server to use a parallel plan.
The slower one (without ORDER BY) is a serial plan whereas the faster one has a DegreeOfParallelism of 24 meaning that the work is being done by 24 threads rather than just a single one.
This explains the much reduced elapsed time despite the additional work required for the sort.
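One way to confirm that parallelism is the differentiator (a sketch): force the ORDER BY variant down to a single thread with a MAXDOP hint and see whether its advantage disappears.
SELECT TOP 1000000 *
FROM Table T
WHERE CONVERT(nvarchar(max), T.Data) LIKE '%SearchPhrase%'
ORDER BY T.CreatedOn ASC
OPTION (MAXDOP 1); -- force a serial plan for comparison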

Oracle sql benchmark

I have to benchmark a query. Currently I need to know how adding a column (FIELD_DATE1) to the select result set will affect SQL execution time. There are administrative restrictions in the DB, so I cannot use debugging tools. So I wrote a query:
SELECT COUNT(*), MIN(XXXT), MAX(XXXT)
FROM ( select distinct ID AS XXXID, sys_extract_utc(systimestamp) AS XXXT
, FIELD_DATE1 AS XXXUT
from XXXTABLE
where FIELD_DATE1 > '20-AUG-06 02.23.40.010000000 PM' );
Will the output of the query show the real execution time?
There is a lot to learn when it comes to benchmarking in Oracle. I recommend you begin with the items below, though given the administrative restrictions you mention, it worries me that some of these features could require extra permissions:
Explain Plan: for every SQL statement, Oracle has to create an execution plan; the execution plan defines how the information will be read/written, i.e. the indexes to use, the join method, the sorts, etc.
The explain plan will give you information about how good your query is and how it is using the indexes. Learning the concept of query cost is key here, so take a look at it.
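A minimal sketch against the table from the question (the date predicate is simplified here):
EXPLAIN PLAN FOR
SELECT COUNT(*) FROM XXXTABLE WHERE FIELD_DATE1 > DATE '2006-08-20';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);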
TKPROF: an Oracle tool that allows you to read Oracle trace files. When you enable timed statistics in Oracle you can trace your SQL statements; the results of these traces are written to files, which you can read with TKPROF.
Among the information TKPROF will let you see is:
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
See: Using SQL Trace and TKPROF
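If you have the privileges, enabling a trace for your own session typically looks like this (a sketch; tkprof is then run from the OS shell):
ALTER SESSION SET tracefile_identifier = 'bench';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
-- ... run the query under test ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
-- then, from the OS shell: tkprof <your_trace_file>.trc report.txt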
It's possible in this query that SYSTIMESTAMP would be evaluated once and the same value associated with every row, or that it would be evaluated once for each row, or something in between. It is also possible that all the rows would be fetched from the table and then SYSTIMESTAMP evaluated for each one, so you wouldn't be getting an accurate account of the time taken by the whole query. Generally, you can't rely on the order of evaluation within SQL, or assume that a function will be evaluated once for each row where it appears.
Generally, the way I would measure execution time is to have the client tool report it. If you're executing the query in SQL*Plus, you can SET TIMING ON to have it report the execution time for every statement. Other interactive tools probably have similar features.

Select query takes 3 seconds to pull 330 records. Need Optimization

I have a normal select query that takes nearly 3 seconds to execute (SELECT * FROM users). There are only 310 records in the user table.
The configuration of my production server:
SQL Server Express Edition
Server configuration: Pentium 4 HT, 3 GHz, 2 GB RAM
Column Name             Type            NULL        Comments
user_companyId          int             NOT NULL
user_userId             int             NOT NULL    Primary column
user_FirstName          nvarchar(50)    NULL
user_lastName           nvarchar(60)    NULL
user_logon              nvarchar(50)    NULL
user_password           nvarchar(255)   NULL
user_emailid            nvarchar(255)   NULL
user_status             bit             NULL
user_Notification       bit             NULL
user_role               int             NULL
user_verifyActivation   nvarchar(255)   NULL
user_verifyEmail        nvarchar(255)   NULL
user_loginattempt       smallint        NULL
user_createdby          int             NULL
user_updatedby          int             NULL
user_createddate        datetime        NULL
user_updateddate        datetime        NULL
user_Department         nvarchar(1000)  NULL
user_Designation        nvarchar(1000)  NULL
As there is no WHERE clause, this isn't down to indexes etc.; SQL will do a full table scan and return all the data. I'd be looking at other things running on the machine, or SQL Server having run for a long time and used up a lot of virtual memory. Bottom line: this isn't a SQL issue, it's a machine issue.
Is anything else happening on this machine?
Even with the worst possible data structure, SELECT * FROM Users should not take 3 seconds for 310 records. Either there are more (a lot more) records inside, or there is some problem outside SQL Server (some other process blocking, or hardware issues).
Well, indexes don't much matter here--you're getting a table scan with that query.
So, there are only a few other items that could be bogging this down. One is row size. What columns do you have? If you have tons of text or image columns, this could be causing a delay in bringing these back.
As for your hardware, what's your HDD's RPMs? Remember, this is reading off of a disk, so if there are any other IO tasks being carried out, this will obviously cause a performance hit.
There's a number of things you should consider:
Don't use the Express edition, it's probably called that for a reason. Go with a real DBMS (and, yes, this includes the non-express SQL Server).
Use of "select * from" is always a bad idea unless you absolutely need every column. Change it to get only the columns you need.
Are you using a "where" or "order by" clause? If so, you need to ensure you have indexes set up correctly (even for 330 rows, since tables always get bigger than you think).
Use EXPLAIN, or whatever tool Microsoft provides as an equivalent. It will show you why the query is running slow.
If your DB is on a different machine, there may be network issues (not necessarily problems, you may have stateful packet filters that slow down connections, for example).
Examine the system loads of the boxes you're using. If there are other processes using a lot of CPU grunt or disk I/O, they may be causing slowdown.
The best thing to do is profile the server and also pay attention to what kinds of locks are occurring during the query.
If you are using any isolation level options, or the default, determine whether you can lower the isolation level, which will decrease the locking occurring on the table. The more locking that occurs, the more conflicts you will have where the query has to wait for other queries to complete. However, be very careful when lowering the isolation level, because you need to understand how that will affect the data you get back.
Determine if you can add where criteria to your query. Restricting the records you query can reduce locks.
Edit: If you find locking issues, you need to also evaluate any other queries that are being run.
If it's consistently 3 seconds, the problem isn't the query or the schema, for any reason that wouldn't be obviously irrational to the designer. I'd guess it's hardware or network issues. Is there anything you can do with the database that doesn't take 3 seconds? What do SET STATISTICS IO ON and SET STATISTICS TIME ON tell you? I bet there's nothing there that supports 3 seconds.
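That is (a minimal sketch to run in SSMS):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT * FROM Users;
-- check logical/physical reads and CPU vs. elapsed time in the Messages tab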
Without a better indexing strategy, leaving certain columns out will only reduce the impact on the network, which shouldn't be awful for only 310 rows. My guess is that it's a locking issue.
So consider using:
SELECT * FROM Users (NOLOCK);
This will mean that you don't respect any locks that are currently on the table by other connections. You may get 'dirty reads', seeing data which hasn't been committed yet - but it could be worthwhile from a performance perspective.
Let's face it - if you've considered the ramifications, then it should be fine...
Rob
The first thing you should do for any performance problem is to get an execution plan for the query - this is a representation of the run path SQL Server chooses when it executes your query. The best place to look for info on how to do this is Google - you want a statistics plan, as it includes information about how many rows are returned.
This doesn't sound like a problem with the execution plan however, as the query is so simple - in fact I'm fairly sure that query counts as a "trivial plan", i.e. there is only 1 possible plan. 
This leaves locking or hardware issues (is the query only slow on your production database, and is it always slow, or does the execution time vary?). The query will attempt to get a shared lock on the whole table; if anyone is writing, you will be blocked from reading until the writer is finished. You can check whether this is the source of your problem by looking at the DMVs; see http://sqllearningsdmvdmf.blogspot.com/2009/03/top-10-resource-wait-times.html
Finally, there are restrictions on SQL Server Express in terms of CPU utilisation, memory use, etc. What is the load on your server like (operations per second)?
Without the table structure to know what your table looks like, we can't answer such a question....
What about not using SELECT * FROM Users, but actually specifying which fields you really need from the table?
SELECT user_companyId, user_userId,
user_FirstName, user_lastName, user_logon
FROM Users
How does that perform? Do you still need 3 seconds for this query, or is it significantly faster?
If you really need all the users and all their attributes, then maybe that's just the time it takes on your system to retrieve that amount of data. The best way to speed things up is to limit the attributes retrieved (do you really need the user's photo?) by specifying a list of fields, and to limit the number of elements retrieved using a WHERE clause (do you REALLY need ALL users? Not just those that.........)
Marc
Is there a chance that performance degrades based on column size (the length of the data in the column)?
In your table, the last two columns are NVARCHAR(1000); are they always filled with that much data?
I am not a SQL expert, but consider that the packet size returned for 310 records with those two columns versus without them will be different.
I saw a similar post here on Stack Overflow; you can go through this:
performance-in-sql