Query execution time reduction - SQL

How to reduce the execution time of this simple SQL query in SQL Server?
select *
from companybackup
where tiRecordStatus = 1
and (iAccessCode < 3 or chUpdateBy = SUSER_SNAME())
It has nearly 38,681 rows and takes nearly 10 minutes 23 seconds. The table has 50 columns; I even created indexes on all the columns to reduce the time, but that didn't succeed. I also checked with the NOLOCK option and all the other available solutions, but couldn't reduce the execution time.
What might be the issue?

If this is a critical query, you can create an index designed just for it:
CREATE INDEX ix_MyIndexNameHere ON CompanyBackup
(tiRecordStatus, iAccessCode, chUpdateBy)
This will still require a key lookup since you are returning all fields with *, but it's a lot better than a bunch of single-field indexes.
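If you can avoid SELECT * and list only the columns you actually need, a covering index can eliminate the key lookup entirely. A sketch, where the INCLUDE list stands in for whatever columns your query really returns (these column names are hypothetical):
CREATE INDEX ix_MyCoveringIndexHere ON CompanyBackup
(tiRecordStatus, iAccessCode, chUpdateBy)
INCLUDE (chCompanyName, dtUpdatedDate) -- hypothetical columns: list the ones you select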

I am wondering what the SUSER_SNAME() function does; right now it gets executed for every row. Can you try capturing that function's return value in a variable first and then running the query?
If the function is costly, this will reduce the execution time by a huge margin.
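A minimal sketch of that (assumes SQL Server 2008 or later for the inline DECLARE initialization):
DECLARE @updateBy sysname = SUSER_SNAME(); -- evaluate the function once

select *
from companybackup
where tiRecordStatus = 1
and (iAccessCode < 3 or chUpdateBy = @updateBy)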

You haven't given much info, but splitting OR into a union of two queries can often help (optimisers have trouble with OR):
select *
from companybackup
where tiRecordStatus = 1
and iAccessCode < 3
union
select *
from companybackup
where tiRecordStatus = 1
and chUpdateBy = SUSER_SNAME()
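If iAccessCode can never be null, you can go a step further: plain UNION has to sort the combined results to remove duplicates, while UNION ALL with mutually exclusive predicates avoids that sort. A sketch under that assumption:
select *
from companybackup
where tiRecordStatus = 1
and iAccessCode < 3
union all
select *
from companybackup
where tiRecordStatus = 1
and chUpdateBy = SUSER_SNAME()
and iAccessCode >= 3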
If this doesn't help, try creating these indexes:
create index idx1 on companybackup(tiRecordStatus, iAccessCode);
create index idx2 on companybackup(chUpdateBy, tiRecordStatus);
Then force a recalculation of the distribution of index values:
update statistics companybackup;
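If the default sampled update doesn't change anything, a full scan makes the statistics exact (slower to build on big tables, but fine at this table's size):
update statistics companybackup with fullscan;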

Related

Fastest way to do SELECT * WHERE not null

I'm wondering what is the fastest way to get all non null rows. I've thought of these :
SELECT * FROM table WHERE column IS NOT NULL
SELECT * FROM table WHERE column = column
SELECT * FROM table WHERE column LIKE '%'
(I don't know how to measure execution time in SQL and/or Hive, and from repeatedly trying on a 4M-row table in pgAdmin, I get no noticeable difference.)
You will never notice any difference in performance when running those queries on Hive, because these operations are quite simple and run on mappers that execute in parallel.
Initializing/starting mappers takes far more time than any possible difference in the execution time of these queries, and it adds a lot of variance to the total execution time, because mappers may be waiting for resources and not running at all.
But you can try to measure time, see this answer about how to measure execution time: https://stackoverflow.com/a/44872319/2700344
SELECT * FROM table WHERE column IS NOT NULL is the most straightforward (understandable/readable), though all of the queries are correct.
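If you still want to compare the three variants without relying on timing, Hive's EXPLAIN shows the compiled plan for each (table and column names as in the question):
EXPLAIN SELECT * FROM table WHERE column IS NOT NULL;
EXPLAIN SELECT * FROM table WHERE column = column;
EXPLAIN SELECT * FROM table WHERE column LIKE '%';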

oracle functional index performance

I have a table with 226 million rows that has a varchar2(2000) column. The first 10 characters are indexed using a functional index SUBSTR("txtField",1,10).
I am running a query such as this:
select count(1)
from myTable
where SUBSTR("txtField",1,10) = 'ABCDEFGHIJ';
The value does not exist in the database, so the result is 0.
The explain plan shows that the operation performed is "INDEX (RANGE SCAN)", which is what I would expect, and the cost is 4. When I run this query it takes on average 114 seconds.
If I change the query and force it to not use the index:
select count(1)
from myTable
where SUBSTR("txtField",1,9) = 'ABCDEFGHI';
The explain plan shows the operation will be a "TABLE ACCESS (FULL)" which makes sense. The cost is 629,000. When I run this query it takes on average 103 seconds.
I am trying to understand how scanning an index can take longer than reading every record in the table and performing the substr function on a field.
Followup:
There are 230M+ rows in the table and the query returns 17 rows; I selected a new value that is in the database. Initially I was executing with a value that was not in the database and returned zero rows. It seems to make no difference.
Querying for information on the index yields:
CLUSTERING_FACTOR=201808147
LEAF_BLOCKS=1131660
I am running the query with AUTOTRACE ON and the gather_plan_statistics and will add those results when they are available.
Thanks for all the suggestions.
There's a lot of possibilities.
You need to look at the actual execution plan, though.
You can run the query with the /*+ gather_plan_statistics */ hint, and then execute:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
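For example, with the hint embedded directly in the query from the question:
select /*+ gather_plan_statistics */ count(1)
from myTable
where SUBSTR("txtField",1,10) = 'ABCDEFGHIJ';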
You should also look into running a trace/tkprof to see what is actually happening - your DBA should be able to assist you with this.
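A minimal sketch of tracing your own session from SQL*Plus (Oracle 10g or later; the resulting trace file is then formatted with tkprof on the server):
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
-- run the slow query here
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;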

sql: how to improve this statement

I have the following sql statement that I need to make quicker.
There are 500k rows, and I have an index on 'HARDWARE_ID', but this still takes up to a second to perform.
Does anyone have any ideas?
select
*
from
DEVICE_MONITOR DM
where
DM.DM_ID = (
select
max(DM_ID)
from
DEVICE_MONITOR
where
HARDWARE_ID=#value#
)
I've found the following index is also a great help...
CREATE INDEX DM_IX4 ON DEVICE_MONITOR (DM_ID, HARDWARE_ID);
In my test it drops the runtime from 26 seconds to 20 seconds.
Thanks for all your help.
The index on DM_ID should be created as ascending.
The problem might be that the match on HARDWARE_ID is found very quickly, but those records then have to be sorted to fetch the max from them, and that operation is time-consuming.
Try comparing these statements:
1 #result = select max(DM_ID) from DEVICE_MONITOR where HARDWARE_ID=#value#
2 select * from DEVICE_MONITOR DM where DM.DM_ID = #result
Query 1 is the problem, as you shall see that query 2 works faster.
If the index is created and the query still works slowly, you may update the statistics, but other queries will then probably only work slower.
If possible, instead of * use only the columns that you really need.
Consider changing '*' to only the list of attributes you need. Very often this can give you a substantial increase in speed.
If you have a clustered index on DM_ID, then that looks like the fastest query.
Edit: Ack. Daniel has the correct answer. Missed that.
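One more alternative worth trying, sketched under the assumption that your database supports TOP (on others, LIMIT 1 or FETCH FIRST 1 ROW ONLY is the equivalent): with an index leading on HARDWARE_ID and then DM_ID, the newest row can be found with a single seek and the subquery disappears. Note this returns exactly one row, whereas the original form could return several if DM_ID were not unique. The index name here is made up:
CREATE INDEX DM_IX5 ON DEVICE_MONITOR (HARDWARE_ID, DM_ID);

SELECT TOP 1 *
FROM DEVICE_MONITOR
WHERE HARDWARE_ID = #value#
ORDER BY DM_ID DESC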

sql query taking too long

I have a simple "insert into.." query which is taking around 40 seconds to execute. It simply takes records from one table and inserts them into another.
I have indexes on F1 and BatchID on the tbl_TempCatalogue table.
146,624 records are affected.
The select itself is not slow; the insert into is slow.
The complete query is:
insert into tbl_ItemPrice
(CATALOGUEVERSIONID,SERIESNUMBER,TYPE,PRICEFIELD,PRICE,
PRICEONREQUEST,recordid)
select 296 as CATALOGUEVERSIONID
,ISNULL(F2,'-32768') as SERIESNUMBER
,ISNULL(F3,'-32768') as TYPE
,ISNULL(F4,'-32768') as PRICEFIELD,F5 as PRICE
,(case when F6 IS NULL then null when F6 = '0' then 'False'
else 'True' end ) as PRICEONREQUEST
,newid()
from tbl_TempCatalogue
where F1 = 450
and BATCHID = 72
Thanks
It's possible that if your table is large, you'd benefit from indexes on F1 and BATCHID in the tbl_TempCatalogue table. It's not clear what DBMS you're using, but most have decent tools to show you an execution plan. If you're doing full table scans on a large table, that may take a long time to run.
Also, you say that the "insert into" is slow, but you include just the code for the select. Is the select slow by itself?
You say the problem lies with the INSERT not the SELECT. So possible culprits are (in no fixed order):
triggers
storage allocation
foreign key validation
check constraint validation
i/o bottlenecks
faulty disk
contention with other sessions
What tools you have to undertake diagnosis will depend on which database product (and which version of which database) you are using. Please include these details in your question.
Is there an index on the tbl_TempCatalogue table to help the database find the rows where F1=450 and BATCHID=72?
Otherwise, it will probably need to scan the entire table to find them.
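If only single-column indexes exist there, a composite index matching both predicates might help; a sketch (the index name is made up):
CREATE INDEX IX_TempCatalogue_F1_BatchID ON tbl_TempCatalogue (F1, BATCHID);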
Your query looks fine, but my concern is with the newid() function: whatever logic sits behind generating those values may hit performance. Please try running it without newid() and see the execution time of the select statement.
To diagnose this type of issue, follow these steps:
see the execution time of the select statement only
see the execution time of the insert statement with the select
see the execution time of the select statement without the newid() function
After comparing those timings you will be able to locate the exact root cause of where the problem resides. Please post those timings so that we can try to solve this issue.
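One way to capture those timings in SQL Server, if that's what you're using (run each candidate statement between the ON and OFF and read the Messages output):
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
-- 1: run the select on its own
-- 2: run the full insert ... select
-- 3: run the select with newid() removed
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;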

SQL massive performance difference using SELECT TOP x even when x is much higher than selected rows

I'm selecting some rows from a table valued function but have found an inexplicable massive performance difference by putting SELECT TOP in the query.
SELECT col1, col2, col3 etc
FROM dbo.some_table_function
WHERE col1 = #parameter
--ORDER BY col1
is taking upwards of 5 or 6 mins to complete.
However
SELECT TOP 6000 col1, col2, col3 etc
FROM dbo.some_table_function
WHERE col1 = #parameter
--ORDER BY col1
completes in about 4 or 5 seconds.
This wouldn't surprise me if the returned set of data were huge, but the particular query involved returns ~5000 rows out of 200,000.
So in both cases, the whole of the table is processed, as SQL Server continues to the end in search of 6000 rows which it will never get to. Why the massive difference then? Is this something to do with the way SQL Server allocates space in anticipation of the result set size (the TOP 6000 thereby giving it a low requirement which is more easily allocated in memory)?
Has anyone else witnessed something like this?
Thanks
Table valued functions can have a non-linear execution time.
Let's consider function equivalent for this query:
SELECT (
SELECT SUM(mi.value)
FROM mytable mi
WHERE mi.id <= mo.id
)
FROM mytable mo
ORDER BY
mo.value
This query (that calculates the running SUM) is fast at the beginning and slow at the end, since on each row from mo it should sum all the preceding values which requires rewinding the rowsource.
Time taken to calculate SUM for each row increases as the row numbers increase.
If you make mytable large enough (say, 200,000 rows, as in your example) and run this query you will see that it takes considerable time.
However, if you apply TOP 5000 to this query, you will see that it completes in much less than 1/20 of the time required for the full table.
Most probably, something similar happens in your case too.
To say something more definitely, I need to see the function definition.
Update:
SQL Server can push predicates into the function.
For instance, I just created this TVF:
CREATE FUNCTION fn_test()
RETURNS TABLE
AS
RETURN (
SELECT *
FROM master.dbo.spt_values -- any real table works here; master alone is a database, not a table
);
These queries:
SELECT *
FROM fn_test()
WHERE name = #name
SELECT TOP 1000 *
FROM fn_test()
WHERE name = #name
yield different execution plans (the first one uses clustered scan, the second one uses an index seek with a TOP)
I had the same problem, a simple query joining five tables returning 1000 rows took two minutes to complete. When I added "TOP 10000" to it it completed in less than one second. It turned out that the clustered index on one of the tables was heavily fragmented.
After rebuilding the index the query now completes in less than a second.
Your TOP has no ORDER BY, so it's simply the same as SET ROWCOUNT 6000 first. An ORDER BY would require all rows to be evaluated first, and that would take a lot longer.
If dbo.some_table_function is an inline table-valued udf, then it's simply a macro that's expanded, so it returns the first 6000 rows, as mentioned, in no particular order.
If the udf is multi-statement, then it's a black box and will always pull in the full dataset before filtering. I don't think this is happening.
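For contrast, here is what a multi-statement version of the fn_test example above would look like (hypothetical name; this is the black-box shape that forces the full dataset to be materialized before the caller's WHERE is applied):
CREATE FUNCTION fn_test_multi()
RETURNS @t TABLE (name nvarchar(128))
AS
BEGIN
-- the whole table is copied into @t before any outer filtering happens
INSERT INTO @t SELECT name FROM master.dbo.spt_values;
RETURN;
END;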
Not directly related, but another SO question on TVFs
You may be running into something as simple as caching here - perhaps (for whatever reason) the "TOP" query is cached? Using an index that the other isn't?
In any case, the best way to quench your curiosity is to examine the full execution plan for both queries. You can do this right in SQL Server Management Studio and it'll tell you EXACTLY what operations are being performed and how long each is predicted to take.
All SQL implementations are quirky in their own way - SQL Server's no exception. These kind of "whaaaaaa?!" moments are pretty common. ;^)
It's not necessarily true that the whole table is processed if col1 has an index.
The SQL optimization will choose whether or not to use an index. Perhaps your "TOP" is forcing it to use the index.
If you are using MSSQL Query Analyzer (the name escapes me), hit Ctrl-K. This will show the execution plan for the query instead of executing it. Mousing over the icons will show the IO/CPU usage, I believe.
I bet one is using an index seek, while the other isn't.
If you have a generic client:
SET SHOWPLAN_ALL ON;
GO
select ...;
go
see http://msdn.microsoft.com/en-us/library/ms187735.aspx for details.
I think Quassnoi's suggestion seems very plausible. By adding TOP 6000 you are implicitly giving the optimizer a hint that a fairly small subset of the 200,000 rows is going to be returned. The optimizer then uses an index seek instead of a clustered index scan or table scan.
Another possible explanation could be caching, as Jim Davis suggests. This is fairly easy to rule out by running the queries again. Try running the one with TOP 6000 first.