INSERT INTO tab2 NOLOGGING
SELECT
ID,
ORG_NAME
FROM tab3
WHERE (( upper(NVL(org_name,company_given)) LIKE '%MSOFT%'
OR upper(NVL(org_name,company_given)) LIKE 'M SOFT'
OR upper(NVL(org_name,company_given)) LIKE '%MISOFT%'
OR upper(NVL(org_name,company_given)) LIKE 'MSN %'
OR upper(NVL(org_name,company_given)) LIKE '%N APP%'
OR upper(NVL(org_name,company_given)) LIKE '%NAPP%'
OR upper(NVL(org_name,company_given)) LIKE '%NAPPE%'
OR upper(NVL(org_name,company_given)) LIKE '%NAPPS%'
OR upper(NVL(org_name,company_given)) LIKE '%NEK%APPLIANCE%'))
The above query is taking too much time; table tab3 is very large.
The query above is generated dynamically. Are there any alternatives to NVL?
The line below
OR upper(NVL(org_name,company_given)) LIKE 'M SOFT'
could be replaced with
OR (org_name is not null and upper(org_name) LIKE 'M SOFT')
OR (org_name is null and upper(company_given) LIKE 'M SOFT')
Not sure it's faster.
You can also try running it once with a subquery:
SELECT *
FROM (
SELECT
ID,
ORG_NAME,
upper(NVL(org_name,company_given)) as name_for_filter
FROM tab3)
WHERE name_for_filter LIKE '%MSOFT%'
OR name_for_filter LIKE 'M SOFT'
...
The best way would be to introduce a name_for_filter column in the table and fill it once with a trigger. Then the column could be used for the filtering.
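A minimal sketch of that approach, assuming Oracle (the column size and trigger name are illustrative):
alter table tab3 add (name_for_filter varchar2(400));

update tab3
set name_for_filter = upper(nvl(org_name, company_given));

-- keep the column in sync when the source columns change
create or replace trigger tab3_name_for_filter_trg
before insert or update of org_name, company_given on tab3
for each row
begin
  :new.name_for_filter := upper(nvl(:new.org_name, :new.company_given));
end;
/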
This query is going to execute a full table scan of your table. You say that table is huge, so it's going to take a long time.
A normal index won't help because there are two columns in play. Even a function-based index like this ...
create index fbi3 on tab3( upper(NVL(org_name, company_given) ))
... won't help because indexes are useless against a like filter with a wildcard at the front, and you have those:
LIKE '%NEK%APPLIANCE%'
If this is a one-time exercise I would suggest you swallow the time and wait for the statement to finish. But let's assume you want to do this kind of query often. If so, it's worth building infrastructure to support it.
A new column for the search criteria. Basically a column which is pre-populated with the arguments used in the functions. For 11g or higher make this a virtual column:
alter table tab3 add search_name as ( upper(NVL(org_name, company_given)));
If using an older version of the database you will have to build a normal column and populate it with triggers.
Build a Text index on the search_name column. As it is short you can use a CTXCAT index, which will be maintained transactionally.
Then you need to rewrite the query to use catsearch() syntax instead of the LIKE operator. Find out more in the Oracle Text documentation.
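A rough sketch of the shape this takes (the index name and search string are illustrative; note that a Text index matches on tokens rather than arbitrary substrings, so the search terms may need rethinking):
create index tab3_search_name_cat on tab3 (search_name)
indextype is ctxsys.ctxcat;

select id, org_name
from tab3
where catsearch(search_name, 'MSOFT | NAPP', null) > 0;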
As already suggested, it's probably best to create a prepared search column. You could even remove the spaces to avoid having to search for both 'N APP' and 'NAPP', for example (but that could lead to false positives in some cases).
On top of that, you can remove the checks for %NAPPE% and %NAPPS% because you already include records containing %NAPP%.
It should be faster when using:
pseudocode:
'MSN %'
or ('%SOFT%' and ('M SOFT' or '%MSOFT%' or '%MISOFT%'))
or ('%APP%' and ('%N APP%' or '%NAPP%' or '%NEK%APPLIANCE%'))
If SOFT or APP is not found, there is no need to check the other patterns containing the same word; the AND avoids those checks when the first part is already false.
If this is just an example and those parameters are variable, you could write some code to optimize those search terms (unless the database engine already does that).
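In SQL, that regrouping might look like the following sketch, reusing the name_for_filter subquery from the earlier answer:
SELECT id, org_name
FROM (
  SELECT id, org_name, upper(NVL(org_name, company_given)) AS name_for_filter
  FROM tab3
)
WHERE name_for_filter LIKE 'MSN %'
   OR (name_for_filter LIKE '%SOFT%'
       AND (name_for_filter LIKE 'M SOFT'
            OR name_for_filter LIKE '%MSOFT%'
            OR name_for_filter LIKE '%MISOFT%'))
   OR (name_for_filter LIKE '%APP%'
       AND (name_for_filter LIKE '%N APP%'
            OR name_for_filter LIKE '%NAPP%'
            OR name_for_filter LIKE '%NEK%APPLIANCE%'));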
Related
I'm always being given a large list of, say, IDs which I need to search for in our database. I manually put them into a SQL statement like the one below, which can take a while (putting single quotes around each number followed by a comma). I was hoping someone has an easy way of doing this for me? Or am I just being a bit lazy...
select * from blah where idblah in ('1234-A', '1235-A', '1236-A' ................)
You can use the world's simplest code generator.
Just paste in the list of values, set up the pattern and voila... you have a set of quoted values.
I have also used Excel in the past, using the CONCAT function with smart paste.
I would set aside a table to hold the values and have my queries JOIN against that table. Set up a simple import script (don't forget to clear out the table at the start) and something like this is a breeze. Run the import, run the query. You never have to touch the query again or regenerate any code.
As an example:
CREATE TABLE Search_ID_List (
id VARCHAR(20) NOT NULL,
CONSTRAINT PK_Search_ID_List PRIMARY KEY CLUSTERED (id)
)
and:
SELECT
<column list>
FROM
Search_ID_List SIL
INNER JOIN Blah B ON
B.id = SIL.id
If you want to be able to save past search criteria or have multiple searches available to you at the same time then you can just add an identifying column which gets filled in by your import. It can be the file from where the ids came, some descriptive code/name, or whatever. Then just add that to the WHERE clause of your query and you're all set.
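A sketch of that variation (the extra column and batch name are hypothetical; the primary key would then need to cover the batch column as well if the same id can appear in more than one saved search):
ALTER TABLE Search_ID_List ADD search_batch VARCHAR(50) NOT NULL DEFAULT '';

SELECT
    <column list>
FROM
    Search_ID_List SIL
    INNER JOIN Blah B ON
        B.id = SIL.id
WHERE
    SIL.search_batch = 'march-import';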
You could do something like this.
select * from blah where ',' + '1234-A,1235-A,1236-A' + ',' LIKE '%,' + idblah + ',%'
This pattern is super useful when you're being passed a comma-delimited list of values to filter by, but I think it would be applicable here as well.
I write some queries like this:
SELECT *
FROM Sample.dbo.Temp
WHERE
Name LIKE '%ab%'
OR Name LIKE '%fg%'
OR NAME LIKE '%asd%'
OR ...
OR ...
OR Name LIKE '%kj%'
Is there any way I can rewrite this query like this:
SELECT *
FROM Sample.dbo.Temp
WHERE
Name LIKE (
'%ab%'
OR '%fg%'
OR '%asd%'
OR ...
OR ...
OR '%kj%'
)
It just looks more comfortable, both from a readability and a manageability point of view. If the column Name changes, I can always make one change instead of a hundred (or use Find and Replace). Any suggestions?
No, you have to keep repeating the LIKE
Although you could probably fool around a bit to make it work something like that, it won't be prettier or more readable.
Perhaps you should generate the query programmatically instead of manually writing this?
PS: perhaps a fulltext index is a better idea here?
You can put the values in a table, perhaps a CTE, and semijoin to your table e.g.
WITH params
AS
(
SELECT *
FROM (
VALUES ('ab'),
('fg'),
('asd'),
('kj')
) AS T (param)
)
SELECT *
FROM Sample.dbo.Temp T
WHERE EXISTS (
SELECT *
FROM params P
WHERE T.Name LIKE '%' + P.param + '%'
);
That looks long-winded, but if the CTE was instead a base table then the query could be data-driven, i.e. if the list of parameter values needs to change in the future then it would involve merely updating a table rather than amending hard-coded values (possibly in multiple objects).
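A sketch of that data-driven version (the table name is hypothetical):
CREATE TABLE NameSearchParams (param VARCHAR(100) NOT NULL);

INSERT INTO NameSearchParams (param)
VALUES ('ab'), ('fg'), ('asd'), ('kj');

SELECT *
FROM Sample.dbo.Temp T
WHERE EXISTS (
    SELECT *
    FROM NameSearchParams P
    WHERE T.Name LIKE '%' + P.param + '%'
);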
I have a Stored Proc which performs search on records.
The problem is that some of the search criteria, which come from the UI, may be empty strings.
So, when a criterion is not specified, the LIKE condition becomes redundant.
How can I perform that search efficiently on SQL Server? Or does it optimize a LIKE('%%') query, since it means there is nothing to compare?
The Stored proc is like this:
ALTER PROC [FRA].[MCC_SEARCH]
@MCC_Code varchar(4),
@MCC_Desc nvarchar(50),
@Detail nvarchar(50)
AS
BEGIN
SELECT
MCC_Code,
MCC_Desc,
CreateDate,
CreatingUser
FROM
FRA.MCC (NOLOCK)
WHERE
MCC_Code LIKE ('%' + @MCC_Code + '%')
AND MCC_Desc LIKE ('%' + @MCC_Desc + '%')
AND Detail LIKE ('%' + @Detail + '%')
ORDER BY MCC_Code
END
With regard to an optimal, index-using execution plan: no. The prefixing wildcard prevents an index from being used, resulting in a scan instead.
If you do not have a wildcard on the end of the search term as well, then that scenario can be optimised, something I blogged about a while back: Optimising wildcard prefixed LIKE conditions
Update
To clarify my point:
LIKE 'Something%' - is able to use an index
LIKE '%Something' - is not able to use an index out-of-the-box. But you can optimise this to allow it to use an index by following the "REVERSE technique" I linked to.
LIKE '%Something%' - is not able to use an index. There is nothing you can do to optimise this case for LIKE.
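For completeness, a rough sketch of the REVERSE technique from the linked post, applied to the MCC_Desc column (column and index names are hypothetical):
-- indexed computed column holding the reversed value
ALTER TABLE FRA.MCC ADD MCC_Desc_Rev AS REVERSE(MCC_Desc);
CREATE INDEX IX_MCC_MCC_Desc_Rev ON FRA.MCC (MCC_Desc_Rev);

-- LIKE '%Something' becomes an index-friendly prefix search on the reversed column
SELECT MCC_Code, MCC_Desc
FROM FRA.MCC
WHERE MCC_Desc_Rev LIKE REVERSE(@MCC_Desc) + '%';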
The short answer is - no
The long answer is - absolutely not
Does it optimize LIKE('%%') query since it means there is nothing to compare?
The statement is untrue, because there is something to compare. The following are equivalent
WHERE column LIKE '%%'
WHERE column IS NOT NULL
IS NOT NULL requires a table scan, unless there are very few non-null values in the column and it is well indexed.
EDIT
Resource on Dynamic Search procedures in SQL Server:
You simply must read this article by Erland Sommarskog, SQL Server MVP http://www.sommarskog.se/dyn-search.html (pick your version, or read both)
Otherwise, if you need good performance on CONTAINS-style searches, consider using the SQL Server full-text engine.
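One common pattern from that article, sketched here under the assumption that unspecified criteria arrive as NULL (or are converted from empty strings to NULL), is the catch-all query with a recompile hint:
SELECT MCC_Code, MCC_Desc, CreateDate, CreatingUser
FROM FRA.MCC
WHERE (@MCC_Code IS NULL OR MCC_Code LIKE '%' + @MCC_Code + '%')
  AND (@MCC_Desc IS NULL OR MCC_Desc LIKE '%' + @MCC_Desc + '%')
  AND (@Detail   IS NULL OR Detail   LIKE '%' + @Detail   + '%')
ORDER BY MCC_Code
OPTION (RECOMPILE);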
If you use a LIKE clause and specify a wildcard character (%) as a prefix of the search string, SQL Server (and every other DBMS, I guess) will not be able to use any indexes that might exist on that column.
I don't know if it optimizes the query when you use an empty search argument... Perhaps your question can be answered by looking at the execution plan?
Edit: I've just checked this out, and the execution plan of this statement:
select * from mytable
is exactly the same as the execution plan of this statement:
select * from mytable where description like '%'
Both SQL statements simply use a clustered index scan.
Really simple, I want to select all titles that starts with letter 'A' while ignoring the dash at the beginning of the string.
SELECT * TRIM(LEADING '- ' FROM title) WHERE title LIKE 'A%'
This just doesn't seem to work. Please help. Thanks.
If you want decent speed, you should avoid per-row functions as much as possible. Normally, I'd suggest an insert/update trigger to store modified data so as to amortise the cost of the calculation across all selects (see here for an explanation along with some other guidelines I follow) but, for this particular case, there's an easier way *a.
Just use:
select * from table where title like 'A%' or title like '- A%'
If the execution engine of your DBMS is any good, it will turn that into two very fast (assuming title is indexed) passes of the data without having to process every single row in the table.
If your execution engine is not that smart, try:
select * from table where title like 'A%'
union all select * from table where title like '- A%'
to see if that helps. Make sure you use union all, not just union. The latter will attempt to remove duplicates, unnecessary in this case since a title cannot start with both "A" and "- A".
And, as with all optimisations, measure, don't guess!
*a You may still want to consider using the trigger/extra-column method if there's a chance you want a case-insensitive search. The method in this answer will work okay for case-insensitivity on the first letter (a%/A%/-a%/-A%) but will quickly degrade if you're looking for something like items starting with Bill, which would require 32 separate clauses (2^4 = 16 case combinations, with and without the hyphen prefix).
SELECT TRIM(LEADING '- ' FROM title) FROM [missing table here] WHERE TRIM(LEADING '- ' FROM title) LIKE 'A%'
SELECT * TRIM(LEADING '- ' FROM title) WHERE title LIKE 'A%'
Your WHERE clause will never work, as you're trying to compare against the post-TRIM data, but the database is doing the comparison on the pre-TRIM data.
There's two choices:
SELECT *
FROM table
WHERE title LIKE '- A%'
or
SELECT *, TRIM(LEADING '- ' FROM title) AS title
FROM table
WHERE title LIKE 'A%'
Of the two, the first one is preferable, as it allows indexes to be used. The second one, since you're comparing against dynamically derived data, will not.
I am new to SQL programming.
I have a table job where the fields are id, position, category, location, salary range, description, refno.
I want to implement a keyword search from the front end. The keyword can reside in any of the fields of the above table.
This is the query I have tried, but it returns so many duplicate rows:
SELECT
a.*,
b.catname
FROM
job a,
category b
WHERE
a.catid = b.catid AND
a.jobsalrange = '15001-20000' AND
a.jobloc = 'Berkshire' AND
a.jobpos LIKE '%sales%' OR
a.jobloc LIKE '%sales%' OR
a.jobsal LIKE '%sales%' OR
a.jobref LIKE '%sales%' OR
a.jobemail LIKE '%sales%' OR
a.jobsalrange LIKE '%sales%' OR
b.catname LIKE '%sales%'
For a single keyword on VARCHAR fields you can use LIKE:
SELECT id, category, location
FROM table
WHERE
(
category LIKE '%keyword%'
OR location LIKE '%keyword%'
)
For a description you're usually better off adding a full-text index and doing a Full-Text Search (MyISAM only):
SELECT id, description
FROM table
WHERE MATCH (description) AGAINST('keyword1 keyword2')
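A minimal sketch of adding that index, assuming MySQL and the job table from the question (the index name is illustrative):
ALTER TABLE job ADD FULLTEXT INDEX ft_description (description);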
SELECT
*
FROM
yourtable
WHERE
id LIKE '%keyword%'
OR position LIKE '%keyword%'
OR category LIKE '%keyword%'
OR location LIKE '%keyword%'
OR description LIKE '%keyword%'
OR refno LIKE '%keyword%';
Ideally, have a keyword table containing the fields:
Keyword
Id
Count (possibly)
with an index on Keyword. Create an insert/update/delete trigger on the other table so that, when a row is changed, every keyword is extracted and put into (or replaced in) this table.
You'll also need a table of words to not count as keywords (if, and, so, but, ...).
In this way, you'll get the best speed for queries wanting to look for the keywords and you can implement (relatively easily) more complex queries such as "contains Java and RCA1802".
"LIKE" queries will work but they won't scale as well.
Personally, I wouldn't use the LIKE string comparison on the ID field or any other numeric field. It doesn't make sense for a search for ID# "216" to return 16216, 21651, 3216087, 5321668..., and so on and so forth; likewise with salary.
Also, if you want to use prepared statements to prevent SQL injections, you would use a query string like:
SELECT * FROM job WHERE `position` LIKE CONCAT('%', ? ,'%') OR ...
I will explain the method I usually prefer:
First of all, you need to take into consideration that for this method you will sacrifice memory with the aim of gaining computation speed.
Second, you need to have the right to edit the table structure.
1) Add a field (I usually call it "digest") where you store all the data from the table.
The field will look like:
"n-n1-n2-n3-n4-n5-n6-n7-n8-n9" etc., where n is a single word.
I achieve this using a regular expression that replaces " " with "-".
This field is the result of all the table data "digested" into one single string.
2) Use the LIKE statement %keyword% on the digest field:
SELECT * FROM table WHERE digest LIKE '%keyword%'
You can even build the query in a little loop so you can search for multiple keywords at the same time, looking like:
SELECT * FROM table WHERE
digest LIKE '%keyword1%' AND
digest LIKE '%keyword2%' AND
digest LIKE '%keyword3%' ...
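A hedged sketch of step 1, assuming MySQL and the job table from the question (column names are taken from the question text and would need adjusting to the real schema; the digest here is built with CONCAT_WS rather than a regular expression, and would still need triggers or application code to stay current):
ALTER TABLE job ADD COLUMN digest TEXT;

UPDATE job
SET digest = CONCAT_WS('-', position, category, location, description, refno);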
You can find another, simpler option in a thread here: Match Against... with more detailed help in 11.9.2 Boolean Full-Text Searches.
This is just in case someone needs a more compact option. It requires creating a FULLTEXT index on the table, which can be accomplished easily.
Information on how to create Indexes (MySQL): MySQL FULLTEXT Indexing and Searching
In the FULLTEXT index you can have more than one column listed; the result would be an SQL statement using an index named search:
SELECT *, MATCH (`column`) AGAINST('+keyword1* +keyword2* +keyword3*') AS relevance
FROM `documents` USE INDEX(search)
WHERE MATCH (`column`) AGAINST('+keyword1* +keyword2* +keyword3*' IN BOOLEAN MODE)
ORDER BY relevance;
I tried with multiple columns, with no luck. Even though multiple columns are allowed in indexes, you still need an index per column to use with the MATCH/AGAINST statement.
Depending on your criteria you can use either option.
I know this is a bit late, but this is what I did in our application. Hope this will help someone; it works for me:
SELECT * FROM `landmarks`
WHERE `landmark_name` LIKE '%keyword%'
OR `landmark_description` LIKE '%keyword%'
OR `landmark_address` LIKE '%keyword%'