Oracle Index query is not working - sql

I want to improve the performance of a simple query with a typical structure like this:
SELECT title,datetime
FROM LICENSE_MOVIES
WHERE client='Alex'
As you can read on different websites, like this one, you should create an index like this:
CREATE INDEX INDEX_LICENSE_MOVIES
ON LICENSE_MOVIES(client);
But there is no performance improvement in the query; it is as if the index were being "ignored".
I have tried to use hints, as this webpage says.
The query then looks like this:
SELECT /*+ INDEX(LICENSE_MOVIES INDEX_LICENSE_MOVIES) */ title, datetime
FROM LICENSE_MOVIES
WHERE client='Alex'
Is there any error in this syntax? Why don't I see any improvement?

Oracle has a smart optimizer. It does not always use indexes -- in fact, you might be surprised to learn that sometimes using an index is exactly the wrong thing to do.
In your case, your data fits on a handful of data pages (well, dozens). The question is: how many "Alex"s are there in the data? If there is just one, then Oracle should use the index, as follows:
Oracle looks up the row containing "Alex" in the index.
Oracle identifies the data page where the row is located.
Oracle loads the data page.
Oracle processes the query and returns the results.
If lots of rows (say more than a few dozen) are for "Alex", then the optimizer is going to "think" . . . "Gosh, I need to read every data page anyway. Let me avoid using the index and just scan all the data."
Of course, this decision is based on the available statistics (which might be inaccurate or out-of-date). But there are definitely circumstances where a full table scan is the right approach, even when an index is available.
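If you want to check which plan Oracle actually chose, and whether the statistics it relies on are current, here is a minimal sketch (table and column names are taken from the question; EXEC is SQL*Plus shorthand):
-- show the plan Oracle picks for the query
EXPLAIN PLAN FOR
  SELECT title, datetime FROM LICENSE_MOVIES WHERE client = 'Alex';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- refresh the statistics the optimizer bases its decision on
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'LICENSE_MOVIES');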

How to speed up this LIKE query?

I have a LIKE query that's processing millions of rows:
SELECT
sample_id,
REPLACE( sample_id, '*', '') AS term
FROM
sample.table
WHERE
sample_id LIKE '%*%'
ORDER BY
sample_id ASC;
I tried batching the queries but it's still too slow to process. Has anyone experienced this in the past and successfully solved it? I'm basically open to any ideas at this point. Thanks!
You did not mention which RDBMS you are using, but you can speed up processing by using a properly designed index.
Index options (based on the Microsoft SQL Server RDBMS):
filtered index:
you can implement a filtered index. The filter corresponds to the WHERE clause of your query; you can add "sample_id LIKE '%*%'" as the filter condition.
covering index:
your query is not complicated, so it should be easy to create a covering index for it. By covering index I mean a structure which contains all the columns mentioned in your query; this helps the RDBMS engine decide to use it during execution, because it contains all the needed columns, and the filter as well, as mentioned in the first point.
So the syntax could look like this (Microsoft SQL Server pseudo code):
CREATE INDEX idx1 ON your_table_name (sample_id) WHERE sample_id LIKE '%*%'
If you build it, you will have a DEDICATED structure for your query. You can think of it as a subset of the data from your table, but physically present in your database, written to disk and constantly updated as the data changes. As long as this index has the filter, it contains only the rows needed by your query. So you can imagine that if the RDBMS engine chooses it - by parsing and analyzing your code - the WHERE clause would not have to be applied to the whole table.
Unfortunately, I am not aware whether RDBMSes other than Microsoft SQL Server provide filtered indexes.
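For what it's worth, PostgreSQL does offer an equivalent feature called a partial index; a minimal sketch in the same placeholder style as above (the index name is made up):
CREATE INDEX idx_sample_id_star
ON your_table_name (sample_id)
WHERE sample_id LIKE '%*%';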
If your RDBMS doesn't offer filtered indexes, you can at least create a covering one. It might still be a lighter structure than your table; however, you didn't share the structure of your table.
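For the query in the question a covering index is particularly simple, because sample_id is the only column referenced; a sketch in the same pseudo-code style (again, the index name is made up):
CREATE INDEX idx_sample_id ON your_table_name (sample_id)
The engine can then answer both the filter and the SELECT list from the index alone, without touching the base table.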
An index doesn't come without a cost, but that is a further story. Just remember that it takes up space on disk and is updated along with the data in your table.

How to use index in SQL query

Well, I am new to this stuff. I have created an index at the start of my stored procedure, as follows:
Create Index index_fab
ON TblFab (Fab_name)
Now i have query under this
select fab_name from TblFab where artc = 'x' and atelr = 'y'.
Now, is it necessary to use this index name in the SELECT statement, or will it automatically be used to speed up queries? Do I have to use something like
select fab_name from TblFab WITH(INDEX(index_fab)) where artc = 'x' and atelr = 'y'.
or is there any other method to use this index in the query?
And also, how do I use an index if we are joining on this table?
Firstly, do you mean you're creating the index in a stored procedure? That's a bad idea - if you run the stored procedure twice, it will fail because the index already exists.
Secondly, your query doesn't use the column mentioned in the index, so it will have no impact.
Thirdly, as JodyT writes, the query analyzer (SQL Server itself) will decide which index to use; it's almost certainly better at it than you are.
Finally, to speed up the query you mention, create an index on columns artc and atelr.
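A sketch of such an index in SQL Server syntax, with Fab_name added as an included column so the query can be answered from the index alone (the index name and the INCLUDE are my own additions):
CREATE INDEX IX_TblFab_artc_atelr
ON TblFab (artc, atelr)
INCLUDE (Fab_name);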
The Query Optimizer of SQL Server will decide if the index is suitable for the query. You can't force it to use a specific index. You can give hints about which one you want it to use, but there is no guarantee that it will use it.
The other people have already answered your question, so to help you understand better: in my opinion, you should first understand why you need to use indexes. While indexes can increase performance, they can also cause performance issues. It's better to know when and why you need to use indexes rather than just how to use them.
You can read almost every little detail here.
Regarding your example, your index has no impact on your query, because the indexed column isn't mentioned in the query's WHERE clause.
You can also try:
CREATE INDEX yourIndexName
ON yourTableName (column_you_are_looking_for1, column_you_are_looking_for2)
Also good to know: If no index exists on a table, a table scan must be performed for each table referenced in a database query. The larger the table, the longer a table scan takes because a table scan requires each table row to be accessed sequentially. Although a table scan might be more efficient for a complex query that requires most of the rows in a table, for a query that returns only some table rows an index scan can access table rows more efficiently. (source from here )
Hope this helps.
An index is used by default if you run a query against the table that can make use of it.
But I think in the query you posted it will not be used, because you are not filtering your data by the column you created your index on.
I think you would have to create the index for the artc and atelr columns to profit from that.
To see whether your index is used, take a look at the execution plan in SQL Server Management Studio.
more info on indices: use the index luke
You don't need to include the index in your query; it's managed by SQL Server. You also don't need to include the index in the SELECT if you want to join to this table. Hope it's clear.
Your index uses the "Fab_name" column, which you don't filter on in your SELECT statement, so it's of no use.
Since you're new to this, you might benefit from an index like this:
Create Index index_fab
ON TblFab (artc, atelr)
or maybe like this
Create Index index_fab
ON TblFab (atelr, artc)
...yes there are a lot of subtleties to learn.
For better performance:
List out the columns/tables which are frequently queried,
Create indexes on those tables/columns only.
If the index is properly set up, the optimizer will use it automatically. By properly set up, I mean that it's selective enough, can effectively help the query, etc. Read about it. You can check for yourself whether the index is being used with the "Include Actual Execution Plan" option in SSMS.
It's generally advised not to use WITH(INDEX()) hints and to let the optimizer decide by itself, except in very special cases when you just know better ;).
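Besides the actual execution plan, a rough way to check whether an index is ever being used is SQL Server's index usage DMV; a sketch, assuming the TblFab table from the question:
SELECT i.name AS index_name, s.user_seeks, s.user_scans, s.user_lookups
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
  ON s.object_id = i.object_id
 AND s.index_id = i.index_id
 AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('TblFab');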

How can I improve this endless query?

I've got a table with close to 5 million rows. Each one of them has a text column where I store my XML logs.
I am trying to find out if there's some log having
<node>value</node>
I've tried with
SELECT top 1 id_log FROM Table_Log WHERE log_text LIKE '%<node>value</node>%'
but it never finishes.
Is there any way to improve this search?
PS: I can't drop any log
A wildcarded query such as '%<node>value</node>%' will result in a full table scan (ignoring indexes) as it can't determine where within the field it'll find the match. The only real way I know of to improve this query as it stands (without things like partitioning the table etc which should be considered if the table is logging constantly) would be to add a Full-Text catalog & index to the table in order to provide a more efficient search over that field.
Here is a good reference that should walk you through it. Once this has been completed you can use things like the CONTAINS and FREETEXT operators that are optimised for this type of retrieval.
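A rough sketch of that setup in SQL Server, assuming Table_Log already has a unique key index (the name PK_Table_Log below is a guess) and that word-level matching is acceptable:
CREATE FULLTEXT CATALOG LogCatalog;
CREATE FULLTEXT INDEX ON Table_Log (log_text)
  KEY INDEX PK_Table_Log
  ON LogCatalog;
-- the search can then use CONTAINS instead of a leading-wildcard LIKE
SELECT TOP 1 id_log
FROM Table_Log
WHERE CONTAINS(log_text, 'value');
Note that this matches the word value anywhere in the log, not specifically inside <node>, which is the limitation mentioned below.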
Apart from implementing full-text search on that column and indexing the table, maybe you can narrow the results by other parameters (date, etc).
Also, you could add a table field (varchar type) called "Tags" which you populate when inserting a row. This field would hold "keywords, tags" for the log. That way, you could change your query to use this field as a condition.
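A sketch of that idea (the column, index name, and tag format are all made up for illustration):
ALTER TABLE Table_Log ADD Tags VARCHAR(200) NULL;
CREATE INDEX IX_Table_Log_Tags ON Table_Log (Tags);
-- populated at insert time, e.g. Tags = 'node:value', so the search becomes:
SELECT TOP 1 id_log FROM Table_Log WHERE Tags = 'node:value';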
Unfortunately, about the only way I can see to optimize that is to implement full-text search on that column, but even that will be hard to construct to where it only returns a particular value within a particular element.
I'm currently doing some work where I'm also storing XML within one of the columns. But I'm assuming any queries needed on that data will take a long time, which is okay for our needs.
Another option has to do with storing the data in a binary column, and then SQL Server has options for specifying what type of document is stored in that field. This allows you to, for example, implement more meaningful full-text searching on that field. But it's hard for me to imagine this will efficiently do what you are asking for.
You are using a LIKE query.
No index involved = no good.
There is nothing you can do with what you have currently to speed this up, unfortunately.
I don't think it will help but try using the FAST x query hint like so:
SELECT id_log
FROM Table_Log
WHERE log_text LIKE '%<node>value</node>%'
OPTION(FAST 1)
This should optimise the query to return the first row.

SQL `LIKE` complexity

Does anyone know what the complexity is for the SQL LIKE operator for the most popular databases?
Let's consider the three core cases separately. This discussion is MySQL-specific, but it might also apply to other DBMSes, since indexes are typically implemented in a similar manner.
LIKE 'foo%' is quick if run on an indexed column. MySQL indexes are a variation of B-trees, so when performing this query it can simply descend the tree to the node corresponding to foo, or the first node with that prefix, and traverse the tree forward. All of this is very efficient.
LIKE '%foo' can't be accelerated by indexes and will result in a full table scan. If you have other criteria that can be evaluated using indexes, it will only scan the rows that remain after the initial filtering.
There's a trick though: If you need to do suffix matching - searching for file names with extension .foo, for instance - you can achieve the same performance by adding a column with the same contents as the original one but with the characters in reverse order.
ALTER TABLE my_table ADD COLUMN col_reverse VARCHAR (256) NOT NULL;
ALTER TABLE my_table ADD INDEX idx_col_reverse (col_reverse);
UPDATE my_table SET col_reverse = REVERSE(col);
Searching for rows with col ending in .foo then becomes:
SELECT * FROM my_table WHERE col_reverse LIKE 'oof.%'
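On newer MySQL versions (5.7 and up), the reversed copy can also be kept in sync automatically with a generated column instead of the manual UPDATE; a sketch using the same names as above:
ALTER TABLE my_table
  ADD COLUMN col_reverse VARCHAR(256)
    GENERATED ALWAYS AS (REVERSE(col)) STORED,
  ADD INDEX idx_col_reverse (col_reverse);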
Finally, there's LIKE '%foo%', for which there are no shortcuts. If there are no other limiting criteria that reduce the number of rows to a feasible amount, it'll cause a hard performance hit. You might want to consider a full text search solution instead, or some other specialized solution.
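If those '%foo%' searches are really word searches, MySQL's built-in full-text indexing is one such specialized option; a minimal sketch with the same table and column names:
ALTER TABLE my_table ADD FULLTEXT INDEX ft_col (col);
SELECT * FROM my_table WHERE MATCH(col) AGAINST('foo');
Keep in mind this matches whole words rather than arbitrary substrings, so it only replaces LIKE '%foo%' when that is acceptable.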
If you are asking about the performance impact:
The problem with LIKE is that it can keep the database from using an index. On Oracle I think it doesn't use indexes anymore (but I'm still on Oracle 9). SQL Server uses indexes if the wildcard is only at the end. I don't know about other databases.
Depends on the RDBMS, the data (and possibly size of data), indexes and how the LIKE is used (with or without prefix wildcard)!
You are asking too general a question.

Do indexes work with "IN" clause

If I have a query like:
Select EmployeeId
From Employee
Where EmployeeTypeId IN (1,2,3)
and I have an index on the EmployeeTypeId field, does SQL server still use that index?
Yeah, that's right. If your Employee table has 10,000 records, and only 5 records have EmployeeTypeId in (1,2,3), then it will most likely use the index to fetch the records. However, if it finds that 9,000 records have the EmployeeTypeId in (1,2,3), then it would most likely just do a table scan to get the corresponding EmployeeIds, as it's faster just to run through the whole table than to go to each branch of the index tree and look at the records individually.
SQL Server does a lot of stuff to try and optimize how the queries run. However, sometimes it doesn't get the right answer. If you know that SQL Server isn't using the index, by looking at the execution plan in query analyzer, you can tell the query engine to use a specific index with the following change to your query.
SELECT EmployeeId FROM Employee WITH (Index(Index_EmployeeTypeId )) WHERE EmployeeTypeId IN (1,2,3)
Assuming the index you have on the EmployeeTypeId field is named Index_EmployeeTypeId.
Usually it would, unless the IN clause covers too much of the table, and then it will do a table scan. Best way to find out in your specific case would be to run it in the query analyzer, and check out the execution plan.
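A quick way to see the chosen plan as text from a query window (SQL Server; the query is the one from the question):
SET SHOWPLAN_TEXT ON;
GO
SELECT EmployeeId FROM Employee WHERE EmployeeTypeId IN (1, 2, 3);
GO
SET SHOWPLAN_TEXT OFF;
GO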
Unless technology has improved in ways I can't imagine of late, the "IN" query shown will produce a result that's effectively the OR-ing of three result sets, one for each of the values in the "IN" list. The IN clause becomes an equality condition for each of the list and will use an index if appropriate. In the case of unique IDs and a large enough table then I'd expect the optimiser to use an index.
If the items in the list were to be non-unique however, and I guess in the example that a "TypeId" is a foreign key, then I'm more interested in the distribution. I'm wondering if the optimiser will check the stats for each value in the list? Say it checks the first value and finds it's in 20% of the rows (of a large enough table to matter). It'll probably table scan. But will the same query plan be used for the other two, even if they're unique?
It's probably moot - something like an Employee table is likely to be small enough that it will stay cached in memory and you probably wouldn't notice a difference between that and indexed retrieval anyway.
And lastly, while I'm preaching, beware of putting a query in the IN clause: it's often a quick way to get something working and (for me at least) can be a good way to express the requirement, but it's almost always better restated as a join, as sketched below. Your optimiser may be smart enough to spot this, but then again it may not. If you don't currently performance-check against production data volumes, do so - in these days of cost-based optimisation you can't be certain of the query plan until you have a full load and representative statistics. If you can't, then be prepared for surprises in production...
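A sketch of what that restatement looks like, using a made-up EmployeeType lookup table and IsActive flag purely for illustration:
-- IN with a subquery:
SELECT e.EmployeeId
FROM Employee e
WHERE e.EmployeeTypeId IN (SELECT t.EmployeeTypeId
                           FROM EmployeeType t
                           WHERE t.IsActive = 1);
-- the same requirement restated as a join
-- (equivalent as long as EmployeeTypeId is unique in EmployeeType):
SELECT e.EmployeeId
FROM Employee e
JOIN EmployeeType t ON t.EmployeeTypeId = e.EmployeeTypeId
WHERE t.IsActive = 1;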
So there's the potential for an "IN" clause to run a table scan, but the optimizer will try and work out the best way to deal with it?
Whether an index is used doesn't depend so much on the type of query as on the type and distribution of data in the table(s), how up-to-date your table statistics are, and the actual datatype of the column.
The other posters are correct that an index will be used over a table scan if:
The query won't access more than a certain percentage of the indexed rows (say ~10%, though this varies between DBMSs).
Alternatively, if there are a lot of rows, but relatively few unique values in the column, it also may be faster to do a table scan.
The other variable that might not be that obvious is making sure that the datatypes of the values being compared are the same. In PostgreSQL, I don't think that indexes will be used if you're filtering on a float but your column is made up of ints. There are also some operators that don't support index use (again, in PostgreSQL, the ILIKE operator is like this).
As noted though, always check the query analyser when in doubt and your DBMS's documentation is your friend.
#Mike: Thanks for the detailed analysis. There are definitely some interesting points you make there. The example I posted is somewhat trivial, but the basis of the question came from using NHibernate.
With NHibernate, you can write a clause like this:
int[] employeeIds = new int[]{1, 5, 23463, 32523};
NHibernateSession.CreateCriteria(typeof(Employee))
    .Add(Restrictions.InG("EmployeeId", employeeIds))
    .List<Employee>();
NHibernate then generates a query which looks like
select * from employee where employeeid in (1, 5, 23463, 32523)
So as you and others have pointed out, it looks like there are going to be times where an index will be used or a table scan will happen, but you can't really determine that until runtime.
Select EmployeeId From Employee WITH (INDEX(Index_EmployeeTypeId)) Where EmployeeTypeId IN (1,2,3)
This query will search using the index you have created (assuming it is named Index_EmployeeTypeId; on MySQL the equivalent hint is USE INDEX (Index_EmployeeTypeId) after the table name). It works for me. Please give it a try.