Use a string value like a numeric value in Oracle - SQL

I have a condition in my oracle query:
AND a.ACCARDACNT > '0880080200000006' and a.ACCARDACNT < '0880080200001000'
The ACCARDACNT column is of type varchar and is indexed, but in this condition I want to use it as a number. When I execute this query, the execution plan shows that the CBO can use the index and scan the table by index.
Is that true?
I want to compare the values as numbers and also have the index used. Is there any solution?

If it is guaranteed that all ACCARDACNT are numbers, then just use
and to_number(a.accardacnt) > 880080200000006 and to_number(a.accardacnt) < 880080200001000;
This makes sure that the numbers are not compared as strings, where '2' > '10' because, looking at the first characters, '2' is greater than '1'.
(In case of decimal numbers, make sure that the decimal separator stored in the strings matches the current session settings.)
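The lexicographic pitfall is easy to demonstrate outside the database; here is a minimal Python sketch of why '2' sorts after '10' as a string but not as a number:

```python
# Strings compare character by character (lexicographically),
# so '2' > '10' because '2' comes after '1' in the first position.
as_strings = sorted(['2', '10', '100'])
as_numbers = sorted([2, 10, 100])

print(as_strings)   # ['10', '100', '2']
print(as_numbers)   # [2, 10, 100]
```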
If you want to provide an index for this, use this function-based index:
create index idx_accardacnt on mytable( to_number(accardacnt) );
or a composite index containing to_number(accardacnt). As the execution plan for the string query showed an index being used, the same should be true for the numeric comparison and the function-based index. (Remember a DBMS is free to use the provided indexes or not. We are simply offering them, but the DBMS knows best whether it makes sense to use them in a query or not.)
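An Oracle execution plan can't be reproduced here, but the same mechanism (an index on an expression backing a numeric predicate) can be sketched with Python's built-in sqlite3 module, which also supports indexes on expressions; the table and index names below are illustrative:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE mytable (accardacnt TEXT)")
con.executemany(
    "INSERT INTO mytable VALUES (?)",
    [(str(880080200000000 + i).zfill(16),) for i in range(100)])

# The analogue of Oracle's function-based index: index the expression itself.
con.execute("CREATE INDEX idx_accardacnt ON mytable (CAST(accardacnt AS INTEGER))")

# The predicate uses the exact same expression, so the planner can seek on it.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM mytable "
    "WHERE CAST(accardacnt AS INTEGER) > 880080200000006").fetchall()
print(plan)  # the plan detail names idx_accardacnt when the index is used
```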

I think you cannot use a numeric comparison and the index together.
the execution plan shows that CBO can use index
There is a chance that a Full Index Scan is used here, so it is just a table scan but with fewer columns.
A possible approach is to convert the numbers to fixed-length strings with leading zeros and then use those in the comparison. In this case the index will be used.
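The fixed-length idea works because zero-padding makes string order and numeric order coincide; a quick Python sketch of that property (the width of 16 matches the account numbers in the question):

```python
def to_key(n, width=16):
    """Left-pad a number with zeros so that string order equals numeric order."""
    return str(n).zfill(width)

nums = [7, 1000, 880080200000006]
# Padded strings sort exactly like the numbers they encode,
# so a plain string range predicate on the indexed varchar stays correct.
assert sorted(to_key(n) for n in nums) == [to_key(n) for n in sorted(nums)]
assert to_key(7) < to_key(1000) < to_key(880080200000006)
```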

Your current query should be able to use an index. But the problem is that you are comparing text while expecting it to sort numerically. It may not, in general, because text sorts lexicographically in SQL (i.e. in dictionary order). So, to get the correct sorting behavior you will have to cast ACCARDACNT to a number:
AND CAST(LTRIM(a.ACCARDACNT, '0') AS FLOAT) BETWEEN 880080200000007 AND 880080200000999

Another option still would be a virtual (computed) column:
alter table mytable add accardacnt_num as (to_number(accardacnt));
and provide one or more indexes containing this column:
create index idx_accardacnt_num on mytable(accardacnt_num);
Then existing code continues working, but new queries can benefit from the numeric column:
and a.accardacnt_num > 880080200000006 and a.accardacnt_num < 880080200001000;

I think the following logic does what you might really want:
a.ACCARDACNT > '0880080200000006' and
a.ACCARDACNT < '0880080200001000' and
length(ACCARDACNT) = 16
In addition, this can use an index on the column, if an appropriate one is available.
This would not be correct if you wanted the 15-character account number '880080200000060' to match your criteria. My guess is that you do not want this.

You have to create a function-based index on the accardacnt column:
create index idx_fn_accardant on table_name( to_number(accardacnt) );
and convert it to a number in the WHERE clause of the query:
where to_number(ACCARDACNT) > 880080200000006 and to_number(ACCARDACNT) < 880080200001000

Related

Oracle CONTAINS() not returning results for numbers

So I have this table with a full-text indexed column "value". The field contains some number strings, which are correctly returned when I use a query like this:
select value
from mytable
where value like '17946234';
Trouble is, it's incredibly slow because there are a lot of rows in this table, and when I use the CONTAINS operator I get no results:
select value
from mytable
where CONTAINS ( value, '17946234',1)>0
Anyone got any thoughts?
Unfortunately, I'm not an Oracle dude, and the same query works fine in SQL Server. I feel like it must be a stoplist or something with the Oracle Lexer that I can change, but not really sure how.
This could be due to INDEXTYPE IS CTXSYS.CONTEXT in general, or due to the index not having been updated after the looked-for records were added (CONTEXT type indexes are not transactional, whilst CTXCAT type ones are).
However, if you did not accidentally lose a wildcard in your statement (in other words, if no wildcard is required), you could just query
select value from mytable where value = '17946234';
which could possibly be backed by an ordinary index. (Depending on your specific data distribution and the queries run, an index might not help query performance.)
Otherwise
select value from mytable where instr(value, '17946234') > 0;
might be sufficient.
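For readers unfamiliar with INSTR's semantics, here is a rough Python model of the difference between the equality form and the instr form (an illustration only, not Oracle's implementation):

```python
def instr(haystack: str, needle: str) -> int:
    """Model of Oracle's INSTR: 1-based position of needle, 0 if absent."""
    return haystack.find(needle) + 1

# Equality is an exact match; INSTR(...) > 0 is a "contains" test.
assert instr('17946234', '17946234') == 1       # both forms match here
assert instr('xx17946234yy', '17946234') == 3   # only the INSTR form matches
assert instr('nothing here', '17946234') == 0   # neither form matches
```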

SQL Server index on numeric fields for zero / non-zero values

I have a table with a numeric field like this:
table invoices (
inv_number varchar(20),
amount numeric(13,2),
amount_remaining numeric(13,2) )
and I often have to query this table to extract records with zero values or non zero values.
select invnumber from invoices
where amount_remaining=0
... or ...
where amount_remaining!=0
I know the best solution would be to add a precalculated boolean field,
but I can't touch the table structure (can't add fields).
I can only work on indexes or views.
Is it a good idea to add an index on this numeric field just to filter for zero / non-zero values?
Any suggestions?
I know the best solution would be to add a precalculated boolean field
Not at all. That would not help.
select invnumber from invoices
where amount_remaining=0
... or ...
where amount_remaining!=0
This query is non-sargable. Any possible optimization would depend entirely on what else is in the WHERE clause. Unless a sargable expression exists there, the query requires a full scan, so there cannot be any discussion about optimizing it.
is it a good idea to add an index on this numeric field just to filter for zero non-zero values?
No. You also have the condition where amount_remaining=0 ... so the index would not help.
Ultimately you have a nonsensical condition. You want records that satisfy a predicate or the negated predicate. That will always be, by definition, everything (not considering three-valued NULL logic). You ask whether a filtered index would help. A filtered index would help one of the two parts of your WHERE clause, but never both.
select invnumber from invoices
where amount_remaining=0 and ...;
this would be helped by a filtered index on amount_remaining=0 if the percentage of records where amount_remaining=0 is small.
or
select invnumber from invoices
where amount_remaining!=0 and ...;
this would be helped by a filtered index on amount_remaining!=0 if the percentage of records where amount_remaining!=0 is small.
Obviously you can't have both criteria be a 'small percent' at once.
If you look for a good index you'll have to search in some other parts of the WHERE clause, parts that do not express, simultaneously, a predicate and its negation OR-ed together.
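SQL Server's filtered indexes can't be shown here directly, but SQLite's partial indexes (available through Python's sqlite3) behave the same way on this point: the branch whose predicate matches the index filter can use it, while the negated branch cannot. Table and index names are made up for the sketch:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE invoices (inv_number TEXT, amount_remaining NUMERIC)")
con.executemany(
    "INSERT INTO invoices VALUES (?, ?)",
    [(f'INV{i:04d}', 0 if i % 100 == 0 else i) for i in range(1000)])

# Partial index covering only the zero rows (SQLite's filtered index).
con.execute("CREATE INDEX ix_zero ON invoices (amount_remaining) "
            "WHERE amount_remaining = 0")

def plan(sql):
    return ' '.join(r[-1] for r in con.execute('EXPLAIN QUERY PLAN ' + sql))

# The '= 0' branch can seek the partial index; the '!= 0' branch cannot.
print(plan("SELECT inv_number FROM invoices WHERE amount_remaining = 0"))
print(plan("SELECT inv_number FROM invoices WHERE amount_remaining != 0"))
```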
An index by itself is not used for filtering, but a view can serve that purpose: you can create a view with your condition that filters the data.
An index is used for fast searching, the same way we use the index of a book.
Update
Yes, you can create a non-clustered index on the above column, which gives faster results.
CREATE NONCLUSTERED INDEX IX_Tablename_ColumnName
ON Databasename.Tablename (columnname);
http://msdn.microsoft.com/en-IN/library/ms189280.aspx

How to create an index for a string column in SQL?

I have a table with 3 columns: a list id, name and numeric value.
The goal is to use the table to retrieve and update the numeric value for a name in various lists.
The problem is that SQL Server refuses to create an index on the name column because it's a string column with variable length.
Without an index, selecting by name will be inefficient, and the option of using a fixed-length text column would waste a lot of storage space, as names can be fairly long.
What is the best way to build this table and its indexes?
(running sql server 2008)
If your string is longer than 900 bytes, then it can't be an index key, regardless of whether it is variable or fixed length.
One idea would be to at least make seeks more selective by adding a computed column. e.g.
CREATE TABLE dbo.Strings
(
-- other columns,
WholeString VARCHAR(4000),
Substring AS (CONVERT(VARCHAR(10), WholeString)) PERSISTED
);
CREATE INDEX ss ON dbo.Strings(Substring);
Now when searching for a row to update, you can say:
WHERE s.Substring = LEFT(@string, 10)
AND s.WholeString = @string;
This will at least help the optimizer narrow its search down to the index pages where the exact match is most likely to live. You may want to experiment with that length as it depends on how many similar strings you have and what will best help the optimizer weed out a single page. You may also want to experiment with including some or all of the other columns in the ss index, with or without using the INCLUDE clause (whether this is useful will vary greatly on various factors such as what else your update query does, read/write ratio, etc).
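The way the Substring column narrows the search can be modelled in plain Python; the dictionary below plays the role of the ss index (a seek on the 10-character key), and the final comparison is the residual predicate on WholeString. The row values are made up:

```python
from collections import defaultdict

rows = ['alpha-0001-long-suffix', 'alpha-0001-other-suffix', 'beta-0002-x']

# Build the "index": first 10 characters -> candidate rows.
index = defaultdict(list)
for r in rows:
    index[r[:10]].append(r)

target = 'alpha-0001-other-suffix'
candidates = index[target[:10]]                 # seek on the short key
match = [r for r in candidates if r == target]  # verify the full string

print(candidates)  # both 'alpha-0001…' rows share the prefix
print(match)       # ['alpha-0001-other-suffix']
```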
A regular index can't be created on ntext or text columns (I guess your name column is of that type, or an (n)varchar longer than 900 bytes). You can create a full-text index on that column type.

Do indexes work with group functions in Oracle?

I am running following query.
SELECT Table_1.Field_1,
Table_1.Field_2,
SUM(Table_1.Field_5) BALANCE_AMOUNT
FROM Table_1, Table_2
WHERE Table_1.Field_3 NOT IN (1, 3)
AND Table_2.Field_2 <> 2
AND Table_2.Field_3 = 'Y'
AND Table_1.Field_1 = Table_2.Field_1
AND Table_1.Field_4 = '31-oct-2011'
GROUP BY Table_1.Field_1, Table_1.Field_2;
I have created an index on columns (Field_1,Field_2,Field_3,Field_4) of Table_1, but the index is not being used.
If I remove SUM(Table_1.Field_5) from the select clause, then the index is used.
I am confused whether the optimizer is simply not choosing this index, or whether it is because of the SUM() function I have used in the query.
Please share your explanation.
When you remove the SUM you also remove field_5 from the query. All the data needed to answer the query can then be found in the index, which may be quicker than scanning the table. If you added field_5 to the index the query with SUM might use the index.
If your query returns a large percentage of the table's rows, Oracle may decide that a full table scan is cheaper than "hopping" between the index and the table's heap (to get the values of Table_1.Field_5).
Try adding Table_1.Field_5 to the index (thus covering the whole query with the index) and see if this helps.
See the Index-Only Scan: Avoiding Table Access chapter at Use The Index, Luke for a conceptual explanation of what is going on.
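The index-only scan being described can be observed with SQLite from Python's standard library; once the summed column's stand-in is part of the index, the plan reports a covering index and the table itself is never touched (table and column names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE t (f1 INTEGER, f2 INTEGER, f5 NUMERIC)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(i % 10, i % 7, i) for i in range(1000)])

# Include the summed column in the index so the query is answered
# from the index alone (an index-only scan).
con.execute("CREATE INDEX ix_cover ON t (f1, f2, f5)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT f1, f2, SUM(f5) "
    "FROM t WHERE f1 = 3 GROUP BY f1, f2").fetchall()
print(plan)  # the detail column reports a covering index
```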
As you mentioned, the presence of the summation function results in the index being overlooked.
There are function based indexes:
A function-based index includes columns that are either transformed by a function, such as the UPPER function, or included in an expression, such as col1 + col2.
Defining a function-based index on the transformed column or expression allows that data to be returned using the index when that function or expression is used in a WHERE clause or an ORDER BY clause. Therefore, a function-based index can be beneficial when frequently-executed SQL statements include transformed columns, or columns in expressions, in a WHERE or ORDER BY clause.
However, as with all, function based indexes have their restrictions:
Expressions in a function-based index cannot contain any aggregate functions. The expressions must reference only columns in a row in the table.
Though I see some good answers here, a couple of important points are being missed -
SELECT Table_1.Field_1,
Table_1.Field_2,
SUM(Table_1.Field_5) BALANCE_AMOUNT
FROM Table_1, Table_2
WHERE Table_1.Field_3 NOT IN (1, 3)
AND Table_2.Field_2 <> 2
AND Table_2.Field_3 = 'Y'
AND Table_1.Field_1 = Table_2.Field_1
AND Table_1.Field_4 = '31-oct-2011'
GROUP BY Table_1.Field_1, Table_1.Field_2;
Saying that having SUM(Table_1.Field_5) in the select clause causes the index not to be used is not correct. Your index on (Field_1,Field_2,Field_3,Field_4) can still be used. But there are problems with your index and SQL query.
Since your index is only on (Field_1,Field_2,Field_3,Field_4), even if the index gets used, the DB will have to access the actual table row to fetch Field_5. It then depends entirely on the execution plan charted out by the SQL optimizer which approach is cost-effective. If the optimizer figures out that a full table scan costs less than using the index, it will ignore the index. With that said, here are the probable problems with your index -
As others have stated, you could simply add Field_5 to the index so that there is no need for a separate table access.
The order of columns in your index matters very much for performance. For example, in your case, if you order it as (Field_4,Field_1,Field_2,Field_3) the query will be quicker, since you have an equality predicate on Field_4 - Table_1.Field_4 = '31-oct-2011'. Think of it this way -
Table_1.Field_4 = '31-oct-2011' will leave you fewer candidate rows to build the final result from than Table_1.Field_3 NOT IN (1, 3). Things might change since you are doing a join. It's always best to look at the execution plan and design your index/SQL accordingly.

MySQL MyISAM indexing a text column

I have a slow performing query on my table. It has a where clause such as:
where supplier= 'Microsoft'
The column type is text. In phpMyAdmin I looked to see if I could add an index to the table, but the option is disabled. Does this mean that you cannot index a text column? Does this mean that every query like this performs a full table scan?
Would the best thing then be to separate the column into its own table, place an ID in the current table, and then put an index on that? Would this potentially speed up the query?
You need to add a prefix length to the index. Have a look at the Column Indexes docs.
The following creates an index on the first 50 bytes of supplier field:
mysql> create index supplier_ix on t(supplier(50));
Query OK, 0 rows affected (0.03 sec)
Records: 0 Duplicates: 0 Warnings: 0
But maybe you should rethink the datatype of supplier? Judging from the name, it doesn't sound like a typical text field...
You should do a check on
select max(length(supplier)) from the_table;
If the length is less than 255, you can (and should) convert it to varchar(255) and build an index on it.
Choosing the right data type is more important.
If the values are long, building an index on a limited-length prefix will help.
Did I understand you right? It's a TEXT column? As in the type that corresponds to BLOB? Might I advise considering a VARCHAR for this column?