How to ignore 00 (two leading zeros) in Select query? - sql

I am not sure whether this is possible or not. I have a DB table with a refNumber field, and some of its values contain two leading zeros. Here is an example:
id       refNumber
10001    123
10002    00456
Now I am trying to write a query which can select from this table with or without the leading zeros (only two, not fewer or more than two). For example, refNumber = 123 or refNumber = 00123 should return the row with id 10001, and refNumber = 00456 or refNumber = 456 should return the row with id 10002. I cannot use the LIKE operator because other records might also be returned. Is this possible in a query? If not, what would be the right way to select such records? I want to avoid looping over all the rows in my application.

You need to apply the TRIM function to both the column and the value you want to filter by:
SELECT * FROM MyTable
WHERE TRIM(LEADING '0' FROM refNumber) = TRIM(LEADING '0' FROM '00123') -- here you put your desired ref number

Use trim():
Select * from table where trim(leading '0' from refnumber) IN ('123','456')
Or replace(), whichever is supported:
Select * from table where replace(refnumber, '0', '') IN ('123','456')

While the currently accepted answer would work, be aware that at best it would cause Db2 to do a full index scan and at worst could result in a full table scan.
Not a particularly efficient way to return 1 or 2 records out of perhaps millions. This happens anytime you use an expression over a table column in the WHERE clause.
If you know there's only ever going to be five digits or fewer, a better solution would be something like the following:
SELECT * FROM MyTable
WHERE refNumber in ('00123','123')
That assumes you can build the two possibilities outside the query.
If you really want the query to deal with the two possibilities:
SELECT * FROM MyTable
WHERE refNumber in (LPAD(:value,5,'0'),LTRIM(:value, '0'))
If '00123' or '123' is passed in as the value, the above query will find records with '00123' or '123' in refNumber.
And assuming you have an index on refNumber, it will do so quickly and efficiently.
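For illustration, that plain index might be created like this (a sketch; the index name is just an example):
create index myidx_refnumber
on MyTable (refNumber)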
If there could be an unknown number of leading zeros, then you are stuck with
SELECT * FROM MyTable
WHERE LTRIM(refNumber,'0') = LTRIM(:value, '0')
However, if your platform/version of Db2 supports indexes over an expression, you'd want to create one for efficiency's sake:
create index myidx
on MyTable (LTRIM(refNumber, '0'))

Related

Combining 2 different values into 1 in SQL

First of all, I apologize for not being able to show my question with code. I have 2 tables. I'm merging these tables and returning a result. My values include values that are the same but written with different letters (for example, İNSTALL and INSTALL). These two values are essentially the same, but they return 2 different results because they are written with different letters. What I want is to convert the İNSTALL value to the INSTALL value and increase the total INSTALL value to 5. Any idea?
Column1    Values
INSTALL    2
İNSTALL    3
Use the UPPER() function, which is available in most DBMSs.
select UPPER(Column1),sum(values) from Table group by UPPER(Column1)
UPPER: https://www.w3schools.com/sql/func_sqlserver_upper.asp
Depending on the anticipated overlap of letters, you could go about this a couple of ways.
Assuming the headers are the same across both tables, and only the cases are different (if they're not, use aliases in the subquery):
SELECT
LOWER(column_1) AS column_1,
SUM(Values) AS values_total
FROM (SELECT * FROM table_1 UNION SELECT * FROM table_2) combined_tables
GROUP BY 1
If there is different spelling across both tables, you could use a lower case cast of only the rightmost or leftmost letters.
SELECT
LOWER(RIGHT(column_1, 3)) AS column_1,
SUM(Values) AS values_total
FROM (SELECT * FROM table_1 UNION SELECT * FROM table_2) combined_tables
GROUP BY 1
If you expect a mix of misspellings or non-text characters, I'd suggest using regex to do fuzzy matching on the column values.
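If the only variation is the Turkish dotted İ, as in the example above, a simpler normalization would be to replace it before grouping (a sketch, reusing the illustrative names from the queries above):
SELECT
REPLACE(column_1, 'İ', 'I') AS column_1,
SUM(Values) AS values_total
FROM (SELECT * FROM table_1 UNION SELECT * FROM table_2) combined_tables
GROUP BY 1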

Oracle: how to find occurrence of a specified value in a particular column of a table in Oracle

I have a table where one of the columns is check, and this column has only YES, NO and NA as data.
When a query is executed, if the check column has at least one YES value then YES should be fetched, otherwise NO should be fetched; and if no data is found for this column, then the fetched value should be NA.
Could someone help me out?
I am not sure I am answering the right question - but I hope this query example would help:
Select nvl(MAX(col),'NA')
from (select id, col from table
where id = 1
union all
select 1 id, null from dual)
group by id
(That would work with YES and NO since 'YES' > 'NO', so priority is given to 'YES' and then to 'NO'; in the case that there are no results, the NVL makes sure you get NA.
For the case where no row exists, the "select 1 id, null from dual" makes sure you always get a result.)
There are many other ways you can write this query (including analytic functions, or creating an aggregate function, which is what you really want...) - but this is the simplest... (Make sure all the col values are only 'YES' and 'NO' by constraint - otherwise correctness will be harmed.)
Now I am not sure if it answers your question, but if I understand your need correctly, the ideas shown here should be applicable to your requirement.
This should also be pretty efficient, assuming there is an index on the rows you're querying.
Sample Solution:
WITH tbl_values as (
SELECT distinct mt.check_value
FROM my_table mt
ORDER BY decode(mt.check_value,'YES',1,'NO',2,3) ASC)
SELECT tv.check_value
FROM tbl_values tv
WHERE rownum = 1;
COMMENTS: The DECODE expression is what enforces the priority rules stated in the OP.

Comparing rows in SQL without using the DISTINCT operator? (DISTINCT operator implementation)

I want to compare two rows from a query result, for instance, to check whether the 1st row is equal to the 2nd row.
Given a query of the form
SELECT * FROM table_name
if the query returns 100 rows, how do we compare the rows for equality? I am curious how SQL Server would implement this - basically, the implementation of the DISTINCT operator. I just want to know how SQL Server handles it behind the scenes, as it will help me understand the concept more clearly.
The simplest way SQL Server may use is to compare hashes of whole rows:
SELECT CHECKSUM(*)
from YourTable
or over chosen columns:
SELECT CHECKSUM(col1, col2, col3)
from YourTable
If the checksums differ, the rows differ; if the checksums match, the exact column values still need to be compared more carefully, but it becomes much easier to filter out the rows whose checksums do not match.
To find the candidate duplicates:
SELECT CHECKSUM(*)
from YourTable
GROUP BY CHECKSUM(*)
HAVING COUNT(*) > 1
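A follow-up check over the exact values might look like this (a sketch; col1, col2, col3 stand in for your actual columns):
SELECT col1, col2, col3, COUNT(*) AS dup_count
FROM YourTable
WHERE CHECKSUM(col1, col2, col3) IN (SELECT CHECKSUM(col1, col2, col3)
                                     FROM YourTable
                                     GROUP BY CHECKSUM(col1, col2, col3)
                                     HAVING COUNT(*) > 1)
GROUP BY col1, col2, col3
HAVING COUNT(*) > 1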
You could use the following query:
SELECT *
FROM table_name
GROUP BY col1,col2,... -- all columns to test for equality here
HAVING COUNT(*)>1
In the GROUP BY you put the name of every column you want to be equal. If you want entire rows to be equal, put down the name of every column in the table there.
No matter what, your table in a relational database will have a primary key that is used in other tables.
Because of that key, your rows 1-100 will all be unique.
However, if you are trying to compare specific columns, you will need to do something similar to this:
$temp = null;
$saveIDs = array();
$i = 0;
// order by name so equal values are adjacent and can be compared row by row
$stmt = $mysqli->prepare("SELECT id, name FROM users ORDER BY name");
$stmt->execute();
$stmt->bind_result($id, $name);
while ($stmt->fetch()) {
    if ($temp != $name) {   // keep only the first id of each distinct name
        $temp = $name;
        $saveIDs[$i] = $id;
    }
    $i++;
}

distinct values from multiple fields within one table ORACLE SQL

How can I get distinct values from multiple fields within one table with just one request?
Option 1
SELECT WM_CONCAT(DISTINCT(FIELD1)) FIELD1S,WM_CONCAT(DISTINCT(FIELD2)) FIELD2S,..FIELD10S
FROM TABLE;
WM_CONCAT is LIMITED
Option 2
select DISTINCT(FIELD1) FIELDVALUE, 'FIELD1' FIELDNAME
FROM TABLE
UNION
select DISTINCT(FIELD2) FIELDVALUE, 'FIELD2' FIELDNAME
FROM TABLE
... FIELD 10
is just too slow
If you were scanning a small range in the data (not full-scanning the whole table), you could use WITH to optimise your query,
e.g:
WITH a AS
(SELECT field1,field2,field3..... FROM TABLE WHERE condition)
SELECT field1 FROM a
UNION
SELECT field2 FROM a
UNION
SELECT field3 FROM a
.....etc
For my problem, I had
WL1    WL2    correlation
A      B      0.8
B      A      0.8
A      C      0.9
C      A      0.9
How do I eliminate the symmetry from this table?
select WL1, WL2,correlation from
table
where least(WL1,WL2)||greatest(WL1,WL2) = WL1||WL2
order by WL1
this gives
WL1    WL2    correlation
A      B      0.8
A      C      0.9
:)
The best option in SQL is the UNION, though you may be able to save some performance by taking out the DISTINCT keywords:
select FIELD1 FROM TABLE
UNION
select FIELD2 FROM TABLE
UNION provides the unique set from two tables, so distinct is redundant in this case. There simply isn't any way to write this query differently to make it perform faster. There's no magic formula that makes searching 200,000+ rows faster. It's got to search every row of the table twice and sort for uniqueness, which is exactly what UNION will do.
The only way you can make it faster is to create separate indexes on the two fields (maybe) or pare down the set of data that you're searching across.
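For example, the separate indexes might be created like this (a sketch; index names are illustrative):
CREATE INDEX idx_table_field1 ON TABLE (FIELD1);
CREATE INDEX idx_table_field2 ON TABLE (FIELD2);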
Alternatively, if you're doing this a lot and adding new fields rarely, you could use a materialized view to store the result and only refresh it periodically.
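A hedged sketch of the materialized view approach in Oracle (the view name and refresh options are illustrative and would depend on your needs):
CREATE MATERIALIZED VIEW distinct_fields_mv
REFRESH COMPLETE ON DEMAND
AS
SELECT FIELD1 AS FIELDVALUE FROM TABLE
UNION
SELECT FIELD2 FROM TABLE;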
Incidentally, your second query doesn't appear to do what you want it to. Distinct always applies to all of the columns in the select section, so your constants with the field names will cause the query to always return separate rows for the two columns.
I've come up with another method that, experimentally, seems to be a little faster. In effect, this allows us to trade one full-table scan for a Cartesian join. In most cases, I would still opt to use the UNION, as it's much more obvious what the query is doing.
SELECT DISTINCT CASE lvl WHEN 1 THEN field1 ELSE field2 END
FROM table
CROSS JOIN (SELECT LEVEL lvl
FROM DUAL
CONNECT BY LEVEL <= 2);
It's also worthwhile to add that I tested both queries on a table without useful indexes containing 800,000 rows and it took roughly 45 seconds (returning 145,000 rows). However, most of that time was spent actually fetching the records, not running the query (the query took 3-7 seconds). If you're getting a sizable number of rows back, it may simply be the number of rows that is causing the performance issue you're seeing.
When you get distinct values from multiple columns, it won't return a data table. Consider the following data:
Column A    Column B
10          50
30          50
10          50
When you take the distinct values, you get 2 rows from the first column and 1 row from the 2nd column. It simply won't work.
And something like this?
SELECT 'FIELD1',FIELD1, 'FIELD2',FIELD2,...
FROM TABLE
GROUP BY FIELD1,FIELD2,...

orderby in sql query

I need to order a SQL query by a column (the three different values in this column are C, E, T).
I want the results in the order E, C, T. So, of course, I can't use a plain ascending or descending ORDER BY on this column.
Any suggestions on how I can do this? I don't know if it matters or not, but I am using a Sybase data server on Tomcat.
You could do it by putting a conditional in your SELECT clause. I'm not a Sybase guy, but it might look something like this:
SELECT col, CASE WHEN col = 'E' THEN 1 WHEN col = 'C' THEN 2 ELSE 3 END AS sort_col
FROM some_table
ORDER BY sort_col
If your AS alias doesn't work, you could sort by the 1-based column index like this:
ORDER BY 2
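Put together, that variant would look something like this (a sketch using the same illustrative names):
SELECT col,
       CASE WHEN col = 'E' THEN 1 WHEN col = 'C' THEN 2 ELSE 3 END
FROM some_table
ORDER BY 2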
The other methods work, but this is an often overlooked trick (in MSSQL, I'm not positive if it works in Sybase or not):
select
foo,
bar
from
bortz
order by
case foo
when 'E' then 1
when 'C' then 2
when 'T' then 3
else 4
end
You could use a per-row function to change the columns as other answers have stated but, if you expect this database to scale well, per-row functions are rarely a good idea.
Feel free to ignore this advice if your table is likely to remain small.
The advice here works because of a few general "facts" (I enclose that in quotation marks since it's not always the case but, in my experience, it mostly is):
The vast majority of databases are read far more often than they're written. That means it's usually a good idea to move the cost of calculation to the write phase rather than the read phase.
Most problems with databases tend to be the "my query is slow" type rather than the "there's not enough disk space" type.
Tables always grow bigger than you thought they would :-)
If your situation is matched by those "facts", it makes sense to sacrifice a little disk space in order to speed up your queries. It's also better to incur the cost of calculation only when necessary (insert/update), not when the data hasn't actually changed (select).
To do that, you would create a new column (ect_col_sorted for example) in the table which would hold a numeric sort value (or more than one column if you want different sort orders).
Then have an insert/update trigger so that, whenever a row is added to, or changed in, the table, you populate the sort field with the correct value (E = 1, C = 2, T = 3, anything else = 0). Then put an index on that column and your query becomes much simpler (and faster):
select ect_col, other_col_1, other_col_2
from ect_table
order by ect_col_sorted;
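For illustration, the supporting column, index and trigger might look roughly like this (a sketch in Sybase/T-SQL-style syntax; the key column "id" is an assumption and would need to match your actual table):
ALTER TABLE ect_table ADD ect_col_sorted int NULL

CREATE INDEX idx_ect_col_sorted ON ect_table (ect_col_sorted)

CREATE TRIGGER trg_ect_col_sorted ON ect_table
FOR INSERT, UPDATE
AS
UPDATE ect_table
SET    ect_col_sorted = CASE ect_table.ect_col
                            WHEN 'E' THEN 1
                            WHEN 'C' THEN 2
                            WHEN 'T' THEN 3
                            ELSE 0
                        END
FROM   ect_table, inserted
WHERE  ect_table.id = inserted.id   -- "id" is the assumed key column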
The idea is to add a subquery that returns your data rows plus a fictive value, which is 0 for E, 1 for C and 2 for T, and then simply order by this column.
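A minimal sketch of that idea (reusing the illustrative table and column names from the other answers):
SELECT x.*
FROM (SELECT t.*,
             CASE col WHEN 'E' THEN 0 WHEN 'C' THEN 1 ELSE 2 END AS sort_col
      FROM some_table t) x
ORDER BY x.sort_col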
Hope it helps.
psasik's solution will work, as will this one (which to use and which is faster depends on what else is going on in the query):
select *
from some_table
where col = 'E'
UNION ALL
select *
from some_table
where col = 'C'
UNION ALL
select *
from some_table
where col = 'T'
That should work, but you can also do the following, which will be "safer" for large datasets which may be paged...
select *, 1 as o
from some_table
where col = 'E'
UNION ALL
select *, 2 as o
from some_table
where col = 'C'
UNION ALL
select *, 3 as o
from some_table
where col = 'T'
ORDER BY o
After I wrote the above, I decided this is the best solution (note: I do not know if this will work on a Sybase server as I don't have access to one right now, but if it does not work there, just pull the creation of the keysort memory table out to a variable or temporary table -- whichever Sybase supports):
;WITH keysort (k,o) AS
(
SELECT 'E',0
UNION ALL
SELECT 'C',1
UNION ALL
SELECT 'T',2
)
SELECT *
FROM some_table
LEFT JOIN keysort ON some_table.col = keysort.k
ORDER BY keysort.o
This should be the fastest of all choices -- it uses an in-memory table to exploit SQL's optimized joining.
You can even go about using the FIELD() function (available in MySQL):
Order by Field(columnname, 'E', 'C', 'T')
Hope this helps you.