Validate existing mobile numbers by starting series - SQL

I have more than 50K mobile numbers, and I need to validate each one against the Indian mobile number series, i.e. determine whether it is a valid number or not. I have downloaded the Indian mobile number series from Wikipedia and stored it in a column named Series in another table. Now I want to validate all the numbers in one go; please suggest a standard query that is fast and has a good execution plan.
For example, the series are: 6000,6001,6002,9977,9947
Below are the mobile numbers: 1241124154,6011101101,8414141401,6014141410,9947256585
Please note that the numbers above are randomly entered; they are not related to the numbers I have in my records. Any resemblance to an existing number is purely coincidental.

Given that you can determine valid phone matches using known prefixes of the numbers, you should be able to just index the phone number column and then run something like:
SELECT *
FROM yourTable
WHERE
phone LIKE '6000%' OR
phone LIKE '6001%' OR
phone LIKE '6002%' OR
phone LIKE '9977%' OR
phone LIKE '9947%';
If you have many possible phone prefixes to check, then I suggest the following approach. First, create a new column based on the phone number which contains only the prefix. You may do this in your current table, or you may create a temporary table if you don't want to/can't change your current schema. Next, create a new table which contains just a single column. Populate this table with your 4000 actual valid phone prefixes, and then index this phone column. Now, the following query should be very fast:
SELECT t1.phone
FROM yourTable t1
WHERE EXISTS (SELECT 1 FROM prefixes t2 WHERE t2.prefix = t1.prefix);
Your SQL database should be able to use the index to satisfy the WHERE clause, and make the query execute quickly.
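A minimal sketch of that setup, assuming SQL Server, a character-typed phone column, and series that are always exactly 4 digits; yourTable and prefixes are the hypothetical names used in the query above:
-- Hypothetical sketch: persist the 4-character prefix and index both sides.
ALTER TABLE yourTable ADD prefix AS LEFT(phone, 4) PERSISTED;
CREATE INDEX idx_yourTable_prefix ON yourTable(prefix);

CREATE TABLE prefixes (prefix varchar(4) NOT NULL PRIMARY KEY);
INSERT INTO prefixes (prefix) VALUES ('6000'), ('6001'), ('6002'), ('9977'), ('9947');

-- The EXISTS query above can now resolve each row with an index lookup,
-- or flag every number in one pass:
SELECT t1.phone,
       CASE WHEN EXISTS (SELECT 1 FROM prefixes t2 WHERE t2.prefix = t1.prefix)
            THEN 'VALID' ELSE 'INVALID' END AS status
FROM yourTable t1;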

Use something like this. If you have a mobile master table named tblmobile and the series in a tblmobileseries table, then below is a sample query for your solution:
SELECT tm.mobileno,
       (case when exists (select 1
                          from tblmobileseries tms
                          where tm.mobileno like tms.series + '%')
             then 'VALID'
             else 'Not Valid'
        end) as ISValid
from tblmobile tm

Select M.Mobile,
       Case When Exists (Select 1 from SeriesT T where Left(M.Mobile, 4) = T.Series)
            Then 'Valid'
            Else 'Invalid'
       End as Result
From DM M
DM - table for the mobile numbers
SeriesT - table for the mobile number series


Create column name based on value without execute

I need to create a column name based on the value of other columns. I need to return a value from a column, but the specific column name depends on the value inserted in another table.
For instance:
Table A
Column1 | Column2
      1 |       2
Based on those values I need to go to table B, to the column "VE12".
I need this to be dynamic, so execute(#query) is my last option, and I would like to avoid CASE WHEN statements because I have more than 50 options.
My query will be something like:
select case when fn.tab=8 and fo.pais=3 then cp.ve83 end
FROM fn
INNER JOIN fo ON fo.stamp = fn.stamp
INNER JOIN cp
If the value in the column tab is 8 and the value in column pais is 3 I should return the value in column ve83.
Thanks for all the help!
The only sensible option is to go back to the business meaning of the data and redesign the database according to that, instead of according to "technique-oriented abstractions" such as these, which SQL was never intended to support.
The main reason for this is that SQL was founded on first-order logic, which precludes supporting things like varying domains. That is what you are doing (or at least seeking to do), because ve12 could be a DATETIME, ve83 could be a VARCHAR, ve56 could be a BLOB, and so on. So there is no way for you [or anyone else] to determine the data type of the results of your query, and it is even harder to attach meaning to what comes out of your desired query, precisely because of this varying-domain and varying-source characteristic.
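Purely as an illustration of what such a redesign could look like (the cp_values table, its columns, and the assumed join key are hypothetical, not something taken from the poster's schema), storing each ve* value as a row keyed by the combination that currently selects the column turns the dynamic lookup into an ordinary join:
-- Hypothetical redesign sketch: one row per (stamp, tab, pais) instead of
-- fifty ve* columns, with a single, consistent data type per attribute.
create table cp_values (
    stamp varchar(25)  not null,
    tab   int          not null,
    pais  int          not null,
    val   varchar(100) not null,
    primary key (stamp, tab, pais)
);

-- The "which column do I read?" question disappears:
select v.val
from fn
inner join fo on fo.stamp = fn.stamp
inner join cp_values v
        on v.stamp = fn.stamp   -- assumed key; adjust to the real relationship
       and v.tab   = fn.tab
       and v.pais  = fo.pais;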

Is there a way to query a ranged Expression in DB?

Our application is a mainframe application on an IBM iSeries with a DB2 database. Some of our table values contain a range.
Ex: 100;105;108;110:160;180
-- UPDATE --
The above data is from a single row (a single column, to be precise). There are multiple values in the same format on various rows.
In this case, individual values are delimited by a ";", but 110:160 is a range: it includes all the values from 110 to 160. For the individual values we were obviously using LIKE statements, e.g. if I have to query for 105.
The challenge is querying for 125, which is technically not present in the database; logically, however, I need to retrieve that record.
The system (application) somehow accomplishes this, and I am not sure how. I am not a mainframe developer; I just had to query the database to retrieve a specific record for some of the automation that we work on.
As a workaround, I could think of two things:
Expand the ranges and store them in a temp database programmatically.
Ex: 110:160 would be expanded to 110;111;112..160 (yes, it's tedious).
Reduce the number of records by filtering on certain unique columns (the ones without ranges), then programmatically apply logic to identify the right record.
As both are workarounds, I was curious how the system does it (I reached out to the devs of the app; so far, no luck). Is there a direct approach to achieve this? Could it be a stored procedure?
If I understood your question correctly, your example values are not in a single row but in multiple rows; otherwise some preprocessing has to be done first.
I would split the combined value into its components with SQL, like this:
with temp(id, text, value1, value2) as (
    select id, text
         -- lower bound: everything before the ':' (or the whole value if there is none)
         , case when posstr(id, ':') > 0
                then substr(id, 1, posstr(id, ':') - 1)
                else id
           end as value1
         -- upper bound: everything after the ':' (or the whole value if there is none)
         , case when posstr(id, ':') > 0
                then substr(id, posstr(id, ':') + 1)
                else id
           end as value2
    from testrange
)
select *
from temp
where 125 between int(value1) and int(value2)
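For the single-row case described in the update (values like 100;105;108;110:160;180 all in one column), a preprocessing step could split the string into rows first. A rough sketch, assuming recursive common table expressions are available on your DB2 release; the table and column names (testrange, val) are hypothetical:
with split(id, rest, token) as (
    -- anchor: append a trailing ';' so every token is terminated
    select id, val || ';', cast(null as varchar(60))
    from testrange
    union all
    -- peel off one ';'-delimited token per recursion step
    select id,
           substr(rest, locate(';', rest) + 1),
           substr(rest, 1, locate(';', rest) - 1)
    from split
    where locate(';', rest) > 0
), ranges(id, value1, value2) as (
    -- turn 'a:b' into a range and a single value into a degenerate range
    select id,
           int(case when locate(':', token) > 0
                    then substr(token, 1, locate(':', token) - 1)
                    else token end),
           int(case when locate(':', token) > 0
                    then substr(token, locate(':', token) + 1)
                    else token end)
    from split
    where token is not null and token <> ''
)
select distinct id
from ranges
where 125 between value1 and value2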

Improving performance on an alphanumeric text search query

I have a table with millions of records; I'm just posting sample data. I'm looking to get only the Endorsement data by using LIKE or LEFT, but there is no difference between them in execution time. Is there a better way to get the data in less time when dealing with alphanumeric data? I have 4.4M records in the table. Please suggest.
declare @t table (val varchar(50))
insert into @t(val) values
('0-1AB11BC11yerw123Endorsement'),
('0-1AB114578Endorsement'),
('0-1BC11BC11yerw122553Endorsement'),
('0-1AB11BC11yerw123newBusiness'),
('0-1AB114578newBusiness'),
('0-1BC11BC11yerw122553newBusiness'),
('0-1AB11BC11yerw123Renewal'),
('0-1AB114578Renewal'),
('0-1BC11BC11yerw122553Renewal')
SELECT * FROM @t where RIGHT(val,11) = 'Endorsement'
SELECT * FROM @t where val like '%Endorsement%'
Imagine you'd have to find names in a telephone book that end with a certain string. All you could do is read every single name and compare. It doesn't help you at all to see where the names with A, B, C, etc. start, because you are not interested in the initial characters of the names but only in the last characters instead. Well, the only thing you could do to speed this up is ask some friends to help you and each person scans a range of pages only. In a DBMS it is the same. The DBMS performs a full table scan and does this parallelized if possible.
If however you had a telephone book listing the words backwards, so you'd see which words end with A, B, C, etc., that sure would help. In SQL Server: Create a computed column on the reverse string:
alter table t add reverse_val as reverse(val);
And add an index:
create index idx_reverse_val on t(reverse_val);
Then query the string with LIKE. The DBMS should notice that it can use the index for speeding up the search process.
select * from t where reverse_val like reverse('Endorsement') + '%';
Having said this, it seems strange that you are interested in the end of your strings at all. In a good database you store atomic information, e.g. you would not store a person's name and birthdate in the same column ('John Miller 12.12.2000'), but in separate columns instead. Sure, it does happen that you store names and want to look for names starting with, ending with, containing substrings, but this is a rare thing after all. Check your column and think about whether its content should be separate columns instead. If you had the string ('Endorsement', 'Renewal', etc.) in a separate column, this would really speed up the lookup, because all you'd have to do is ask where val = 'Endorsement' and with an index on that column this is a super-simple task for the DBMS.
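To illustrate that last suggestion, a hedged sketch assuming SQL Server and that the suffix is always one of a known set; the column and index names are hypothetical:
alter table t add transaction_type varchar(20);

update t
set transaction_type = case
        when val like '%Endorsement' then 'Endorsement'
        when val like '%newBusiness' then 'newBusiness'
        when val like '%Renewal'     then 'Renewal'
    end;

create index idx_transaction_type on t(transaction_type);

-- An equality predicate on an indexed column is a plain index seek:
select * from t where transaction_type = 'Endorsement';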
Try CHARINDEX or PATINDEX:
SELECT *
FROM @t t
WHERE CHARINDEX('endorsement', t.val) > 0
SELECT *
FROM @t t
WHERE PATINDEX('%endorsement%', t.val) > 0
CREATE TABLE tbl
(val varchar(50));
insert into tbl(val)values
('0-1AB11BC11yerw123Endorsement'),
('0-1AB114578Endorsement'),
('0-1BC11BC11yerw122553Endorsement'),
('0-1AB11BC11yerw123newBusiness'),
('0-1AB114578newBusiness'),
('0-1BC11BC11yerw122553newBusiness'),
('0-1AB11BC11yerw123Renewal'),
('0-1AB114578Renewal'),
('0-1BC11BC11yerw122553Renewal');
CREATE CLUSTERED INDEX inx
ON dbo.tbl(val)
SELECT * FROM tbl where val like '%Endorsement';
-- LIKE '%Endorsement' keeps the predicate on the bare column, avoiding the per-row RIGHT(val,11) call,
-- but the leading wildcard still forces an index scan rather than a seek.

Google Fusion queries: using wildcard for LOCATION parameter

I am querying a Google Fusion table as follows:
SELECT * FROM {mytableIDhere} WHERE col1 LIKE '%{mystringhere}%'
This shows all row data for entries containing {mystringhere} in column 1. Is it possible to use a * wildcard on the column field, so that the search applies to ANY column? I tried that, but I get a parse error when doing:
SELECT * FROM {mytableIDhere} WHERE * LIKE '%{mystringhere}%'
It seems that the location parameter cannot be open-ended. Is this correct? If so, is there a workaround?
EDIT: In terms of desired functionality, I am trying to create a global search of a table so that any rows containing the search query will be returned. The columns include e-mail addresses, ID numbers, commentary, and other values.

Get all records that contain a number

Is it possible to write a query that gets all records from a table where a certain field contains a numeric value?
Something like "select street from tbladdress where street like '%0%' or street like '%1%'" etc. etc.,
only with a single function?
Try this
declare @t table(street varchar(50))
insert into @t
select 'this address is 45/5, Some Road' union all
select 'this address is only text'
select street from @t
where street like '%[0-9]%'
street
this address is 45/5, Some Road
Yes, but it will be inefficient, and probably slow, with a wildcard on the leading edge of the pattern
LIKE '%[0-9]%'
Searching for text within a column is horrendously inefficient and does not scale well (per-row functions, as a rule, all have this problem).
What you should be doing is trading disk space (which is cheap) for performance (which is never cheap) by creating a new column, hasNumerics for example, adding an index to it, then using an insert/update trigger to set it based on the data going into the real column.
This means the calculation is done only when the row is created or modified, not every single time you extract the data. Databases are almost always read far more often than they're written and using this solution allows you to amortize the cost of the calculation over many select statement executions.
Then, when you want your data, just use:
select * from mytable where hasNumerics = 1; -- or true or ...
and watch it leave a regular expression query or like '%...%' monstrosity in its dust.
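If triggers feel heavy, a persisted computed column gives much the same effect with less machinery. A hedged sketch assuming SQL Server; the table, column, and index names are hypothetical:
-- The flag is computed once per insert/update, not on every query.
alter table mytable
    add hasNumerics as case when street like '%[0-9]%' then 1 else 0 end persisted;

create index idx_hasNumerics on mytable(hasNumerics);

select * from mytable where hasNumerics = 1;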
To fetch rows that contain only numbers (strictly, rows with no alphabetic characters), use this query:
select street
from tbladdress
where upper(street) = lower(street)
Works in Oracle.
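As an Oracle alternative (10g and later), REGEXP_LIKE states the intent directly; table and column names follow the question:
-- rows containing at least one digit
select street from tbladdress where regexp_like(street, '[0-9]');
-- rows consisting of digits only
select street from tbladdress where regexp_like(street, '^[0-9]+$');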
I found this solution: select street from tbladresse with(nolock) where patindex('%[0-9]%',street) = 1
It took me 2 minutes to search 3 million rows on an unindexed field.