How to make result set from ('1','2','3')? - sql

I have a question: how can I make a result set from only a list of values? For example, I have these values: ('1','2','3')
And i want to make a sql that returns such table:
1
2
3
Thanks.
[Edit]
Sorry for the badly worded question.
Actually, the list does not contain integers; it contains strings.
I currently need something like ('aa','bb','cc').
[/Edit]

If you want to write a SQL statement that takes a comma-separated list and generates an arbitrary number of actual rows, the only real way is to use a table function: a PL/SQL function that splits the input string and returns the elements as separate rows.
Check out this link for an intro to table-functions.
Alternatively, if you can construct the SQL statement programmatically in your client you can do:
SELECT 'aa' FROM DUAL
UNION
SELECT 'bb' FROM DUAL
UNION
SELECT 'cc' FROM DUAL
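If the client builds the statement, the construction is mechanical. A minimal Python sketch (the `union_select` helper and its quoting rule are illustrative, not from the original answer):

```python
def union_select(values):
    """Build a UNION query that yields one row per value (Oracle DUAL syntax)."""
    # Double any embedded single quotes so the literals stay valid SQL.
    selects = ["SELECT '%s' FROM DUAL" % v.replace("'", "''") for v in values]
    return "\nUNION\n".join(selects)

sql = union_select(['aa', 'bb', 'cc'])
```

The resulting string can then be executed as-is, or embedded as an inline view in a larger query.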

The best way I've found is using XML.
SELECT items.extract('/l/text()').getStringVal() item
FROM TABLE(xmlSequence(
       EXTRACT(XMLType('<all><l>'||
         REPLACE('aa,bb,cc',',','</l><l>')||'</l></all>')
       ,'/all/l'))) items;
Wish I could take credit but alas : http://pbarut.blogspot.com/2006/10/binding-list-variable.html.
Basically what it does is convert the list to an XML document and then parse it back out into rows.

The easiest way is to abuse a table that is guaranteed to have enough rows.
-- for Oracle
select rownum from tab where rownum < 4;
If that is not possible, check out Oracle Row Generator Techniques.
I like this one (requires 10g):
select integer_value
from dual
where 1=2
model
dimension by ( 0 as key )
measures ( 0 as integer_value )
rules upsert ( integer_value[ for key from 1 to 10 increment 1 ] = cv(key) )
;

One trick I've used in various database systems (not just SQL databases) is to keep a table which just contains the first 100 or 1000 integers. Such a table is very easy to create programmatically, and your query then becomes:
SELECT value FROM numbers WHERE value < 4 ORDER BY value
You can use the table for lots of similar purposes.
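A runnable sketch of the numbers-table approach in SQLite (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE numbers (value INTEGER PRIMARY KEY)")
# Populate the helper table once; 1000 rows covers most list-generation needs.
conn.executemany("INSERT INTO numbers VALUES (?)",
                 [(i,) for i in range(1, 1001)])
rows = conn.execute(
    "SELECT value FROM numbers WHERE value < 4 ORDER BY value"
).fetchall()
```

Here `rows` comes back as the three single-column rows 1, 2, 3.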

Related

Is there a system table that updates and returns its row based on a query?

DUAL is a system table that is used to get constants and the results of system functions.
However, it only has one column, named "dummy", and a single row with the value 'X' in it, so this doesn't work.
My question is: is there a system table that can pull this trick off? A (single-column) table that returns a row regardless of how its one column is queried, with the value in the WHERE clause.
What you want to do violates how SQL works. However, if you always want to return exactly one row from a "table", you can use aggregation with no GROUP BY:
select max(dummy)
from dual
where dummy = '5'
The returned value is NULL.
Perhaps it would be easier to suggest something if you explained what problem you are trying to solve.
As you asked:
A (single-column) table that returns a row regardless of how its one column is queried, with the value in the where clause.
SQL> select dummy from dual where dummy = '5' or 1 = 1;
D
-
X
SQL>
dual is a (single-column) table
this query returned a row
value ('5') is in its where clause
I presume that "regardless of how its one column is queried" is also satisfied
Why "where clause"?
SELECT '5' AS dummy FROM dual
DUMMY|
-----|
5 |

Firebird SQL: pass multi-row data in the query text

is it possible to use an array of elements as a select statement?
I know it is possible to get rows based on static elements like this:
SELECT 405, CAST('4D6178' AS VARCHAR(32)), CAST('2017-01-01 00:00:00' AS TIMESTAMP) FROM rdb$database
That will give you a table select with one row.
Now I would like to get this as a table with n rows, but I don't know how to achieve this. Since Firebird doesn't allow multiple select statements, I cannot simply append the select n times.
Info : Firebird 2.1
Use the UNION ALL clause.
https://en.wikipedia.org/wiki/Set_operations_(SQL)#UNION_operator
Select x,y,z From RDB$DATABASE
UNION ALL
Select a,b,c From RDB$DATABASE
UNION ALL
Select k,l,m From RDB$DATABASE
Notice however that this should only be used for small data. Firebird query length is limited to 64KB, and even if it were not, abusing this method to inject lots of data would not be good.
If you really need to enter a lot of similar (same-structure) data rows, use Global Temporary Tables.
The following discussion hopefully gives you more insight:
https://stackoverflow.com/a/43997801/976391
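Generating the UNION ALL text from client-side rows is mechanical; a Python sketch (the `rows_to_union_all` helper is illustrative and assumes each value is already a valid SQL literal, e.g. strings are pre-quoted):

```python
def rows_to_union_all(rows):
    """Render a list of tuples as a UNION ALL chain of selects (Firebird syntax)."""
    selects = [
        "SELECT " + ", ".join(str(v) for v in row) + " FROM RDB$DATABASE"
        for row in rows
    ]
    return "\nUNION ALL\n".join(selects)

sql = rows_to_union_all([(405, "'4D6178'"), (406, "'2B1C'")])
```

Remember to check the generated statement against the 64KB limit before sending it.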

what is the maximum value we can use with IN operator in sql [duplicate]

I'm using the following code:
SELECT * FROM table
WHERE Col IN (123,123,222,....)
However, if I put more than ~3000 numbers in the IN clause, SQL throws an error.
Does anyone know if there's a size limit or anything similar?
Depending on the database engine you are using, there can be limits on the length of an instruction.
SQL Server has a very large limit:
http://msdn.microsoft.com/en-us/library/ms143432.aspx
Oracle, on the other hand, has a limit that is very easy to reach (1,000 elements in an IN list).
So, for large IN clauses, it's better to create a temp table, insert the values, and do a JOIN. It also tends to be faster.
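A sketch of the temp-table-plus-JOIN shape, using SQLite so it runs anywhere (the table names and sample data are illustrative):

```python
import sqlite3

ids = [123, 222, 456, 878]  # the values that would have gone into the IN list
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col INTEGER, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(123, 'a'), (999, 'b'), (222, 'c')])
# Load the ids into a temp table and join, instead of building a huge IN list.
conn.execute("CREATE TEMP TABLE ids (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO ids VALUES (?)", [(i,) for i in ids])
rows = conn.execute(
    "SELECT t.col, t.payload FROM t JOIN ids ON ids.id = t.col ORDER BY t.col"
).fetchall()
```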
There is a limit, but you can split your values into separate blocks of in()
Select *
From table
Where Col IN (123,123,222,....)
or Col IN (456,878,888,....)
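Splitting the values into blocks can be automated on the client; a Python sketch (the `chunked_in_clauses` helper is illustrative and assumes numeric values, so no quoting is needed):

```python
def chunked_in_clauses(column, values, size=1000):
    """Split `values` into IN (...) blocks of at most `size` items, OR'ed together."""
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    return " OR ".join(
        f"{column} IN ({','.join(str(v) for v in chunk)})" for chunk in chunks
    )

where_clause = chunked_in_clauses("Col", list(range(1, 2501)))
```

The default block size of 1000 matches Oracle's per-list limit; other engines may tolerate larger blocks.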
Parameterize the query and pass the ids in using a Table Valued Parameter.
For example, define the following type:
CREATE TYPE IdTable AS TABLE (Id INT NOT NULL PRIMARY KEY)
Along with the following stored procedure:
CREATE PROCEDURE sp__Procedure_Name
@OrderIDs IdTable READONLY
AS
SELECT *
FROM table
WHERE Col IN (SELECT Id FROM @OrderIDs)
Why not do a WHERE IN on a sub-select?
Pre-query into a temp table or something...
CREATE TABLE SomeTempTable AS
SELECT YourColumn
FROM SomeTable
WHERE UserPickedMultipleRecordsFromSomeListOrSomething
then...
SELECT * FROM OtherTable
WHERE YourColumn IN ( SELECT YourColumn FROM SomeTempTable )
Depending on your version, use a table valued parameter in 2008, or some approach described here:
Arrays and Lists in SQL Server 2005
For MS SQL 2016, passing ints into the IN clause, it looks like it can handle close to 38,000 records:
select * from user where userId in (1,2,3,etc)
I solved this by simply using ranges
WHERE Col >= 123 AND Col <= 10000
then removed the unwanted records in that range by looping in the application code. It worked well for me because I was looping over the records anyway, and ignoring a couple of thousand records didn't make any difference.
Of course, this is not a universal solution but it could work for situation if most values within min and max are required.
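A toy Python sketch of that filtering step (the `wanted` set and the range bounds are illustrative; the range stands in for the rows the BETWEEN query would return):

```python
wanted = {123, 500, 9999}  # the values the user actually asked for
# Fetch the whole range once, then drop the unwanted rows in application code.
fetched = range(123, 10001)  # plays the role of: SELECT Col FROM t WHERE Col >= 123 AND Col <= 10000
kept = [v for v in fetched if v in wanted]
```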
You did not specify the database engine in question; in Oracle, an option is to use tuples like this:
SELECT * FROM table
WHERE (Col, 1) IN ((123,1),(123,1),(222,1),....)
This ugly hack only works in Oracle SQL, see https://asktom.oracle.com/pls/asktom/asktom.search?tag=limit-and-conversion-very-long-in-list-where-x-in#9538075800346844400
However, a much better option is to use stored procedures and pass the values as an array.
You can use tuples like this:
SELECT * FROM table
WHERE (Col, 1) IN ((123,1),(123,1),(222,1),....)
There is no restriction on the number of these. It compares pairs.

Trying to generate a subset from an IN statement

I am writing a solution for a user that matches a list of phone numbers they enter against a customer database.
The user needs to enter a comma separated list of phone numbers (integers), and the query needs to tell the user which phone numbers from their list are NOT in the database.
The only way I could think to do this is by first creating a subset NUMBER_LIST that includes all of the phone numbers, which I can join, and then excluding that list from what I bring back from my customer database.
WITH NUMBER_LIST AS (
SELECT INTEGERS
FROM (
SELECT level - 1 + 8000000000 INTEGERS
FROM dual
CONNECT BY level <= 8009999999-8000000000+1
)
WHERE INTEGERS IN (8001231001,8001231003,8001231234,8001231235,...up to 1000 phone numbers)
)
The problem here is that the above code works fine to create my subset for numbers between 800-000-0000 and 800-999-9999. The phone numbers in my list and customer database can be in ANY range (not just 800 numbers); I did this just as a test. It takes about 6 seconds to generate the subset from that query. If I extend the CONNECT BY LEVEL to include all numbers from 100-000-0000 to 999-999-9999, my query runs out of memory creating a subset that large (and I believe it is ridiculously overkill to create a huge list and break it down using my IN statement).
The problem is creating the initial subset. I can handle the rest of the query, but I need to be able to generate the subset of numbers to query against my customer database from my IN statement.
Few things to remember:
I don't have the ability to load the numbers in a temporary table first. The user will be entering the "IN(...,...,...)" statement themselves.
This needs to be a single statement, no extra functions or variable declarations
The database is Oracle 10g, and I am using SQL Developer to create the query.
The user understands that they can only enter 1000 numbers into the IN statement. This needs to be robust enough to select any 1000 numbers from the entire area code range.
The end result is to get a list of phone numbers that ARE NOT in the database. A simple NOT IN... will not work, because that will bring back which numbers are in the database, but not in my list.
How can I make this work for all numbers between 1000000000-9999999999 (or all U.S. 10-digit phone number possibilities). I may be going about it completely wrong to generate my initial HUGE list and then excluding everything other than my IN statement, but I'm not sure where to go from here.
Thanks so much for your help in advance. I've learned so much from all of you.
You could use the following:
SELECT *
FROM (SELECT regexp_substr(&x, '[^,]+', 1, LEVEL) phone_number
FROM dual
CONNECT BY LEVEL <= length(&x) - length(REPLACE(&x, ',', '')) + 1)
WHERE phone_number NOT IN (SELECT phone_table.phone_number
FROM phone_table)
The first query will build a list with the individual phone numbers.
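The same split-then-anti-join shape is easy to try outside Oracle; a SQLite sketch where the comma-separated input is split client-side (table names and sample numbers are illustrative):

```python
import sqlite3

user_list = "8001231001,8001231003,8001231234"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phone_table (phone_number TEXT)")
conn.executemany("INSERT INTO phone_table VALUES (?)",
                 [("8001231001",), ("8005550000",)])
# Split the comma-separated input, load it, then anti-join with NOT IN.
conn.execute("CREATE TEMP TABLE number_list (phone_number TEXT)")
conn.executemany("INSERT INTO number_list VALUES (?)",
                 [(n,) for n in user_list.split(",")])
missing = [r[0] for r in conn.execute(
    "SELECT phone_number FROM number_list "
    "WHERE phone_number NOT IN (SELECT phone_number FROM phone_table) "
    "ORDER BY phone_number"
)]
```

`missing` holds exactly the numbers from the user's list that the database does not contain.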
This problem is very closely related to the 'how do I bind an in list' problem, which has come up on here a few times. I posted an answer Dynamic query with HibernateCritera API & Oracle - performance in the past.
Something like this should do what you want:
create table phone_nums (phone varchar2(10));
insert into phone_nums values ('12345');
insert into phone_nums values ('23456');
with bound_inlist
as
(
select
substr(txt,
instr (txt, ',', 1, level ) + 1,
instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 )
as token
from (select ','||:txt||',' txt from dual)
connect by level <= length(:txt)-length(replace(:txt,',',''))+1
)
select *
from bound_inlist a
where not exists (select null from phone_nums where phone = token);
Here the list of comma separated phone numbers is bound into the query, so you are using bind variables correctly, and you will be able to enter probably an unlimited number of phone numbers to check in one go (although I would check both the 4000 and 32767 character boundaries to be sure).
You say you can't use temp tables or procs or custom functions -- it would be a simple task if you could.
What's the client tool being used to submit this query? Is there a reason why you can't query all phone numbers from the database and do the compare on the client?
If you are constrained to the point that it MUST be solved with IN (n1,n2,n3,...,n1000), then your approach would appear to be the only solution.
As you mentioned though, that's a big list you're creating up front.
Are you able to adapt your approach slightly?
WITH NUMBER_LIST AS (
SELECT n1 AS num FROM DUAL
UNION ALL SELECT n2 FROM DUAL
UNION ALL SELECT n3 FROM DUAL
...
UNION ALL SELECT n1000 FROM DUAL
)

Sqlite : Sql to finding the most complete prefix

I have an SQLite table containing records of variable-length number prefixes. I want to be able to find the most complete prefix match against another variable-length number in the most efficient way:
eg. The table contains a column called prefix with the following numbers:
1. 1234
2. 12345
3. 123456
What would be an efficient SQLite query to find the second record as the most complete match against 12345999?
Thanks.
A neat trick here is to reverse a LIKE clause -- rather than saying
WHERE prefix LIKE '...something...'
as you would often do, turn the prefix into the pattern by appending a % to the end and comparing it to your input as the fixed string. Order by length of prefix descending, and pick the top 1 result.
I've never used Sqlite before, but just downloaded it and this works fine:
sqlite> CREATE TABLE whatever(prefix VARCHAR(100));
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('1234');
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('12345');
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('123456');
sqlite> SELECT * FROM whatever WHERE '12345999' LIKE (prefix || '%')
ORDER BY length(prefix) DESC LIMIT 1;
output:
12345
Personally I use the following method; it will use indexes.
The list ('1','12','123','1234','12345','123459','1234599','12345999','123459999') - every leading prefix of the input number - should be generated by the client:
SELECT * FROM whatever WHERE prefix in
('1','12','123','1234','12345','123459','1234599','12345999','123459999')
ORDER BY length(prefix) DESC LIMIT 1;
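Generating that prefix list on the client is a one-liner; a Python sketch (the `all_prefixes` helper is illustrative):

```python
def all_prefixes(number):
    """Return every leading prefix of the input string, shortest first."""
    return [number[:i] for i in range(1, len(number) + 1)]

prefixes = all_prefixes("12345999")
```

Each element then becomes one quoted member of the IN list.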
select foo, 1 quality from bar where foo like '123%'
union
select foo, 2 quality from bar where foo like '1234%'
order by quality desc limit 1
I haven't tested it, but the idea would work in other dialects of SQL
A couple of assumptions:
You are joining with some other table, so you want to know the largest variable-length prefix for each record in the table you are joining with.
Your table of prefixes contains more than just the three in your example; otherwise you could hardcode the logic and move on.
prefix_table.prefix
1234
12345
123456
etc.
foo.field
12345999
123999
select
a.field,
b.prefix,
max(length(b.prefix)) as length
from
foo a inner join prefix_table b on b.prefix = substr(a.field, 1, length(b.prefix))
group by
a.field,
b.prefix
note that this is untested but logically should make sense.
Without resorting to a specialized index, the best-performing strategy may be to hunt for the answer.
Issue a query for each possible prefix length, starting with the longest, and stop once you get rows back.
It's certainly not the prettiest way to achieve what you want, but unlike the other suggestions, indexes will be considered by the query planner. As always, it depends on your actual data; in particular, on how many rows are in your table and how long the average hunt will be.
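A sketch of the hunt in Python/SQLite, reusing the schema from the earlier answer (the probe uses `=` rather than LIKE, which is equivalent when checking one exact candidate prefix per step and is trivially index-friendly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE whatever (prefix TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO whatever VALUES (?)",
                 [("1234",), ("12345",), ("123456",)])

def longest_prefix(conn, number):
    """Probe for a stored prefix, longest candidate first; stop on first hit."""
    for i in range(len(number), 0, -1):
        row = conn.execute("SELECT prefix FROM whatever WHERE prefix = ?",
                           (number[:i],)).fetchone()
        if row:
            return row[0]
    return None

match = longest_prefix(conn, "12345999")
```

Each iteration is a single indexed lookup, so the total cost is bounded by the length of the input number.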