I have a column in an Oracle table which has the value below:
select csv_val from my_table where date='09-OCT-18';
output
==================
50,100,25,5000,1000
I want these values to be in ascending order with a select query, so the output would look like:
output
==================
25,50,100,1000,5000
I tried the approach in this link, but it looks like it has some restriction on the number of digits.
Here is a modified version of the answer you linked to that can handle an arbitrary (hardcoded) number of commas. It's pretty heavy on CTEs. As with most LISTAGG answers, it'll have a 4000-char limit. I also changed your regexp so it can handle null list entries, based on this answer.
WITH
T (N) AS --TEST DATA
  (SELECT '50,100,25,5000,1000' FROM DUAL
   UNION
   SELECT '25464,89453,15686' FROM DUAL
   UNION
   SELECT '21561,68547,51612' FROM DUAL
  ),
nums (x) as -- arbitrary limit of 20, can be changed
  (select level from dual connect by level <= 20),
splitstr (N, x, substring) as
  (select N, x, regexp_substr(N, '(.*?)(,|$)', 1, x, NULL, 1)
   from T
   inner join nums on x <= 1 + regexp_count(N, ',')
   order by N, x)
select N, listagg(substring, ',') within group (order by to_number(substring)) as sorted_N
from splitstr
group by N
;
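For the three test rows above, the result should look something like this (the row order may differ, since there is no outer ORDER BY):
N                    SORTED_N
-------------------- --------------------
21561,68547,51612    21561,51612,68547
25464,89453,15686    15686,25464,89453
50,100,25,5000,1000  25,50,100,1000,5000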
Probably it can be improved, but eh...
Based on the sample data you posted, a relatively simple query would work (you need lines 3 - 7). If your data doesn't really look like that, the query might need adjustment (a sketch of one such adjustment follows the example).
SQL> with my_table (csv_val) as
2 (select '50,100,25,5000,1000' from dual)
3 select listagg(token, ',') within group (order by to_number(token)) result
4 from (select regexp_substr(csv_val, '[^,]+', 1, level) token
5 from my_table
6 connect by level <= regexp_count(csv_val, ',') + 1
7 );
RESULT
-------------------------
25,50,100,1000,5000
SQL>
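As a hedged sketch of that adjustment, assuming csv_val spans several rows: the extra PRIOR conditions (the same trick used in later answers on this page) stop CONNECT BY from multiplying rows across different csv_val values.
with my_table (csv_val) as
  (select '50,100,25,5000,1000' from dual union all
   select '21561,68547,51612'   from dual)
select csv_val,
       listagg(token, ',') within group (order by to_number(token)) result
from (select csv_val,
             regexp_substr(csv_val, '[^,]+', 1, level) token
      from my_table
      connect by level <= regexp_count(csv_val, ',') + 1
             and prior csv_val = csv_val
             and prior sys_guid() is not null)
group by csv_val;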
Related
I have the one-column "table" below (apologies for the data model, not my fault :( ):
COL_IN
------
2K, E
E, 2K
O
I would like to obtain the below output, ordered by count descending:
COL_OUT COUNT
------- -----
K 4
E 2
O 1
COUNT is a reserved keyword, so it's not a good column name - even in the final output. I use COUNT_ instead (with an underscore).
Other than that, you can modify the input strings so they become valid JSON arrays, so that you can then use JSON functions to split them. After you split the strings into tokens, it's a simple matter to separate the leading number (if present) from the rest of the string, and to aggregate. NVL in the sum adds 1 for each token without a leading integer.
Including the sample data for testing only (if you have an actual table, remove the WITH clause at the top):
with
tbl (col_in) as (
select '2K, E' from dual union all
select 'E, 2K' from dual union all
select 'O' from dual
)
select ltrim(col, '0123456789') as col_out
, sum(nvl(to_number(regexp_substr(col, '^\d*')), 1)) as count_
from tbl,
json_table('["' || regexp_replace(col_in, ', *', '","') || '"]', '$[*]'
columns col path '$')
group by ltrim(col, '0123456789')
order by count_ desc, col_out
;
COL_OUT COUNT_
------- ------
K 4
E 2
O 1
You can use a hierarchical query in the following way:
WITH t2 AS
(
SELECT TRIM(REGEXP_SUBSTR(col_in,'[^,]+',1,level)) AS s
FROM t
CONNECT BY level <= REGEXP_COUNT(col_in,',')+1
AND PRIOR SYS_GUID() IS NOT NULL
AND PRIOR col_in = col_in
)
SELECT REGEXP_SUBSTR(s,'[^0-9]') AS col_out,
SUM(NVL(REGEXP_SUBSTR(s,'[^[:alpha:]]'),1)) AS count
FROM t2
GROUP BY REGEXP_SUBSTR(s,'[^0-9]'),REGEXP_SUBSTR(s,'[^[:alpha:]]')
ORDER BY count DESC
presuming all of the data is alphanumeric only (e.g. not containing special characters such as $, #, !, etc.)
I just had this idea: how can I loop in SQL?
For example
I have this column
PARAMETER_VALUE
E,C;S,C;I,X;G,T;S,J;S,F;C,S;
I want to store every value before the (,) in a temp column and every value between the (,) and the (;) in another column,
and continue until there are no more values after a (;).
Expected Output for Example
COL1 E S I G S S C
COL2 C C X T J F S
etc . . .
You can get this by using the regexp_substr() function together with a connect by level <= clause:
with t1(PARAMETER_VALUE) as
(
select 'E,C;S,C;I,X;G,T;S,J;S,F;C,S;' from dual
), t2 as
(
select level as rn,
regexp_substr(PARAMETER_VALUE,'([^,]+)',1,level) as str1,
regexp_substr(PARAMETER_VALUE,'([^;]+)',1,level) as str2
from t1
connect by level <= regexp_count(PARAMETER_VALUE,';')
)
select listagg( regexp_substr(str1,'([^;]+$)') ,' ') within group (order by rn) as col1,
listagg( regexp_substr(str2,'([^,]+$)') ,' ') within group (order by rn) as col2
from t2;
COL1 COL2
------------- -------------
E S I G S S C C C X T J F S
Demo
Assuming that you need to separate the input into rows, at the ; delimiters, and then into columns at the , delimiter, you could do something like this:
-- WITH clause included to simulate input data. Not part of the solution;
-- use actual table and column names in the SELECT statement below.
with
t1(id, parameter_value) as (
select 1, 'E,C;S,C;I,X;G,T;S,J;S,F;C,S;' from dual union all
select 2, ',U;,;V,V;' from dual union all
select 3, null from dual
)
-- End of simulated input data
select id,
level as ord,
regexp_substr(parameter_value, '(;|^)([^,]*),', 1, level, null, 2) as col1,
regexp_substr(parameter_value, ',([^;]*);' , 1, level, null, 1) as col2
from t1
connect by level <= regexp_count(parameter_value, ';')
and id = prior id
and prior sys_guid() is not null
order by id, ord
;
ID ORD COL1 COL2
--- --- ---- ----
1 1 E C
1 2 S C
1 3 I X
1 4 G T
1 5 S J
1 6 S F
1 7 C S
2 1 U
2 2
2 3 V V
3 1
Note - this is not the most efficient way to split the inputs (nothing will be very efficient - the data model, which is in violation of First Normal Form, is the reason). This can be improved using standard instr and substr, but the query will be more complicated, and for that reason, harder to maintain.
I generated more input data, to illustrate a few things. You may have several inputs that must be broken up at the same time; that must be done with care. (Note the additional conditions in CONNECT BY). I also illustrate the handling of NULL - if a comma comes right after a semicolon, that means that the "column 1" part of that pair must be NULL. That is shown in the output.
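As for the instr/substr remark above, here is a minimal sketch of that variant, assuming a single input row (the t1 test data reduced to id 1); each pair is bracketed by the positions of the surrounding ';' delimiters and then split at its ','.
with t1(id, parameter_value) as (
  select 1, 'E,C;S,C;I,X;G,T;S,J;S,F;C,S;' from dual
),
pairs as (
  select id,
         level as ord,
         -- the text between the (level-1)-th and the level-th ';'
         substr(parameter_value,
                decode(level, 1, 1, instr(parameter_value, ';', 1, level - 1) + 1),
                instr(parameter_value, ';', 1, level)
                  - decode(level, 1, 1, instr(parameter_value, ';', 1, level - 1) + 1)) as pair
  from t1
  -- number of ';' delimiters, counted without regular expressions
  connect by level <= length(parameter_value) - nvl(length(replace(parameter_value, ';')), 0)
)
select id,
       ord,
       substr(pair, 1, instr(pair, ',') - 1) as col1,  -- part before the comma
       substr(pair, instr(pair, ',') + 1)    as col2   -- part after the comma
from pairs
order by id, ord;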
I need to generate some sample data from a population. I want to do this with an SQL query on an Oracle 11g database.
Here is a simple working example with population size 4 and sample size 2:
with population as (
select 1 as val from dual union all
select 2 from dual union all
select 3 from dual union all
select 4 from dual)
select val from (
select val, dbms_random.value(0,10) AS RANDORDER
from population
order by randorder)
where rownum <= 2
(The Oracle SAMPLE() function didn't work in connection with the WITH clause for me.)
But now I want to "upscale" or multiply my sample data, so that I can get something like 150% sample data of the population data (population size 4 and sample size 6, e.g.).
Is there a good way to achieve this with an SQL query?
You could use CONNECT BY:
with population(val, RANDORDER) as (
select level, dbms_random.value(0,10) AS RANDORDER
from dual
connect by level <= 6
ORDER BY RANDORDER
)
select val
FROM population
WHERE rownum <= 4;
db<>fiddle demo
The solution depends: if you want all rows from the first initial set(s) and random additional rows from the last one, then use:
with params(size_, sample_) as (select 4, 6 from dual)
select val
from (
select mod(level - 1, size_) + 1 val, sample_,
case when level <= size_ * floor(sample_ / size_) then 0
else dbms_random.value()
end rand
from params
connect by level <= size_ * ceil(sample_ / size_)
order by rand)
where rownum <= sample_
But if you allow the possibility of a result like (1, 1, 2, 2, 3, 3), where some values may not appear at all in the output (here 4), then use this:
with params(size_, sample_) as (select 4, 6 from dual)
select val
from (
select mod(level - 1, size_) + 1 val, sample_, dbms_random.value() rand
from params
connect by level <= size_ * ceil(sample_ / size_)
order by rand)
where rownum <= sample_
How does it work? We build the set (1, 2, 3, 4) as many times as needed, based on the division sample / size. Then we assign random values. In the first case I assign 0 to the first set(s), so they will be in the output for sure, and random values to the last set. In the second case, randoms are assigned to all rows.
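For illustration only (not part of the original answer), this shows the row expansion on its own, before the random ordering and the ROWNUM cut: with size_ = 4 and sample_ = 6, CONNECT BY produces two full copies of 1..4, i.e. 8 rows.
with params(size_, sample_) as (select 4, 6 from dual)
select level as lvl,
       mod(level - 1, size_) + 1 as val  -- cycles 1,2,3,4,1,2,3,4
from params
connect by level <= size_ * ceil(sample_ / size_);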
I am trying my hand at Oracle 11g. I have a requirement where I want to fetch those IDs from a list which do not exist in the table.
For example:
SELECT * FROM STOCK
where item_id in ('1','2'); -- Return those records where result is null
I mean, if item_id '1' is not present in the db, then the query should return 1.
How can I achieve this?
You need to store the values in some sort of "table". Then you can use left join or not exists or something similar:
with ids as (
select 1 as id from dual union all
select 2 from dual
)
select ids.id
from ids
where not exists (select 1 from stock s where s.item_id = ids.id);
You can use a LEFT JOIN to an in-line table that contains the values to be searched:
SELECT t1.val
FROM (
SELECT '1' val UNION ALL SELECT '2'
) t1
LEFT JOIN STOCK t2 ON t1.val = t2.item_id
WHERE t2.item_id IS NULL
First create the list of possible IDs (e.g. 0 to 99 in the query below). You can use a recursive CTE for this. Then select these IDs and remove the IDs already present in the table from the result:
with possible_ids(id) as
(
select 0 as id from dual
union all
select id + 1 as id from possible_ids where id < 99
)
select id from possible_ids
minus
select item_id from stock;
A primary concern of the OP seems to be a terse notation of the query, notably the set of values to test for. The straightforward recommendation would be to retrieve these values by another query or to generate them as a union of queries from the dual table (see the other answers for this).
The following alternative solution allows for a verbatim specification of the test values under the following conditions:
There is a character that does not occur in any of the test values provided (in the example that will be -).
The number of values to test stays well below 2000 (to be precise, the list of values plus separators must be written as a varchar2 literal, which imposes the length limit). However, this should not be an actual concern - if the test involves lists of hundreds of ids, these lists should definitely be retrieved from a table/view.
Caveat
Whether this method is worth the hassle ( not to mention potential performance impacts ) is questionable, imho.
Solution
The test values will be provided as a single varchar2 literal with - separating the values, which is about as terse as specifying them as a list argument to the IN operator. The string starts and ends with -.
'-1-2-3-156-489-4654648-'
The number of items is computed as follows:
select cond, regexp_count ( cond, '[-]' ) - 1 cnt_items
from (select '-1-2-3-156-489-4654648-' cond from dual)
A list of integers up to the number of items starting with 1 can be generated using the LEVEL pseudocolumn from hierarchical queries:
select level from dual connect by level < 42;
The n-th integer from that list will serve to extract the n-th value from the string (exemplified for the 4th value) :
select substr ( cond,
                instr(cond,'-', 1, 4 ) + 1,
                instr(cond,'-', 1, 4+1 ) - instr(cond,'-', 1, 4 ) - 1 ) si
from (
  select cond, regexp_count ( cond, '[-]' ) - 1 cnt_items
  from (select '-1-2-3-156-489-4654648-' cond from dual)
);
The non-existent stock ids are generated by subtracting the set of stock ids from the set of values. Putting it all together:
select substr ( cond, instr(cond,'-',1,level )+1, instr(cond,'-',1,level+1 ) - instr(cond,'-',1,level ) - 1 ) si
from (
select cond
, regexp_count ( cond, '[-]' ) - 1 cnt_items
from (
select '-1-2-3-156-489-4654648-' cond from dual
)
)
connect by level <= cnt_items + 1
minus
select item_id from stock
;
Using the DUAL table, how can I get a list of numbers from 1 to 100?
Your question is difficult to understand, but if you want to select the numbers from 1 to 100, then this should do the trick:
Select Rownum r
From dual
Connect By Rownum <= 100
Another interesting solution, in Oracle SQL:
SELECT LEVEL n
FROM DUAL
CONNECT BY LEVEL <= 100;
Using Oracle's subquery factoring clause, WITH, you can select numbers from 1 to 100:
WITH t(n) AS (
SELECT 1 from dual
UNION ALL
SELECT n+1 FROM t WHERE n < 100
)
SELECT * FROM t;
Do it the hard way. Use the awesome MODEL clause:
SELECT V
FROM DUAL
MODEL DIMENSION BY (0 R)
MEASURES (0 V)
RULES ITERATE (100) (
V[ITERATION_NUMBER] = ITERATION_NUMBER + 1
)
ORDER BY 1
Proof: http://sqlfiddle.com/#!4/d41d8/20837
You could use XMLTABLE:
SELECT rownum
FROM XMLTABLE('1 to 100');
-- alternatively (useful for generating a range, e.g. 10 to 20)
SELECT (COLUMN_VALUE).GETNUMBERVAL() AS NUM
FROM XMLTABLE('1 to 100');
DBFiddle Demo
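To expand on the comment about generating a range such as 10 to 20 (my own example, not from the linked demo), the XQuery range expression can take any bounds:
SELECT (COLUMN_VALUE).GETNUMBERVAL() AS NUM
FROM XMLTABLE('10 to 20');
-- returns the 11 numbers 10, 11, ..., 20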
If you want your integers to be bound between two integers (i.e. start with something other than 1), you can use something like this:
with bnd as (select 4 lo, 9 hi from dual)
select (select lo from bnd) - 1 + level r
from dual
connect by level <= (select hi-lo from bnd);
It gives:
4
5
6
7
8
Peter's answer is my favourite, too.
If you are looking for more details, there is quite a good overview, IMO, here.
The benchmarks are especially interesting to read.
Using GROUP BY CUBE:
SELECT ROWNUM
FROM (SELECT 1 AS c FROM dual GROUP BY CUBE(1,1,1,1,1,1,1) ) sub
WHERE ROWNUM <=100;
Rextester Demo
A variant of Peter's example that demonstrates how this could be used to generate all numbers between 0 and 99.
with digits as (
select mod(rownum,10) as num
from dual
connect by rownum <= 10
)
select a.num*10+b.num as num
from digits a
,digits b
order by num
;
Something like this becomes useful when you are doing batch identifier assignment, and looking for the items that have not yet been assigned.
For example, if you are selling bingo tickets, you may want to assign batches of 100 to floor staff (guess how I used to fundraise for sports). As they sell a batch, they are given the next batch in sequence. However, people purchasing the tickets can select to purchase any tickets from the batch. The question may be asked, "what tickets have been sold?"
In this case, we only have a partial, random list of the tickets that were returned within the given batch, and require a complete list of all possibilities to determine which we don't have.
with range as (
select mod(rownum,100) as num
from dual
connect by rownum <= 100
),
AllPossible as (
-- "batches" and "TicketsReturned" stand in for real tables from the original example
select a.num*100+b.num as TicketNum
from batches a
,range b
order by TicketNum
)
select TicketNum as TicketsSold
from AllPossible
where AllPossible.TicketNum not in (select TicketNum from TicketsReturned)
;
Excuse the use of keywords; I changed some variable names from a real-world example.
... To demonstrate why something like this would be useful
I created an Oracle function that returns a table of numbers
CREATE OR REPLACE FUNCTION [schema].FN_TABLE_NUMBERS(
NUMINI INTEGER,
NUMFIN INTEGER,
EXPONENCIAL INTEGER DEFAULT 0
) RETURN TBL_NUMBERS
IS
NUMEROS TBL_NUMBERS;
INDICE NUMBER;
BEGIN
NUMEROS := TBL_NUMBERS();
FOR I IN (
WITH TABLA AS (SELECT NUMINI, NUMFIN FROM DUAL)
SELECT NUMINI NUM FROM TABLA UNION ALL
SELECT
(SELECT NUMINI FROM TABLA) + (LEVEL*TO_NUMBER('1E'||TO_CHAR(EXPONENCIAL))) NUM
FROM DUAL
CONNECT BY
(LEVEL*TO_NUMBER('1E'||TO_CHAR(EXPONENCIAL))) <= (SELECT NUMFIN-NUMINI FROM TABLA)
) LOOP
NUMEROS.EXTEND;
INDICE := NUMEROS.COUNT;
NUMEROS(INDICE):= i.NUM;
END LOOP;
RETURN NUMEROS;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RETURN NUMEROS;
WHEN OTHERS THEN
RETURN NUMEROS;
END;
/
It is necessary to create a new data type:
CREATE OR REPLACE TYPE [schema]."TBL_NUMBERS" IS TABLE OF NUMBER;
/
Usage:
SELECT COLUMN_VALUE NUM FROM TABLE([schema].FN_TABLE_NUMBERS(1,10))--integers difference: 1;2;.......;10
And if you need decimals between the numbers, using exponential notation:
SELECT COLUMN_VALUE NUM FROM TABLE([schema].FN_TABLE_NUMBERS(1,10,-1));--with 0.1 difference: 1;1.1;1.2;.......;10
SELECT COLUMN_VALUE NUM FROM TABLE([schema].FN_TABLE_NUMBERS(1,10,-2));--with 0.01 difference: 1;1.01;1.02;.......;10
If you want to generate the list of numbers 1 - 100, you can use the cartesian product of {1,2,3,4,5,6,7,8,9,10} X {0,10,20,30,40,50,60,70,80,90}
https://en.wikipedia.org/wiki/Cartesian_product
Something along the lines of the following:
SELECT
ones.num + tens.num
FROM
(
SELECT 1 num FROM dual UNION ALL
SELECT 2 num FROM dual UNION ALL
SELECT 3 num FROM dual UNION ALL
SELECT 4 num FROM dual UNION ALL
SELECT 5 num FROM dual UNION ALL
SELECT 6 num FROM dual UNION ALL
SELECT 7 num FROM dual UNION ALL
SELECT 8 num FROM dual UNION ALL
SELECT 9 num FROM dual UNION ALL
SELECT 10 num FROM dual
) ones
CROSS JOIN
(
SELECT 0 num FROM dual UNION ALL
SELECT 10 num FROM dual UNION ALL
SELECT 20 num FROM dual UNION ALL
SELECT 30 num FROM dual UNION ALL
SELECT 40 num FROM dual UNION ALL
SELECT 50 num FROM dual UNION ALL
SELECT 60 num FROM dual UNION ALL
SELECT 70 num FROM dual UNION ALL
SELECT 80 num FROM dual UNION ALL
SELECT 90 num FROM dual
) tens;
I'm not able to test this out on an Oracle database, but it should work.
SELECT * FROM `DUAL` WHERE id>0 AND id<101
The above query is written in SQL in the database.