SQL query: how to check existence of multiple rows with one query

I have this table MyTable:
PROG VALUE
-------------
1 aaaaa
1 bbbbb
2 ccccc
4 ddddd
4 eeeee
Now I'm checking for the existence of a row with a certain PROG value with a query like:
SELECT COUNT(1) AS IT_EXISTS
FROM MyTable
WHERE ROWNUM = 1 AND PROG = {aProg}
For example, with aProg = 1 I obtain:
IT_EXISTS
---------
1
With aProg = 3 I get:
IT_EXISTS
---------
0
The problem is that I must do multiple queries, one for every value of PROG to check.
What I want is something that, with a single query like
SELECT PROG, ??? AS IT_EXISTS
FROM MyTable
WHERE PROG IN {1, 2, 3, 4, 5} AND {some other condition}
I can get something like
PROG IT_EXISTS
------------------
1 1
2 1
3 0
4 1
5 0
The database is Oracle...
Hope I'm clear
regards
Paolo

Take a step back and ask yourself this: Do you really need to return the rows that don't exist to solve your problem? I suspect the answer is no. Your application logic can determine that records were not returned which will allow you to simplify your query.
SELECT PROG
FROM MyTable
WHERE PROG IN (1, 2, 3, 4, 5)
If you get a row back for a given PROG value, it exists. If not, it doesn't exist.
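Since PROG can repeat in MyTable (your sample has two rows with PROG 1), you may want to add DISTINCT so each value comes back at most once; a small variation of the query above (my suggestion):
SELECT DISTINCT PROG
FROM MyTable
WHERE PROG IN (1, 2, 3, 4, 5)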
Update:
In your comment in the question above, you stated:
the prog values come from other tables. The table in the question has only a subset of all the prog values
This suggests to me that a simple left outer join could do the trick. Assuming your other table with the PROG values you're interested in is called MyOtherTable, something like this should work:
SELECT a.PROG,
       CASE WHEN b.PROG IS NOT NULL THEN 1 ELSE 0 END AS IT_EXISTS
FROM MyOtherTable a
LEFT OUTER JOIN MyTable b ON b.PROG = a.PROG
A WHERE clause could be tacked on to the end if you need to do some further filtering.
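For example, a sketch assuming a hypothetical STATUS column on MyOtherTable (the column name and value are made up for illustration):
SELECT a.PROG,
       CASE WHEN b.PROG IS NOT NULL THEN 1 ELSE 0 END AS IT_EXISTS
FROM MyOtherTable a
LEFT OUTER JOIN MyTable b ON b.PROG = a.PROG
WHERE a.STATUS = 'ACTIVE'  -- hypothetical filter; replace with your own condition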

I would recommend something like this. If at most one row can match a prog in your table:
select p.prog,
(case when t.prog is null then 0 else 1 end) as it_exists
from (select 1 as prog from dual union all
select 2 as prog from dual union all
select 3 as prog from dual union all
select 4 as prog from dual union all
select 5 as prog from dual
) p left join
mytable t
on p.prog = t.prog and <some conditions>;
If more than one row could match, you'll want to use aggregation to avoid duplicates:
select p.prog,
max(case when t.prog is null then 0 else 1 end) as it_exists
from (select 1 as prog from dual union all
select 2 as prog from dual union all
select 3 as prog from dual union all
select 4 as prog from dual union all
select 5 as prog from dual
) p left join
mytable t
on p.prog = t.prog and <some conditions>
group by p.prog
order by p.prog;

One solution is to use (arguably abuse) a hierarchical query to create an arbitrarily long list of numbers (in my example, I've set the largest number to max(PROG), but you could hardcode this if you knew the top range you were looking for). Then select from that list and use EXISTS to check if it exists in MYTABLE.
select
    PROG
    , case when exists (select 1 from MYTABLE where PROG = A.PROG) then 1 else 0 end IT_EXISTS
from (
    select level PROG
    from dual
    connect by level <= (select max(PROG) from MYTABLE) -- or hardcode, if you have a max range in mind
) A
;

It's still not very clear where you get the prog values to check. But if you can read them from a table, and assuming that the table doesn't contain duplicate prog values, this is the query I would use:
select a.prog, case when b.prog is null then 0 else 1 end as it_exists
from prog_values_to_check a
left join prog_values_to_check b
    on a.prog = b.prog
   and exists (select null
                 from MyTable t
                where t.prog = b.prog)
If you do need to hard code the values, you can do it rather simply by taking advantage of the SYS.DBMS_DEBUG_VC2COLL collection type, whose constructor lets you convert a comma-delimited list of values into rows.
with prog_values_to_check(prog) as (
    select to_number(column_value) as prog
    from table(SYS.DBMS_DEBUG_VC2COLL(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)) -- type your values here
)
select a.prog, case when b.prog is null then 0 else 1 end as it_exists
from prog_values_to_check a
left join prog_values_to_check b
    on a.prog = b.prog
   and exists (select null
                 from MyTable t
                where t.prog = b.prog)
Note: The above queries take into account that the MyTable table may have multiple rows with the same prog value, but that you only want one row in the result. I make this assumption based on the WHERE ROWNUM = 1 condition in your question.


How do I select rows from table that have one or more than one specific value in a column?

I have a table containing data such as:
BP_NUMBER,CONTRACT_TYPE
0000123, 1
0000123, 2
0000123, 3
0000123, 4
0000124, 4
0000124, 4
0000124, 4
0000125, 4
0000126, 1
0000126, 5
I want to select the clients that have one or more contracts of type 4 and no contracts of any other type. In other words, I want to know which clients have one or more contracts, all of the same type, where that type is 4.
I tried this query:
SELECT * FROM (
SELECT BP_NUMBER, CONTRACT_TYPE, COUNT(*) OVER (PARTITION BY BP_NUMBER) CT FROM CONTRACTS
WHERE (1=1)
AND DATE = '18/10/2022'
AND CONTRACT_TYPE = 4)
WHERE CT= 1;
But it returns rows with only one occurrence of CONTRACT_TYPE = 4.
Also tried something like:
SELECT BP_NUMBER FROM CONTRACTS
WHERE (1=1)
AND CONTRACT_TYPE = 4
AND CONTRACT_TYPE NOT IN (SELECT CONTRACT_TYPE FROM CONTRACTS WHERE CONTRACT_TYPE != 4 GROUP BY CONTRACT_TYPE);
Trying to avoid any other contract types than 4. I really don't understand why it doesn't work.
The expected result would be:
0000124 --(3 occurrences of type 4)
0000125 --(1 occurrence of type 4)
Any help? Thanks
You can try something like this:
SELECT
BP_NUMBER
FROM CONTRACTS c1
WHERE CONTRACT_TYPE = 4
AND NOT EXISTS
(SELECT 1 FROM CONTRACTS c2 WHERE c2.BP_NUMBER = c1.BP_NUMBER
AND c2.CONTRACT_TYPE <> c1.CONTRACT_TYPE)
Depending on how you actually want to see it (and what other values you might want to include), you could either do a DISTINCT on the BP_NUMBER, or group on that column (and potentially others).
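A minimal sketch of the DISTINCT variant, using only the CONTRACTS table from the question:
SELECT DISTINCT c1.BP_NUMBER
FROM CONTRACTS c1
WHERE c1.CONTRACT_TYPE = 4
AND NOT EXISTS
    (SELECT 1 FROM CONTRACTS c2 WHERE c2.BP_NUMBER = c1.BP_NUMBER
     AND c2.CONTRACT_TYPE <> c1.CONTRACT_TYPE)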
A similar result could also be achieved using an outer join between two instances of the CONTRACTS table. Essentially, you need the second instance of the same table so that you can exclude output rows when there are records with the "unwanted" contract types.
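A sketch of that outer-join variant (again assuming only the CONTRACTS table): the second instance of the table looks for rows with a different contract type, and the WHERE keeps only the clients for which none are found:
SELECT DISTINCT c1.BP_NUMBER
FROM CONTRACTS c1
LEFT JOIN CONTRACTS c2
       ON c2.BP_NUMBER = c1.BP_NUMBER
      AND c2.CONTRACT_TYPE <> c1.CONTRACT_TYPE
WHERE c1.CONTRACT_TYPE = 4
  AND c2.BP_NUMBER IS NULL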
You can just do the aggregation like here:
WITH
tbl AS
(
Select '0000123' "BP_NUMBER", '1' "CONTRACT_TYPE" From Dual Union All
Select '0000123', '2' From Dual Union All
Select '0000123', '3' From Dual Union All
Select '0000123', '4' From Dual Union All
Select '0000124', '4' From Dual Union All
Select '0000124', '4' From Dual Union All
Select '0000124', '4' From Dual Union All
Select '0000125', '4' From Dual Union All
Select '0000126', '1' From Dual Union All
Select '0000126', '5' From Dual
)
Select
BP_NUMBER "BP_NUMBER",
Count(*) "OCCURRENCES"
From
tbl
WHERE CONTRACT_TYPE = '4'
GROUP BY BP_NUMBER
ORDER BY BP_NUMBER
--
-- R e s u l t :
--
-- BP_NUMBER OCCURRENCES
-- --------- -----------
-- 0000123 1
-- 0000124 3
-- 0000125 1

How to union a hardcoded row after each grouped result

After every group / row I want to insert a hardcoded dummy row with a bunch of 'xxxx' to act as a separator.
I would like to use Oracle SQL to do this query. I can execute it using a loop but I don't want to use PL/SQL.
As the others suggest, it is best to do this on the front end.
However, if you have a burning need for it to be done as a query, here is how.
Here I did not use the ROWNUM function, as you have already done that. I assume your data is returned by a query, and you can replace my table with your query.
I made a few more assumptions, since you have data with row numbers in it.
[I am not sure what you mean by "not PL/SQL".]
Select Case When MOD(rownm, 2) = 0 then ' '
Else to_char((rownm + 1) / 2) End as rownm,
name, total, column1
From
(
select (rownm * 2 - 1) rownm,name, to_char(total) total ,column1 from t
union
SELECT (rownm * 2) rownm,'XXX' name, 'XXX' total, 'The row act .... ' column1 FROM t
) Q
Order by Q.rownm;
Since you're already grouping the data, it might be easier to use GROUPING SETS instead of a UNION.
Grouping sets let you group by multiple sets of columns, including the same set twice to duplicate rows. Then the GROUP_ID function can be used to determine when the fake values should be used. This code will be a bit smaller than a UNION approach, and should be faster since it doesn't need to reference the table multiple times.
select
case when group_id() = 0 then name else '' end name,
case when group_id() = 0 then sum(some_value) else null end total,
case when group_id() = 1 then 'this rows...' else '' end column1
from
(
select 'jack' name, 22 some_value from dual union all
select 'jack' name, 1 some_value from dual union all
select 'john' name, 44 some_value from dual union all
select 'john' name, 1 some_value from dual union all
select 'harry' name, 1 some_value from dual union all
select 'harry' name, 1 some_value from dual
) raw_data
group by grouping sets (name, name)
order by raw_data.name, group_id();
You can use the row generator technique (using CONNECT BY) and then use CASE..WHEN as follows:
SELECT CASE WHEN L.LVL = 1 THEN T.ROWNM END AS ROWNM,
       CASE WHEN L.LVL = 1 THEN T.NAME
            ELSE 'XXX' END AS NAME,
       CASE WHEN L.LVL = 1 THEN TO_CHAR(T.TOTAL)
            ELSE 'XXX' END AS TOTAL,
       CASE WHEN L.LVL = 1 THEN T.COLUMN1
            ELSE 'This row act as separator..' END AS COLUMN1
FROM T CROSS JOIN (
    SELECT LEVEL AS LVL FROM DUAL CONNECT BY LEVEL <= 2
) L
ORDER BY T.ROWNM, L.LVL;
     ROWNM NAME       TOTAL COLUMN1
---------- ---------- ----- ---------------------------
         1 Jack       23
           XXX        XXX   This row act as separator..
         2 John       45
           XXX        XXX   This row act as separator..
         3 harry      2
           XXX        XXX   This row act as separator..
         4 roy        45
           XXX        XXX   This row act as separator..
         5 Jacob      26
           XXX        XXX   This row act as separator..
10 rows selected.

Is there a concept which is the 'opposite' of SQL NULL?

Is there a concept (with an implementation, in Oracle SQL for starters) which behaves like a 'universal' matcher?
What I mean is: I know NULL is not equal to anything, including NULL.
Which is why you have to be careful to use IS NULL rather than = NULL in SQL expressions.
I also know it is useful to use the NVL function (in Oracle) to detect a NULL and replace it with something in the output.
However: what you replace the NULL with using NVL has to match the datatype of the underlying column; otherwise you'll (rightly) get an error.
An example:
I have a table with a NULLABLE column 'name' of type VARCHAR2; and this contains a NULL row.
I can fetch out the NULL and replace it with an NVL like this:
SELECT NVL(name, 'NullyMcNullFace') from my_table;
Great.
But if the column happens to be a NUMBER (say 'age'), then I have to change my NVL:
SELECT NVL(age, 32) from my_table;
Also great.
Now if the column happens to be a DATE (say 'somedate'), then I have to change my NVL again:
SELECT NVL(somedate, sysdate) from my_table;
What I'm getting at here is that in order to deal with NULLs you have to replace them with a specific something, and that specific something has to 'fit' the data type.
So is there a construct/concept (for want of a better word) like 'ANY' here?
Where 'ANY' would fit into a column of any datatype (like NULL), but (unlike NULL and unlike all other specific values) would match ANYTHING (including NULL - ? probably urghhh dunno).
So that I could do:
SELECT NVL(whatever_column, ANY) from my_table;
I think the answer is probably no; and probably 'go away, NULLs are bad enough - never mind this monster you have half-thought of'.
No, there's no "universal acceptor" value in SQL that is equal to everything.
What you can do is raise the NVL into your comparison. Like if you're trying to do a JOIN:
SELECT ...
FROM my_table m
JOIN other_table o ON o.name = NVL(m.name, o.name)
So if m.name is NULL, then the join will compare o.name to o.name, which is true for every row where o.name itself is not NULL.
For other uses of NULL, you might have to use another technique that suits the situation.
Addressing the question in the comment on Bill Karwin's answer:
I want to output a 1 if the NEW and OLD value differ and a 0 if they are the same. But (for my purposes) I want to also return 0 for two NULLS.
select
Case When (:New = :Old) or
(:New is NULL and :Old is NULL) then 0
Else
1
End
from dual
In a WHERE clause, you can put a condition like this:
WHERE column1 LIKE NVL(any_column_or_param, '%')
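For instance, a sketch assuming the my_table/name column from the question and a hypothetical bind variable :name_filter; when the parameter is NULL every non-NULL name matches, otherwise normal LIKE filtering applies (rows where name itself is NULL are still not returned, since NULL LIKE '%' is not true):
SELECT *
FROM my_table
WHERE name LIKE NVL(:name_filter, '%');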
Perhaps DECODE() would suit your purpose here?
WITH t1 AS (SELECT 1 ID, NULL val FROM dual UNION ALL
SELECT 2 ID, NULL val FROM dual UNION ALL
SELECT 3 ID, 1 val FROM dual UNION ALL
SELECT 4 ID, 2 val FROM dual UNION ALL
SELECT 5 ID, 5 val FROM dual),
t2 AS (SELECT 1 ID, NULL val FROM dual UNION ALL
SELECT 2 ID, 3 val FROM dual UNION ALL
SELECT 3 ID, 1 val FROM dual UNION ALL
SELECT 4 ID, 4 val FROM dual UNION ALL
SELECT 6 ID, 5 val FROM dual)
SELECT t1.id t1_id,
t1.val t1_val,
t2.id t2_id,
t2.val t2_val,
DECODE(t1.val, t2.val, 0, 1) different_vals
FROM t1
FULL OUTER JOIN t2 ON t1.id = t2.id
ORDER BY COALESCE(t1.id, t2.id);
     T1_ID     T1_VAL      T2_ID     T2_VAL DIFFERENT_VALS
---------- ---------- ---------- ---------- --------------
         1                     1                         0
         2                     2          3              1
         3          1          3          1              0
         4          2          4          4              1
         5          5                                    1
                               6          5              1

T-SQL "Dynamic" Join

Given the following SQL Server table with a single char(1) column:
Value
------
'1'
'2'
'3'
How do I obtain the following results in T-SQL?
Result
------
'1+2+3'
'1+3+2'
'2+1+3'
'2+3+1'
'3+2+1'
'3+1+2'
This needs to be dynamic too, so if my table only holds rows '1' and '2' I'd expect:
Result
------
'1+2'
'2+1'
It seems like I should be able to use CROSS JOIN to do this, but since I don't know how many rows there will be ahead of time, I'm not sure how many times to CROSS JOIN back on myself..?
SELECT a.Value + '+' + b.Value
FROM MyTable a
CROSS JOIN MyTable b
WHERE a.Value <> b.Value
There will always be less than 10 (and really more like 1-3) rows at any given time. Can I do this on-the-fly in SQL Server?
Edit: ideally, I'd like this to happen in a single stored proc, but if I have to use another proc or some user defined functions to pull this off I'm fine with that.
This SQL will compute the permutations without repetitions:
WITH recurse(Result, Depth) AS
(
SELECT CAST(Value AS VarChar(100)), 1
FROM MyTable
UNION ALL
SELECT CAST(r.Result + '+' + a.Value AS VarChar(100)), r.Depth + 1
FROM MyTable a
INNER JOIN recurse r
ON CHARINDEX(a.Value, r.Result) = 0
)
SELECT Result
FROM recurse
WHERE Depth = (SELECT COUNT(*) FROM MyTable)
ORDER BY Result
If MyTable contains 9 rows, it will take some time to compute, but it will return 9! = 362,880 rows.
Update with explanation:
The WITH statement is used to define a recursive common table expression. In effect, the WITH statement is looping multiple times performing a UNION until the recursion is finished.
The first part of SQL sets the starting records. Assuming 3 rows named 'A', 'B', and 'C' in MyTable, this will generate these rows:
Result Depth
------ -----
A 1
B 1
C 1
Then the next block of SQL performs the first level of recursion:
SELECT CAST(r.Result + '+' + a.Value AS VarChar(100)), r.Depth + 1
FROM MyTable a
INNER JOIN recurse r
ON CHARINDEX(a.Value, r.Result) = 0
This takes all of the records generated so far (which will be in the recurse table) and joins them to all of the records in MyTable again. The ON clause filters the list of records in MyTable to only return the ones that do not exist already in this row's permutation. This would result in these rows:
Result Depth
------ -----
A 1
B 1
C 1
A+B 2
A+C 2
B+A 2
B+C 2
C+A 2
C+B 2
Then the recursion loops again giving these rows:
Result Depth
------ -----
A 1
B 1
C 1
A+B 2
A+C 2
B+A 2
B+C 2
C+A 2
C+B 2
A+B+C 3
A+C+B 3
B+A+C 3
B+C+A 3
C+A+B 3
C+B+A 3
At this point, the recursion stops because the UNION does not create any more rows: every value already appears in each permutation, so the CHARINDEX is never 0 and the join condition is never satisfied.
The last SQL filters all of the resulting rows where the computed Depth column matches the # of records in MyTable. This throws out all of the rows except for the ones generated by the last depth of recursion. So the final result will be these rows:
Result
------
A+B+C
A+C+B
B+A+C
B+C+A
C+A+B
C+B+A
You can do this with a recursive CTE:
with t as (
select 'a' as value union all
select 'b' union all
select 'c'
),
const as (select count(*) as cnt from t),
cte as (
select cast(value as varchar(max)) as value, 1 as level
from t
union all
select cte.value + '+' + t.value, 1 + level
from cte join
t
on '+'+cte.value+'+' not like '%+'+t.value+'+%' cross join
const
where level <= const.cnt
)
select cte.value
from cte cross join
const
where level = const.cnt;

SELECT DISTINCT for data groups

I have following table:
ID Data
1 A
2 A
2 B
3 A
3 B
4 C
5 D
6 A
6 B
etc. In other words, I have groups of data per ID. You will notice that the data group (A, B) occurs multiple times. I want a query that can identify the distinct data groups and number them, such as:
DataID Data
101 A
102 A
102 B
103 C
104 D
So DataID 102 would represent data (A,B), DataID 103 would represent data (C), etc., so that I can rewrite my original table in this form:
ID DataID
1 101
2 102
3 102
4 103
5 104
6 102
How can I do that?
PS. Code to generate the first table:
CREATE TABLE #t1 (id INT, data VARCHAR(10))
INSERT INTO #t1
SELECT 1, 'A'
UNION ALL SELECT 2, 'A'
UNION ALL SELECT 2, 'B'
UNION ALL SELECT 3, 'A'
UNION ALL SELECT 3, 'B'
UNION ALL SELECT 4, 'C'
UNION ALL SELECT 5, 'D'
UNION ALL SELECT 6, 'A'
UNION ALL SELECT 6, 'B'
In my opinion you have to create a custom aggregate that concatenates data (for strings, a CLR approach is recommended for performance reasons).
Then I would group by ID and select distinct from the grouping, adding a row_number() or a dense_rank() function, your choice. Anyway, it should look something like this (concat below stands for the custom aggregate):
with groupings as (
    select concat(data) as groups   -- concat = your custom string-concatenation aggregate
    from Table1
    group by ID
)
select distinct groups,
       dense_rank() over (order by groups) as DataID
from groupings
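If you are on SQL Server 2017 or newer, the built-in STRING_AGG function can stand in for the custom aggregate; a sketch of the same idea (my substitution, run against the #t1 sample table from the question, with +100 only to mimic the 101-style numbering in the question):
WITH groupings AS (
    SELECT id,
           STRING_AGG(data, ',') WITHIN GROUP (ORDER BY data) AS groups
    FROM #t1
    GROUP BY id
)
SELECT id,
       DENSE_RANK() OVER (ORDER BY groups) + 100 AS DataID
FROM groupings;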
The following query using CASE will give you the result shown below.
From there on, getting the distinct data groups and proceeding further should not really be a problem.
SELECT
id,
MAX(CASE data WHEN 'A' THEN data ELSE '' END) +
MAX(CASE data WHEN 'B' THEN data ELSE '' END) +
MAX(CASE data WHEN 'C' THEN data ELSE '' END) +
MAX(CASE data WHEN 'D' THEN data ELSE '' END) AS DataGroups
FROM t1
GROUP BY id
ID DataGroups
1 A
2 AB
3 AB
4 C
5 D
6 AB
However, this kind of logic will only work if the "Data" values are both fixed and known beforehand.
In your case, you do say that is the case. However, considering that you also say there are 1000 of them, this will frankly be a ridiculous-looking query for sure :-)
LuckyLuke's suggestion above would, frankly, be the more generic and probably saner way to go about implementing the solution in your case.
From your sample data (having added the missing 2,'A' tuple), the following gives the renumbered (and uniquified) data:
with NonDups as (
select t1.id
from #t1 t1 left join #t1 t2
on t1.id > t2.id and t1.data = t2.data
group by t1.id
having COUNT(t1.data) > COUNT(t2.data)
), DataAddedBack as (
select ID,data
from #t1 where id in (select id from NonDups)
), Renumbered as (
select DENSE_RANK() OVER (ORDER BY id) as ID,Data from DataAddedBack
)
select * from Renumbered
Giving:
1 A
2 A
2 B
3 C
4 D
I think then, it's a matter of relational division to match up rows from this output with the rows in the original table.
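A sketch of that relational-division step (my construction, not part of the original answer), assuming the Renumbered output above has been materialized first, e.g. with SELECT * INTO #groups FROM Renumbered; so #groups has columns (ID, Data), one row per Data value of each distinct group. Each original id is then matched to the group that has exactly the same set of Data values (same count, and no value of the id missing from the group):
SELECT t.id, g.ID AS DataID
FROM (SELECT id, COUNT(*) AS cnt FROM #t1 GROUP BY id) t
JOIN (SELECT ID, COUNT(*) AS cnt FROM #groups GROUP BY ID) g
  ON g.cnt = t.cnt                 -- the group has the same number of Data values...
WHERE NOT EXISTS (                 -- ...and none of this id's Data values is missing from it
    SELECT 1
    FROM #t1 x
    WHERE x.id = t.id
      AND NOT EXISTS (SELECT 1 FROM #groups y
                      WHERE y.ID = g.ID AND y.Data = x.data)
)
ORDER BY t.id;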
Just to share my own dirty solution that I'm using for the moment:
SELECT DISTINCT t1.id, D.data
FROM #t1 t1
CROSS APPLY (
    SELECT CAST(Data AS VARCHAR) + ','
    FROM #t1 t2
    WHERE t2.id = t1.id
    ORDER BY Data ASC
    FOR XML PATH('')
) D (Data)
And then proceeding analogously to LuckyLuke's solution.