SQL statement that allows me to output multiple columns as one row

I have a table in my database that looks like this:
ID | VARIANT | SIFRANT | VALUE
When I call
SELECT * FROM example_table
I get each row on its own. But records in my database can have the same VARIANT, and I'd like to output those records in the same row.
For example, if I have
ID | VARIANT | SIFRANT | VALUE
1 | 3 | 5 | 50
2 | 3 | 6 | 49
3 | 3 | 1 | 68
I'd like the output to be
VARIANT | VALUES_5 | VALUES_6 | VALUES_1
3 | 50 | 49 | 68
EDIT: I found the solution using PIVOT, the code goes like this:
select *
from (
  select variant, VALUE, SIFRANT
  from example_table
)
pivot
(
  max(VALUE)
  for SIFRANT
  in ('1','2','3','4','5','6','7','8','9','10')
)
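Note that with quoted literals in the IN list, Oracle derives the output column names from the literals themselves (they end up as quoted identifiers such as '5'). A variant of the same PIVOT that aliases each value, so the columns come out as VALUES_1 ... VALUES_10 (a sketch, assuming SIFRANT is numeric):
select *
from (
  select variant, value, sifrant
  from example_table
)
pivot
(
  max(value)
  for sifrant
  in (1 as values_1, 2 as values_2, 3 as values_3, 4 as values_4, 5 as values_5,
      6 as values_6, 7 as values_7, 8 as values_8, 9 as values_9, 10 as values_10)
)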

It seems that you only need an aggregation on your data:
with test(ID, VARIANT, SIFRANT, VALUE) as
(
  select 1, 3, 5, 50 from dual union all
  select 2, 3, 6, 49 from dual union all
  select 3, 3, 1, 68 from dual
)
select variant, listagg(value, ' ') within group (order by id)
from test
group by variant
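If you need the separate VALUES_5 / VALUES_6 / VALUES_1 columns from the question rather than one concatenated string, conditional aggregation is an alternative (a sketch against example_table, with the SIFRANT values of interest hard-coded and assumed numeric):
select variant,
       max(case when sifrant = 5 then value end) as values_5,
       max(case when sifrant = 6 then value end) as values_6,
       max(case when sifrant = 1 then value end) as values_1
from example_table
group by variant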

Possible to use a column name in a UDF in SQL?

I have a query in which a series of steps is repeated constantly over different columns, for example:
SELECT DISTINCT
MAX (
CASE
WHEN table_2."GRP1_MINIMUM_DATE" <= cohort."ANCHOR_DATE" THEN 1
ELSE 0
END)
OVER (PARTITION BY cohort."USER_ID")
AS "GRP1_MINIMUM_DATE",
MAX (
CASE
WHEN table_2."GRP2_MINIMUM_DATE" <= cohort."ANCHOR_DATE" THEN 1
ELSE 0
END)
OVER (PARTITION BY cohort."USER_ID")
AS "GRP2_MINIMUM_DATE"
FROM INPUT_COHORT cohort
LEFT JOIN INVOLVE_EVER table_2 ON cohort."USER_ID" = table_2."USER_ID"
I was considering writing a function to accomplish this, as doing so would save space in my query. I have been reading a bit about UDFs in SQL but don't yet understand if it is possible to pass a column name in as a parameter (i.e. simply switch out "GRP1_MINIMUM_DATE" for "GRP2_MINIMUM_DATE" etc.). What I would like is a query that looks like this:
SELECT DISTINCT
FUNCTION(table_2."GRP1_MINIMUM_DATE") AS "GRP1_MINIMUM_DATE",
FUNCTION(table_2."GRP2_MINIMUM_DATE") AS "GRP2_MINIMUM_DATE",
FUNCTION(table_2."GRP3_MINIMUM_DATE") AS "GRP3_MINIMUM_DATE",
FUNCTION(table_2."GRP4_MINIMUM_DATE") AS "GRP4_MINIMUM_DATE"
FROM INPUT_COHORT cohort
LEFT JOIN INVOLVE_EVER table_2 ON cohort."USER_ID" = table_2."USER_ID"
Can anyone tell me if this is possible/point me to some resource that might help me out here?
Thanks!
There is no direct way to do this, as #Tejash already stated, but it looks like your database model is not ideal - it would be better to have a table with USER_ID and GRP_ID as keys and MINIMUM_DATE as a separate field.
Without changing the table structure, you can use an UNPIVOT query to mimic this design:
WITH INVOLVE_EVER(USER_ID, GRP1_MINIMUM_DATE, GRP2_MINIMUM_DATE, GRP3_MINIMUM_DATE, GRP4_MINIMUM_DATE)
AS (SELECT 1, SYSDATE, SYSDATE, SYSDATE, SYSDATE FROM dual UNION ALL
SELECT 2, SYSDATE-1, SYSDATE-2, SYSDATE-3, SYSDATE-4 FROM dual)
SELECT *
FROM INVOLVE_EVER
unpivot ( minimum_date FOR grp_id IN ( GRP1_MINIMUM_DATE AS 1, GRP2_MINIMUM_DATE AS 2, GRP3_MINIMUM_DATE AS 3, GRP4_MINIMUM_DATE AS 4))
Result:
| USER_ID | GRP_ID | MINIMUM_DATE |
|---------|--------|--------------|
| 1 | 1 | 09/09/19 |
| 1 | 2 | 09/09/19 |
| 1 | 3 | 09/09/19 |
| 1 | 4 | 09/09/19 |
| 2 | 1 | 09/08/19 |
| 2 | 2 | 09/07/19 |
| 2 | 3 | 09/06/19 |
| 2 | 4 | 09/05/19 |
With this you can write your query without further code duplication and, if you need to, use PIVOT syntax to get one line per USER_ID.
The final query could then look like this:
WITH INVOLVE_EVER(USER_ID, GRP1_MINIMUM_DATE, GRP2_MINIMUM_DATE, GRP3_MINIMUM_DATE, GRP4_MINIMUM_DATE)
AS (SELECT 1, SYSDATE, SYSDATE, SYSDATE, SYSDATE FROM dual UNION ALL
SELECT 2, SYSDATE-1, SYSDATE-2, SYSDATE-3, SYSDATE-4 FROM dual)
, INPUT_COHORT(USER_ID, ANCHOR_DATE)
AS (SELECT 1, SYSDATE-1 FROM dual UNION ALL
SELECT 2, SYSDATE-2 FROM dual UNION ALL
SELECT 3, SYSDATE-3 FROM dual)
-- Above is sample data; the query starts from here:
, unpiv AS (SELECT *
FROM INVOLVE_EVER
unpivot ( minimum_date FOR grp_id IN ( GRP1_MINIMUM_DATE AS 1, GRP2_MINIMUM_DATE AS 2, GRP3_MINIMUM_DATE AS 3, GRP4_MINIMUM_DATE AS 4)))
SELECT qcsj_c000000001000000 user_id, GRP1_MINIMUM_DATE, GRP2_MINIMUM_DATE, GRP3_MINIMUM_DATE, GRP4_MINIMUM_DATE
FROM INPUT_COHORT cohort
LEFT JOIN unpiv table_2
ON cohort.USER_ID = table_2.USER_ID
pivot (MAX(CASE WHEN minimum_date <= cohort."ANCHOR_DATE" THEN 1 ELSE 0 END) AS MINIMUM_DATE
FOR grp_id IN (1 AS GRP1,2 AS GRP2,3 AS GRP3,4 AS GRP4))
Result:
| USER_ID | GRP1_MINIMUM_DATE | GRP2_MINIMUM_DATE | GRP3_MINIMUM_DATE | GRP4_MINIMUM_DATE |
|---------|-------------------|-------------------|-------------------|-------------------|
| 3 | | | | |
| 1 | 0 | 0 | 0 | 0 |
| 2 | 0 | 1 | 1 | 1 |
This way you only have to write your calculation logic once (see line starting with pivot).
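The qcsj_c000000001000000 in the last SELECT is a name Oracle generates when the join is fed straight into PIVOT and the two USER_ID columns become ambiguous. If you prefer a stable name, one option (a sketch, reusing the unpiv CTE above; it replaces everything from the final SELECT onward) is to project the join into its own CTE first:
, joined AS (
  SELECT cohort.USER_ID,
         cohort.ANCHOR_DATE,
         table_2.grp_id,
         table_2.minimum_date
  FROM INPUT_COHORT cohort
  LEFT JOIN unpiv table_2 ON cohort.USER_ID = table_2.USER_ID
)
SELECT *
FROM joined
pivot (MAX(CASE WHEN minimum_date <= ANCHOR_DATE THEN 1 ELSE 0 END) AS MINIMUM_DATE
       FOR grp_id IN (1 AS GRP1, 2 AS GRP2, 3 AS GRP3, 4 AS GRP4))
Since ANCHOR_DATE and minimum_date are referenced inside the pivot clause, USER_ID is the only remaining grouping column, so the output is still one row per USER_ID.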

Oracle Sql: Obtain a Sum of a Group, if Subgroup condition met

I have a dataset for which I am trying to obtain a summed value for each group, if a subgroup within each group meets a certain condition. I am not sure if this is possible, or if I am approaching this problem incorrectly.
My data is structured as following:
+----+-------------+---------+-------+
| ID | Transaction | Product | Value |
+----+-------------+---------+-------+
| 1 | A | 0 | 10 |
| 1 | A | 1 | 15 |
| 1 | A | 2 | 20 |
| 1 | B | 1 | 5 |
| 1 | B | 2 | 10 |
+----+-------------+---------+-------+
In this example I want to obtain the sum of values by the ID column, if a transaction does not contain any products labeled 0. In the above described scenario, all values related to Transaction A would be excluded because Product 0 was purchased. With the outcome being:
+----+-------------+
| ID | Sum of Value|
+----+-------------+
| 1 | 15 |
+----+-------------+
This process would repeat for multiple IDs with each ID only containing the sum of values if the transaction does not contain product 0.
Hmmm . . . one method is to use not exists for the filtering:
select id, sum(value)
from t
where not exists (select 1
from t t2
where t2.id = t.id and t2.transaction = t.transaction and
t2.product = 0
)
group by id;
You do not need a correlated subquery with not exists. Just use group by.
with s (id, transaction, product, value) as (
  select 1, 'A', 0, 10 from dual union all
  select 1, 'A', 1, 15 from dual union all
  select 1, 'A', 2, 20 from dual union all
  select 1, 'B', 1, 5 from dual union all
  select 1, 'B', 2, 10 from dual)
select id, sum(sum_value) as sum_value
from
  (select id, transaction,
          sum(value) as sum_value
     from s
    group by id, transaction
   having count(decode(product, 0, 1)) = 0
  )
group by id;
ID SUM_VALUE
---------- ----------
1 15
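Another option along the same lines is to flag offending transactions with an analytic MAX and then sum only the clean ones (a sketch; s stands for the sample CTE above or your real table):
select id, sum(value) as sum_value
from (
  select t.*,
         -- 1 if any row of the same (id, transaction) has product 0, else 0
         max(case when product = 0 then 1 else 0 end)
           over (partition by id, transaction) as has_product_0
  from s t
)
where has_product_0 = 0
group by id;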

To count a column based on another column's repeating(same) entry

I want to create a report of calls last made, based on weeks from the last call and the call group.
The actual data is like below, with call id, date of call and call grouping:
callid | Date | Group
----------------------------
1 | 6-1-18 | a1
2 | 6-1-18 | a2
3 | 7-1-18 | a3
4 | 8-1-18 | a1
5 | 9-1-18 | a2
6 | 9-1-18 | a4
The expected output is the number of calls for each call group, against the number of weeks from the last call:
weeks from last call | Group a1 | Group a2
---------------------|----------|---------
          1          |    2     |    2      <- number of calls
          2          |    -     |    -
          3          |    1     |    -
          4          |    2     |    -
          5          |    -     |    3
          6          |    -     |    -
Can anyone please suggest a solution for this?
Although the data you provided is a very small set and not rich enough to cover all cases, here is SQL that calculates the number of weeks between each call and the last call within its group, and counts the number of calls for each group for that week difference.
with your_table as (
  select 1 as "callid", to_date('6-1-18','mm-dd-rr') as "date", 'a1' as "group" from dual
  union select 2, to_date('6-1-18','mm-dd-rr'), 'a2' from dual
  union select 3, to_date('7-1-18','mm-dd-rr'), 'a3' from dual
  union select 4, to_date('8-1-18','mm-dd-rr'), 'a1' from dual
  union select 5, to_date('9-1-18','mm-dd-rr'), 'a2' from dual
  union select 6, to_date('9-1-18','mm-dd-rr'), 'a4' from dual
),
data1 as (
  select t.*, max(t."date") over (partition by t."group") as last_call_dt from your_table t
),
data2 as (
  select t.*, round((last_call_dt - t."date")/7, 0) as weeks_diff from data1 t
)
select * from (
  select t.weeks_diff, t."callid", t."group" from data2 t
)
PIVOT
(
  COUNT("callid")
  FOR "group" IN ('a1', 'a2', 'a3', 'a4')
)
order by weeks_diff
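By the way, if you would rather avoid PIVOT, the final step can also be written with conditional counts (a sketch, assuming the your_table/data1/data2 CTEs above and the hard-coded group names):
select weeks_diff,
       count(case when "group" = 'a1' then "callid" end) as a1,
       count(case when "group" = 'a2' then "callid" end) as a2,
       count(case when "group" = 'a3' then "callid" end) as a3,
       count(case when "group" = 'a4' then "callid" end) as a4
from data2
group by weeks_diff
order by weeks_diff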
To try it out with your table, just make the following change:
with your_table as (select * from my_table), ....
Let me know how it goes :)

SQL: How to dynamically return an error code for records which don't exist in a table

I am trying to replicate a workplace scenario. SQL Fiddle for Oracle DB is not working, so I couldn't recreate the table.
Say I have a table like below
Table1
+----+------+
| ID | Col1 |
+----+------+
| 1 | A |
| 2 | B |
| 3 | C |
+----+------+
Now we run a query with a where condition. The in clause for the where is passed by the user at run time and can change.
Suppose the user inputs 1,2,4,5.
So the SQL will be like:
select t.* from Table1 t where t.id in (1,2,4,5);
The result of this query will be
+----+------+
| ID | Col1 |
+----+------+
| 1 | A |
| 2 | B |
+----+------+
Now output I am expecting should be something like below
+----+---------+------+
| ID | ErrCode | Col1 |
+----+---------+------+
| 1 | 0 | A |
| 2 | 0 | B |
| 4 | 404 | |
| 5 | 404 | |
+----+---------+------+
As 3 was not entered by the user, we will not return it. But for 4 and 5 there is no record in our table, so I want to create another dummy column which will contain an error code. The data columns should be null.
It is not mandatory that the user input goes into an in clause. We can use it anywhere in the query.
I am thinking of some way of splitting the input ids and using them as rows, then doing a left join with Table1 to find the records which do and don't exist in Table1, and using a case on that to decide between 0 and 404 as the error code.
I'd appreciate any other way we can do it by query.
Here it goes
WITH table_filter AS
 (SELECT regexp_substr(txt, '[^,]+', 1, LEVEL) id
    FROM (SELECT '1,2,4,5' AS txt FROM dual) -- User input here
  CONNECT BY regexp_substr(txt, '[^,]+', 1, LEVEL) IS NOT NULL),
table1 AS -- Sample data
 (SELECT 1 id,
         'A' col1
    FROM dual
  UNION ALL
  SELECT 2,
         'B'
    FROM dual
  UNION ALL
  SELECT 3,
         'C'
    FROM dual)
SELECT f.id,
       CASE
         WHEN t.id IS NULL THEN
          404
         ELSE
          0
       END AS err_code,
       t.col1
  FROM table_filter f
  LEFT OUTER JOIN table1 t
    ON t.id = f.id;

ID   ERR_CODE COL1
--  --------- ----
1           0 A
2           0 B
5         404
4         404
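The same split also works with the user input passed as a bind variable rather than a literal (a sketch; :ids is an assumed bind name):
WITH table_filter AS
 (SELECT to_number(regexp_substr(:ids, '[^,]+', 1, LEVEL)) AS id
    FROM dual
  CONNECT BY regexp_substr(:ids, '[^,]+', 1, LEVEL) IS NOT NULL)
SELECT f.id,
       CASE WHEN t.id IS NULL THEN 404 ELSE 0 END AS err_code,
       t.col1
  FROM table_filter f
  LEFT OUTER JOIN table1 t
    ON t.id = f.id;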
Oracle Setup:
CREATE TABLE Table1 ( id, col1 ) AS
SELECT 1, 'A' FROM DUAL UNION ALL
SELECT 2, 'B' FROM DUAL;
Query:
SELECT i.COLUMN_VALUE AS id,
NVL2( t.col1, 0, 404 ) AS ErrCode,
t.col1
FROM TABLE( SYS.ODCINUMBERLIST( 1, 2, 4, 5 ) ) i
LEFT OUTER JOIN
Table1 t
ON ( i.COLUMN_VALUE = t.id );
Output:
ID ERRCODE COL1
-- ------- ----
1 0 A
2 0 B
4 404
5 404
The collection of ids can be built dynamically using PL/SQL or an external language and then passed as a bind variable. See my answer here for an example.
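For instance, a minimal PL/SQL sketch (the l_ids name and its contents are just for illustration; set serveroutput on to see the lines):
DECLARE
  -- in practice this collection would be filled from the user's input
  l_ids SYS.ODCINUMBERLIST := SYS.ODCINUMBERLIST(1, 2, 4, 5);
BEGIN
  FOR r IN (SELECT i.COLUMN_VALUE AS id,
                   NVL2(t.col1, 0, 404) AS errcode,
                   t.col1
              FROM TABLE(l_ids) i
              LEFT OUTER JOIN Table1 t
                ON t.id = i.COLUMN_VALUE)
  LOOP
    DBMS_OUTPUT.PUT_LINE(r.id || ' ' || r.errcode || ' ' || r.col1);
  END LOOP;
END;
/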

SQL query update by grouping

I'm dealing with some legacy data in an Oracle table and have the following
--------------------------------------------
| RefNo | ID |
--------------------------------------------
| FOO/BAR/BAZ/AAAAAAAAAA | 1 |
| FOO/BAR/BAZ/BBBBBBBBBB | 1 |
| FOO/BAR/BAZ/CCCCCCCCCC | 1 |
| FOO/BAR/BAZ/DDDDDDDDDD | 1 |
--------------------------------------------
For each of the /FOO/BAR/BAZ/% records I want to make the ID a unique incrementing number.
Is there a method to do this in SQL?
Thanks in advance
EDIT
Sorry for not being specific. I have several groups of records: /FOO/BAR/BAZ/, /FOO/ZZZ/YYY/. The same transformation needs to occur for each of these other (example) groups. The recnum can't be used; I want ID to start from 1, incrementing, for each group of records I have to change.
Sorry for making a mess of my first post. Output should be
--------------------------------------------
| RefNo | ID |
--------------------------------------------
| FOO/BAR/BAZ/AAAAAAAAAA | 1 |
| FOO/BAR/BAZ/BBBBBBBBBB | 2 |
| FOO/BAR/BAZ/CCCCCCCCCC | 3 |
| FOO/BAR/BAZ/DDDDDDDDDD | 4 |
| FOO/ZZZ/YYY/AAAAAAAAAA | 1 |
| FOO/ZZZ/YYY/BBBBBBBBBB | 2 |
--------------------------------------------
Let's try something like this (Oracle version 10g and higher):
with t1 as(
  select 'FOO/BAR/BAZ/AAAAAAAAAA' as RefNo, 1 as ID from dual union all
  select 'FOO/BAR/BAZ/BBBBBBBBBB', 1 from dual union all
  select 'FOO/BAR/BAZ/CCCCCCCCCC', 1 from dual union all
  select 'FOO/BAR/BAZ/DDDDDDDDDD', 1 from dual union all
  select 'FOO/ZZZ/YYY/AAAAAAAAAA', 1 from dual union all
  select 'FOO/ZZZ/YYY/BBBBBBBBBB', 1 from dual union all
  select 'FOO/ZZZ/YYY/CCCCCCCCCC', 1 from dual union all
  select 'FOO/ZZZ/YYY/DDDDDDDDDD', 1 from dual
)
select row_number() over(partition by ComPart order by DifPart) as id
     , RefNo
from (select regexp_substr(RefNo, '[[:alpha:]]+$') as DifPart
           , regexp_substr(RefNo, '([[:alpha:]]+/)+') as ComPart
           , RefNo
           , Id
        from t1
     ) q;
ID REFNO
---------- -----------------------
1 FOO/BAR/BAZ/AAAAAAAAAA
2 FOO/BAR/BAZ/BBBBBBBBBB
3 FOO/BAR/BAZ/CCCCCCCCCC
4 FOO/BAR/BAZ/DDDDDDDDDD
1 FOO/ZZZ/YYY/AAAAAAAAAA
2 FOO/ZZZ/YYY/BBBBBBBBBB
3 FOO/ZZZ/YYY/CCCCCCCCCC
4 FOO/ZZZ/YYY/DDDDDDDDDD
I think that actually updating the ID column wouldn't be a good idea. Every time you add new groups of data, you would have to run the update statement again. The better way would be to create a view; then you will see the desired output every time you query it.
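A minimal sketch of that view, reusing the regular expressions from the query above (legacy_table is an assumed table name):
CREATE OR REPLACE VIEW legacy_table_v AS
SELECT RefNo,
       ROW_NUMBER() OVER (
         PARTITION BY REGEXP_SUBSTR(RefNo, '([[:alpha:]]+/)+')   -- common prefix, e.g. FOO/BAR/BAZ/
         ORDER BY REGEXP_SUBSTR(RefNo, '[[:alpha:]]+$')          -- differing suffix
       ) AS id
FROM legacy_table;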
rownum can be used as an incrementing ID?
UPDATE legacy_table
SET id = ROWNUM;
This will assign unique values to all records in the table. This link contains documentation about Oracle Pseudocolumn.
You can run the following:
update <table_name> set id = rownum where descr like 'FOO/BAR/BAZ/%'
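Note that the ROWNUM updates above number rows across whatever the statement touches, so the filtered variant has to be re-run once per group. If the per-group IDs from the edit really have to be stored in a single pass, a MERGE along these lines could do it (a sketch; legacy_table is an assumed table name):
MERGE INTO legacy_table t
USING (SELECT rowid AS rid,
              ROW_NUMBER() OVER (
                PARTITION BY REGEXP_SUBSTR(RefNo, '([[:alpha:]]+/)+')
                ORDER BY RefNo
              ) AS new_id
         FROM legacy_table) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN
  UPDATE SET t.id = s.new_id;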
This is pretty rough, and I'm not sure if your RefNo is a single-value column or you just made it like that for simplicity.
select
  sub.RefNo,
  row_number() over (order by sub.RefNo) + (select max(id) from TABLE) as id
from (
  select FOO || '/' || BAR || '/' || BAZ || '/' || OTHER as RefNo
  from TABLE
  group by FOO || '/' || BAR || '/' || BAZ || '/' || OTHER
) sub