Need two rows after join from single row based on two columns - sql

I have a table1 with three columns and a table2 with a single column.
If the value of the first column is Y, then after the join I need a particular value from table2 as a row in the result, and if the value of the second column is Y, then I need another particular value from table2 as a row in the result. There is no common column between the two tables.
If both columns in a row have Y as the value, then I need two rows in the final table after the join. I'm using CASE for the join right now, but only one column is getting checked.
Can someone help me with this?
table1
--------------------
col1 col2 col3(pk)
--------------------
y n 123
y y 456
table2
--------------------
col1
--------------------
col1Y
col2Y
Expected output
final table
--------------------
col1 col2
--------------------
123 col1Y
456 col1Y
456 col2Y

select col3 as col1, 'col1y' as col2 from myTable where col1 = 'y'
union
select col3 as col1, 'col2y' as col2 from myTable where col2 = 'y'
--order by col1, col2;
SQLFiddle sample

You can also check how to transpose tables with the PIVOT command:
SQL transpose full table

We can unpivot and join to get the results you're looking for:
declare #table1 table (col1 char(1),col2 char(1),col3 int)
insert into #table1(col1,col2,col3) values
('y','n',123) ,
('y','y',456)
declare #table2 table (col1 char(5))
insert into #table2 (col1) values
('col1Y'),('col2Y')
select
u.col3 as col1, t2.col1 as col2
from
#table1 t1
unpivot
(cval for cname in (col1,col2)) u
cross apply
(select cname + cval as Complete) v
inner join
#table2 t2
on
v.complete = t2.col1
Result:
col1 col2
----------- -----
123 col1Y
456 col1Y
456 col2Y
After the unpivot and cross apply, we didn't really need table2 at all: we could have just filtered down to rows where cval is Y. For now I've included it in case I'm missing something or there's more to build on in the query.
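For reference, a sketch of that simplified form without table2, reusing the same #table1 variable and filtering on cval (untested, but it should give the same rows for this data; the literal 'Y' is only there to match the casing used in table2):
select
u.col3 as col1,
u.cname + 'Y' as col2
from
#table1 t1
unpivot
(cval for cname in (col1,col2)) u
where
u.cval = 'y'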

Not sure if you need table2, but here is how you could do it with a CASE statement.
SELECT col1, col2 FROM (
SELECT
CASE col1
WHEN 'y' THEN col3
ELSE NULL
END AS col1,
CASE col1
WHEN 'y' THEN 'col1Y'
ELSE NULL
END AS col2
FROM table1 AS tbl1
UNION ALL
SELECT
CASE col2
WHEN 'y' THEN col3
ELSE NULL
END AS col1,
CASE col2
WHEN 'y' THEN 'col2Y'
ELSE NULL
END AS col2
FROM table1 AS tbl2) AS tbl
WHERE tbl.col1 IS NOT NULL;
SQL Fiddle Sample

Related

How to use a variable as a select statement and then sum multiple values in a variable?

I have multiple rows with the following columns:
col1
col2
col3
col4
I want to find the rows whose col3 equals the col2 of other rows where col1 equals 'a111', sum col4 of those rows, and name the sum column "Total".
Example table with the four columns and four rows:
col1 col2 col3 col4
---- ---- ---- ----
     a222 a333 4444
a111 a333
     a555 a444 1111
a111 a444
I've tried the following but it does not work.
Declare
var1 = Select col2 from table1 where col1='a111';
var2 = Select col3 from table1 where col3=var1;
var3 = Select col4 from table1 where col3=var1;
Begin
If var2=var1
Then Select SUM(var3) As "Total";
End
Expected result is:
Total
5555
I do not have the strongest knowledge of programming in general, or of Oracle. Please ask any questions and I will do my best to answer.
Your logic is convoluted and hard to follow without an example of the data you have and an example of the data you want, but translating your pseudocode into SQL gives:
Declare
var1 = Select col2 from table1 where col1='[table2.col2 value]';
Called "find" in my query
var2 = Select col3 from table1 where col3=var1;
var3 = Select col4 from table1 where col3=var1;
Achieved by joining the table back to the "find"
Begin
If var2=var1
Then Select SUM(var3) As "Total";
End
Achieved with a sum of var3 on only rows where var1=var2, in "ifpart"
SELECT SUM(var3) AS "Total" FROM
(
SELECT find.var1 as var1, alsot1.col3 as var2, alsot1.col4 as var3
FROM
table1 alsot1
INNER JOIN
(
SELECT t1.col2 as var1
FROM table1 t1 INNER JOIN table2 t2 ON t1.col1 = t2.col2
) find
ON find.var1 = alsot1.col3
) ifpart
WHERE
var1 = var2
This could be simplified, but I present it like this because it matches your understanding of the problem. The query optimizer will rewrite it anyway when the time comes to run it, so it only pays to start messing with how it's written if performance is poor.
By the way, you said that the two tables join via a commonly named col2, but then in your pseudocode you said the tables join on col1 = col2. I followed your pseudocode.
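For what it's worth, a sketch of one possible simplified form of the query above (untested, same join to table2; the var1 = var2 check falls away because the join already enforces it):
SELECT SUM(a.col4) AS "Total"
FROM table1 a
WHERE a.col3 IN (SELECT t1.col2
FROM table1 t1
INNER JOIN table2 t2 ON t1.col1 = t2.col2)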
This sounds like something that hierarchical queries could handle. E.g. something like:
WITH your_table AS (SELECT NULL col1, 'a222' col2, 'a333' col3, 4444 col4 FROM dual UNION ALL
SELECT 'a111' col1, 'a333' col2, NULL col3, NULL col4 FROM dual UNION ALL
SELECT NULL col1, 'a555' col2, 'a444' col3, 1111 col4 FROM dual UNION ALL
SELECT 'a111' col1, 'a444' col2, NULL col3, NULL col4 FROM dual UNION ALL
SELECT 'a666' col1, 'a888' col2, NULL col3, NULL col4 FROM dual UNION ALL
SELECT NULL col1, 'a777' col2, 'a888' col3, 7777 col4 FROM dual)
SELECT col1,
SUM(col4) col4_total
FROM (SELECT connect_by_root(col1) col1,
col4
FROM your_table
CONNECT BY col3 = PRIOR col2
START WITH col1 IS NOT NULL) -- start with col1 = 'a111')
GROUP BY col1;
COL1 COL4_TOTAL
---- ----------
a666 7777
a111 5555
Never mind. I believe I've determined the answer myself; I overcomplicated what I wanted. Thank you anyway.
Answer:
Select Sum(col4) as "Total" from table1 where col3 in (Select col2 from table1 where col1='a111')

How to match the list of values in an IN clause with another IN clause in a linear manner

--Table_1
col1 col2
............
123 abc
456 def
123 def
select * from Table_1 where col1 in (123,456) and col2 in ('abc','def');
I want the output to match the row containing '123' from col1 together with 'abc' from col2, and not '123' from col1 with 'def' from col2.
The lists in the IN clauses should be matched pairwise, in a linear manner.
select * from Table_1 where col1 in (123,456) and col2 in ('abc','def');
Expected output:
col1 col2
123 abc
456 def
You may use tuples to compare a combination of multiple columns.
select *
from Table_1
where (col1,col2) in ( (123,'abc'),(456,'def'), (789,'abc') );
Demo
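If your database does not support row-value constructors in an IN list (SQL Server, for example), a similar effect can be sketched by joining to a derived list of the wanted pairs, assuming the same Table_1:
select t.*
from Table_1 t
inner join (values (123,'abc'),(456,'def')) as v(col1,col2)
on t.col1 = v.col1
and t.col2 = v.col2;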
You can try to use the row_number window function to do it.
SELECT col1,col2
from (
select col1,col2,row_number() over(partition by col1 order by col2) rn
from Table_1
where col1 in (123,456) and col2 in ('abc','def')
) t1
where rn = 1
sqlfiddle

How to get min value from multiple columns for a row in SQL

I need to get the first (min) date from a set of 4 (or more) columns.
I tried
select min (col1, col2, col3) from tbl
which is obviously wrong.
Let's say I have these 4 columns:
col1 | col2 | col3 | col4
1/1/17 | 2/2/17 | | 3/3/17
In this case what I want to get is the value in col1 (1/1/17). And yes, these columns can include NULLs.
I am running this in dashDB.
The columns are of the DATE data type.
There is no ID or primary key column in this table.
I need to do this for ALL rows in my query.
The columns are NOT in order, meaning that col1 does NOT have to be earlier than col2 (or be NULL), col2 does NOT have to be earlier than col3 (or be NULL), and so on.
If your DB supports the LEAST function, it is the best approach. LEAST typically returns NULL if any argument is NULL, which is why each column is wrapped in NVL with a far-future sentinel date:
select
least
(
nvl(col1,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col2,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col3,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col4,TO_DATE('2901-01-01','YYYY-MM-DD'))
)
from tbl
Edit: If all of the columns are NULL, then you can hardcode the output as NULL. The below query should work; I couldn't test it.
select
case when
least
(
nvl(col1,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col2,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col3,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col4,TO_DATE('2901-01-01','YYYY-MM-DD'))
)
= TO_DATE('2901-01-01','YYYY-MM-DD')
then null
else
least
(
nvl(col1,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col2,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col3,TO_DATE('2901-01-01','YYYY-MM-DD')),
nvl(col4,TO_DATE('2901-01-01','YYYY-MM-DD'))
)
end
as min_date
from tbl
If there is an id column in your table, then:
Query
select t.id, min(t.col) as min_col_value from(
select id, col1 as col from your_table
union all
select id, col2 as col from your_table
union all
select id, col3 as col from your_table
union all
select id, col4 as col from your_table
)t
group by t.id;
If you want the first date, then use coalesce():
select coalesce(col1, col2, col3, col4)
from t;
This returns the first non-NULL value (which is one way that I interpret the question). This will be the minimum date, if the dates are in order.
Select Id,
Case When (Col1 <= Col2 OR Col2 is null) And (Col1 <= Col3 OR Col3 is null) Then Col1
     When (Col2 <= Col1 OR Col1 is null) And (Col2 <= Col3 OR Col3 is null) Then Col2
     Else Col3
End As MinDate
From YourTable
This is for 3 columns; you can write it the same way for 4 or more columns.

HAVING clause: at least one of the ungrouped values is X

Example table:
Col1 | Col2
A | Apple
A | Banana
B | Apple
C | Banana
Output:
A
I want to get all values of Col1 which have more than one entry and at least one entry with Banana.
I tried to use GROUP BY:
SELECT Col1
FROM Table
GROUP BY Col1
HAVING count(*) > 1
AND ??? some kind of ONEOF(Col2) = 'Banana'
How do I rephrase the HAVING clause so that my query works?
Use conditional aggregation:
SELECT Col1
FROM Table
GROUP BY Col1
HAVING COUNT(DISTINCT col2) > 1 AND
COUNT(CASE WHEN col2 = 'Banana' THEN 1 END) >= 1
You can conditionally check for Col1 groups having at least one 'Banana' value using COUNT with a CASE expression inside it.
Please note that the first COUNT has to use DISTINCT, so that groups with at least two different Col2 values are detected. If by having more than one entry you also mean rows where the same Col2 value is repeated more than once, then you can skip DISTINCT.
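An equivalent formulation that some people find easier to read keeps the question's COUNT(*) > 1 and sums a 0/1 flag instead (a sketch, same table and column names as above):
SELECT Col1
FROM Table
GROUP BY Col1
HAVING COUNT(*) > 1
AND SUM(CASE WHEN Col2 = 'Banana' THEN 1 ELSE 0 END) >= 1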
SELECT Col1
FROM Table
GROUP BY Col1
HAVING count(*) > 1
AND Col1 in (select distinct Col1 from Table where Col2 = 'Banana');
Here is a simple approach:
SELECT Col1
FROM table
GROUP BY Col1
HAVING COUNT(DISTINCT CASE WHEN col2= 'Banana' THEN 1 ELSE 2 END) = 2
Try this:
declare #t table(Col1 varchar(20), Col2 varchar(20))
insert into #t values('A','Apple')
,('A','Banana'),('B','Apple'),('C','Banana')
select col1 from #t A
where exists
(select col1 from #t B where a.col1=b.col1 and b.Col2='Banana')
group by col1
having count(*)>1

Select a dummy column with a dummy value in SQL?

I have a table with the following
Table1
col1 col2
------------
1 A
2 B
3 C
0 D
Result
col1 col2 col3
------------------
0 D ABC
I am not sure how to go about writing the query. col1 and col2 can be selected by this:
select col1, col2 from Table1 where col1 = 0;
How should I go about adding a col3 with the value ABC?
Try this:
select col1, col2, 'ABC' as col3 from Table1 where col1 = 0;
If you meant just ABC as a simple value, the answer above works fine.
If you meant concatenation of the values of rows that are not selected by your main query, you will need to use a subquery.
Something like this may work:
SELECT t1.col1,
t1.col2,
(SELECT GROUP_CONCAT(col2 SEPARATOR '') FROM Table1 t2 WHERE t2.col1 != 0) as col3
FROM Table1 t1
WHERE t1.col1 = 0;
The actual syntax may be a bit off, though.
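GROUP_CONCAT is MySQL-specific; if you happen to be on SQL Server 2017+, the same idea can be sketched with STRING_AGG (assuming the same Table1, with col1 = 0 marking the row to keep):
SELECT t1.col1,
t1.col2,
(SELECT STRING_AGG(t2.col2, '') WITHIN GROUP (ORDER BY t2.col1)
FROM Table1 t2
WHERE t2.col1 <> 0) AS col3
FROM Table1 t1
WHERE t1.col1 = 0;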