I have a case statement:
Select customer, group, case when group = 'One' then 'A' else 'B' end as Indicator FROM TABLE1
How do I "flatten" the indicator so that for each customer I have two columns, one per indicator type (see the goal table below)?
Current Table:

Customer  Group  Indicator
--------  -----  ---------
Joh       One    A
Joh       Two    B
Jane      One    A
Jane      Two    B
Goal Table:

Customer  Indicator1  Indicator2
--------  ----------  ----------
Joh       A           B
Jane      A           B
Since the values for the indicator column are hard-coded ('A', 'B'), we can use MAX, as it will yield only one value:
with data_cte(Customer, Group_1, Indicator) as (
    select * from values
        ('Joh','One','A'),
        ('Joh','Two','B'),
        ('Jane','One','A'),
        ('Jane','Two','B')
)
select d.customer
      ,max(case when d.group_1 = 'One' then 'A' end) as indicator1
      ,max(case when d.group_1 = 'Two' then 'B' end) as indicator2
from data_cte d
group by d.customer;
The form of Pankaj's answer is good if you have fixed groups, but his code has the indicator values hard-coded, thus it should look like:
with data_cte(Customer, Group_1, Indicator) as (
select *
from values
('Joh','One','A'),
('Joh','Two','B'),
('Jane','One','A'),
('Jane','Two','B')
)
select
d.customer
,max(case when d.group_1 = 'One' then d.indicator end) as indicator1
,max(case when d.group_1 = 'Two' then d.indicator end) as indicator2
from data_cte as d
group by 1;
The CASE in the MAX can be swapped for an IFF in the form
MAX(IFF(d.group_1 = 'One', d.indicator, null)) as indicator1
This works because MAX takes the largest value: if you only have one matching group_1 per customer, the other rows produce NULL, which MAX ignores, so the wanted value is taken.
If you have many matching rows, you will want to rank them somehow and then keep the most recent one, e.g. with ROW_NUMBER or FIRST_VALUE partitioned by customer and ordered by something like a date, as sketched below.
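A rough sketch of that idea (assuming a hypothetical load_date column that identifies the most recent row per customer and group; it is not in the sample data above):
select d.customer
      ,max(iff(d.group_1 = 'One', d.indicator, null)) as indicator1
      ,max(iff(d.group_1 = 'Two', d.indicator, null)) as indicator2
from (
    select customer, group_1, indicator
    from data_cte
    -- keep only the most recent row per customer/group; load_date is an assumed column
    qualify row_number() over (partition by customer, group_1 order by load_date desc) = 1
) d
group by d.customer;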
Anyway, if you have unknown/dynamic columns, this can be solved using Snowflake Scripting to query the data twice.
create or replace table table1 as
select column1 customer, column2 as _group, column3 as indicator
from values
('Joh',1,'A'),
('Joh',2,'B'),
('Jane',1,'C'),
('Jane',3,'E'),
('Jane',2,'D');
declare
sql string;
res resultset;
c1 cursor for select distinct _group as key from table1 order by key;
begin
sql := 'select customer ';
for record in c1 do
sql := sql || ',max(iff(_group = '|| record.key ||', indicator, null)) as col_' || record.key::text;
end for;
sql := sql || ' from table1 group by 1 order by 1';
res := (execute immediate :sql);
return table (res);
end;
gives:
CUSTOMER  COL_1  COL_2  COL_3
--------  -----  -----  -----
Jane      C      D      E
Joh       A      B      null
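Note: in Snowsight the anonymous block above can be run as-is; through the classic console or most drivers it usually has to be wrapped in EXECUTE IMMEDIATE with dollar quoting, roughly:
execute immediate $$
declare
  ...
begin
  ...
end;
$$;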
I have a table T_REF which contains the following data.
select * from T_REF
order by invent_status nulls first;
DIV REF INVENT_STATUS
---- --------- -------------
1 REF001XT NULL
1 REF001XT A
How do I get INVENT_STATUS as A in the following cases: if there is no 2nd row, or if INVENT_STATUS is anything other than A for the 2nd row. The SQL must not change the first NULL if the second row contains an A. So basically, I need SQL that substitutes an A for the NULL if there is no A in the result of the query.
If I understood correctly you need this:
select div, ref, invent_status,
case when invent_status is null and
count(case when invent_status = 'A' then 1 end) over () = 0
then 'A'
else invent_status
end as new_status
from t_ref
order by invent_status nulls first;
The conditional analytic function count(case when invent_status = 'A' then 1 end) over () checks whether there are any A values in your table. If there are none and the current status is NULL, it is replaced by A.
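A minimal sketch of both cases (hypothetical inline data; the WITH column list needs Oracle 11.2 or later):
with t_ref (div, ref, invent_status) as (
  select 1, 'REF001XT', cast(null as varchar2(1)) from dual union all
  select 1, 'REF001XT', 'B' from dual
)
select div, ref, invent_status,
       case when invent_status is null and
                 count(case when invent_status = 'A' then 1 end) over () = 0
            then 'A'
            else invent_status
       end as new_status
from t_ref
order by invent_status nulls first;
-- the NULL row becomes 'A' because no row has status 'A';
-- change 'B' to 'A' in the sample data and the NULL row stays NULL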
sample table:
create table NS_11(
div int ,ref varchar(10),INVENT_STATUS varchar(1));
insert into NS_11(div,ref) values(1,'REF001XT');
insert into NS_11 values(1,'REF002XT','A');
insert into NS_11 values(1,'REF003XT','B');
insert into NS_11 values(1,'REF004XT','C');
insert into NS_11(div,ref) values(1,'REF005XT');
insert into NS_11(div,ref) values(1,'REF006XT');
select * from NS_11;
select div, ref, nvl(INVENT_STATUS, 'A')
from (select div, ref, INVENT_STATUS from NS_11
      minus
      select div, ref, INVENT_STATUS from NS_11 where rownum <= 1)
union all
select div, ref, INVENT_STATUS from NS_11 where rownum <= 1;
sample output:
1 REF002XT A
1 REF003XT B
1 REF004XT C
1 REF005XT A
1 REF006XT A
1 REF001XT (null)
this query will work for your table:
select div, ref, INVENT_STATUS from T_REF where rownum <= 1
union
select div, ref, nvl(INVENT_STATUS, 'A')
from (select div, ref, INVENT_STATUS from T_REF
      minus
      select div, ref, INVENT_STATUS from T_REF where rownum <= 1);
select DIV, REF,
case
when (INVENT_STATUS is null) then 'A'
when INVENT_STATUS = 'A' then null
when INVENT_STATUS != 'A' then INVENT_STATUS
else INVENT_STATUS
end as INVENT_STATUS
from
t_ref
order by INVENT_STATUS nulls first;
I have the following table:

Dept        Sub_Dept     Dept Type
----------  -----------  ---------
Sales       Advertising  A
Sales       Marketing    B
Sales       Analytics    C
Operations  IT           C
Operations  Settlement   C
And the result should be: if a department has any record with department type A, then change all records of that department to A; otherwise keep them as they are.
Dept        Sub_Dept     Dept Type
----------  -----------  ---------
Sales       Advertising  A
Sales       Marketing    A
Sales       Analytics    A
Operations  IT           C
Operations  Settlement   C
Can anybody give a suggestion on this? I thought of using GROUP BY, but I have to output the Sub_Dept as well.
Thanks a lot.
I would do:
update t
set depttype = 'a'
where exists (select 1 from t t2 where t2.dept = t.dept and t2.depttype = 'a') and
      t.depttype <> 'a';
If you just want a select, then do:
select t.*,
(case when sum(case when depttype = 'a' then 1 else 0 end) over (partition by dept) > 0
then 'a'
else depttype
end) as new_depttype
from t;
Use the query below:
select a11.dept, a11.Sub_Dept, (case when a12.min_dep_type = 'A' then 'A' else a11.dep_type end) as dep_type
from tab a11
JOIN (select dept, min(dep_type) min_dep_type from tab group by dept) a12
on a11.dept = a12.dept
Try this:
update table
set depttype= case when dept in (select dept from table where depttype='a') then 'a' else depttype end
This should work:
select a.dept, a.sub_dept,
case when b.dept is not null then 'A' else dept_type end as dept_type
from aTable a
left join (
    select distinct Dept from aTable where dept_type = 'A'
) b on b.dept = a.dept
You could use analytic functions to check whether exists the specific value in the group.
Try below query:
SELECT t.Dept,
t.Sub_Dept,
NVL(MIN(CASE WHEN t.Dept_Type = 'A'
THEN Dept_Type END) OVER (PARTITION BY t.Dept), t.Dept_Type) AS Dept_Type
FROM table_1 t
Using the analytic function MIN(), you can search for the value of 'A' (if it does exist inside the group). MIN works for non-null values only, so if you don't have any 'A' in the group, the result will be NULL.
At this point, you can use NVL to choose whether to print the value found in the group or the actual dept_type of the row.
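A small sketch against the sample rows (inline data; Oracle-style syntax, since the answer uses NVL) to verify the behaviour:
with table_1 (dept, sub_dept, dept_type) as (
  select 'Sales',      'Advertising', 'A' from dual union all
  select 'Sales',      'Marketing',   'B' from dual union all
  select 'Sales',      'Analytics',   'C' from dual union all
  select 'Operations', 'IT',          'C' from dual union all
  select 'Operations', 'Settlement',  'C' from dual
)
select t.dept, t.sub_dept,
       nvl(min(case when t.dept_type = 'A' then t.dept_type end)
             over (partition by t.dept), t.dept_type) as dept_type
from table_1 t;
-- Sales rows all come back as 'A'; Operations rows keep 'C'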
I want to do something like this:
select id,
count(*) as total,
FOR temp IN SELECT DISTINCT somerow FROM mytable ORDER BY somerow LOOP
sum(case when somerow = temp then 1 else 0 end) temp,
END LOOP;
from mytable
group by id
order by id
I created a working select:
select id,
count(*) as total,
sum(case when somerow = 'a' then 1 else 0 end) somerow_a,
sum(case when somerow = 'b' then 1 else 0 end) somerow_b,
sum(case when somerow = 'c' then 1 else 0 end) somerow_c,
sum(case when somerow = 'd' then 1 else 0 end) somerow_d,
sum(case when somerow = 'e' then 1 else 0 end) somerow_e,
sum(case when somerow = 'f' then 1 else 0 end) somerow_f,
sum(case when somerow = 'g' then 1 else 0 end) somerow_g,
sum(case when somerow = 'h' then 1 else 0 end) somerow_h,
sum(case when somerow = 'i' then 1 else 0 end) somerow_i,
sum(case when somerow = 'j' then 1 else 0 end) somerow_j,
sum(case when somerow = 'k' then 1 else 0 end) somerow_k
from mytable
group by id
order by id
This works, but it is 'static': if a new value is added to somerow, I will have to change the SQL manually to pick up all the values of the somerow column. That is why I'm wondering if it is possible to do something with a for loop.
So what I want to get is this:
id somerow_a somerow_b ....
0 3 2 ....
1 2 10 ....
2 19 3 ....
. ... ...
. ... ...
. ... ...
So what I'd like to do is to count all the rows which have some specific letter in them and group them by id (this id isn't a primary key, but it is repeating; there are about 80 different possible values for id).
http://sqlfiddle.com/#!15/18feb/2
Are arrays good for you? (SQL Fiddle)
select
id,
sum(totalcol) as total,
array_agg(somecol) as somecol,
array_agg(totalcol) as totalcol
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
;
id | total | somecol | totalcol
----+-------+---------+----------
1 | 6 | {b,a,c} | {2,1,3}
2 | 5 | {d,f} | {2,3}
In 9.2 it is possible to have a set of JSON objects (Fiddle)
select row_to_json(s)
from (
select
id,
sum(totalcol) as total,
array_agg(somecol) as somecol,
array_agg(totalcol) as totalcol
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
) s
;
row_to_json
---------------------------------------------------------------
{"id":1,"total":6,"somecol":["b","a","c"],"totalcol":[2,1,3]}
{"id":2,"total":5,"somecol":["d","f"],"totalcol":[2,3]}
In 9.3, with the addition of lateral, a single object (Fiddle)
select to_json(format('{%s}', (string_agg(j, ','))))
from (
select format('%s:%s', to_json(id), to_json(c)) as j
from
(
select
id,
sum(totalcol) as total_sum,
array_agg(somecol) as somecol_array,
array_agg(totalcol) as totalcol_array
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
) s
cross join lateral
(
select
total_sum as total,
somecol_array as somecol,
totalcol_array as totalcol
) c
) s
;
to_json
---------------------------------------------------------------------------------------------------------------------------------------
"{1:{\"total\":6,\"somecol\":[\"b\",\"a\",\"c\"],\"totalcol\":[2,1,3]},2:{\"total\":5,\"somecol\":[\"d\",\"f\"],\"totalcol\":[2,3]}}"
In 9.2 it is also possible to have a single object, in a more convoluted way, using subqueries instead of lateral.
SQL is very rigid about the return type. It demands to know what to return beforehand.
For a completely dynamic number of resulting values, you can only use arrays like #Clodoaldo posted. That is effectively a static return type; you do not get individual columns for each value.
If you know the number of columns at call time ("semi-dynamic"), you can create a function taking (and returning) polymorphic parameters. Closely related answer with lots of details:
Dynamic alternative to pivot with CASE and GROUP BY
(You also find a related answer with arrays from #Clodoaldo there.)
Your remaining option is to use two round-trips to the server: the first to determine the actual query with the actual return type, the second to execute the query based on the first call.
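A sketch of the two-round-trip idea (the first statement builds the query text from the current values of somerow and the client then runs the returned text; it assumes the values are simple enough to be embedded in column names):
-- round trip 1: build the statement text
select 'select id, count(*) as total, '
    || string_agg(format('sum(case when somerow = %L then 1 else 0 end) as somerow_%s',
                         somerow, somerow), ', ')
    || ' from mytable group by id order by id' as stmt
from (select distinct somerow from mytable) s;
-- round trip 2: execute the statement text returned in stmt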
Else, you have to go with a static query. While doing that, I see two nicer options for what you have right now:
1. Simpler expression
select id
, count(*) AS total
, count(somecol = 'a' OR NULL) AS somerow_a
, count(somecol = 'b' OR NULL) AS somerow_b
, ...
from mytable
group by id
order by id;
How does it work?
Compute percents from SUM() in the same SELECT sql query
SQL Fiddle.
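In short, a minimal demonstration of the trick: somecol = 'a' OR NULL is TRUE when the value matches and NULL otherwise, and count() only counts non-NULL values:
select count('a' = 'a' or null) as is_match   -- TRUE  OR NULL -> TRUE, counted (1)
     , count('x' = 'a' or null) as no_match;  -- FALSE OR NULL -> NULL, not counted (0)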
2. crosstab()
crosstab() is more complex at first, but written in C, optimized for the task and shorter for long lists. You need the additional module tablefunc installed. Read the basics here if you are not familiar:
PostgreSQL Crosstab Query
SELECT * FROM crosstab(
$$
SELECT id
, sum(count(*)) OVER (PARTITION BY id)::int AS total
, somecol
, count(*)::int AS ct -- casting to int, don't think you need bigint?
FROM mytable
GROUP BY 1,3
ORDER BY 1,3
$$
,
$$SELECT unnest('{a,b,c,d}'::text[])$$
) AS f (id int, total int, a int, b int, c int, d int);
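If the module is not installed yet, it can usually be enabled once per database with:
CREATE EXTENSION IF NOT EXISTS tablefunc;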
SQL query question
I have a query like
select proposal_id, service_id,account_type
from table1
The result is like this:
proposal_id service_id account_type
1 1001 INTERVAL
1 1002 INTERVAL
2 1003 NON INTERVAL
2 1004 NON INTERVAL
3 1005 NON INTERVAL
3 1006 INTERVAL
I want to write a query so that, for each proposal_id: if all the services have INTERVAL, return 'INTERVAL'; if all have NON-INTERVAL, return 'NON-INTERVAL'; if there are both, return 'Both'.
For the example above, it should return
proposal_id account_type
1 INTERVAL
2 NON-INTERVAL
3 BOTH
Data:
declare #table table (id int, sid int, acc nvarchar(20))
insert #table VALUES (1,1001,'INTERVAL'),(1,1002,'INTERVAL'),(2,1003,'NON INTERVAL'),(2,1004,'NON INTERVAL'),
(3,1005,'NON INTERVAL'),(3,1006,'INTERVAL')
Query:
select x.Id
, CASE counter
WHEN 1 THEN x.Account_Type
ELSE 'BOTH'
END AS Account_Type
from (
select Id, Count(DISTINCT(acc)) AS counter, MAX(acc) As Account_Type
from #table
GROUP BY Id
) x
Results
Id Account_Type
----------- --------------------
1 INTERVAL
2 NON INTERVAL
3 BOTH
SELECT DISTINCT
    b.proposal_id
    ,CASE
        WHEN s1.proposal_id IS NOT NULL AND s2.proposal_id IS NOT NULL THEN 'BOTH'
        WHEN s1.proposal_id IS NOT NULL THEN 'INTERVAL'
        WHEN s2.proposal_id IS NOT NULL THEN 'NON-INTERVAL'
        ELSE 'UNKNOWN'
    END [account_type]
FROM table1 b
LEFT JOIN (
    SELECT DISTINCT proposal_id FROM table1 WHERE account_type = 'INTERVAL'
) s1
    ON b.proposal_id = s1.proposal_id
LEFT JOIN (
    SELECT DISTINCT proposal_id FROM table1 WHERE account_type = 'NON INTERVAL'
) s2
    ON b.proposal_id = s2.proposal_id
You could use count distinct to determine whether it is both, then use CASE to determine what to display:
SELECT DISTINCT proposal.proposal_id,
       CASE inn.cou
            WHEN 1 THEN proposal.type ELSE 'Both' END as TYPE
FROM proposal
INNER JOIN (SELECT proposal_id, count(distinct type) cou
            FROM proposal GROUP BY proposal_id) inn
    ON proposal.proposal_id = inn.proposal_id
select proposal_id,
case when count(distinct account_type) > 1 then 'BOTH'
else max(account_type)
end
from table1
group by proposal_id
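Note that max(account_type) returns the value exactly as stored ('NON INTERVAL' in the sample data). If the output must read 'NON-INTERVAL' as in the expected result, a small tweak (just a sketch) maps it:
select proposal_id,
       case when count(distinct account_type) > 1 then 'BOTH'
            when max(account_type) = 'NON INTERVAL' then 'NON-INTERVAL'
            else max(account_type)
       end as account_type
from table1
group by proposal_id;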
You have the fiddle here.
I'm using Oracle 10g. I have a table with a number of fields of varying types. The fields contain observations that have been made about a particular thing on a particular date by a particular site.
So:
ItemID, Date, Observation1, Observation2, Observation3...
There are about 40 Observations in each record. The table structure cannot be changed at this point in time.
Unfortunately not all the Observations have been populated (either accidentally or because the site is incapable of making that recording). I need to combine all the records about a particular item into a single record in a query, making it as complete as possible.
A simple way to do this would be something like
SELECT
ItemID,
MAX(Date),
MAX(Observation1),
MAX(Observation2)
etc.
FROM
Table
GROUP BY
ItemID
But ideally I would like it to pick the most recent observation available, not the max/min value. I could do this by writing sub queries in the form
SELECT
ItemID,
ObservationX,
ROW_NUMBER() OVER (PARTITION BY ItemID ORDER BY Date DESC) ROWNUMBER
FROM
Table
WHERE
ObservationX IS NOT NULL
And joining all the ROWNUMBER 1s together for an ItemID but because of the number of fields this would require 40 subqueries.
My question is whether there's a more concise way of doing this that I'm missing.
Create the table and the sample data:
SQL> create table observation(
2 item_id number,
3 dt date,
4 val1 number,
5 val2 number );
Table created.
SQL> insert into observation values( 1, date '2011-12-01', 1, null );
1 row created.
SQL> insert into observation values( 1, date '2011-12-02', null, 2 );
1 row created.
SQL> insert into observation values( 1, date '2011-12-03', 3, null );
1 row created.
SQL> insert into observation values( 2, date '2011-12-01', 4, null );
1 row created.
SQL> insert into observation values( 2, date '2011-12-02', 5, 6 );
1 row created.
And then use the KEEP clause on the MAX aggregate function with an ORDER BY that pushes the rows with NULL observations to the front of the sort, so DENSE_RANK LAST never keeps them. Whatever date you use in the ORDER BY needs to be earlier than the earliest real observation in the table.
select item_id,
       max(val1) keep( dense_rank last
                       order by (case when val1 is not null
                                      then dt
                                      else date '1900-01-01'
                                 end) ) val1,
       max(val2) keep( dense_rank last
                       order by (case when val2 is not null
                                      then dt
                                      else date '1900-01-01'
                                 end) ) val2
  from observation
 group by item_id;
ITEM_ID VAL1 VAL2
---------- ---------- ----------
1 3 2
2 5 6
I suspect that there is a more elegant solution to ignore the NULL values than adding the CASE expression to the ORDER BY, but the CASE gets the job done.
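One slightly terser variant (same idea, just a sketch) replaces the CASE with NVL2, which returns its second argument when the first is not null and its third otherwise:
select item_id,
       max(val1) keep( dense_rank last order by nvl2(val1, dt, date '1900-01-01') ) val1,
       max(val2) keep( dense_rank last order by nvl2(val2, dt, date '1900-01-01') ) val2
  from observation
 group by item_id;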
I don't know the exact commands in Oracle, but in SQL Server you could do something like the following.
First, use a pivot (numbers) table that contains consecutive numbers 0, 1, 2, ...
I'm not sure, but I believe the Oracle equivalent of the "isnull" function is "NVL".
select itemgroup.ItemId,
isnull( max( itemsimplified.observation1 ), '' ) as observation1,
isnull( max( itemsimplified.observation2 ), '' ) as observation2,
isnull( max( itemsimplified.observation3 ), '' ) as observation3,
...
isnull( max( itemsimplified.observation40 ), '' ) as observation40
from (
select items.ItemId
from table as items
where items.item = _paramerter_for_retrive_only_one_item /* select one item o more item where you filter items here*/
group by items.ItemId) itemgroup
left join
(
select
items.ItemId,
p.i,
isnull( max ( case when p.i = 0 then observation1 else '' end ), '' ) as observation1,
isnull( max ( case when p.i = 1 then observation2 else '' end ), '' ) as observation2,
isnull( max ( case when p.i = 2 then observation3 else '' end ), '' ) as observation3,
...
isnull( max ( case when p.i = 39 then observation40 else '' end ), '' ) as observation40
from
(select i from pivot where i < 40 /* the number of observation columns; attaches an index */
)
as p
cross join table as items
left join table as itemcombinations
on items.itemid = itemcombinations.itemid
where items.item = _paramerter_for_retrive_only_one_item /* select one item o more item where you filter items here*/
and (    (p.i = 0  and itemcombinations.observation1 is not null) /* column 1 */
      or (p.i = 1  and itemcombinations.observation2 is not null) /* column 2 */
      or (p.i = 2  and itemcombinations.observation3 is not null) /* column 3 */
      ....
      or (p.i = 39 and itemcombinations.observation40 is not null) /* column 40 */ )
group by p.i, items.ItemId
) as itemsimplified
on itemsimplified.ItemId = itemgroup.itemId
group by itemgroup.itemId
About the pivot table
Create a pivot (numbers) table; its schema is simply:
name: pivot, columns: {i : int}
How to populate it
Create a foo table with this schema:
name: foo, column: value (varchar)
insert into foo
values ('0'),
       ('1'),
       ('2'),
       ('3'),
       ('4'),
       ('5'),
       ('6'),
       ('7'),
       ('8'),
       ('9');
/* insert 100 values */
insert into pivot
select concat(a.value, b.value)   /* mysql */
       a.value + b.value          /* sql server */
       a.value || b.value         /* Oracle */
from foo a, foo b
/* insert 1000 values */
insert into pivot
select concat(a.value, b.value, c.value)   /* mysql */
       a.value + b.value + c.value         /* sql server */
       a.value || b.value || c.value       /* Oracle */
from foo a, foo b, foo c
The idea of the pivot (numbers) table is covered in "Transact-SQL Cookbook" by Jonathan Gennick and Ales Spetic.
I have to admit that the above solution (by Justin Cave) is simpler and easier to understand, but this is another good option.
In the end, as you said, you solved it yourself.