How to combine four select queries into one? - sql

I have four different select queries.
Select A,Round(B) as P,Round(C) as Q,Round(D) as R,Round(E) as S from tb_name1 a Inner Join tb_name2 b on (a.X1 =b.X2 and a.T_KEY=b.T_KEY) where a.X3="something" and a.X4="xyz" and b.X5="1243" GROUP BY A ORDER BY A DESC
Select A,Round(F) as T from tb_name4 a Join tb_name5 b on (a.K1 = b.K2 and a.K3 and b.K4 ) where a.X6="something" and a.X7="xyz1" and b.X8="1233" GROUP BY A ORDER BY A DESC
Select A,Round(G) as Q from tb_name6 a Join tb_name7 b on (a.K5 = b.K6 and a.K7 and b.K8 ) where a.X9="something" and a.X10="xyz2" and b.X11="123" GROUP BY A ORDER BY A DESC
Select A,Round(H) as R from tb_name8 a Join tb_name9 b on (a.K9 = b.K10 and a.K11 and b.K12 ) where a.X12="something" and a.X13="xyz3" and b.X14="1123" GROUP BY A ORDER BY A DESC
I have tried UNION, but it's not working. I want one output from the four queries, with the values combined into rows like below:
Output:
Column Name  Column1  Column2  Column3  Column4  Column5  Column6  Column7
Row 1        valu1    valu2    valu3    valu4    valu5    valu6    valu7
Row 2        valu8    valu9    valu10   valu11   valu12   valu13   valu14
Row 3        valu15   valu16   valu17   valu18   valu19   valu20   valu21
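One way to get that shape (sketched below; all table and column names are invented for the demo) is to treat each of the four queries as a derived table and join them on the shared column A, so each output row carries the values from all four. A minimal runnable illustration using SQLite through Python:

```python
import sqlite3

# Hypothetical mini-tables standing in for the results of the four SELECTs.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE q1(a TEXT, p REAL);
CREATE TABLE q2(a TEXT, t REAL);
CREATE TABLE q3(a TEXT, q REAL);
CREATE TABLE q4(a TEXT, r REAL);
INSERT INTO q1 VALUES ('x', 1), ('y', 2);
INSERT INTO q2 VALUES ('x', 10), ('y', 20);
INSERT INTO q3 VALUES ('x', 100), ('y', 200);
INSERT INTO q4 VALUES ('x', 1000), ('y', 2000);
""")

# Each derived table plays the role of one of the four queries;
# joining on the shared column A lines their values up side by side.
rows = con.execute("""
SELECT q1.a, q1.p, q2.t, q3.q, q4.r
FROM q1
LEFT JOIN q2 ON q2.a = q1.a
LEFT JOIN q3 ON q3.a = q1.a
LEFT JOIN q4 ON q4.a = q1.a
ORDER BY q1.a DESC
""").fetchall()
print(rows)  # [('y', 2.0, 20.0, 200.0, 2000.0), ('x', 1.0, 10.0, 100.0, 1000.0)]
```

With LEFT JOINs, rows of the first query survive even when one of the other queries has no match for that A; swap in inner joins if you only want keys present in all four.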

Related

Multi-column and multi-table inner join

I need to perform an inner join on tables with two common columns, org_id and time_stamp, on data in Avro format in S3, queried through Athena.
I have tried:
SELECT year(from_iso8601_timestamp(em.time_stamp)) time_unit,
sum(em.column1) column1,
sum(spa.column2) column2,
sum(vir.column3) column3
FROM "schemaName".table1 em
JOIN "schemaName".table2 spa
ON year(from_iso8601_timestamp(em.time_stamp)) = year(from_iso8601_timestamp(spa.time_stamp))
AND em.org_id = spa.org_id
JOIN "schemaName".table3 vir
ON year(from_iso8601_timestamp(vir.time_stamp)) = year(from_iso8601_timestamp(spa.time_stamp))
AND vir.org_id = spa.org_id
WHERE em.org_id = 'org_id_test'
AND (from_iso8601_timestamp(em.time_stamp)) <= (cast(from_iso8601_timestamp('2019-11-22T23:59:31') AS timestamp))
AND (from_iso8601_timestamp(em.time_stamp)) >= (cast(from_iso8601_timestamp('2019-11-22T23:59:31') AS timestamp) - interval '10' year)
GROUP BY em.org_id, year(from_iso8601_timestamp(em.time_stamp))
ORDER BY time_unit DESC limit 11
But what I am getting looks like the result of a cross join:
time_unit | column1 | column2 | column3
2019      | 48384   | 299040  | 712
while if I aggregate on each table separately with the same WHERE conditions, the values come out as:
table1: column1 = 504
table2: column2 = 280
table3: column3 = 5
Can somebody help me figure out what I am doing wrong, and the right way to achieve this?
If I followed you correctly, what is happening is that, since there are multiple records matching the join conditions in each table, you end up with the same record being counted multiple times when you aggregate.
A typical way around this is to aggregate in subqueries, and then join.
Something like this might be what you are looking for:
select
em.time_unit,
em.column1,
spa.column2,
vir.column3
from (
select
org_id,
year(from_iso8601_timestamp(time_stamp)) time_unit,
sum(column1) column1
from "schemaname".table1
group by org_id, year(from_iso8601_timestamp(time_stamp))
) em
join (
select
org_id,
year(from_iso8601_timestamp(time_stamp)) time_unit,
sum(column2) column2
from "schemaname".table2
group by org_id, year(from_iso8601_timestamp(time_stamp))
) spa on spa.time_unit = em.time_unit and spa.org_id = em.org_id
join (
select
org_id,
year(from_iso8601_timestamp(time_stamp)) time_unit,
sum(column3) column3
from "schemaname".table3
group by org_id, year(from_iso8601_timestamp(time_stamp))
) vir on vir.time_unit = em.time_unit and vir.org_id = em.org_id
where
em.org_id = 'org_id_test'
and em.time_unit between 2009 and 2019
order by em.time_unit desc
limit 11
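To see why aggregating first matters, here is a small self-contained reproduction of the fan-out (SQLite through Python, with made-up data): joining the raw tables duplicates rows before SUM runs, while pre-aggregated subqueries give the true per-table totals.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1(org_id TEXT, y INT, v INT);
CREATE TABLE t2(org_id TEXT, y INT, v INT);
INSERT INTO t1 VALUES ('a', 2019, 1), ('a', 2019, 2);    -- true total 3
INSERT INTO t2 VALUES ('a', 2019, 10), ('a', 2019, 20);  -- true total 30
""")

# Naive join-then-aggregate: 2 x 2 = 4 joined rows, so every value
# is counted twice and both sums are inflated.
naive = con.execute("""
SELECT SUM(t1.v), SUM(t2.v)
FROM t1 JOIN t2 ON t1.org_id = t2.org_id AND t1.y = t2.y
""").fetchone()
print(naive)  # (6, 60) instead of (3, 30)

# Aggregate first, then join: each side is one row per (org_id, y).
fixed = con.execute("""
SELECT em.s1, spa.s2
FROM (SELECT org_id, y, SUM(v) s1 FROM t1 GROUP BY org_id, y) em
JOIN (SELECT org_id, y, SUM(v) s2 FROM t2 GROUP BY org_id, y) spa
  ON spa.org_id = em.org_id AND spa.y = em.y
""").fetchone()
print(fixed)  # (3, 30)
```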

Getting the Number of Common Values from 2 Comma-Separated Strings

I have a table in Postgres that contains comma-separated values in a column.
ID PRODS
--------------------------------------
1 ,142,10,75,
2 ,142,87,63,
3 ,75,73,2,58,
4 ,142,2,
Now I want a query where I can give a comma-separated string and it will tell me the number of matches between the input string and the string present in the row.
For instance, for input value ',142,87,', I want the output like
ID PRODS No. of Match
------------------------------------------------------------------------
1 ,142,10,75, 1
2 ,142,87,63, 2
3 ,75,73,2,58, 0
4 ,142,2, 1
Try this:
SELECT
    *,
    ARRAY(
        SELECT *
        FROM unnest(string_to_array(trim(both ',' from prods), ','))
        WHERE unnest = ANY(string_to_array(',142,87,', ','))
    )
FROM prods_table;
Output is:
1 ,142,10,75, {142}
2 ,142,87,63, {142,87}
3 ,75,73,2,58, {}
4 ,142,2, {142}
Add the cardinality(anyarray) function to the last column to get just a number of matches.
And consider changing your database design.
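To make that design point concrete, here is a sketch of the normalized alternative (the table name `order_prods` is invented): one row per (id, product), which reduces the match count to a plain WHERE plus GROUP BY.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- One row per (id, product) instead of a comma-separated string.
CREATE TABLE order_prods(id INT, prod INT);
INSERT INTO order_prods VALUES
 (1,142),(1,10),(1,75),
 (2,142),(2,87),(2,63),
 (3,75),(3,73),(3,2),(3,58),
 (4,142),(4,2);
""")

# Counting matches against the input values is now trivial.
rows = con.execute("""
SELECT op.id, COUNT(*) AS matches
FROM order_prods op
WHERE op.prod IN (142, 87)   -- the input values
GROUP BY op.id
ORDER BY op.id
""").fetchall()
print(rows)  # [(1, 1), (2, 2), (4, 1)]
```

Ids with zero matches simply drop out here; a LEFT JOIN against the id list restores them with a 0.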
Check this:
select t.*,
       coalesce(no_of_match, 0) as no_of_match
from tt t
left join (
    select id, count(id) as no_of_match
    from (
        select id, unnest(string_to_array(trim(t.prods, ','), ',')) as a
        from tt t
    ) a
    where a in ('142', '87')
    group by id
) b on t.id = b.id
If you install the intarray extension, this gets quite easy:
select id, prods, cardinality(string_to_array(trim(prods, ','), ',')::int[] & array[142,87])
from bad_design;
Otherwise it's a bit more complicated:
select bd.id, bd.prods, m.matches
from bad_design bd
join lateral (
select bd.id, count(v.p) as matches
from unnest(string_to_array(trim(bd.prods, ','), ',')) as l(p)
left join (
values ('142'),('87') --<< these are your input values
) v(p) on l.p = v.p
group by bd.id
) m on m.id = bd.id
order by bd.id;
Online example: http://rextester.com/ZIYS97736
But you should really fix your data model.
with data as
(
select *,
unnest(string_to_array(trim(both ',' from prods), ',') ) as v
from myTable
),
counts as
(
select id, count(t) as c from data
left join
( select unnest(string_to_array(',142,87,', ',') ) as t) tmp on tmp.t = data.v
group by id
order by id
)
select t1.id, t1.prods, t2.c as "No. of Match"
from myTable t1
inner join counts t2 on t1.id = t2.id;
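As a cross-check outside the database, the same set-intersection logic can be written in plain Python (sample data copied from the question):

```python
def match_count(prods: str, needle: str) -> int:
    """Count values shared between two comma-separated strings of the
    form ',142,10,75,' (leading/trailing commas are tolerated)."""
    split = lambda s: set(filter(None, s.split(",")))
    return len(split(prods) & split(needle))

table = {1: ",142,10,75,", 2: ",142,87,63,", 3: ",75,73,2,58,", 4: ",142,2,"}
result = {i: match_count(p, ",142,87,") for i, p in table.items()}
print(result)  # {1: 1, 2: 2, 3: 0, 4: 1}
```

This reproduces the expected output in the question: 1, 2, 0, 1 matches for ids 1 through 4.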

How to Select * Where Everything is Distinct Except One Field

I'm trying to pull 6 records using the code below, but in some cases the information was updated and the query therefore pulls duplicate records.
My code:
SELECT column2, count(*) as 'Count'
FROM ServiceTable p
join HIERARCHY h
on p.LOCATION_CODE = h.LOCATION
where Report_date between '2017-04-01' and '2017-04-30'
and Column1 = 'Issue '
and LOCATION = '8789'
and
( record_code = 'INCIDENT' or
(
SUBMIT_METHOD = 'Web' and
not exists
(
select *
from ServiceTable p2
where p2.record_code = 'INCIDENT'
and p2.incident_id = p.incident_id
)
)
)
The problem is that instead of the six records it pulls eight. I would just use DISTINCT *, but the FILE_DATE differs on the duplicate entries:
FILE_DATE Incident_ID Column1 Column2
4/4/17 123 Issue Service - Red
4/4/17 123 Issue Service - Blue
4/5/17 123 Issue Service - Red
4/5/17 123 Issue Service - Blue
The desired output is:
COLUMN2 COUNT
Service - Red 1
Service - Blue 1
Any help would be greatly appreciated! If you need any other info just let me know.
If you turn your original SELECT statement, minus the aggregation function, into a subquery, you can apply DISTINCT over the values that are not the changing date, and then select a COUNT from that. Don't forget the GROUP BY clause at the end.
SELECT Column2, COUNT(Incident_ID) AS Service_Count
FROM (SELECT DISTINCT Incident_ID, Column1, Column2
FROM ServiceTable p
JOIN HIERARCHY h ON p.LOCATION_CODE = h.LOCATION
WHERE Report_date BETWEEN '2017-04-01' AND '2017-04-30'
AND Column1 = 'Issue '
AND LOCATION = '8789'
AND
( record_code = 'INCIDENT' or
(
SUBMIT_METHOD = 'Web' and
NOT EXISTS
(
SELECT *
FROM ServiceTable p2
WHERE p2.record_code = 'INCIDENT'
AND p2.incident_id = p.incident_id)
)
)
)
GROUP BY Column2
Also, if you are joining tables it is good practice to fully qualify the fields you are selecting, e.g. p.Column2, p.Incident_ID, h.LOCATION. That way it is easier to follow where each field, including the distinct ones, came from and how they relate.
Finally, don't forget that COUNT is a reserved word. I modified your alias accordingly.
If you are using an aggregation function (COUNT), you should GROUP BY the columns that are not in the aggregation function:
SELECT column2, count(*) as 'Count'
FROM ServiceTable p
join HIERARCHY h
on p.LOCATION_CODE = h.LOCATION
where Report_date between '2017-04-01' and '2017-04-30'
and Column1 = 'Issue '
and LOCATION = '8789'
and
( record_code = 'INCIDENT' or
(
SUBMIT_METHOD = 'Web' and
not exists
(
select *
from ServiceTable p2
where p2.record_code = 'INCIDENT'
and p2.incident_id = p.incident_id
)
)
)
group by column2
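Both answers boil down to "dedupe first, then count". A runnable toy version of the duplicated data from the question (SQLite through Python; table and column names simplified):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE service(file_date TEXT, incident_id INT, col1 TEXT, col2 TEXT);
-- The same incident appears once per file_date, duplicating it.
INSERT INTO service VALUES
 ('2017-04-04', 123, 'Issue', 'Service - Red'),
 ('2017-04-04', 123, 'Issue', 'Service - Blue'),
 ('2017-04-05', 123, 'Issue', 'Service - Red'),
 ('2017-04-05', 123, 'Issue', 'Service - Blue');
""")

# DISTINCT on everything except file_date collapses the duplicates,
# then the outer query counts per col2.
rows = con.execute("""
SELECT col2, COUNT(*) AS service_count
FROM (SELECT DISTINCT incident_id, col1, col2 FROM service) AS d
GROUP BY col2
ORDER BY col2
""").fetchall()
print(rows)  # [('Service - Blue', 1), ('Service - Red', 1)]
```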

Putting unique results of SELECT rows in one row

I have a query returning the results I need, but I am not sure how to change it to the convention my program uses to send data:
SELECT
[contract_member_brg_attr].attr_val AS 'field_properties',
[contract_attr].attr_val AS 'contract_number',
[other_contract_attr].attr_val AS 'supplier_number',
[MFR].ITEM_NAME AS 'supplier_name'
FROM [contract_member_brg_attr]
INNER JOIN [contract_member_brg]
ON [contract_member_brg_attr].item_id =
[contract_member_brg].item_id
INNER JOIN [contract_attr]
ON [contract_attr].item_id =
[contract_member_brg].[contract_item_id]
AND [contract_attr].field_id = 413
INNER JOIN [contract_attr] AS [other_contract_attr]
ON [other_contract_attr].item_id =
[contract_member_brg].[contract_item_id]
AND [other_contract_attr].field_id = 234
INNER JOIN [MFR] as [MFR]
ON [MFR].ITEM_PK =
[other_contract_attr].attr_val;
Results:
My issue is that I want all unique values from these results on one row. So in this case, that would be all of the field_properties values plus one each of contract_number, supplier_number, and supplier_name.
How would I do this? What approaches are available?
EDIT:
This is how I would want it to look, all on one row:
column1= 388
column2 = FEB 22 2017
column3 = FEB 22 2017
column4 = test 2
column5 = test 3
column6 = true
column7 = b5v5b5b5bb5
column8 = A180
column9 = ABBOTT NUTRITION
Please look at my question about pivot; it has a few successful answers:
How to apply pivot to result of query
SELECT *
FROM (
    SELECT
        'id',
        'field_properties',
        'contract_number',
        'supplier_number',
        'supplier_name'
    FROM (
        SELECT
            row_number() OVER (ORDER BY [contract_member_brg_attr].id) AS 'id',  -- should be some analog for your DB
            [contract_member_brg_attr].attr_val AS 'field_properties' ...
            -- your original query
    )
)
PIVOT (
    MIN('field_properties')  -- any aggregation function
    FOR ID IN (1, 2, 3, 4, 5, 6)
) pvt
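Note that PIVOT is vendor-specific (SQL Server, Oracle). A portable alternative is conditional aggregation: one MAX(CASE ...) per target column, keyed on the row number. A hedged sketch with invented data (SQLite through Python):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vals(id INT, attr_val TEXT)")
con.executemany("INSERT INTO vals VALUES (?, ?)",
                [(1, "388"), (2, "A180"), (3, "ABBOTT NUTRITION")])

# One MAX(CASE ...) per target column turns N rows into one row.
row = con.execute("""
SELECT
  MAX(CASE WHEN id = 1 THEN attr_val END) AS column1,
  MAX(CASE WHEN id = 2 THEN attr_val END) AS column2,
  MAX(CASE WHEN id = 3 THEN attr_val END) AS column3
FROM vals
""").fetchone()
print(row)  # ('388', 'A180', 'ABBOTT NUTRITION')
```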

Calculate multiple columns with each other using CTE

I want to build columns that are calculated from each other.
Example:
Id  Column1  Column2                                                     Column3
1   5        5  (same as Column1)                                        5  (same as Column2)
2   2        12 (column1.current + column2.prev + column3.prev = 2+5+5)  17 (column2.current + column3.prev = 12+5)
3   3        32 (3+12+17)                                                49 (32+17)
easier way to see:
Id Column1 Column2 Column3
1 5 5 => Same as Column1 5 => Same as Column2
2 2 12 => 2+5+5 17 => 12+5
3 3 32 => 3+12+17 49 => 32+17
so complicated??? :-(
The previous issue was calculating Column3 using the newly calculated Column2. But now it must also be refreshed using both the just-calculated Column2 and the previous record of Column3. If you want to have a look at the previous post, here it is.
Here is my previous recursive CTE code. It works like this: first, Column2 is calculated in cteCalculation using the previous record's value (c.Column2), and then the new Column3 is calculated in cte2 using the just-calculated Column2 from cteCalculation.
/* copied from that previous post */
;with cteCalculation as (
select t.Id, t.Column1, t.Column1 as Column2
from table_1 t
where t.Id = 1
union all
select t.Id, t.Column1, (t.Column1 + c.Column2) as Column2
from table_1 t
inner join cteCalculation c
on t.Id-1 = c.id
),
cte2 as(
select t.Id, t.Column1 as Column3
from table_1 t
where t.Id = 1
union all
select t.Id, (select column2+1 from cteCalculation c where c.id = t.id) as Column3
from table_1 t
inner join cte2 c2
on t.Id-1 = c2.id
)
select c.Id, c.Column1, c.Column2, c2.column3
from cteCalculation c
inner join cte2 c2 on c.id = c2.id
Now I want to extend it so the two columns are calculated from each other's data: use the 2nd to calculate the 3rd, and use the 3rd to get the new 2nd column's data. I hope that makes sense.
This is an example of how to achieve this using a recursive CTE:
create table #tmp (id int identity (1,1), Column1 int)
insert into #tmp values(5)
insert into #tmp values(2)
insert into #tmp values(3);
with counter as
(
SELECT top 1 id, Column1, Column1 as Column2, Column1 as Column3 from #tmp
UNION ALL
SELECT t.id, t.Column1,
t.Column1 + counter.Column2 + counter.Column3,
(t.Column1 + counter.Column2 + counter.Column3) + counter.Column3 FROM counter
INNER JOIN #tmp t ON t.id = counter.id + 1
)
select * from counter
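The recursion above runs essentially unchanged (modulo the temp-table syntax) on any engine with recursive CTEs; here is the same logic in SQLite through Python, reproducing the expected 5/12/32 and 5/17/49 columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tmp(id INTEGER PRIMARY KEY, column1 INT);
INSERT INTO tmp(column1) VALUES (5), (2), (3);
""")

# column2 = col1.current + col2.prev + col3.prev;
# column3 = column2.current + col3.prev
rows = con.execute("""
WITH RECURSIVE counter(id, column1, column2, column3) AS (
  SELECT id, column1, column1, column1 FROM tmp WHERE id = 1
  UNION ALL
  SELECT t.id, t.column1,
         t.column1 + c.column2 + c.column3,
         (t.column1 + c.column2 + c.column3) + c.column3
  FROM counter c JOIN tmp t ON t.id = c.id + 1
)
SELECT * FROM counter
""").fetchall()
print(rows)  # [(1, 5, 5, 5), (2, 2, 12, 17), (3, 3, 32, 49)]
```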
You'll need to use a Recursive CTE since the values of subsequent columns are dependent upon earlier results.
Do this in pieces, too. Have your first query just return the correct values for Column1. Your next (recursive CTE) query will add the results for Column2, and so on.
OK, I'm assuming you're inserting various values into column 1 here.
Essentially col2 always = new col1 value + old col2 value + old col 3 value
col3 = new col2 value + old col3 value
so col3 = (new col1 value + old col2 value + old col 3 value) + old col3 value
So an INSTEAD OF INSERT trigger is probably the easiest way to implement this.
CREATE TRIGGER tr_xxxxx ON Tablename
INSTEAD OF INSERT
AS
INSERT INTO Tablename (Column1, Column2, Column3)
SELECT ins.col1, ins.col1+t.col2+t.col3, ins.col1+t.col2+t.col3+t.col3
FROM Tablename t INNER JOIN Inserted ins on t.Id = ins.Id
The trigger has access to both the existing (old) values in Tablename t, and the new value being inserted (Inserted.col1).