SQL query to find an output table - sql

I have three dimension tables and a fact table, and I need to write a query that joins the dimension columns with the fact table to find the top 10 ATMs with the most transactions in the 'Inactive' state. I tried the query below with a Cartesian join, but I don't know if this is the right way to join the tables.
select a.atm_number, a.atm_manufacturer, b.location,
       count(c.trans_id) as total_transaction_count,
       count(c.atm_status) as inactive_count
from dimen_atm a, dimen_location b, fact_atm_trans c
where a.atm_id = c.atm_id and b.location = c.location
order by inactive_count desc limit 10;
dimen_card_type
+------------+---------+
|card_type_id|card_type|
+------------+---------+
| 1| CIRRUS|
| 2| Dankort|
dimen_atm
+------+----------+----------------+---------------+
|atm_id|atm_number|atm_manufacturer|atm_location_id|
+------+----------+----------------+---------------+
| 1| 1| NCR| 16|
| 2| 2| NCR| 64|
+------+----------+----------------+---------------+
dimen_location
+-----------+--------------------+----------------+-------------+-------+------+------+
|location_id| location| streetname|street_number|zipcode| lat| lon|
+-----------+--------------------+----------------+-------------+-------+------+------+
| 1|Intern København|Rådhuspladsen| 75| 1550|55.676|12.571|
| 2| København| Regnbuepladsen| 5| 1550|55.676|12.571|
+-----------+--------------------+----------------+-------------+-------+------+------+
fact_atm_trans
+--------+------+--------------+-------+------------+----------+--------+----------+------------------+------------+------------+-------+----------+----------+------------+-------------------+
|trans_id|atm_id|weather_loc_id|date_id|card_type_id|atm_status|currency| service|transaction_amount|message_code|message_text|rain_3h|clouds_all|weather_id|weather_main|weather_description|
+--------+------+--------------+-------+------------+----------+--------+----------+------------------+------------+------------+-------+----------+----------+------------+-------------------+
| 1| 1| 16| 5229| 3| Active| DKK|Withdrawal| 5980| null| null| 0.0| 80| 803| Clouds| broken clouds|
| 2| 1| 16| 4090| 10| Active| DKK|Withdrawal| 3992| null| null| 0.0| 32| 802| Clouds| scattered clouds|
+--------+------+--------------+-------+------------+----------+--------+----------+------------------+------------+-----------
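For what it's worth, the query as posted has no GROUP BY, and counting c.atm_status counts every row, not just the 'Inactive' ones. Here is a minimal sketch of one possible fix using conditional aggregation, run against SQLite with made-up toy data; it assumes the ATM joins to its location via atm_location_id = location_id (the fact table shown only has weather_loc_id, so verify the real join key):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Tiny stand-ins for the three tables; column subset and values are invented.
cur.executescript("""
CREATE TABLE dimen_atm (atm_id INT, atm_number TEXT, atm_manufacturer TEXT, atm_location_id INT);
CREATE TABLE dimen_location (location_id INT, location TEXT);
CREATE TABLE fact_atm_trans (trans_id INT, atm_id INT, atm_status TEXT);
INSERT INTO dimen_atm VALUES (1, '1', 'NCR', 16), (2, '2', 'NCR', 64);
INSERT INTO dimen_location VALUES (16, 'Intern København'), (64, 'København');
INSERT INTO fact_atm_trans VALUES
  (1, 1, 'Active'), (2, 1, 'Inactive'), (3, 1, 'Inactive'),
  (4, 2, 'Inactive');
""")

# Explicit joins plus GROUP BY; inactive_count is a conditional sum,
# so total and inactive counts come out of one pass.
rows = cur.execute("""
SELECT a.atm_number, a.atm_manufacturer, b.location,
       COUNT(c.trans_id) AS total_transaction_count,
       SUM(CASE WHEN c.atm_status = 'Inactive' THEN 1 ELSE 0 END) AS inactive_count
FROM dimen_atm a
JOIN dimen_location b ON b.location_id = a.atm_location_id
JOIN fact_atm_trans c ON c.atm_id = a.atm_id
GROUP BY a.atm_number, a.atm_manufacturer, b.location
ORDER BY inactive_count DESC
LIMIT 10
""").fetchall()
print(rows)
```

With the toy data this returns ATM 1 first (3 transactions, 2 inactive), then ATM 2 (1 and 1).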

Related

group by on the multiple inner join in postgres

I have 3 tables. The first table, "A", is the master table:
id_grp|group_name |created_on |status|
------+--------------+-----------------------+------+
17|Teller |2022-09-09 16:00:44.842| 1|
18|Combined Group|2022-09-09 10:16:42.473| 1|
16|admnistrator |2022-09-08 10:11:14.313| 1|
Then I have another table, "b":
id_config|id_grp|id_utilis|
---------+------+---------+
159| 16| 1|
161| 16| 54|
164| 17| 55|
438| 17| 88|
166| 18| 39|
167| 18| 20|
439| 16| 89|
198| 18| 51|
Then I have the last table, "C":
id_config|id_grp|id_pol|
---------+------+------+
46| 16| 7|
48| 17| 8|
51| 18| 8|
52| 18| 7|
84| 18| 9|
113| 17| 9|
But when I use GROUP BY with multiple joins, as follows:
SELECT
    a.id_grp,
    a.group_name,
    a.created_on,
    a.status,
    count(b.id_utilis) AS users,
    count(c.id_pol) AS policy
FROM a
INNER JOIN b ON a.id_grp = b.id_grp
INNER JOIN c ON a.id_grp = c.id_grp
GROUP BY a.id_grp, a.group_name, a.created_on, a.status
I am getting the wrong result: the two joins form a cross product within each group, so the two counts multiply each other:
id_grp|group_name |created_on |status|users|policy|
------+--------------+-----------------------+------+-----+------+
17|Teller |2022-09-09 16:00:44.842| 1| 10| 10|
16|admnistrator |2022-09-08 10:11:14.313| 1| 3| 3|
18|Combined Group|2022-09-09 10:16:42.473| 1| 18| 18|
select *
from a
join (select id_grp, count(*) as users from b group by id_grp) b using(id_grp)
join (select id_grp, count(*) as policy from c group by id_grp) c using(id_grp)
id_grp|group_name    |created_on         |status|users|policy|
------+--------------+-------------------+------+-----+------+
    17|Teller        |2022-09-09 16:00:44|     1|    2|     2|
    18|Combined Group|2022-09-09 10:16:42|     1|    3|     3|
    16|admnistrator  |2022-09-08 10:11:14|     1|    3|     1|
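The fix above pre-aggregates b and c before joining, so each group contributes one row per side and the counts can no longer multiply. A toy SQLite reproduction with the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE a (id_grp INT, group_name TEXT);
CREATE TABLE b (id_config INT, id_grp INT, id_utilis INT);
CREATE TABLE c (id_config INT, id_grp INT, id_pol INT);
INSERT INTO a VALUES (17,'Teller'), (18,'Combined Group'), (16,'admnistrator');
INSERT INTO b VALUES (159,16,1),(161,16,54),(164,17,55),(438,17,88),
                     (166,18,39),(167,18,20),(439,16,89),(198,18,51);
INSERT INTO c VALUES (46,16,7),(48,17,8),(51,18,8),(52,18,7),(84,18,9),(113,17,9);
""")

# Count inside each derived table first, then join one row per group.
rows = cur.execute("""
SELECT a.id_grp, a.group_name, b.users, c.policy
FROM a
JOIN (SELECT id_grp, COUNT(*) AS users  FROM b GROUP BY id_grp) b USING (id_grp)
JOIN (SELECT id_grp, COUNT(*) AS policy FROM c GROUP BY id_grp) c USING (id_grp)
ORDER BY a.id_grp
""").fetchall()
print(rows)
```

This yields users/policy of 3/1, 2/2, and 3/3 for groups 16, 17, and 18, matching the expected result instead of the multiplied one.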

Joining tables and finding difference

I have a table which contains the following schema:
Table1
+------------------+--------------------+-------------------+-------------+-------------+
|student_id|project_id|name|project_name|approved|evaluation_type|grade| cohort_number|
I have another table with the following:
Table2
+-------------+----------+
|cohort_number|project_id|
My problem is: for each student_id I want to get the projects that he has not completed (no rows in Table1). The way I know all the projects he should have done is by checking the cohort_number. Basically I need the "difference" between the 2 tables: I want to fill Table1 with the missing entries by comparing against Table2's project_id for that cohort_number.
I am not sure if I was clear.
I tried using LEFT JOIN, but I only get records where it matches. (I need the opposite)
Example:
Table1
|student_id|project_id|name| project_name| approved|evaluation_type| grade|cohort_number|
+----------+----------+--------------------+------+--------------------+--------+---------------+------------------
| 13| 18|Name| project/sd-03-bloc...| true| standard| 1.0| 3|
| 13| 7|Name| project/sd-03-bloc...| true| standard| 1.0| 3|
| 13| 27|Name| project/sd-03-bloc...| true| standard| 1.0| 3|
Table2
+-------------+----------+
|cohort_number|project_id|
+-------------+----------+
| 3| 18|
| 3| 27|
| 4| 15|
| 3| 7|
| 3| 35|
I want:
|student_id|project_id|name| project_name| approved|evaluation_type| grade|cohort_number|
+----------+----------+--------------------+------+--------------------+--------+---------------+------------------
| 13| 18|Name| project/sd-03-bloc...| true| standard| 1.0| 3|
| 13| 7|Name| project/sd-03-bloc...| true| standard| 1.0| 3|
| 13| 27|Name| project/sd-03-bloc...| true| standard| 1.0| 3|
| 13| 35|Name| project/sd-03-bloc...| false| standard| 0| 3|
Thanks in advance
If I followed you correctly, you can get all distinct (student_id, cohort_number, name) tuples from table1, and then bring all corresponding rows from table2. This basically gives you one row for each project that a student should have completed.
You can then bring table1 with a left join. "Missing" projects are identified by null values in columns project_name, approved, evaluation_type, grade.
select
    s.student_id,
    t2.project_id,
    s.name,
    t1.project_name,
    t1.approved,
    t1.evaluation_type,
    t1.grade,
    s.cohort_number
from (select distinct student_id, cohort_number, name from table1) s
inner join table2 t2
    on t2.cohort_number = s.cohort_number
left join table1 t1
    on t1.student_id = s.student_id
    and t1.project_id = t2.project_id
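A toy SQLite version of that query, using the sample data from the question (columns trimmed to the ones that matter here); rows whose project_name comes back NULL are the projects the student never completed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE table1 (student_id INT, project_id INT, name TEXT,
                     project_name TEXT, cohort_number INT);
CREATE TABLE table2 (cohort_number INT, project_id INT);
INSERT INTO table1 VALUES
  (13, 18, 'Name', 'project/sd-03-a', 3),
  (13,  7, 'Name', 'project/sd-03-b', 3),
  (13, 27, 'Name', 'project/sd-03-c', 3);
INSERT INTO table2 VALUES (3,18),(3,27),(4,15),(3,7),(3,35);
""")

# One row per (student, expected project); a NULL project_name
# marks a project the student has not completed.
rows = cur.execute("""
SELECT s.student_id, t2.project_id, s.name, t1.project_name, s.cohort_number
FROM (SELECT DISTINCT student_id, cohort_number, name FROM table1) s
JOIN table2 t2 ON t2.cohort_number = s.cohort_number
LEFT JOIN table1 t1 ON t1.student_id = s.student_id
                   AND t1.project_id = t2.project_id
ORDER BY t2.project_id
""").fetchall()
missing = [r for r in rows if r[3] is None]
print(missing)
```

With this data, project 35 is the only cohort-3 project student 13 never did, so it is the single "missing" row.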

Add aggregated columns to pivot without join

Considering the table:
df=sc.parallelize([(1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2)]).toDF(['id', 'error', 'timestamp'])
df.show()
+---+-----+---------+
| id|error|timestamp|
+---+-----+---------+
| 1| 1| 1|
| 5| 0| 2|
| 27| 1| 1|
| 1| 0| 3|
| 5| 1| 1|
| 1| 0| 2|
+---+-----+---------+
I would like to make a pivot on timestamp column keeping some other aggregated information from the original table. The result I am interested in can be achieved by
import pyspark.sql.functions as sf

df1 = df.groupBy('id').agg(sf.sum('error').alias('Ne'), sf.count('*').alias('cnt'))
df2 = df.groupBy('id').pivot('timestamp').agg(sf.count('*')).fillna(0)
df1.join(df2, on='id').filter(sf.col('cnt') > 1).show()
with the resulting table:
+---+---+---+---+---+---+
| id| Ne|cnt| 1| 2| 3|
+---+---+---+---+---+---+
| 5| 1| 2| 1| 1| 0|
| 1| 1| 3| 1| 1| 1|
+---+---+---+---+---+---+
However, there are at least two issues with the mentioned solution:
I am filtering by cnt at the end of the script. If I could do this at the beginning, I could avoid almost all of the processing, because a large portion of the data is removed by this filter. Is there any way to do this other than the collect and isin methods?
I am doing groupBy on id twice: first to aggregate the columns I need in the results, and a second time to get the pivot columns. Finally, I need a join to merge these columns. I feel that I am surely missing some solution, because it should be possible to do this with just one groupBy and without a join, but I cannot figure out how.
I think you cannot get around the join: the pivot needs the timestamp values, while the first grouping must not consider them. To create the Ne and cnt values you have to group the dataframe by id only, which loses timestamp; to preserve those values as columns you have to do the pivot separately, as you did, and join it back.
The only improvement is to move the filter into the df1 creation. As you said, this should already improve performance, since df1 will be much smaller after the filtering on your real data.
from pyspark.sql.functions import *
df=sc.parallelize([(1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2)]).toDF(['id', 'error', 'timestamp'])
df1=df.groupBy('id').agg(sum('error').alias('Ne'),count('*').alias('cnt')).filter(col('cnt')>1)
df2=df.groupBy('id').pivot('timestamp').agg(count('*')).fillna(0)
df1.join(df2, on='id').show()
Output:
+---+---+---+---+---+---+
| id| Ne|cnt| 1| 2| 3|
+---+---+---+---+---+---+
| 5| 1| 2| 1| 1| 0|
| 1| 1| 3| 1| 1| 1|
+---+---+---+---+---+---+
Actually it is indeed possible to avoid the join, using Window functions:
import pyspark.sql.functions as sf
from pyspark.sql.window import Window

w1 = Window.partitionBy('id')
w2 = Window.partitionBy('id', 'timestamp')
df.select('id', 'timestamp',
          sf.sum('error').over(w1).alias('Ne'),
          sf.count('*').over(w1).alias('cnt'),
          sf.count('*').over(w2).alias('cnt_2')
  ).filter(sf.col('cnt') > 1) \
   .groupBy('id', 'Ne', 'cnt').pivot('timestamp').agg(sf.first('cnt_2')).fillna(0).show()
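The same window-then-pivot idea can be sketched outside Spark. Here is a toy SQLite version (window functions need SQLite 3.25+, bundled with recent Python builds), with the pivot written as CASE expressions since plain SQL has no pivot operator:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE df (id INT, error INT, ts INT);
INSERT INTO df VALUES (1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2);
""")

# Window aggregates attach Ne and cnt to every row without collapsing
# the ts column, so one grouped pass can both filter and pivot.
rows = cur.execute("""
WITH w AS (
    SELECT id, ts,
           SUM(error) OVER (PARTITION BY id) AS Ne,
           COUNT(*)   OVER (PARTITION BY id) AS cnt
    FROM df
)
SELECT id, Ne, cnt,
       SUM(CASE WHEN ts = 1 THEN 1 ELSE 0 END) AS t1,
       SUM(CASE WHEN ts = 2 THEN 1 ELSE 0 END) AS t2,
       SUM(CASE WHEN ts = 3 THEN 1 ELSE 0 END) AS t3
FROM w
WHERE cnt > 1
GROUP BY id, Ne, cnt
ORDER BY id
""").fetchall()
print(rows)
```

This reproduces the table from the answer: ids 1 and 5 survive the cnt > 1 filter, with per-timestamp counts as columns.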

Why does the SQL query work when the equivalent Scala code doesn't? (left join with several date conditions)

I have sql code which works perfectly:
val sql ="""
select a.*,
b.fOOS,
b.prevD
from dataFrame as a
left join dataNoPromoFOOS as b on
a.shopId = b.shopId and a.skuId = b.skuId and
a.Date > b.date and a.date <= b.prevD
"""
result:
+------+------+----------+-----+-----+------------------+---+----------+------------------+----------+
|shopId| skuId| date|stock|sales| salesRub| st|totalPromo| fOOS| prevD|
+------+------+----------+-----+-----+------------------+---+----------+------------------+----------+
| 200|154057|2017-03-31|101.0| 49.0| 629.66| 1| 0|58.618803952304724|2017-03-31|
| 200|154057|2017-09-11|116.0| 76.0| 970.67| 1| 0| 63.3344597217295|2017-09-11|
| 200|154057|2017-11-10| 72.0| 94.0| 982.4599999999999| 1| 0|59.019226118850405|2017-11-10|
| 200|154057|2018-10-08|126.0| 34.0| 414.44| 1| 0| 55.16878756270067|2018-10-08|
| 200|154057|2016-08-03|210.0| 27.0| 307.43| 1| 0|23.530049844711286|2016-08-03|
| 200|154057|2016-09-03| 47.0| 20.0| 246.23| 1| 0|24.656378380329674|2016-09-03|
| 200|154057|2016-12-31| 66.0| 30.0| 386.5| 1| 1| 26.0423103074891|2017-01-09|
| 200|154057|2017-02-28| 22.0| 61.0| 743.2899999999998| 1| 0| 54.86808157636879|2017-02-28|
| 200|154057|2017-03-16| 79.0| 41.0|505.40999999999997| 1| 0| 49.79449369431623|2017-03-16|
When I use Scala, this code doesn't work:
dataFrame.join(dataNoPromoFOOS,
dataFrame("shopId") === dataNoPromoFOOS("shopId") &&
dataFrame("skuId") === dataNoPromoFOOS("skuId") &&
(dataFrame("date").lt(dataNoPromoFOOS("date"))) &&
(dataFrame("date").geq(dataNoPromoFOOS("prevD"))) ,
"left"
).select(dataFrame("*"),dataNoPromoFOOS("fOOS"),dataNoPromoFOOS("prevD"))
result:
+------+------+----------+-----+-----+------------------+---+----------+----+-----+
|shopId| skuId| date|stock|sales| salesRub| st|totalPromo|fOOS|prevD|
+------+------+----------+-----+-----+------------------+---+----------+----+-----+
| 200|154057|2016-09-24|288.0| 34.0| 398.66| 1| 0|null| null|
| 200|154057|2017-06-11| 40.0| 38.0| 455.32| 1| 1|null| null|
| 200|154057|2017-08-18| 83.0| 20.0|226.92000000000002| 1| 1|null| null|
| 200|154057|2018-07-19|849.0| 58.0| 713.12| 1| 0|null| null|
| 200|154057|2018-08-11|203.0| 52.0| 625.74| 1| 0|null| null|
| 200|154057|2016-09-01|120.0| 24.0| 300.0| 1| 1|null| null|
| 200|154057|2016-12-22| 62.0| 30.0| 378.54| 1| 0|null| null|
| 200|154057|2017-05-11|105.0| 49.0| 597.12| 1| 0|null| null|
| 200|154057|2016-12-28| 3.0| 36.0| 433.11| 1| 1|null| null|
Does somebody know why the SQL code works but the Scala code doesn't left join the table?
I think it's the date columns, but I don't understand how to find my error.
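No answer is shown above, but compare the two snippets: the SQL says a.Date > b.date and a.date <= b.prevD, while the Scala version uses .lt and .geq, which inverts both comparisons. If prevD is always on or after date (as in the sample rows), the inverted predicate can never be true, which would explain the all-null join columns. A tiny sketch of the two predicates, with hypothetical dates:

```python
from datetime import date

def sql_pred(a_date, b_date, b_prev):
    # What the SQL expresses: a.Date > b.date AND a.date <= b.prevD
    return a_date > b_date and a_date <= b_prev

def scala_pred(a_date, b_date, b_prev):
    # What the Scala expresses: lt -> a.date < b.date, geq -> a.date >= b.prevD
    return a_date < b_date and a_date >= b_prev

# Hypothetical row: a date that falls inside the (b.date, b.prevD] interval.
a_d, b_d, prev = date(2017, 1, 5), date(2017, 1, 1), date(2017, 1, 9)
print(sql_pred(a_d, b_d, prev), scala_pred(a_d, b_d, prev))
```

Whenever prev >= b_d, scala_pred demands a_d < b_d and simultaneously a_d >= prev, which is impossible; swapping .lt for .gt and .geq for .leq should make the Scala join match the SQL.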

SQL Join returns duplicate entries

Just going to start out saying that I am new to SQL and what I've written is based on tutorials (also, I am using SQL Server 2012). The issue I am having: I am trying to take data from 4 different tables and put it into 1 table to be accessed by Access, but I keep getting duplicate results whenever a value differs between the tables.
The tables look like
Cell1
|LotNum|SerialNum|PassFail|
| Lot11| 1234| 1|
| Lot11| 2345| 1|
| Lot11| 3456| 1|
| Lot11| 4567| 1|
Cell2
|LotNum|SerialNum|PassFail|
| Lot11| 1234| 1|
| Lot11| 2345| 1|
| Lot11| 3456| 1|
| Lot11| 4567| 1|
Cell3
|LotNum|SerialNum|PassFail|
| Lot11| 1234| 1|
| Lot11| 2345| 1|
| Lot11| 3456| 1|
| Lot11| 4567| 1|
Cell4
|LotNum|SerialNum|PassFail|
| Lot11| 1234| 1|
| Lot11| 2345| 1|
| Lot11| 3456| 1|
| Lot11| 4567| 0|
My code is
Alter Procedure [dbo].[spSingleData] (
    @LotNum varchar(50)
)
AS
Truncate Table dbo.SingleSheet
Begin
    Insert INTO dbo.SingleSheet (SerialNum, Cell1PF, Cell2PF, Cell3PF, Cell4PF)
    Select Distinct Cell1.SerialNum, Cell1.PassFail, Cell2.PassFail, Cell3.PassFail, Cell4.PassFail
    From dbo.Cell1
    Left Join Cell2 On Cell1.LotNum = Cell2.LotNum
    Left Join Cell3 On Cell1.LotNum = Cell3.LotNum
    Left Join Cell4 On Cell1.LotNum = Cell4.LotNum
    Where Cell1.LotNum = @LotNum
    Order by SerialNum
End
PassFail can be 0, 1, or NULL. However, as in the example above, if one of the PassFail values is different from the rest, the resulting table returns
|1234| 1| 1| 1| 0|
|1234| 1| 1| 1| 1|
|2345| 1| 1| 1| 0|
|2345| 1| 1| 1| 1|
|3456| 1| 1| 1| 0|
|3456| 1| 1| 1| 1|
|4567| 1| 1| 1| 0|
|4567| 1| 1| 1| 1|
Am I just using the wrong Join or should I be using something else?
Is this what you are trying to achieve? If so, you are missing a JOIN predicate on SerialNum, and you do not need the DISTINCT.
Sample Data:
IF OBJECT_ID('tempdb..#Cell1') IS NOT NULL
DROP TABLE #Cell1
CREATE TABLE #Cell1 (LotNum varchar(10),SerialNum int,PassFail bit)
INSERT INTO #Cell1
VALUES
('Lot11',1234,1),
('Lot11',2345,1),
('Lot11',3456,1),
('Lot11',4567,1)
IF OBJECT_ID('tempdb..#Cell2') IS NOT NULL
DROP TABLE #Cell2
CREATE TABLE #Cell2 (LotNum varchar(10),SerialNum int,PassFail bit)
INSERT INTO #Cell2
VALUES
('Lot11',1234,1),
('Lot11',2345,1),
('Lot11',3456,1),
('Lot11',4567,1)
IF OBJECT_ID('tempdb..#Cell3') IS NOT NULL
DROP TABLE #Cell3
CREATE TABLE #Cell3 (LotNum varchar(10),SerialNum int,PassFail bit)
INSERT INTO #Cell3
VALUES
('Lot11',1234,1),
('Lot11',2345,1),
('Lot11',3456,1),
('Lot11',4567,1)
IF OBJECT_ID('tempdb..#Cell4') IS NOT NULL
DROP TABLE #Cell4
CREATE TABLE #Cell4 (LotNum varchar(10),SerialNum int,PassFail bit)
INSERT INTO #Cell4
VALUES
('Lot11',1234,1),
('Lot11',2345,1),
('Lot11',3456,1),
('Lot11',4567,0)
Query:
SELECT #Cell1.SerialNum,
#Cell1.PassFail,
#Cell2.PassFail,
#Cell3.PassFail,
#Cell4.PassFail
FROM #Cell1
LEFT JOIN #Cell2 ON #Cell1.LotNum = #Cell2.LotNum AND #Cell1.SerialNum = #Cell2.SerialNum
LEFT JOIN #Cell3 ON #Cell1.LotNum = #Cell3.LotNum AND #Cell1.SerialNum = #Cell3.SerialNum
LEFT JOIN #Cell4 ON #Cell1.LotNum = #Cell4.LotNum AND #Cell1.SerialNum = #Cell4.SerialNum
ORDER BY SerialNum;
Results:
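To see why the SerialNum predicate matters, here is a toy SQLite reproduction using just Cell1 and Cell4 from the question: joining on LotNum alone pairs every serial with every serial in the lot, while adding SerialNum to the ON clause restores one row per serial.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE Cell1 (LotNum TEXT, SerialNum INT, PassFail INT);
CREATE TABLE Cell4 (LotNum TEXT, SerialNum INT, PassFail INT);
INSERT INTO Cell1 VALUES ('Lot11',1234,1),('Lot11',2345,1),('Lot11',3456,1),('Lot11',4567,1);
INSERT INTO Cell4 VALUES ('Lot11',1234,1),('Lot11',2345,1),('Lot11',3456,1),('Lot11',4567,0);
""")

# Joining on LotNum alone: every Cell1 row matches all 4 Cell4 rows.
n_lot_only = cur.execute("""
SELECT COUNT(*) FROM Cell1
LEFT JOIN Cell4 ON Cell1.LotNum = Cell4.LotNum
""").fetchone()[0]

# Adding SerialNum to the join: back to one row per serial.
n_with_serial = cur.execute("""
SELECT COUNT(*) FROM Cell1
LEFT JOIN Cell4 ON Cell1.LotNum = Cell4.LotNum
                AND Cell1.SerialNum = Cell4.SerialNum
""").fetchone()[0]

print(n_lot_only, n_with_serial)
```

The lot-only join produces 16 rows (4 x 4) where 4 are expected; with all four Cell tables joined this way the blow-up is 4 x 4 x 4 x 4, which DISTINCT then only partially collapses.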