SQL Join returns duplicate entries

Just going to start out saying that I am new to SQL and that what I've written is based on tutorials (also, I am using SQL Server 2012). The issue I am having is that I am trying to take data from 4 different tables and put it into 1 table to be read by Access. However, I keep getting duplicate rows whenever one value differs from the rest.
The tables look like this:
Cell1
|LotNum|SerialNum|PassFail|
| Lot11| 1234| 1|
| Lot11| 2345| 1|
| Lot11| 3456| 1|
| Lot11| 4567| 1|
Cell2
|LotNum|SerialNum|PassFail|
| Lot11| 1234| 1|
| Lot11| 2345| 1|
| Lot11| 3456| 1|
| Lot11| 4567| 1|
Cell3
|LotNum|SerialNum|PassFail|
| Lot11| 1234| 1|
| Lot11| 2345| 1|
| Lot11| 3456| 1|
| Lot11| 4567| 1|
Cell4
|LotNum|SerialNum|PassFail|
| Lot11| 1234| 1|
| Lot11| 2345| 1|
| Lot11| 3456| 1|
| Lot11| 4567| 0|
My code is:
Alter Procedure [dbo].[spSingleData](
    @LotNum varchar(50)
)
AS
Truncate Table dbo.SingleSheet
Begin
    Insert Into dbo.SingleSheet (SerialNum, Cell1PF, Cell2PF, Cell3PF, Cell4PF)
    Select Distinct Cell1.SerialNum, Cell1.PassFail, Cell2.PassFail, Cell3.PassFail, Cell4.PassFail
    From dbo.Cell1
    Left Join Cell2 On Cell1.LotNum = Cell2.LotNum
    Left Join Cell3 On Cell1.LotNum = Cell3.LotNum
    Left Join Cell4 On Cell1.LotNum = Cell4.LotNum
    Where Cell1.LotNum = @LotNum
    Order by SerialNum
End
PassFail can be 0, 1, or NULL. However, as in the example above, if one of the PassFail values differs from the rest, the resulting table contains
|1234| 1| 1| 1| 0|
|1234| 1| 1| 1| 1|
|2345| 1| 1| 1| 0|
|2345| 1| 1| 1| 1|
|3456| 1| 1| 1| 0|
|3456| 1| 1| 1| 1|
|4567| 1| 1| 1| 0|
|4567| 1| 1| 1| 1|
Am I just using the wrong join, or should I be using something else?

Is this what you are trying to achieve?
If so, you are missing a JOIN predicate on SerialNum, and you do not need the DISTINCT.
Sample Data:
IF OBJECT_ID('tempdb..#Cell1') IS NOT NULL
DROP TABLE #Cell1
CREATE TABLE #Cell1 (LotNum varchar(10),SerialNum int,PassFail bit)
INSERT INTO #Cell1
VALUES
('Lot11',1234,1),
('Lot11',2345,1),
('Lot11',3456,1),
('Lot11',4567,1)
IF OBJECT_ID('tempdb..#Cell2') IS NOT NULL
DROP TABLE #Cell2
CREATE TABLE #Cell2 (LotNum varchar(10),SerialNum int,PassFail bit)
INSERT INTO #Cell2
VALUES
('Lot11',1234,1),
('Lot11',2345,1),
('Lot11',3456,1),
('Lot11',4567,1)
IF OBJECT_ID('tempdb..#Cell3') IS NOT NULL
DROP TABLE #Cell3
CREATE TABLE #Cell3 (LotNum varchar(10),SerialNum int,PassFail bit)
INSERT INTO #Cell3
VALUES
('Lot11',1234,1),
('Lot11',2345,1),
('Lot11',3456,1),
('Lot11',4567,1)
IF OBJECT_ID('tempdb..#Cell4') IS NOT NULL
DROP TABLE #Cell4
CREATE TABLE #Cell4 (LotNum varchar(10),SerialNum int,PassFail bit)
INSERT INTO #Cell4
VALUES
('Lot11',1234,1),
('Lot11',2345,1),
('Lot11',3456,1),
('Lot11',4567,0)
Query:
SELECT #Cell1.SerialNum,
       #Cell1.PassFail,
       #Cell2.PassFail,
       #Cell3.PassFail,
       #Cell4.PassFail
FROM #Cell1
LEFT JOIN #Cell2 ON #Cell1.LotNum = #Cell2.LotNum AND #Cell1.SerialNum = #Cell2.SerialNum
LEFT JOIN #Cell3 ON #Cell1.LotNum = #Cell3.LotNum AND #Cell1.SerialNum = #Cell3.SerialNum
LEFT JOIN #Cell4 ON #Cell1.LotNum = #Cell4.LotNum AND #Cell1.SerialNum = #Cell4.SerialNum
ORDER BY SerialNum;
Results:
|SerialNum|PassFail|PassFail|PassFail|PassFail|
|     1234|       1|       1|       1|       1|
|     2345|       1|       1|       1|       1|
|     3456|       1|       1|       1|       1|
|     4567|       1|       1|       1|       0|
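Folded back into the stored procedure from the question, the fix looks like this (a sketch, assuming the columns are named PassFail as in the sample tables, and keeping the rest of the procedure as posted):
Alter Procedure [dbo].[spSingleData](
    @LotNum varchar(50)
)
AS
Begin
    -- clear the staging table, then repopulate it for the requested lot
    Truncate Table dbo.SingleSheet
    Insert Into dbo.SingleSheet (SerialNum, Cell1PF, Cell2PF, Cell3PF, Cell4PF)
    -- no DISTINCT needed once each Cell1 row matches at most one row per table
    Select Cell1.SerialNum, Cell1.PassFail, Cell2.PassFail, Cell3.PassFail, Cell4.PassFail
    From dbo.Cell1
    Left Join Cell2 On Cell1.LotNum = Cell2.LotNum And Cell1.SerialNum = Cell2.SerialNum
    Left Join Cell3 On Cell1.LotNum = Cell3.LotNum And Cell1.SerialNum = Cell3.SerialNum
    Left Join Cell4 On Cell1.LotNum = Cell4.LotNum And Cell1.SerialNum = Cell4.SerialNum
    Where Cell1.LotNum = @LotNum
    Order by Cell1.SerialNum
End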

Number of foods that scored "true" in being good, grouped by culture

Okay, so I've been driving myself crazy trying to get this to display in SQL. I have a table that stores types of food, the culture they come from, a score, and a boolean value for whether or not they are good. I want to display a count of how many "goods" each culture racks up. Here's the table (don't ask about the database name):
So I've tried:
SELECT count(good = 1), culture FROM animals_db.foods group by culture;
Or
SELECT count(good = true), culture FROM animals_db.foods group by culture;
But neither presents the correct results; the count seems to include every row that has any "good" value at all (1 or 0).
How do I get the data I want?
Instead of COUNT, use SUM:
SELECT sum(good), culture FROM animals_db.foods group by culture; -- assumes the good column is integer-typed, holding 1 for good and 0 otherwise
The other way is to use COUNT with a CASE expression, which only counts rows where the CASE yields a non-NULL value:
select count(case when good=1 then 1 end), culture from animals_db.foods group by culture;
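Since a boolean comparison in MySQL evaluates to 0, 1, or NULL, you can also sum the comparison itself; a minimal sketch against the same table (this assumes MySQL, where SUM simply ignores the NULLs produced when good is NULL):
SELECT culture, SUM(good = 1) AS good_count
FROM animals_db.foods
GROUP BY culture;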
If the purpose is to count the number of good=1 for each culture, this works:
select culture,
count(*)
from foods
where good=1
group by 1
order by 1;
Result:
culture |count(*)|
--------+--------+
        |       1|
American|       1|
Chinese |       1|
European|       1|
Italian |       2|
The reason your first query doesn't return the expected result can be explained as follows:
select culture,
good=1 as is_good
from foods
order by 1;
You actually get:
culture |is_good|
--------+-------+
        |      1|
American|      0|
American|      1|
Chinese |      1|
European|      1|
French  |      0|
French  |      0|
German  |      0|
Italian |      1|
Italian |      1|
After applying GROUP BY culture, COUNT(good=1) actually counts the number of non-NULL values of the expression good=1, not the number of rows where it is true. For example:
select culture,
count(good=0) as c0,
count(good=1) as c1,
count(good=2) as c2,
count(good) as c3,
count(null) as c4
from foods
group by culture
order by culture;
Outcome:
culture |c0|c1|c2|c3|c4|
--------+--+--+--+--+--+
        | 1| 1| 1| 1| 0|
American| 2| 2| 2| 2| 0|
Chinese | 1| 1| 1| 1| 0|
European| 1| 1| 1| 1| 0|
French  | 2| 2| 2| 2| 0|
German  | 1| 1| 1| 1| 0|
Italian | 2| 2| 2| 2| 0|
Update: this related question covers the same ground: Is it possible to specify condition in Count()?

SQL query to find an output table

I have three dimension tables and a fact table, and I need to write a query that joins the dimension columns to the fact table to find the top 10 ATMs with the most transactions in the 'Inactive' state. I tried the query below with a Cartesian join, but I don't know if this is the right way to join the tables (a sketch follows the table samples below).
select a.atm_number,a.atm_manufacturer,b.location,count(c.trans_id) as total_transaction_count,count(c.atm_status) as inactive_count
from dimen_atm a,dimen_location b,fact_atm_trans c
where a.atm_id = c.atm_id and b.location = c.location
order by inactive_count desc limit 10;
dimen_card_type
+------------+---------+
|card_type_id|card_type|
+------------+---------+
|           1|   CIRRUS|
|           2|  Dankort|
+------------+---------+
dimen_atm
+------+----------+----------------+---------------+
|atm_id|atm_number|atm_manufacturer|atm_location_id|
+------+----------+----------------+---------------+
| 1| 1| NCR| 16|
| 2| 2| NCR| 64|
+------+----------+----------------+---------------+
dimen_location
+-----------+--------------------+----------------+-------------+-------+------+------+
|location_id| location| streetname|street_number|zipcode| lat| lon|
+-----------+--------------------+----------------+-------------+-------+------+------+
| 1|Intern København|Rådhuspladsen| 75| 1550|55.676|12.571|
| 2| København| Regnbuepladsen| 5| 1550|55.676|12.571|
+-----------+--------------------+----------------+-------------+-------+------+------+
fact_atm_trans
+--------+------+--------------+-------+------------+----------+--------+----------+------------------+------------+------------+-------+----------+----------+------------+-------------------+
|trans_id|atm_id|weather_loc_id|date_id|card_type_id|atm_status|currency| service|transaction_amount|message_code|message_text|rain_3h|clouds_all|weather_id|weather_main|weather_description|
+--------+------+--------------+-------+------------+----------+--------+----------+------------------+------------+------------+-------+----------+----------+------------+-------------------+
| 1| 1| 16| 5229| 3| Active| DKK|Withdrawal| 5980| null| null| 0.0| 80| 803| Clouds| broken clouds|
| 2| 1| 16| 4090| 10| Active| DKK|Withdrawal| 3992| null| null| 0.0| 32| 802| Clouds| scattered clouds|
+--------+------+--------------+-------+------------+----------+--------+----------+------------------+------------+------------+-------+----------+----------+------------+-------------------+
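For what it's worth, here is a minimal sketch of the explicit-JOIN version (assuming dimen_atm.atm_location_id references dimen_location.location_id, that the status value is spelled 'Inactive', and MySQL-style LIMIT as in the attempt above). Note that count(c.atm_status) in the original counts every non-NULL status rather than just the inactive ones, and the original query is also missing a GROUP BY:
SELECT a.atm_number,
       a.atm_manufacturer,
       l.location,
       COUNT(*) AS total_transaction_count,
       SUM(CASE WHEN t.atm_status = 'Inactive' THEN 1 ELSE 0 END) AS inactive_count
FROM fact_atm_trans t
JOIN dimen_atm a ON a.atm_id = t.atm_id
JOIN dimen_location l ON l.location_id = a.atm_location_id
GROUP BY a.atm_number, a.atm_manufacturer, l.location
ORDER BY inactive_count DESC
LIMIT 10;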

Spark SQL: Is there a way to distinguish columns with same name?

I have a CSV whose header contains duplicate column names.
I want to process it with Spark using only SQL and be able to refer to these columns unambiguously.
Ex.:
id name age height name
1 Alex 23 1.70
2 Joseph 24 1.89
I want to get only the first name column, using only Spark SQL.
As mentioned in the comments, I think the less error-prone method would be to change the schema of the input data.
Yet, if you are looking for a quick workaround, you can simply index the duplicated column names.
For instance, let's create a dataframe with three id columns.
val df = spark.range(3)
.select('id * 2 as "id", 'id * 3 as "x", 'id, 'id * 4 as "y", 'id)
df.show
+---+---+---+---+---+
| id| x| id| y| id|
+---+---+---+---+---+
| 0| 0| 0| 0| 0|
| 2| 3| 1| 4| 1|
| 4| 6| 2| 8| 2|
+---+---+---+---+---+
Then I can use toDF to set the new column names. Let's assume we know that only id is duplicated; if we don't, adding the extra logic to figure out which columns are duplicated would not be very difficult.
var i = -1
val names = df.columns.map( n =>
  if (n == "id") {
    i += 1
    s"id_$i"
  } else n )
val new_df = df.toDF(names : _*)
new_df.show
+----+---+----+---+----+
|id_0| x|id_1| y|id_2|
+----+---+----+---+----+
| 0| 0| 0| 0| 0|
| 2| 3| 1| 4| 1|
| 4| 6| 2| 8| 2|
+----+---+----+---+----+
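Once renamed, the columns can be referenced unambiguously from plain Spark SQL after registering the frame as a view, e.g. new_df.createOrReplaceTempView("people") (the view name people is just for illustration):
-- 'people' is the hypothetical temp view registered from new_df
SELECT id_0 FROM people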

Add aggregated columns to pivot without join

Considering the table:
df=sc.parallelize([(1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2)]).toDF(['id', 'error', 'timestamp'])
df.show()
+---+-----+---------+
| id|error|timestamp|
+---+-----+---------+
| 1| 1| 1|
| 5| 0| 2|
| 27| 1| 1|
| 1| 0| 3|
| 5| 1| 1|
| 1| 0| 2|
+---+-----+---------+
I would like to pivot on the timestamp column while keeping some other aggregated information from the original table. The result I am interested in can be achieved by
df1=df.groupBy('id').agg(sf.sum('error').alias('Ne'),sf.count('*').alias('cnt'))
df2=df.groupBy('id').pivot('timestamp').agg(sf.count('*')).fillna(0)
df1.join(df2, on='id').filter(sf.col('cnt')>1).show()
with the resulting table:
+---+---+---+---+---+---+
| id| Ne|cnt| 1| 2| 3|
+---+---+---+---+---+---+
| 5| 1| 2| 1| 1| 0|
| 1| 1| 3| 1| 1| 1|
+---+---+---+---+---+---+
However, there are at least two issues with the mentioned solution:
I am filtering by cnt at the end of the script. If I could do this at the beginning, I would avoid almost all of the processing, because a large portion of the data is removed by this filter. Is there any way to do this, other than the collect and isin methods?
I am doing groupBy on id twice: first to aggregate the columns I need in the results, and a second time to get the pivot columns. Finally, I need a join to merge these columns. I feel I must be missing something, because it should be possible to do this with just one groupBy and without a join, but I cannot figure out how.
I think you cannot get around the join, because the pivot needs the timestamp values while the first grouping must not consider them. So to create the Ne and cnt values you have to group the dataframe by id only, which loses the timestamp; if you want to preserve those values in columns, you have to do the pivot separately, as you did, and join it back.
The only improvement that can be made is to move the filter into the creation of df1. As you said, this should already improve performance, since df1 will be much smaller after filtering on your real data.
from pyspark.sql.functions import *
df=sc.parallelize([(1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2)]).toDF(['id', 'error', 'timestamp'])
df1=df.groupBy('id').agg(sum('error').alias('Ne'),count('*').alias('cnt')).filter(col('cnt')>1)
df2=df.groupBy('id').pivot('timestamp').agg(count('*')).fillna(0)
df1.join(df2, on='id').show()
Output:
+---+---+---+---+---+---+
| id| Ne|cnt| 1| 2| 3|
+---+---+---+---+---+---+
| 5| 1| 2| 1| 1| 0|
| 1| 1| 3| 1| 1| 1|
+---+---+---+---+---+---+
Actually, it is indeed possible to avoid the join by using window functions:
from pyspark.sql import Window
from pyspark.sql import functions as sf
w1 = Window.partitionBy('id')
w2 = Window.partitionBy('id', 'timestamp')
df.select('id', 'timestamp',
          sf.sum('error').over(w1).alias('Ne'),
          sf.count('*').over(w1).alias('cnt'),
          sf.count('*').over(w2).alias('cnt_2')
         ).filter(sf.col('cnt') > 1) \
         .groupBy('id', 'Ne', 'cnt').pivot('timestamp').agg(sf.first('cnt_2')).fillna(0).show()
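If you prefer to stay in SQL, the same window-plus-pivot idea can be written with Spark SQL's PIVOT clause; a sketch, assuming Spark 2.4+ (where SQL PIVOT was added) and that df has been registered as a temp view whose name events is just for illustration. Note that, unlike fillna(0), missing id/timestamp combinations come out as NULL here:
-- register the view first: df.createOrReplaceTempView('events')
WITH w AS (
  SELECT id, timestamp,
         SUM(error) OVER (PARTITION BY id)            AS Ne,
         COUNT(*)   OVER (PARTITION BY id)            AS cnt,
         COUNT(*)   OVER (PARTITION BY id, timestamp) AS cnt_2
  FROM events
)
SELECT * FROM (SELECT id, Ne, cnt, timestamp, cnt_2 FROM w WHERE cnt > 1)
PIVOT (FIRST(cnt_2) FOR timestamp IN (1, 2, 3));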

Why does the SQL code work when the equivalent Scala code doesn't? (left join with several date conditions)

I have sql code which works perfectly:
val sql ="""
select a.*,
b.fOOS,
b.prevD
from dataFrame as a
left join dataNoPromoFOOS as b on
a.shopId = b.shopId and a.skuId = b.skuId and
a.Date > b.date and a.date <= b.prevD
"""
Result:
+------+------+----------+-----+-----+------------------+---+----------+------------------+----------+
|shopId| skuId| date|stock|sales| salesRub| st|totalPromo| fOOS| prevD|
+------+------+----------+-----+-----+------------------+---+----------+------------------+----------+
| 200|154057|2017-03-31|101.0| 49.0| 629.66| 1| 0|58.618803952304724|2017-03-31|
| 200|154057|2017-09-11|116.0| 76.0| 970.67| 1| 0| 63.3344597217295|2017-09-11|
| 200|154057|2017-11-10| 72.0| 94.0| 982.4599999999999| 1| 0|59.019226118850405|2017-11-10|
| 200|154057|2018-10-08|126.0| 34.0| 414.44| 1| 0| 55.16878756270067|2018-10-08|
| 200|154057|2016-08-03|210.0| 27.0| 307.43| 1| 0|23.530049844711286|2016-08-03|
| 200|154057|2016-09-03| 47.0| 20.0| 246.23| 1| 0|24.656378380329674|2016-09-03|
| 200|154057|2016-12-31| 66.0| 30.0| 386.5| 1| 1| 26.0423103074891|2017-01-09|
| 200|154057|2017-02-28| 22.0| 61.0| 743.2899999999998| 1| 0| 54.86808157636879|2017-02-28|
| 200|154057|2017-03-16| 79.0| 41.0|505.40999999999997| 1| 0| 49.79449369431623|2017-03-16|
When I use Scala, this code doesn't work:
dataFrame.join(dataNoPromoFOOS,
dataFrame("shopId") === dataNoPromoFOOS("shopId") &&
dataFrame("skuId") === dataNoPromoFOOS("skuId") &&
(dataFrame("date").lt(dataNoPromoFOOS("date"))) &&
(dataFrame("date").geq(dataNoPromoFOOS("prevD"))) ,
"left"
).select(dataFrame("*"),dataNoPromoFOOS("fOOS"),dataNoPromoFOOS("prevD"))
Result:
+------+------+----------+-----+-----+------------------+---+----------+----+-----+
|shopId| skuId| date|stock|sales| salesRub| st|totalPromo|fOOS|prevD|
+------+------+----------+-----+-----+------------------+---+----------+----+-----+
| 200|154057|2016-09-24|288.0| 34.0| 398.66| 1| 0|null| null|
| 200|154057|2017-06-11| 40.0| 38.0| 455.32| 1| 1|null| null|
| 200|154057|2017-08-18| 83.0| 20.0|226.92000000000002| 1| 1|null| null|
| 200|154057|2018-07-19|849.0| 58.0| 713.12| 1| 0|null| null|
| 200|154057|2018-08-11|203.0| 52.0| 625.74| 1| 0|null| null|
| 200|154057|2016-09-01|120.0| 24.0| 300.0| 1| 1|null| null|
| 200|154057|2016-12-22| 62.0| 30.0| 378.54| 1| 0|null| null|
| 200|154057|2017-05-11|105.0| 49.0| 597.12| 1| 0|null| null|
| 200|154057|2016-12-28| 3.0| 36.0| 433.11| 1| 1|null| null|
Does somebody know why the SQL code works but the Scala code doesn't join to the left table? I think it's the date columns, but I don't understand how to find my error.
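For what it's worth, the two versions do not express the same predicate. The SQL joins on a.Date > b.date and a.date <= b.prevD, while the Scala version uses dataFrame("date").lt(dataNoPromoFOOS("date")) (strictly less than) and dataFrame("date").geq(dataNoPromoFOOS("prevD")) (greater than or equal), i.e. both comparisons are reversed. Rewriting them as dataFrame("date").gt(dataNoPromoFOOS("date")) and dataFrame("date").leq(dataNoPromoFOOS("prevD")) should make the DataFrame join behave like the SQL.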