SQL Inner Join returns duplicates

I have the following 2 tables:
tab1 with 37146 rows
week_ref with 730 rows
All I want to do is join those tables on year and week so that the first week day and last week day appear next to the columns of the first table.
Below is my query:
SELECT tab1.year
,tab1.week
,tab1.col3
,tab1.col4
,tab1.col5
,tab1.col6
,tab1.total
,tab1.col7
,week_ref.first_week_day
,week_ref.last_week_day
FROM dtsetname.tab1
JOIN spyros.week_ref ON (week_ref.year = tab1.year AND week_ref.week = tab1.week)
The query returns the 2 extra columns, but the result has 255535 rows, so it is full of duplicates. I thought I understood how joins work, but apparently not anymore... Any help on this? The correct output should have only 37146 rows, since all I want is to add 2 extra columns.
Thanks

Below is for BigQuery Standard SQL
Before joining, you just need to dedupe the data in the week_ref table, as in the example below
#standardSQL
SELECT tab1.year
,tab1.week
,tab1.col3
,tab1.col4
,tab1.col5
,tab1.col6
,tab1.total
,tab1.col7
,week_ref.first_week_day
,week_ref.last_week_day
FROM dtsetname.tab1 tab1
JOIN (SELECT DISTINCT year, week, first_week_day, last_week_day FROM spyros.week_ref) week_ref
ON (week_ref.year = tab1.year AND week_ref.week = tab1.week)
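A quick way to confirm that this is the cause is to count rows per (year, week) key in week_ref; below is a minimal diagnostic sketch using the table and column names from the question:
#standardSQL
-- any (year, week) key with more than one row in week_ref
-- multiplies the matching tab1 rows in the join
SELECT year, week, COUNT(*) AS rows_per_key
FROM spyros.week_ref
GROUP BY year, week
HAVING COUNT(*) > 1
ORDER BY rows_per_key DESC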

The problem is that your week_ref table has a row for each day rather than one per week.
You can select just one day. If you have a weekday number or name (which I'm guessing you do), that can be used:
FROM dtsetname.tab1 JOIN
spyros.week_ref wr
ON wr.year = tab1.year AND
wr.week = tab1.week AND
wr.dayname = 'Monday'
If such a column is not available, then you can either extract() the information or aggregate:
FROM dtsetname.tab1 JOIN
(SELECT ANY_VALUE(wr).*
FROM spyros.week_ref wr
GROUP BY wr.year, wr.week
) wr
ON wr.year = tab1.year AND
wr.week = tab1.week
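If you go the extract() route instead, here is a minimal sketch; it assumes week_ref has one row per calendar day with a date column, named day_date here purely as a placeholder:
FROM dtsetname.tab1 JOIN
spyros.week_ref wr
ON wr.year = tab1.year AND
wr.week = tab1.week AND
EXTRACT(DAYOFWEEK FROM wr.day_date) = 2 -- 2 = Monday in BigQuery; day_date is a hypothetical column name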

First, I hope that year+week and year+day are primary keys in the corresponding tables; otherwise the problem is there.
If so, here is another hint to check:
I notice that you join them by year and week; however, in the first table I see many 52s in the week column, and in the second one 0 as a value.
There are only 52 weeks in a year, plus a day, so is it possible you need to join by
week_ref.year = tab1.year AND week_ref.week = tab1.week+1
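A quick sanity check, a sketch using the table names from the question, is to compare the week ranges on both sides (0..51 versus 1..52 would point to exactly this off-by-one):
SELECT 'tab1' AS src, MIN(week) AS min_week, MAX(week) AS max_week FROM dtsetname.tab1
UNION ALL
SELECT 'week_ref', MIN(week), MAX(week) FROM spyros.week_ref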

I think the solutions mentioned by others should work if you are looking to join to your reference table to get week start/end dates.
However, if your tab1 table has reliable values in the week and year columns (and if I understand your data correctly), you can avoid the join altogether and get your desired results:
select
year
,week
,col3
,col4
,col5
,col6
,total
,col7
,date_sub(weekdate, interval IF(EXTRACT(DAYOFWEEK FROM weekdate) = 1, 6, EXTRACT(DAYOFWEEK FROM weekdate) - 1) day) as first_week_day
,date_add(date_sub(weekdate, interval IF(EXTRACT(DAYOFWEEK FROM weekdate) = 1, 6, EXTRACT(DAYOFWEEK FROM weekdate) - 1) day), interval 6 day) as last_week_day
from (
select
tab1.year
,tab1.week
,tab1.col3
,tab1.col4
,tab1.col5
,tab1.col6
,tab1.total
,tab1.col7
,date_add(date(cast(tab1.year as int64), 1, 1), interval cast(tab1.week as int64) week) as weekdate
from `mydataset.tab1` as tab1
)
Hope it helps :)

Related

How to Average Number of Chats per Day on LEFT JOIN table in Snowflake SQL?

In Snowflake SQL, how do I average the number of video chats per day, using a field from a table I LEFT JOINed to the rest of the query?
I'm thinking I have to use a SUM function to total the number of video chats, aggregate the count for each date, and then divide by 30 days (the rolling date range I specified throughout the query).
Any help would be appreciated as deadlines are approaching. Thank you.
SELECT DISTINCT
t1."pid",
IFNULL(t2."VideoChats",0),
t3."SFUser",
t3."TotalProviders",
t4."dimaccount.practice_specialty",
t5."Account: CMRR",
t6."CreatedDate",
t7."stg_sf_case.Date_Time_Resolved__c",
t8."stg_sf_case.Closed_Date",
t9."pid"
FROM (SELECT "pid"
FROM "EDW_PROD"."PUBLIC"."STG_MYSQL_PROVIDERMODULES" AS a
WHERE a."active"
AND a."status" = 'PURCHASED'
AND a."module_id" = '14'
GROUP BY a."pid"
) t1
LEFT JOIN (SELECT "started_at",
"pid",
COUNT(*) AS "VideoChats"
FROM "EDW_PROD"."PUBLIC"."STG_MYSQL_VIDEOCHATROOM" AS b
LEFT JOIN "EDW_PROD"."PUBLIC"."DIMACCOUNT" AS dimaccount
ON b."pid" = dimaccount."PID"
WHERE b."started_at" >= DATE_TRUNC('month', CURRENT_DATE())
AND b."started_at" < DATEADD('month', 1, DATE_TRUNC('month', CURRENT_DATE()))
AND dimaccount."CurrentRow" = 'Y'
GROUP BY b."pid", b."started_at"
) t2 ON t1."pid" = t2."pid"
For a rolling average you probably want to use a window function. Something along these lines.
SELECT AVG(VideoChats) over (partition by pid order by started_at rows between 30 preceding and current row) as AvgVideoChats
--I saw a post about AVG not allowing a sliding window, so you may have to do this instead
SELECT SUM(VideoChats) over (partition by pid order by started_at rows between 30 preceding and current row) / 30. as AvgVideoChats
You may need to do this in a wrapper around your t2 query and adjust your date filters so that there are values available for averaging, but I'm not quite clear enough on what your query is doing with dates, or what results you are looking for, to be sure.
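Not your exact query, but a minimal sketch of that wrapper, keeping the same tables and quoted column names; the widened date filter is an assumption so that rows exist before the current month for the window to average over:
LEFT JOIN (SELECT "pid",
"started_at",
AVG("VideoChats") OVER (PARTITION BY "pid" ORDER BY "started_at" ROWS BETWEEN 30 PRECEDING AND CURRENT ROW) AS "AvgVideoChats"
FROM (SELECT b."pid",
b."started_at",
COUNT(*) AS "VideoChats"
FROM "EDW_PROD"."PUBLIC"."STG_MYSQL_VIDEOCHATROOM" AS b
LEFT JOIN "EDW_PROD"."PUBLIC"."DIMACCOUNT" AS dimaccount
ON b."pid" = dimaccount."PID"
-- assumption: filter widened by ~30 days so the window has prior rows to average over
WHERE b."started_at" >= DATEADD('day', -30, DATE_TRUNC('month', CURRENT_DATE()))
AND dimaccount."CurrentRow" = 'Y'
GROUP BY b."pid", b."started_at"
) AS daily
) t2 ON t1."pid" = t2."pid"
The outer query would then select t2."AvgVideoChats" instead of t2."VideoChats".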

Calculating business days in Teradata

I need help in business days calculation.
I have two tables:
1) One table ACTUAL_TABLE containing order date and contact date with timestamp datatypes.
2) The second table BUSINESS_DATES has each of the calendar dates listed and has a flag to indicate weekend days.
Using these two tables, I need to ensure that business days, and not calendar days (which is the current logic), are calculated between these two fields.
My thought process was to first get a range of dates by comparing ORDER_DATE with the TABLE_DATE field, and then do a similar comparison of CONTACT_DATE to the TABLE_DATE field. This would get me a range from the BUSINESS_DATES table which I can then use to calculate COUNT(*) of days and SUM(Holiday_WKND_Flag), making the result look like:
Order# | Count(*) As DAYS | SUM(WEEKEND DATES)
100 | 25 | 8
However, this only works when I use a specific order number; I can't bring in all order numbers with a subquery.
My Query:
SELECT SUM(Holiday_WKND_Flag), COUNT(*) FROM
(
SELECT
* FROM
BUSINESS_DATES
WHERE BUSINESS.Business BETWEEN (SELECT ORDER_DATE FROM ACTUAL_TABLE
WHERE ORDER# = '100'
)
AND
(SELECT CONTACT_DATE FROM ACTUAL_TABLE
WHERE ORDER# = '100'
)
) TEMP
Uploading the table structure for your reference.
SELECT ORDER#, SUM(Holiday_WKND_Flag), COUNT(*)
FROM business_dates bd
INNER JOIN actual_table at ON bd.table_date BETWEEN at.order_date AND at.contact_date
GROUP BY ORDER#
Instead of joining on a BETWEEN (which always results in a bad Product Join) followed by a COUNT, you'd better assign a business day number to each date (in the best case this is calculated only once and added as a column to your calendar table). Then it's two Equi-Joins and no aggregation needed:
WITH cte AS
(
SELECT
Cast(table_date AS DATE) AS table_date,
-- assign a consecutive number to each business day, i.e. not increased during weekends, etc.
Sum(CASE WHEN Holiday_WKND_Flag = 1 THEN 0 ELSE 1 end)
Over (ORDER BY table_date
ROWS Unbounded Preceding) AS business_day_nbr
FROM business_dates
)
SELECT ORDER#,
Cast(t.contact_date AS DATE) - Cast(t.order_date AS DATE) AS #_of_days,
b2.business_day_nbr - b1.business_day_nbr AS #_of_business_days
FROM actual_table AS t
JOIN cte AS b1
ON Cast(t.order_date AS DATE) = b1.table_date
JOIN cte AS b2
ON Cast(t.contact_date AS DATE) = b2.table_date
Btw, why are table_date and order_date timestamp instead of a date?
Porting from Oracle?
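If you want to calculate that business-day number only once and store it, as suggested, here is a hedged sketch using a CTAS (the target table name business_dates_nbr is made up):
CREATE TABLE business_dates_nbr AS
(
SELECT
Cast(table_date AS DATE) AS table_date,
Holiday_WKND_Flag,
-- running business-day number, not increased on weekends/holidays
Sum(CASE WHEN Holiday_WKND_Flag = 1 THEN 0 ELSE 1 END)
Over (ORDER BY table_date
ROWS Unbounded Preceding) AS business_day_nbr
FROM business_dates
) WITH DATA;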
You can use this query. Hope it helps
select order#,
order_date,
contact_date,
(select count(1)
from business_dates_table
where table_date between a.order_date and a.contact_date
and holiday_wknd_flag = 0
) business_days
from actual_table a

Join two tables with not exactly matching dates in sql

I have a table with dates that are about one month earlier than the dates in the other table.
For example,
one table reports 1st quarter end on March 31st
and the other reports 1st quarter end on February 28th (or 29th)
but it would be perfectly fine to join them together by date even though the two dates aren't exactly the same.
Any suggestions, please.
Thanks
You can join on DateDiff(dd, Date1, Date2) < x
Or to get more exact
select endOfMonth.*, begOfMonth.*
from endOfMonth join begOfMonth
on DATEADD (dd , 1 , endOfMonth.date ) = begOfMonth.Date
Your ON clause could look at year and quarter for a match:
ON YEAR(TABLE1.[1st quarter end]) = YEAR(TABLE2.[1st quarter end])
AND QUARTER(TABLE1.[1st quarter end]) = QUARTER(TABLE2.[1st quarter end])
select val1 From Table1 T1 inner Join Table2 t2 on MONTH(T1.date1) = MONTH(t2.date1)
And YEAR(T1.date1) = YEAR(t2.date1)
One approach would be to use the DATEPART() function that returns the quarter for any given date. Then you would be able to join on the returned quarter.
Sample SQL:
SELECT *
FROM
(
SELECT DATEPART(QUARTER,date_column) AS t1_quarter
FROM table1
) AS t1
INNER JOIN
(
SELECT DATEPART(QUARTER,date_column) AS t2_quarter
FROM table2
) AS t2
ON t1.t1_quarter = t2.t2_quarter;
Put any other fields as you require (ID fields most probably) in the internal SELECTS.
If I rightly understood you, and you have the same number of columns in those tables, then you should use UNION in your SQL query. See more information about UNION here: http://en.wikipedia.org/wiki/Set_operations_%28SQL%29.

Calculate closest working day in Postgres

I need to schedule some items in a postgres query based on a requested delivery date for an order. So for example, the order has a requested delivery on a Monday (20120319 for example), and the order needs to be prepared on the prior working day (20120316).
Thoughts on the most direct method? I'm open to adding a dates table. I'm thinking there's got to be a better way than a long set of case statements using:
SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40');
This gets you the previous business day.
SELECT
CASE (EXTRACT(ISODOW FROM current_date)::integer) % 7
WHEN 1 THEN current_date-3
WHEN 0 THEN current_date-2
ELSE current_date-1
END AS previous_business_day
To have the previous work day:
select max(s.a) as work_day
from (
select s.a::date
from generate_series('2012-01-02'::date, '2050-12-31', '1 day') s(a)
where extract(dow from s.a) between 1 and 5
except
select holiday_date
from holiday_table
) s
where s.a < '2012-03-19'
;
If you want the next work day just invert the query.
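For example, the inverted version for the next work day, a direct mirror of the query above (min() and > instead of max() and <):
select min(s.a) as work_day
from (
select s.a::date
from generate_series('2012-01-02'::date, '2050-12-31', '1 day') s(a)
where extract(dow from s.a) between 1 and 5
except
select holiday_date
from holiday_table
) s
where s.a > '2012-03-19'
;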
SELECT y.d AS prep_day
FROM (
SELECT generate_series(dday - 8, dday - 1, interval '1d')::date AS d
FROM (SELECT '2012-03-19'::date AS dday) x
) y
LEFT JOIN holiday h USING (d)
WHERE h.d IS NULL
AND extract(isodow from y.d) < 6
ORDER BY y.d DESC
LIMIT 1;
It should be faster to generate only as many days as necessary. I generate one week prior to the delivery. That should cover all possibilities.
isodow as extract parameter is more convenient than dow to test for workdays.
min() / max(), ORDER BY / LIMIT 1, that's a matter of taste with the few rows in my query.
To get several candidate days in descending order, not just the top pick, increase the LIMIT.
I put the dday (delivery day) in a subquery so you only have to input it once. You can enter any date or timestamp literal. It is cast to date either way.
CREATE TABLE Holidays (Holiday, PrecedingBusinessDay) AS VALUES
('2012-12-25'::DATE, '2012-12-24'::DATE),
('2012-12-26'::DATE, '2012-12-24'::DATE);
SELECT Day, COALESCE(PrecedingBusinessDay, PrecedingMondayToFriday)
FROM
(SELECT Day, Day - CASE DATE_PART('DOW', Day)
WHEN 0 THEN 2
WHEN 1 THEN 3
ELSE 1
END AS PrecedingMondayToFriday
FROM TestDays) AS PrecedingMondaysToFridays
LEFT JOIN Holidays ON PrecedingMondayToFriday = Holiday;
You might want to rename some of the identifiers :-).

SQL to identify missing week

I have a database table with the following structure -
Week_End Sales
2009-11-01 43223.43
2009-11-08 4324.23
2009-11-15 64343.23
...
Week_End is a datetime column, and the date increments by 7 days with each new entry.
What I want is a SQL statement that will identify if there is a week missing in the sequence. So, if the table contained the following data -
Week_End Sales
2009-11-01 43223.43
2009-11-08 4324.23
2009-11-22 64343.73
...
The query would return 2009-11-15.
Is this possible? I am using SQL Server 2008, btw.
You've already accepted an answer so I guess you don't need this, but I was almost finished with it anyway and it has one advantage that the selected solution doesn't have: it doesn't require updating every year. Here it is:
SELECT T1.*
FROM Table1 T1
LEFT JOIN Table1 T2
ON T2.Week_End = DATEADD(week, 1, T1.Week_End)
WHERE T2.Week_End IS NULL
AND T1.Week_End <> (SELECT MAX(Week_End) FROM Table1)
It is based on Andemar's solution, but handles the changing year too, and doesn't require the existence of the Sales column.
Join the table on itself to search for consecutive rows:
select a.*
from YourTable a
left join YourTable b
on datepart(wk,b.Week_End) = datepart(wk,a.Week_End) + 1
-- No next week
where b.sales is null
-- Not the last week
and datepart(wk,a.Week_End) <> (
select datepart(wk,max(Week_End)) from YourTable
)
This should return any weeks without a next week.
Assuming your "week_end" dates are always going to be the Sundays of the week, you could try a CTE - a common table expression that lists out all the Sundays for 2009, and then do an outer join against your table.
All those rows missing from your table will have a NULL value for their "week_end" in the select:
;WITH Sundays2009 AS
(
SELECT CAST('20090104' AS DATETIME) AS Sunday
UNION ALL
SELECT
DATEADD(DAY, 7, cte.Sunday)
FROM
Sundays2009 cte
WHERE
DATEADD(DAY, 7, cte.Sunday) < '20100101'
)
SELECT
sun.Sunday 'Missing week end date'
FROM
Sundays2009 sun
LEFT OUTER JOIN
dbo.YourTable tbl ON sun.Sunday = tbl.week_end
WHERE
tbl.week_end IS NULL
I know this has already been answered, but can I suggest something really simple?
/* First make a list of weeks using a table of numbers (mine is dbo.nums(num), starting with 1) */
WITH AllWeeks AS (
SELECT DATEADD(week,num-1,w.FirstWeek) AS eachWeek
FROM
dbo.nums
JOIN
(SELECT MIN(week_end) AS FirstWeek, MAX(week_end) as LastWeek FROM yourTable) w
ON num <= DATEDIFF(week,FirstWeek,LastWeek)
)
/* Now just look for ones that don't exist in your table */
SELECT w.eachWeek AS MissingWeek
FROM AllWeeks w
WHERE NOT EXISTS (SELECT * FROM yourTable t WHERE t.week_end = w.eachWeek)
;
If you know the range you want to look over, you don't need to use the MIN/MAX subquery in the CTE.
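For example, a hedged sketch with a hard-coded range (the dates here are made up) instead of the MIN/MAX subquery, still using the dbo.nums numbers table from above:
WITH AllWeeks AS (
SELECT DATEADD(week, num-1, '2009-11-01') AS eachWeek
FROM dbo.nums
WHERE num <= DATEDIFF(week, '2009-11-01', '2010-10-31') + 1
)
SELECT w.eachWeek AS MissingWeek
FROM AllWeeks w
WHERE NOT EXISTS (SELECT * FROM yourTable t WHERE t.week_end = w.eachWeek);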