Query from two tables that have two different date fields - SQL

I have two tables, one for receiving ("PO_RECVR_HIST") and one for sales ("PS_TKT_HIST_LIN"). I want to create a query showing the total for receiving and the total for sales. The two dates are not related, so the results come out wrong. The two tables share the same vendor. I am using the following query:
SELECT P.VEND_NO,
       SUM(P.RECVR_TOT) AS RECV_TOT,
       SUM(S.CALC_EXT_PRC) AS SAL_TOT
FROM PO_RECVR_HIST P INNER JOIN
     PS_TKT_HIST_LIN S
     ON P.VEND_NO = S.ITEM_VEND_NO
WHERE P.RECVR_DAT > GETDATE() - 7
GROUP BY P.VEND_NO, S.BUS_DAT
HAVING S.BUS_DAT > GETDATE() - 7
ORDER BY P.VEND_NO
Any advice, please?

I think I get it. You want the sums in different columns. One approach is to do a UNION ALL to get the data together, and then aggregate:
SELECT VEND_NO,
       SUM(RECVR_TOT) AS RECV_TOT,
       SUM(CALC_EXT_PRC) AS SAL_TOT
FROM ((SELECT P.VEND_NO, P.RECVR_DAT AS DTE, P.RECVR_TOT, 0 AS CALC_EXT_PRC
       FROM PO_RECVR_HIST P
      ) UNION ALL
      (SELECT S.ITEM_VEND_NO, S.BUS_DAT, 0, S.CALC_EXT_PRC
       FROM PS_TKT_HIST_LIN S
      )
     ) PS
WHERE DTE > GETDATE() - 7
GROUP BY VEND_NO
ORDER BY VEND_NO;
A JOIN is just not a good approach because it will throw off the aggregation.
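Another way to avoid the join multiplying rows, sketched here with the same tables and columns from the question (treat it as an untested outline), is to aggregate each table down to one row per vendor first and only then combine the two totals:
SELECT COALESCE(R.VEND_NO, S.VEND_NO) AS VEND_NO,
       COALESCE(R.RECV_TOT, 0) AS RECV_TOT,
       COALESCE(S.SAL_TOT, 0) AS SAL_TOT
FROM (SELECT VEND_NO, SUM(RECVR_TOT) AS RECV_TOT
      FROM PO_RECVR_HIST
      WHERE RECVR_DAT > GETDATE() - 7          -- filter receipts to the last 7 days
      GROUP BY VEND_NO
     ) R FULL OUTER JOIN
     (SELECT ITEM_VEND_NO AS VEND_NO, SUM(CALC_EXT_PRC) AS SAL_TOT
      FROM PS_TKT_HIST_LIN
      WHERE BUS_DAT > GETDATE() - 7            -- filter sales to the last 7 days
      GROUP BY ITEM_VEND_NO
     ) S
     ON R.VEND_NO = S.VEND_NO
ORDER BY COALESCE(R.VEND_NO, S.VEND_NO);
The FULL OUTER JOIN keeps vendors that only have receipts or only have sales in the period.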

Aggregating based on GROUPING of multiple columns

I am trying to subquery and aggregate in SQL after doing an initial query with multiple joins. My ultimate goal is to get a count (or a sum) of specimens tested based on a grouping of multiple columns. This is slightly different from SQL Server query - Selecting COUNT(*) with DISTINCT and SQL Server: aggregate error on grouping.
The three tables that I use (PERSON, SPECIMEN, TEST) have one-to-many relationships: PERSON has many SPECIMENs, and those SPECIMENs have many TESTs. I did three inner joins to combine these tables, plus an additional table (ANALYSIS).
WITH TALLY AS (
    SELECT PERSON.NAME, PERSON.PHASE, TEST.DATE_STARTED, TEST.ANALYSIS, SPECIMEN.GROUP, TEST.STATUS,
           ANALYSIS.ANALYSIS_TYPE, SPECIMEN.SPECIMEN_NUMBER
    FROM DB.TEST
    INNER JOIN DB.SPECIMEN
        ON TEST.SPECIMEN_NUMBER = SPECIMEN.SPECIMEN_NUMBER
    INNER JOIN DB.PERSON
        ON SPECIMEN.PERSON = PERSON.NAME
    INNER JOIN DB.ANALYSIS
        ON TEST.ANALYSIS = ANALYSIS.NAME
    WHERE PERSON.NAME = 'Joe'
      AND TEST.DATE_STARTED >= '20-DEC-16' AND TEST.DATE_STARTED <= '01-APR-18'
      AND PERSON.PHASE = 'PHASE1'
    ORDER BY TEST.DATE_STARTED)
SELECT COUNT(DISTINCT ANALYSIS) AS SPECIMEN_COUNT, DATE_STARTED, ANALYSIS, STATUS, GROUP, ANALYSIS_TYPE
FROM TALLY
GROUP BY DATE_STARTED, ANALYSIS, STATUS, GROUP, ANALYSIS_TYPE
ORDER BY DATE_STARTED;
This gives me repeated rows: the first grouping is repeated 4 times.
What I am trying to see is the first grouping aggregated, with a total count.
Any thoughts as to what is missing? SUM instead of COUNT, or in addition to COUNT, creates an error. Thanks in advance!
9/17/2020 Update: I have tried adding a subquery because I also need to use a new column of metadata (ANALYSIS_TYPE_ALIAS), which is created in the first query through a CASE statement (...). I have also tried using another subquery with an inner join to count based on those conditions into a temp table, but I still cannot seem to aggregate to flatten the table. Here is my current attempt:
WITH TALLY AS (
    SELECT PERSON.NAME, PERSON.PHASE, TEST.DATE_STARTED, TEST.ANALYSIS, SPECIMEN.GROUP, TEST.STATUS,
           ANALYSIS.ANALYSIS_TYPE...
    FROM DB.TEST
    INNER JOIN DB.SPECIMEN
        ON TEST.SPECIMEN_NUMBER = SPECIMEN.SPECIMEN_NUMBER
    INNER JOIN DB.PERSON
        ON SPECIMEN.PERSON = PERSON.NAME
    INNER JOIN DB.ANALYSIS
        ON TEST.ANALYSIS = ANALYSIS.NAME
    WHERE PERSON.NAME = 'Joe'
      AND TEST.DATE_STARTED >= '20-DEC-16' AND TEST.DATE_STARTED <= '01-APR-18'
      AND PERSON.PHASE = 'PHASE1'
    ORDER BY TEST.DATE_STARTED),
SUMMARY_COMBO AS (
    SELECT DISTINCT(CONCAT(CONCAT(CONCAT(CONCAT(ANALYSIS, DATE_STARTED), STATUS), GROUP), ANALYSIS_TYPE_ALIAS)) AS UUID,
           TALLY.NAME, TALLY.PHASE, TALLY.DATE_STARTED, TALLY.ANALYSIS, TALLY.GROUP, TALLY.STATUS, TALLY.ANALYSIS_TYPE_ALIAS
    FROM TALLY)
SELECT SUMMARY_COMBO.NAME, SUMMARY_COMBO.PHASE, SUMMARY_COMBO.DATE_STARTED, SUMMARY_COMBO.ANALYSIS, SUMMARY_COMBO.GROUP, SUMMARY_COMBO.STATUS, SUMMARY_COMBO.ANALYSIS_TYPE_ALIAS,
       COUNT(SUMMARY_COMBO.ANALYSIS) OVER (PARTITION BY SUMMARY_COMBO.UUID) AS SPECIMEN_COUNT
FROM SUMMARY_COMBO
ORDER BY SUMMARY_COMBO.DATE_STARTED;
This gave me a table that shows aggregated counts, but it doesn't aggregate based on the unique UUID. Is there a way to take the sum of the count? I've tried to do this by storing the count in a subquery and then referencing that count variable, but I am missing something in how to group the 8 columns of data that I want to show plus the count of that combination of columns.
Thanks!
Just remove ANALYSIS from the GROUP BY clause, since that's the column whose distinct values you want to count. Otherwise, the query generates more groups than you need (and the count of distinct ANALYSIS values in each group is always 1).
WITH TALLY as ( ...)
SELECT COUNT(DISTINCT ANALYSIS) as SPECIMEN_COUNT, DATE_STARTED, ANALYSIS, STATUS, GROUP, ANALYSIS_TYPE
FROM TALLY
GROUP BY DATE_STARTED, STATUS, GROUP, ANALYSIS_TYPE
ORDER BY DATE_STARTED;
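The same adjustment should carry over to the updated query: instead of the window function over the concatenated UUID, group on the display columns and count distinct ANALYSIS values directly, which collapses the result to one row per combination. A rough sketch, assuming the TALLY CTE from the 9/17/2020 update already exposes ANALYSIS_TYPE_ALIAS:
WITH TALLY AS ( ... )   -- same CTE as in the 9/17/2020 update
SELECT NAME, PHASE, DATE_STARTED, STATUS, GROUP, ANALYSIS_TYPE_ALIAS,
       COUNT(DISTINCT ANALYSIS) AS SPECIMEN_COUNT
FROM TALLY
GROUP BY NAME, PHASE, DATE_STARTED, STATUS, GROUP, ANALYSIS_TYPE_ALIAS
ORDER BY DATE_STARTED;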

Use multiple LEFT JOINs from multiple datasets in SQL

I need to perform multiple JOINs; I am grabbing the data from multiple tables and joining on id. The tricky part is that I need to join one table twice. Here is the code:
(
SELECT
content.brand_identifier AS brand_name,
CAST(timestamp(furniture.date) AS DATE) AS order_date,
total_hearst_commission
FROM
`furniture_table` AS furniture
LEFT JOIN `content_table` AS content ON furniture.site_content_id = content.site_content_id
WHERE
(
timestamp(furniture.date) >= TIMESTAMP('2020-06-01 00:00:00')
)
)
UNION
(
SELECT
flowers.a_merchant_name AS merchant_name
FROM
`flowers_table` AS flowers
LEFT JOIN `content` AS content ON flowers.site_content_id = content.site_content_id
)
GROUP BY
1,
2,
3,
4
ORDER BY
4 DESC
LIMIT
500
I thought I could use UNION, but it gives me an error: Syntax error: Expected keyword ALL or keyword DISTINCT but got "("
I'm not able to comment, but as GHB states, the queries do not have the same number of columns; therefore, UNION will not work here.
I think it would be helpful to know why sub-queries are needed in the first place. I'm guessing this query does not produce the results you want, so please elaborate on why that is.
select
f.a_merchant_name as merchant_name,
c.brand_identifier as brand_name,
CAST(timestamp(f.date) AS DATE) AS order_date,
total_hearst_commission
from furniture_table f
left join content_table c on c.site_content_id = f.site_content_id
where timestamp(f.date) >= TIMESTAMP('2020-06-01 00:00:00')
group by 1,2,3,4
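If the two result sets really do need to be stacked rather than joined, the original error goes away once the set operator is written as UNION ALL (or UNION DISTINCT) and both branches return the same number of columns in the same order. A hedged sketch, padding the columns each branch is missing with NULLs (table and column names taken from the question):
SELECT content.brand_identifier AS brand_name,
       CAST(TIMESTAMP(furniture.date) AS DATE) AS order_date,
       total_hearst_commission,
       NULL AS merchant_name                      -- furniture rows carry no merchant
FROM `furniture_table` AS furniture
LEFT JOIN `content_table` AS content ON furniture.site_content_id = content.site_content_id
WHERE TIMESTAMP(furniture.date) >= TIMESTAMP('2020-06-01 00:00:00')
UNION ALL
SELECT content.brand_identifier,
       NULL,                                      -- flowers rows carry no order date
       NULL,                                      -- ...or commission
       flowers.a_merchant_name
FROM `flowers_table` AS flowers
LEFT JOIN `content_table` AS content ON flowers.site_content_id = content.site_content_id;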

SQL (Subqueries)

I'm struggling to go the extra step with a SQL query I'd like to run.
I have a customer database with a Customer table holding the date/time each customer joined, and a transaction table with details of their transactions over the years.
What I'd like to do is group by the join date (as a year) and count the number that joined in each year, then in the next column count the number who have transacted in a specific year, e.g. 2016, the current year. This way I can show customer retention over the years.
Both tables are linked by a customer URN, but I am struggling to get my head around the most efficient way to show this. I can easily count and group the members by joined year, and I can display the max dated transaction, but I am struggling to bring the two together. I think I need to use subqueries and a left join, but it's eluding me.
Example output column headers with data
Year_Joined = 2009
Joiner_Count = 10
Transact_in_2016 = 5
Here is where I am syntax-wise. I know this is nowhere near complete; I need to group by DateJoined and then subquery the count of customers who have transacted in 2016.
SELECT Customer.URNCustomer,
       MAX(YEAR(Customer.DateJoined)),
       MAX(YEAR(Tran.TranDate)) AS Latest_Tran
FROM Mydatabase.dbo.Customer
LEFT JOIN Mydatabase.dbo.Tran
    ON Tran.URNCustomer = Customer.URNCustomer
GROUP BY Customer.URNCustomer
ORDER BY Customer.URNCustomer
The best approach is to do the aggregation before doing the joins. You want to count two different things, so count them individually and then combine them.
The following uses a full outer join, which handles the case where there are years with no new customers and years with no transactions:
select coalesce(c.yyyy, t.yyyy) as yyyy,
       coalesce(c.numcustomers, 0) as numcustomers,
       coalesce(t.numtransactions, 0) as numtransactions
from (select year(c.datejoined) as yyyy, count(*) as numcustomers
      from Mydatabase.dbo.Customer c
      group by year(c.datejoined)
     ) c full outer join
     (select year(t.trandate) as yyyy, count(*) as numtransactions
      from Mydatabase.dbo.Tran t
      group by year(t.trandate)
     ) t
     on c.yyyy = t.yyyy;
You may want to try something like this:
SELECT YEAR(Customer.DateJoined),
       COUNT(Customer.URNCustomer),
       COUNT(DISTINCT Tran.URNCustomer) AS NO_ACTIVE_IN_2016
FROM Mydatabase.dbo.Customer
LEFT JOIN Mydatabase.dbo.Tran
    ON Tran.URNCustomer = Customer.URNCustomer
    AND YEAR(Tran.TranDate) = 2016
GROUP BY YEAR(Customer.DateJoined)
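If the output should look exactly like the example (Year_Joined, Joiner_Count, Transact_in_2016), a variation on the same idea is to LEFT JOIN a subquery of the customers who transacted in 2016 and count them distinctly, so joiners without a 2016 transaction still count toward Joiner_Count. This is only a sketch against the table and column names used above:
SELECT YEAR(c.DateJoined) AS Year_Joined,
       COUNT(DISTINCT c.URNCustomer) AS Joiner_Count,
       COUNT(DISTINCT t.URNCustomer) AS Transact_in_2016   -- NULLs from the LEFT JOIN are not counted
FROM Mydatabase.dbo.Customer c
LEFT JOIN (SELECT DISTINCT URNCustomer
           FROM Mydatabase.dbo.Tran
           WHERE YEAR(TranDate) = 2016
          ) t ON t.URNCustomer = c.URNCustomer
GROUP BY YEAR(c.DateJoined)
ORDER BY Year_Joined;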

SQL Server get customer with 7 consecutive transactions

I am trying to write a query that gets the customers with 7 consecutive transactions, given a list of CustomerKeys.
I am currently doing a self join on a customer fact table that has 700 million records in SQL Server 2008.
This is what I came up with, but it's taking a long time to run. I have a clustered index on (CustomerKey, TranDateKey).
SELECT
    ct1.CustomerKey, ct1.TranDateKey
FROM
    CustomerTransactionFact ct1
INNER JOIN
    #CRTCustomerList dl ON ct1.CustomerKey = dl.CustomerKey -- temp table with customer list
INNER JOIN
    dbo.CustomerTransactionFact ct2 ON ct1.CustomerKey = ct2.CustomerKey -- Same Customer
        AND ct2.TranDateKey >= ct1.TranDateKey
        AND ct2.TranDateKey <= CONVERT(VARCHAR(8), dateadd(d, 6, ct1.TranDateTime), 112) -- Consecutive Transactions in the last 7 days
WHERE
    ct1.LogID >= 82800000
    AND ct2.LogID >= 82800000
    AND ct1.TranDateKey between dl.BeginTranDateKey and dl.EndTranDateKey
    AND ct2.TranDateKey between dl.BeginTranDateKey and dl.EndTranDateKey
GROUP BY
    ct1.CustomerKey, ct1.TranDateKey
HAVING
    COUNT(*) = 7
Please help make it more efficient. Is there a better way to write this query in 2008?
You can do this using window functions, which should be much faster. Assuming that TranDateKey is a number and you can subtract a sequential number from it, the difference is constant for consecutive days.
You can put this in a query like this:
SELECT CustomerKey, MIN(TranDateKey), MAX(TranDateKey)
FROM (SELECT ct.CustomerKey, ct.TranDateKey,
             (ct.TranDateKey -
              DENSE_RANK() OVER (PARTITION BY ct.CustomerKey ORDER BY ct.TranDateKey)
             ) as grp
      FROM CustomerTransactionFact ct INNER JOIN
           #CRTCustomerList dl
           ON ct.CustomerKey = dl.CustomerKey
     ) t
GROUP BY CustomerKey, grp
HAVING COUNT(*) = 7;
If your date key is something else, there is probably a way to modify the query to handle that, but you might have to join to the dimension table.
This would be a perfect task for a COUNT(*) OVER (RANGE ...), but SQL Server 2008 supports only a limited syntax for Windowed Aggregate Functions.
SELECT CustomerKey, MIN(TranDateKey), COUNT(*)
FROM
 (
   SELECT CustomerKey, TranDateKey,
          dateadd(d, -ROW_NUMBER()
                      OVER (PARTITION BY CustomerKey
                            ORDER BY TranDateKey), TranDateTime) AS dummyDate
   FROM CustomerTransactionFact
 ) AS dt
GROUP BY CustomerKey, dummyDate
HAVING COUNT(*) >= 7
The dateadd calculates the difference between the current TranDateTime and a ROW_NUMBER over all dates per customer. The resulting dummyDate has no actual meaning, but it is the same meaningless date for consecutive dates.
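A tiny self-contained illustration of that trick, using made-up rows rather than the real fact table: customer 1 has transactions on three consecutive days and then skips a day, so the first three rows map to one dummyDate and the fourth to another.
SELECT CustomerKey, TranDateTime,
       dateadd(d, -ROW_NUMBER()
                   OVER (PARTITION BY CustomerKey
                         ORDER BY TranDateTime), TranDateTime) AS dummyDate
FROM (VALUES (1, CAST('2016-01-01' AS DATETIME)),   -- hypothetical sample rows
             (1, CAST('2016-01-02' AS DATETIME)),
             (1, CAST('2016-01-03' AS DATETIME)),
             (1, CAST('2016-01-05' AS DATETIME))
     ) AS v(CustomerKey, TranDateTime);
-- 2016-01-01 minus 1 day, 2016-01-02 minus 2 days, and 2016-01-03 minus 3 days all give 2015-12-31;
-- 2016-01-05 minus 4 days gives 2016-01-01, so the streak is broken there.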

Unpivot date columns to a single column of a complex query in Oracle

Hi guys, I am stuck with a stubborn problem which I am unable to solve. I am trying to compile a report wherein all the dates coming from different tables need to go into a single date field in the report. Of course, the max (most recent) date from all these date columns needs to be picked for the report's single date column. I have multiple users across multiple branches/courses for whom the report would be generated.
There are multiple blogs, and the latest date with respect to the blog title needs to be grouped, i.e. max(date_value) across the six date columns should give the greatest (latest) date for that blog title.
Expected result: a single date column holding the latest of the six dates. The query I am running:
select u.batch_uid as ext_person_key, u.user_id, cm.batch_uid as ext_crs_key, cm.crs_id,
       ir.role_id as insti_role,
       (CASE when b.JOURNAL_IND = 'N' then 'BLOG' else 'JOURNAL' end) as item_type,
       gm.title as item_name, gm.disp_title as ITEM_DISP_NAME,
       be.blog_pk1 as be_blogPk1, bc.blog_entry_pk1 as bc_blog_entry_pk1, bc.pk1,
       b.ENTRY_mod_DATE as b_ENTRY_mod_DATE, b.CMT_mod_DATE as BlogCmtModDate, be.CMT_mod_DATE as be_cmnt_mod_Date,
       b.UPDATE_DATE as BlogUpDate, be.UPDATE_DATE as be_UPDATE_DATE,
       bc.creation_date as bc_creation_date,
       be.CREATOR_USER_ID as be_CREATOR_USER_ID, bc.creator_user_id as bc_creator_user_id,
       b.TITLE as BlogTitle, be.TITLE as be_TITLE,
       be.DESCRIPTION as be_DESCRIPTION, bc.DESCRIPTION as bc_DESCRIPTION
FROM users u
INNER JOIN insti_roles ir on u.insti_roles_pk1 = ir.pk1
INNER JOIN crs_users cu ON u.pk1 = cu.users_pk1
INNER JOIN crs_mast cm on cu.crsmast_pk1 = cm.pk1
INNER JOIN blogs b on b.crsmast_pk1 = cm.pk1
INNER JOIN blog_entry be on b.pk1 = be.blog_pk1 AND be.creator_user_id = cu.pk1
LEFT JOIN blog_CMT bc on be.pk1 = bc.blog_entry_pk1 and bc.CREATOR_USER_ID = cu.pk1
JOIN gradeledger_mast gm ON gm.crsmast_pk1 = cm.pk1 and b.grade_handler = gm.linkId
WHERE cu.ROLE = 'S' AND BE.STATUS = '2' AND B.ALLOW_GRADING = 'Y' AND u.row_status = '0'
  AND u.available_ind = 'Y' and cm.row_status = '0' and u.batch_uid = 'userA_157'
I am getting a resultset for the above query with multiple date columns, which I want to put into a single column. The dates have to be the most recent, i.e. the max of the dates in the date columns.
I have successfully done the unpivot by using a view to store the above resultset and putting all the dates in one column. However, I do not want to use a view or a table to store the resultset and then do the unpivot, simply because I cannot keep creating views for every user one would query for.
The max(date_value) from the date columns needs to be put into one single column. The columns are as follows:
1) b.entry_mod_date, 2) b.cmt_mod_date, 3) be.cmt_mod_date, 4) b.update_Date, 5) be.update_date, 6) bc.creation_date
Apologies that I could not provide the description of all the tables and the fields being used.
Any help to get the max of the dates from these multiple date columns into a single column, without using a view or a table, would be greatly appreciated.
It is not clear what results you want, but the easiest solution is to use greatest().
with t as (
      YOURQUERYHERE
     )
select t.*,
       greatest(b_ENTRY_mod_DATE, BlogCmtModDate, be_cmnt_mod_Date,
                BlogUpDate, be_UPDATE_DATE, bc_creation_date
       ) as greatestdate
from t;
select <columns>,
       case
           when b_ENTRY_mod_DATE >= BlogCmtModDate and b_ENTRY_mod_DATE >= BlogUpDate
               then b_ENTRY_mod_DATE
           -- <same pattern comparing BlogCmtModDate and BlogUpDate separately to pick the greatest date>
       end as greatest_date,
       <columns>
FROM table
<rest of the query>
UNION ALL
select <columns>,
       case
           when be_cmnt_mod_Date >= be_UPDATE_DATE then be_cmnt_mod_Date
           when be_UPDATE_DATE >= be_cmnt_mod_Date then be_UPDATE_DATE
       end as greatest_date,
       <columns>
FROM table
<rest of the query>
UNION ALL
select <columns>,
       bc_creation_date as greatest_date,
       <columns>
FROM table
<rest of the query>
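Since the question explicitly asks for the unpivot without creating a view, one more option (an untested sketch reusing the column aliases from the query above) is Oracle's UNPIVOT applied directly to the query wrapped in a WITH clause, followed by a MAX per blog title:
with t as (
      YOURQUERYHERE
     )
select BlogTitle, max(date_value) as latest_date
from t
unpivot (date_value for date_source in (b_ENTRY_mod_DATE, BlogCmtModDate,
                                        be_cmnt_mod_Date, BlogUpDate,
                                        be_UPDATE_DATE, bc_creation_date))
group by BlogTitle;
UNPIVOT needs all six columns to be of the same datatype (they are all dates here), and by default it drops rows where the unpivoted value is NULL, which is usually what you want before taking the MAX.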