I have a sales data table with cust_ids and their transaction dates.
I want to create a table that stores, for every customer, their cust_id, their last purchase date (based on the transaction dates) and the count of times they have purchased.
I wrote this code:
SELECT
cust_xref_id, txn_ts,
DENSE_RANK() OVER (PARTITION BY cust_xref_id ORDER BY CAST(txn_ts as timestamp) DESC) AS rank,
COUNT(txn_ts)
FROM
sales_data_table
But I understand that the above code would give an output like this (attached example picture)
How do I modify the code to get an output like :
I am a beginner in SQL queries and would really appreciate any help! :)
This would be an aggregation query, which changes the table key from (customer_id, date) to (customer_id):
SELECT
cust_xref_id,
MAX(txn_ts) as last_purchase_date,
COUNT(txn_ts) as count_purchase_dates
FROM
sales_data_table
GROUP BY
cust_xref_id
You are looking for the last purchase date and the count of distinct transaction dates (i.e. if a person buys twice on the same date, it should be counted as a single time).
Although you said you want the count of dates, the sample data shows you want the count of distinct dates: customer 284214 transacted 9 times, but distinct will give you 7.
So, here is the SQL you can use to get your result.
SELECT
cust_xref_id,
MAX(txn_ts) as last_purchase_date,
COUNT(distinct txn_ts) as count_purchase_dates -- Pls note distinct will count distinct dates
FROM sales_data_table
GROUP BY 1
We enter overrides based on a unique value from our tables (we have two columns with unique values for each transaction, so it may or may not be the primary key).
Sometimes we have to enter multiple overrides based on the same set of criteria, so it would be nice to be able to pull multiple unique values in one query that all meet the same criteria in the WHERE clause, since our system throws a warning if the same unique ID is used for more than one override.
Say we have some customers that were undercharged for three months and we need to enter a commission override for each of the three salespeople that split the accounts for each month:
I've tried the following code, but the same value gets returned for each column:
select month, customer, product, sum(sales),
any_value(unique_id) unique_id1,
any_value(unique_id) unique_id2,
any_value(unique_id) unique_id3
from table
where customer in (j,k,l) and product = m and year = o
group by 1,2,3;
This will give me a row for each month and customer, but the values in unique_id1, unique_id2 and unique_id3 are the same on each row.
I was able to use:
select month, customer, product, sum(sales),
string_agg(unique_id, "," LIMIT 3)
from table
where customer in (j,k,l) and product = m and year = o
group by 1,2,3;
and split the unique_ids in a spreadsheet but I feel there has to be a better way to accomplish this directly in SQL.
I figure I could use a subquery and select a column based on rows 1, 2 and 3, but I'm trying to eliminate the redundancy of including the same WHERE criteria in the subquery.
Below is for BigQuery Standard SQL.
I think your second query was close enough to get to something like the below:
#standardSQL
SELECT month, customer, product, sales,
arr[OFFSET(0)] unique_id1,
arr[SAFE_OFFSET(1)] unique_id2,
arr[SAFE_OFFSET(2)] unique_id3
FROM (
SELECT month, customer, product, SUM(sales) sales,
ARRAY_AGG(unique_id ORDER BY month DESC LIMIT 3) arr
FROM `project.dataset.table`
WHERE customer IN ('j','k','l') AND product = 'm' AND year = 2019
GROUP BY month, customer, product
)
Say I have a rather large table in a Teradata database, "Sales" that has a daily record for every sale and I want to write a SQL statement that limits this to the latest date only. This will not always be the previous day, for example, if it was a Monday the latest date would be the previous Friday.
I know I can get the results by the following:
SELECT s.*
FROM Sales s
JOIN (
SELECT MAX(SalesDate) as SalesDate
FROM Sales
) sd
ON s.SalesDate=sd.SalesDate
I am not knowledgeable about how it would process the subquery, and since Sales is a large table, is there a more efficient way to do this, given there is not another table I could use?
Another (more flexible) way to get the top n utilizes OLAP-functions:
SELECT *
FROM Sales s
QUALIFY
RANK() OVER (ORDER BY SalesDate DESC) = 1
This will return all rows with the max date. If you want only one of them switch to ROW_NUMBER.
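For example, a minimal sketch of the ROW_NUMBER variant mentioned above (same query, but ties are broken arbitrarily so only one row comes back):
SELECT *
FROM Sales s
QUALIFY
ROW_NUMBER() OVER (ORDER BY SalesDate DESC) = 1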
That is probably fine, if you have an index on salesdate.
If there is only one row, then I would recommend:
select top 1 s.*
from sales s
order by salesdate desc;
In particular, this should make use of an index on salesdate.
If there is more than one row, use top 1 with ties.
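A sketch of that variant, assuming your Teradata version supports the TOP ... WITH TIES syntax (it returns every row tied on the latest salesdate):
select top 1 with ties s.*
from sales s
order by salesdate desc;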
I have some records that track inquiries by DATETIME. There is a glitch in the system, and sometimes a record will be entered multiple times on the same day. I have a query with a bunch of correlated subqueries attached to these records, but the numbers are off because, when there were those glitches, these leads show up multiple times. I need the first entry of the day; I tried fooling around with MIN but I couldn't quite get it to work.
I currently have this, I am not sure if I am on the right track though.
SELECT SL.UserID, MIN(SL.Added) OVER (PARTITION BY SL.UserID)
FROM SourceLog AS SL
Here's one approach using row_number():
select *
from (
select *,
row_number() over (partition by userid, cast(added as date) order by added) rn
from sourcelog
) t
where rn = 1
You could use group by along with min to accomplish this.
Depending on how your data is structured: if you are assigning a unique sequential number to each record created, you could just return the lowest number created per day. Otherwise you would need to return the record with the earliest DATETIME value per day (a sketch of that case follows the query below).
--Assumes sequential IDs
select
min(Id)
from
[YourTable]
group by
--the conversion is used to strip the time value out of the date/time
convert(date, [YourDateTime])
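A minimal sketch of the second case, reusing the SourceLog/UserID/Added names from the question (adjust to your schema): find the earliest DATETIME per user per calendar day, then join back to get the full rows.
select SL.*
from SourceLog as SL
join (
--earliest entry per user per calendar day
select UserID, min(Added) as FirstAdded
from SourceLog
group by UserID, convert(date, Added)
) firsts
on SL.UserID = firsts.UserID
and SL.Added = firsts.FirstAdded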
I've been requested by my superiors to write a query that will search every table in a database (each table represents a road and its total traffic counts) and take the total counts of motorcycles by hour. Here's what I have so far whilst testing on one table:
WITH
totalCount AS
(
SELECT DATEDIFF(dd,0,event_time) AS DaySerial,
DATEPART(dd,event_time) AS theDay,
DATEDIFF(mm,0,event_time) AS MonthSerial,
DATEPART(mm,event_time) AS MonthofYear,
DATEDIFF(hh,0,event_time) AS HourSerial,
DATEPART(hh,event_time) AS Hour,
COUNT(*) AS HourlyCount,
DATEDIFF(yy,0,event_time) AS YearSerial,
DATEPART(yy,event_time) AS theYear
FROM [RUD].dbo.[10011E]
WHERE length <='1.7'
GROUP BY DATEDIFF(hh,0,event_time),
DATEPART(hh,event_time),
DATEDIFF(dd,0,event_time),
DATEPART(dd,event_time),
DATEDIFF(mm,0,event_time),
DATEPART(mm,event_time),
DATEDIFF(yy,0,event_time),
DATEPART(yy,event_time)
)
SELECT
theYear,
MonthofYear,
theDay,
Hour,
AVG(HourlyCount) AS Avg_Count
FROM
totalCount
GROUP BY
theYear,
MonthofYear,
theDay,
Hour
ORDER BY
theYear,
MonthofYear,
theDay,
Hour
Now, I'm sure some of this is redundant or not needed; that's OK for now (I'm new to SQL, by the way, which is why some of this will be redundant). Basically, as it stands, I list the year, month, day, hour and hourly count of motorcycles for one road. Now my two questions:
How do I take this query and make it so that it searches across every single table in the RUD database? Do I just need to list them all and UNION them, or is there a quicker way?
I realise if I search through every table gathering only the above (year, month, day, hour, hourly count) I will end up with the right data but with no way to distinguish which road all the counts are coming from. Is there a way to select the table ID (in this example, 10011E is the ID, and is the assigned name for a specific road) and place it in a column next to the rows that were selected from it?
If anyone needs clarification on what I mean, please let me know! Thanks!
One option would be to use UNION ALL and add an additional column for which source. You'll have to write out each of your tables in this case, but it's perhaps your fastest option:
SELECT ID, 'YourTable' TableName
FROM YourTable
UNION ALL
SELECT ID, 'YourOtherTable'
FROM YourOtherTable
....
Alternatively, dynamic SQL could produce the same results -- you might not have to type out all your table names, but it comes with a performance hit. A rough sketch of that approach follows.
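This is only a sketch, assuming SQL Server and that every road table shares the same event_time and length columns; it builds the UNION ALL text from sys.tables and tags each block with the table (road) name:
DECLARE @sql NVARCHAR(MAX) = N'';

--concatenate one SELECT per user table; QUOTENAME handles names like 10011E that start with a digit
SELECT @sql = @sql
    + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT ''' + t.name + N''' AS road_id, event_time, length FROM '
    + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
FROM sys.tables AS t;

EXEC sp_executesql @sql;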
I have a table with several "ticket" records in it. Each ticket is stored by day (e.g. 2011-07-30 00:00:00.000). I would like to count the unique records in each month, by year. I have used the following SQL statement:
SELECT DISTINCT
YEAR(TICKETDATE) as TICKETYEAR,
MONTH(TICKETDATE) AS TICKETMONTH,
COUNT(DISTINCT TICKETID) AS DAILYTICKETCOUNT
FROM
NAT_JOBLINE
GROUP BY
YEAR(TICKETDATE),
MONTH(TICKETDATE)
ORDER BY
YEAR(TICKETDATE),
MONTH(TICKETDATE)
This does produce a count but it is wrong as it picks up the unique tickets for every day. I just want a unique count by month.
Try combining Year and Month into one field, and grouping on that new field.
You may have to cast them to varchar to ensure that they don't simply get added together (a sketch of that variant follows the query below). Or... you could multiply the year through:
SELECT
(YEAR(TICKETDATE) * 100) + MONTH(TICKETDATE),
count(*) AS DAILYTICKETCOUNT
FROM NAT_JOBLINE
GROUP BY
(YEAR(TICKETDATE) * 100) + MONTH(TICKETDATE)
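A minimal sketch of the varchar approach mentioned above, assuming SQL Server-style conversion and concatenation, and counting distinct TICKETID as in the question:
SELECT
--e.g. '2011-07' for July 2011
CAST(YEAR(TICKETDATE) AS varchar(4)) + '-' + RIGHT('0' + CAST(MONTH(TICKETDATE) AS varchar(2)), 2) AS TICKETYEARMONTH,
COUNT(DISTINCT TICKETID) AS DAILYTICKETCOUNT
FROM NAT_JOBLINE
GROUP BY
CAST(YEAR(TICKETDATE) AS varchar(4)) + '-' + RIGHT('0' + CAST(MONTH(TICKETDATE) AS varchar(2)), 2)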
Presuming that TICKETID is not a primary or unique key, but does appear multiple times in table NAT_JOBLINE, that query should work. If it is unique (does not occur in more than one row per value), you will need to select on a different column, one that uniquely identifies the "entity" that you want to count, rather than each occurrence/instance/reference of that entity.
(As ever, it is hard to tell without working with the actual data.)
I think you need to remove the first DISTINCT. You already have the GROUP BY. If I were the first DISTINCT, I would be confused as to what I was supposed to do.
SELECT
YEAR(TICKETDATE) as TICKETYEAR,
MONTH(TICKETDATE) AS TICKETMONTH,
COUNT(DISTINCT TICKETID) AS DAILYTICKETCOUNT
FROM NAT_JOBLINE
GROUP BY YEAR(TICKETDATE), MONTH(TICKETDATE)
ORDER BY YEAR(TICKETDATE), MONTH(TICKETDATE)
From what I understand from your comments to Phillip Kelley's solution:
SELECT TICKETDATE, COUNT(*) AS DAILYTICKETCOUNT
FROM NAT_JOBLINE
GROUP BY TICKETDATE
should do the trick, but I suggest you update your question.