SQL Query - Joining, Adding and Lowest

Not quite sure how to attack this, so I'm putting the question out there. I can do normal SELECT statements and GROUP BY, but this seems a little out of my realm. I need to combine all rows with the same Product ID on the same date: summing the total times, selecting the lowest best time, and summing the cycle counts, the waiting-for-operator times, and the production times.
My select statement is as so:
sql = "SELECT Product_ID, Date_Time, ulTotBoardCycleTime, ulBestBoardCycleTime, ulBoardCycleCount, ulWaitingForOperator, ulProductionTime FROM [i_import_general_timers] WHERE DATE_TIME >= #startdata2 AND DATE_TIME < #enddata2 AND ulBoardCycleCount > 0 ORDER BY Product_ID DESC"
Thanks,
Pete

You need to use aggregate functions together with a GROUP BY.
SELECT Product_ID, Date_Time,
       SUM(ulTotBoardCycleTime) AS TotalTime,
       MIN(ulBestBoardCycleTime) AS BestTime,
       SUM(ulBoardCycleCount) AS CycleCount,
       SUM(ulWaitingForOperator) AS WaitingFor,
       SUM(ulProductionTime) AS ProductionTime
FROM [i_import_general_timers]
WHERE DATE_TIME >= #startdata2 AND DATE_TIME < #enddata2 AND ulBoardCycleCount > 0
GROUP BY Product_ID, Date_Time
ORDER BY Product_ID DESC
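One caveat: if Date_Time carries a time-of-day component, grouping on the raw column produces one row per timestamp rather than one per day. A minimal per-day sketch, assuming Access/Jet SQL (the # date delimiters suggest it), where DateValue() strips the time part; Run_Date is just an illustrative alias:
SELECT Product_ID, DateValue(Date_Time) AS Run_Date,
       SUM(ulTotBoardCycleTime) AS TotalTime,
       MIN(ulBestBoardCycleTime) AS BestTime,
       SUM(ulBoardCycleCount) AS CycleCount,
       SUM(ulWaitingForOperator) AS WaitingFor,
       SUM(ulProductionTime) AS ProductionTime
FROM [i_import_general_timers]
WHERE DATE_TIME >= #startdata2 AND DATE_TIME < #enddata2 AND ulBoardCycleCount > 0
GROUP BY Product_ID, DateValue(Date_Time)
ORDER BY Product_ID DESC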

Related

SQL to find when amount reached a certain value for the first time

I have a table that has 3 columns: user_id, date, amount. I need to find out on which date the amount reached 1 Million for the first time. The amount can go up or down on any given day.
I tried using PARTITION BY user_id ORDER BY date DESC, but I can't figure out how to find the exact date on which it reached 1 million for the first time. I am exploring the LEAD and LAG functions. Any pointers would be appreciated.
You may use conditional aggregation, as follows:
select user_id,
min(case when amount >= 1000000 then date end) as expected_date
from table_name
group by user_id
And if you want to check where the amount reaches exactly 1M, use case when amount = 1000000 ...
If you meant that the amount is a running total that accumulates as the date increases, then the query will be:
select user_id,
min(case when cumulative_amount >= 1000000 then date end) as expected_date
from
(
select *,
sum(amount) over (partition by user_id order by date) cumulative_amount
from table_name
) T
group by user_id;
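As a quick sanity check of the running-total version, here is a hypothetical two-row example (PostgreSQL syntax; the values are invented for illustration):
select user_id,
       min(case when cumulative_amount >= 1000000 then date end) as expected_date
from (
    select user_id, date,
           sum(amount) over (partition by user_id order by date) as cumulative_amount
    from (values (1, date '2023-01-01', 600000),
                 (1, date '2023-01-02', 500000)) as t(user_id, date, amount)
) s
group by user_id;
-- cumulative_amount is 600000, then 1100000, so expected_date comes out as 2023-01-02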
Try this:
select date,
sum(amount) as totalamount
from tablename
group by date
having sum(amount) >= 1000000
order by date asc
limit 1
This sums the amount for each day and returns the single record for the first date on which the daily total reached 1M. (Note that referencing the alias totalamount in HAVING only works in MySQL; repeating the aggregate is portable.)
Sample result on SQL Fiddle.
And if you want it grouped by both date and user_id, add user_id to the SELECT and GROUP BY clauses:
select user_id, date,
sum(amount) as totalamount
from tablename
group by user_id,date
having sum(amount) >= 1000000
order by date asc
limit 1
Example here.
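Note that LIMIT 1 still returns a single row overall, not one row per user. If you want the first qualifying date for each user, a hedged sketch with ROW_NUMBER() (window functions are evaluated after GROUP BY and HAVING, so this is valid in PostgreSQL and MySQL 8+):
select user_id, date, totalamount
from (
    select user_id, date, sum(amount) as totalamount,
           row_number() over (partition by user_id order by date) as rn
    from tablename
    group by user_id, date
    having sum(amount) >= 1000000
) ranked
where rn = 1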

SQL - Counting users that have multiple transactions and have at least one transaction that has been made within 7 days interval of the other one

Here is the task: count users that have multiple transactions and have at least one transaction made within a 7-day interval of another one.
Structure of dataset: Row, userId, orderId, date
Date is formatted as YYYY-MM-DDTHH:MM:SS, for example 2016-09-16T11:32:06.
I have completed the first part (counting users with multiple transactions), but I do not know how to do the second part in the same query. I would be thankful for any help.
Here is the console:
query = '''
SELECT COUNT(*)
FROM
(SELECT userId FROM `dataset` GROUP BY userId HAVING COUNT(orderId) > 1)
'''
project_id = 'acdefg'
df = pd.io.gbq.read_gbq(query, project_id=project_id, dialect='standard')
display(df)
To solve this you want to be able to compare each record to a previous record: when was the last order from the same user? This hints at the use of partitions and window functions, in this case LAG.
A possible way to solve the problem is to organise the records per user, order them by orderDate, and then for each record look at the record just above:
WITH intermediate_table AS (
SELECT
userId,
orderDate,
LAG(orderDate)
OVER (PARTITION BY userId ORDER BY orderDate) AS previous_order -- this picks the orderDate of the record right above, once the orders are partitioned by userId and ordered by orderDate
FROM `dataset.table`
)
SELECT userId
FROM intermediate_table
WHERE DATE_DIFF(orderDate, previous_order, DAY) <= 7
GROUP BY userId
Once orderDate and previous_order info are gathered in the same record, it's easy to compare them and see if there is less than 7 days between the two.
(GROUP BY is used for returning userIds only once in the resulting table)
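Since the task ultimately asks for a count of such users rather than the list, the whole query can be wrapped up as follows (BigQuery standard SQL; if orderDate is a TIMESTAMP, use TIMESTAMP_DIFF instead of DATE_DIFF):
WITH intermediate_table AS (
  SELECT
    userId,
    orderDate,
    LAG(orderDate) OVER (PARTITION BY userId ORDER BY orderDate) AS previous_order
  FROM `dataset.table`
)
SELECT COUNT(DISTINCT userId) AS user_count
FROM intermediate_table
WHERE DATE_DIFF(orderDate, previous_order, DAY) <= 7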
This may be what you need:
-- for each order calculate the days since that customer's last order
WITH order_profiler AS (
SELECT
orderId,
orderDate,
custId,
DATE_DIFF(orderDate, LAG(orderDate) OVER (PARTITION BY custId ORDER BY orderDate), day) AS order_latency_days
FROM
`dataset.table`
)
SELECT
custId
FROM order_profiler
WHERE order_latency_days <= 7
GROUP BY custId

Is it possible to look at two consecutive rows and determine the difference in time between the two using SQL?

I am relatively new to SQL, so please bear with me! I am trying to see how many customers make a purchase after being dormant for two years. Relevant fields include cust_id and purchase_date (there can be several observations for the same cust_id but with different dates). I am using Redshift for my SQL scripts.
I realize I cannot put the same thing in for the DATEDIFF parameters (it just doesn't make any sense), but I am unsure what else to do.
SELECT *
FROM tickets t
LEFT JOIN d_customer c
ON c.cust_id = t.cust_id
WHERE DATEDIFF(year, t.purchase_date, t.purchase_date) between 0 and 2
ORDER BY t.cust_id, t.purchase_date
;
I think you want lag(). To get the relevant tickets:
SELECT t.*
FROM (SELECT t.*,
LAG(purchase_date) OVER (PARTITION BY cust_id ORDER BY purchase_date) as prev_pd
FROM tickets t
) t
WHERE prev_pd < purchase_date - interval '2 year';
If you want the number of customers, use count(distinct):
SELECT COUNT(DISTINCT cust_id)
FROM (SELECT t.*,
LAG(purchase_date) OVER (PARTITION BY cust_id ORDER BY purchase_date) as prev_pd
FROM tickets t
) t
WHERE prev_pd < purchase_date - interval '2 year';
Note that these do not use DATEDIFF(). DATEDIFF() counts the number of boundaries crossed between two date values, so 2018-12-31 and 2019-01-01 have a difference of 1 year.
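A quick illustration of that boundary behavior (Redshift syntax):
SELECT DATEDIFF(year, DATE '2018-12-31', DATE '2019-01-01');  -- 1: a year boundary is crossed
SELECT DATEDIFF(day,  DATE '2018-12-31', DATE '2019-01-01');  -- 1: the dates are only one day apart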

Find rows with similar date values

I want to find customers where, for example, the system erroneously registered duplicate orders.
It's pretty easy if reg_date is EXACTLY the same, but I have no idea how to write the query so that rows count as duplicates when there is, for example, up to a 1-second difference between transactions.
select * from
(select customer_id, reg_date, count(*) as cnt
from orders
group by 1,2
) x where cnt > 1
Here is example dataset:
https://www.db-fiddle.com/f/m6PhgReSQbVWVZhqe8n4mi/0
Currently only customer 104's orders are counted as duplicates because their reg_date is identical; I also want to count orders 1,2 and 4,5, as there is just a 1-second difference.
demo:db<>fiddle
SELECT
customer_id,
reg_date
FROM (
SELECT
*,
reg_date - lag(reg_date) OVER (PARTITION BY customer_id ORDER BY reg_date) <= interval '1 second' as is_duplicate
FROM
orders
) s
WHERE is_duplicate
Use the lag() window function. It lets you look at the previous record. With this value you can compute the difference and filter the records where the gap is at most one second.
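One caveat: the query above flags only the later row of each close pair. If you also want the earlier row, a hedged variant adds lead() to look at the following record as well (PostgreSQL syntax):
SELECT customer_id, reg_date
FROM (
    SELECT *,
           (reg_date - lag(reg_date) OVER (PARTITION BY customer_id ORDER BY reg_date) <= interval '1 second'
            OR lead(reg_date) OVER (PARTITION BY customer_id ORDER BY reg_date) - reg_date <= interval '1 second')
           AS is_duplicate
    FROM orders
) s
WHERE is_duplicate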
Try the following script. It will return day/customer-wise duplicates.
SELECT
TO_CHAR(reg_date :: DATE, 'dd/mm/yyyy') reg_date,
customer_id,
count(*) as cnt
FROM orders
GROUP BY
TO_CHAR(reg_date :: DATE, 'dd/mm/yyyy'),
customer_id
HAVING count(*) >1

SQL Count Query Using Non-Index Column

I have a query similar to this, where I need to find the number of transactions a specific customer had within a time frame:
select customer_id, count(transactions)
from transactions
where customer_id = 'FKJ90838485'
and purchase_date between '01-JAN-13' and '31-AUG-13'
group by customer_id
The transactions table is not indexed on customer_id but rather on another field called transaction_id. customer_id is a character type while transaction_id is numeric.
The accounting_month field is also indexed. This field just stores the month in which a transaction occurred; i.e., purchase_date = '03-MAR-13' would have accounting_month = '01-MAR-13'.
The transactions table has about 20 million records in the time frame from '01-JAN-13' and '31-AUG-13'
When I run the above query, it has taken more than 40 minutes to come back, any ideas or tips?
As others have already commented, the best option is to add an index that will cover the query. So:
Contact the Database administrator and request that they add an index on (customer_id, purchase_date) because the query is doing a table scan otherwise.
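For reference, the suggested index would look something like this (the index name is hypothetical; plain CREATE INDEX syntax works in Oracle, SQL Server, and PostgreSQL alike):
CREATE INDEX ix_transactions_cust_date ON transactions (customer_id, purchase_date);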
Sidenotes:
Use date literals and not string literals (you may know that and do it already; it is noted here for future readers).
You don't have to put customer_id in the SELECT list, and if you remove it from there, it can be removed from the GROUP BY as well, so the query becomes:
select count(*) as number_of_transactions
from transactions
where customer_id = 'FKJ90838485'
and purchase_date between DATE '2013-01-01' and DATE '2013-08-31' ;
If you don't have a WHERE condition on customer_id, you can have it in the GROUP BY and the SELECT list to write a query that will count number of transactions for every customer. And the above suggested index will help this, too:
select customer_id, count(*) as number_of_transactions
from transactions
where purchase_date between DATE '2013-01-01' and DATE '2013-08-31'
group by customer_id ;
This is just an idea that came to me. It might work; try running it and see if it is an improvement over what you currently have.
I'm trying to use the transaction_id, which you've said is indexed, as much as possible.
WITH min_transaction (tran_id)
AS (
SELECT MIN(transaction_ID)
FROM TRANSACTIONS
WHERE
CUSTOMER_ID = 'FKJ90838485'
AND purchase_date >= '01-JAN-13'
), max_transaction (tran_id)
AS (
SELECT MAX(transaction_ID)
FROM TRANSACTIONS
WHERE
CUSTOMER_ID = 'FKJ90838485'
AND purchase_date <= '31-AUG-13'
)
SELECT customer_id, COUNT(transaction_id)
FROM transactions
WHERE
transaction_id BETWEEN (SELECT tran_id FROM min_transaction)
                   AND (SELECT tran_id FROM max_transaction)
GROUP BY customer_id
Maybe this will run faster, since it looks at the transaction_id range instead of purchase_date. I also take into consideration that accounting_month is indexed:
select customer_id, count(*)
from transactions
where customer_id = 'FKJ90838485'
and transaction_id between (select min(transaction_id)
from transactions
where accounting_month = '01-JAN-13'
) and
(select max(transaction_id)
from transactions
where accounting_month = '01-AUG-13'
)
group by customer_id
Maybe you can also try:
select customer_id, count(*)
from transactions
where customer_id = 'FKJ90838485'
and accounting_month between '01-JAN-13' and '01-AUG-13'
group by customer_id