How do I iterate through subsets of a table in SQL

I'm new to SQL and I would appreciate any advice!
I have a table that stores the history of an order. It includes the following columns: ORDERID, ORDERMILESTONE, NOTES, TIMESTAMP.
There is one TIMESTAMP for every ORDERMILESTONE in an ORDERID and vice versa.
What I want to do is compare the TIMESTAMPs for certain ORDERMILESTONEs to obtain the amount of time it takes to go from start to finish or from order to shipping, etc.
To get this, I have to gather all of the rows for a specific ORDERID and then somehow iterate through them. I was trying to do this by declaring a TVP for each ORDERID, but that is just going to take more time, because some of my datasets are around 20,000 rows long.
What do you recommend? Thanks in advance.
EDIT:
In my problem, I want to find the number of days that the order spends in QA. For example, once an order is placed, we need to make the item requested and then send it to QA. So there's a milestone "Processing" and a milestone "QA". The item could be in "Processing" then "QA" once and get shipped out, or it could be sent back to QA several times, or go back and forth between "Processing" and "Engineering". I want to find the total amount of time that the item spends in QA.
Here's some sample data:
ORDERID | ORDERMILESTONE | NOTES | TIMESTAMP
43 | Placed | newly ordered custom time machine | 07-11-2020 12:00:00
43 | Processing | first time assembling| 07-11-2020 13:00:05
43 | QA | sent to QA | 07-11-2020 13:30:12
43 | Engineering | Engineering is fixing the crank on the time machine that skips even years | 07-12-2020 13:00:02
43 | QA | Sent to QA to test the new crank. Time machine should no longer skip even years. | 07-13-2020 16:00:18
0332AT | Placed | lightsaber custom made with rainbow colors | 07-06-2020 01:00:09
0332AT | Processing | lightsaber being built | 07-06-2020 06:00:09
0332AT | QA | lightsaber being tested | 07-06-2020 06:00:09
I want the total number of days that each order spends with QA.
So I suppose I could create a lookup table that has each QA milestone and its next milestone. Then sum up the difference between each QA milestone and the one that follows. My main issue is that I don't necessarily know how many times the item will need to be sent to QA on each order...

To get the hours to complete a specific milestone of all orders you can do:
select orderid,
DATEDIFF(hh, min(TIMESTAMP), max(TIMESTAMP))
from your_table
where ORDERMILESTONE = 'milestone name'
group by orderid

Assuming you are using SQL Server and your milestones are not repeated, then you can use:
select om.orderid,
datediff(second, min(timestamp), max(timestamp))
from order_milestones om
where milestone in ('milestone1', 'milestone2')
group by om.orderid;
If you want to do this more generally on every row, you can use a cumulative aggregation function:
select om.*,
datediff(second,
timestamp,
min(case when milestone = 'order' then timestamp end) over
(partition by orderid
order by timestamp
rows between current row and unbounded following
)
) as time_to_order
from order_milestones om;

You can create a lookup table taking a milestone and giving you the previous milestone. Then you can left join to it, and left join back to the original table to get the row for the same order at the previous milestone and compare the dates:
select om.*, datediff(minute, pm.TIMESTAMP, om.TIMESTAMP) as [Minutes]
from OrderMilestones om
left join MilestoneSequence ms on ms.ORDERMILESTONE = om.ORDERMILESTONE
left join OrderMilestones pm on pm.ORDERID = om.ORDERID
and pm.ORDERMILESTONE = ms.PREVIOUSMILESTONE
order by om.TIMESTAMP
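If you would rather avoid maintaining a lookup table, a window-function alternative is to let LEAD find the timestamp of the next milestone within the same order, then sum the gaps for the QA rows only. This is only a sketch, assuming SQL Server 2012+ and the same OrderMilestones table as above:
select ORDERID,
       -- minutes spent in QA, converted to fractional days
       sum(datediff(minute, [TIMESTAMP], next_ts)) / 1440.0 as days_in_qa
from (
    select ORDERID, ORDERMILESTONE, [TIMESTAMP],
           lead([TIMESTAMP]) over (partition by ORDERID order by [TIMESTAMP]) as next_ts
    from OrderMilestones
) t
where ORDERMILESTONE = 'QA'
group by ORDERID
A QA row with no later milestone (the order is still in QA) yields a NULL gap, which SUM ignores.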

Related

Avoiding roundtrips in the database caused by looping

I am using Postgres, and I recently found that the code I am using makes too many round trips to the database.
What I am doing is basically getting data from a table on a daily basis, because I have to look for changes day by day, but the whole function that does this job is called once a month.
An example of my table
Amount
Id | Itemid | Amount | Date
1 | 2 | 50 | 20-5-20
Now this table can be updated to add items at any point in time, and I have to see the total amount, i.e. SUM(Amount), every day.
But here's the catch, I have to add interest to the amount of each day at the rate of 5%.
So I can't just call the function once; I have to look at its value every day.
For example, if I add an item worth $50 on the 1st of May, then the interest on that day is 5/100 * 50.
I add another item worth $50 on the 5th of May, and now the interest on the 5th day is 5/100 * 50.
But prior to the 5th, the interest was on only $50, so if I simply use SUM(Amount) * 5/100 it is wrong.
Also, another issue is that the dates are stored as timestamps and I need to group by the date part of the timestamp, because grouping by the raw timestamp creates multiple rows for the same date, which I want to avoid while taking the sum.
So if there are two entries on the same date but at different hours, ideally the query should sum them up as one single date.
Example
Amount Table
Date | Amount
2020-5-5 20:8:8 | 100
2020-5-5 7:8:8 | 100
Result should be
Amount Table
Date | Amount
2020-5-5 | 200
My current code.
for i in range(numberofdaysinthemonth):
    amount = amount + session.query(func.sum(Amount.Amount)).filter(Amount.date < current_date).scalar() * 5 / 100
I want a query that gets all these values according to dates, for example:
date | Sum of amount till that date
20-5-20 | 50
20-6-20 | 100
Any ideas about what I should do to avoid a loop that runs 30 times, given that the function is called once a month?
I am supposed to get all this data in a table, day-wise, aggregated as the sum of amount for each day.
That is a simple "running total":
select "date",
sum(amount) over (order by "date") as amount_til_date
from the_table
order by "date";
If you need the amount per itemid:
select "date",
sum(amount) over (partition by itemid order by "date") as amount_til_date
from the_table
order by "date";
If you also need to calculate the "compound interest rate" up to that day, you can do that as well:
select itemid,
"date",
sum(amount) over (partition by itemid order by "date") as amount_til_date,
sum(amount) over (partition by itemid order by "date") * power(1.05, count(*) over (partition by itemid order by "date")) as compound_interest
from the_table
order by "date";
To get that for a specific month, add a WHERE clause:
where "date" >= date '2020-06-01'
and "date" < date '2020-07-01'
In general, to avoid round trips between the application and the database, application code must be moved into the database as stored code (stored procedures and stored functions) written in a procedural language. This approach is sometimes called "thick database" in commercial databases like Oracle Database.
PostgreSQL's default procedural language is PL/pgSQL, but you can also use Java, Perl, Python or JavaScript via PostgreSQL extensions that you would need to install.
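For illustration, a minimal sketch of such a stored function (a plain SQL body; daily_totals is a hypothetical name, and the_table and its columns are taken from the answer above):
create or replace function daily_totals(p_month date)
returns table(day date, total numeric)
language sql
as $$
    -- one row per calendar day of the requested month
    select "date"::date, sum(amount)::numeric
    from the_table
    where "date" >= p_month
      and "date" < p_month + interval '1 month'
    group by "date"::date
    order by 1;
$$;
The application then makes a single call per month, e.g. select * from daily_totals(date '2020-05-01');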

Use 12 different date ranges in a join statement (SQL) without re-running the join 12 times

Suppose I have 2 tables in the formats below:
Table 1: Visits
Customer | Website_visit_id | Time_of_visit
Table 2: Booking
Customer | Hotel_booking_id | Time_of_booking
I created a table that has the customer's id, their booking, and all the visits they made to the website within 30 days prior to making the booking, using this:
select customer, booking_id, website_visit_id
from visits a
join booking b
on a.customer = b.customer
and Time_of_visit between dateadd(day, -30, time_of_booking) and time_of_booking
My question is: if I want to expand this time frame to look at how many visits were made within 60, 90, 120... 365 days prior to making the booking, how do I do that most efficiently, instead of having to run the join 12 times with the dateadd number changed? Is there a way to add a parameter in place of the '30' days in the join condition?
The output can be in the form of a list of bookings and their corresponding visits/timeframes, or a table of bookings and the total visits for each time frame (something that looks like this):
Time_frame| Customer | Hotel_booking_id | Number of visits
T30D | Mike | 1A | 5
T60D | Mike | 1A | 15
T90D | Mike | 1A | 22
T120D | Mike | 1A | 27
Thank you in advance
If you are looking to dynamically build a query, you should look to see if your DB supports dynamic queries/sql. Essentially, instead of running a query, you would run a stored procedure with logic to identify what your query parameters should be. From there, you would be able to dynamically create your query based on your preferred logic and execute it.
I'm not sure which DB you are using; however, I know that MySQL and Oracle support this.
Please see links below for further information:
Oracle Documentation
MySQL Documentation
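If you would rather stay with a single set-based query, one option is to put the timeframes themselves in a derived table and join once, so each booking is matched against every window in one pass. A sketch, assuming SQL Server syntax and the column names from the question:
select t.Time_frame,
       b.Customer,
       b.Hotel_booking_id,
       count(v.Website_visit_id) as Number_of_visits
from Booking b
cross join (values (30, 'T30D'), (60, 'T60D'), (90, 'T90D'), (120, 'T120D')
           ) as t(days_back, Time_frame)   -- extend the list up to 365 as needed
left join Visits v
       on v.Customer = b.Customer
      and v.Time_of_visit between dateadd(day, -t.days_back, b.Time_of_booking)
                              and b.Time_of_booking
group by t.Time_frame, b.Customer, b.Hotel_booking_id
Each wider window naturally includes the narrower ones, which matches the cumulative counts in the sample output.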

Two tables, Three prices, Different periods, What can I do?

I have tried and failed to solve this puzzle, and now I'm looking for some help if anyone is out there. Explanation:
The first table consists of the original price of an item.
Table1 (Sales_Data):
+---------+----------+-------+------------+
| Item_Id | Store_Id | Price | Sales_Date |
+---------+----------+-------+------------+
The second table consists of two prices:
one is if a store has another price on the item; this has a 0 value in To_Date
because this price should last forever (let's call this the forever price)
one is if a store has another price on an item for just a period (02.03.2014-10.03.2014); let's call this the discount price
Both prices are stored in Price, but the dates are the big difference.
Table2 (Discount_Data):
+---------+----------+-------+------------+
| Item_Id | Store_Id | Price | Sales_Date |
+---------+----------+-------+------------+
Now to the big question:
The forever price should always overwrite the original price.
The discount price should always overwrite the original or forever price for the exact period.
Item_Id and Store_Id have to be the same.
How can I go forward to solve this? Can anyone help me on the way?
I hate to make an assumption, but it appears your second table should include two more columns, From_Date and To_Date. I will alias the Discount_Data forever price as ddf and the Discount_Data period price as ddp.
select sd.Item_Id,
sd.Store_Id,
sd.Price as OriginalPrice,
coalesce(ddp.Price, ddf.Price, sd.Price) as UpdatedPrice
from Sales_Data sd
left join Discount_Data ddf
on sd.Item_Id=ddf.Item_Id
and sd.Store_Id=ddf.Store_Id
and sd.Sales_Date >= ddf.From_Date
and ddf.To_Date=0
left join Discount_Data ddp
on sd.Item_Id=ddp.Item_Id
and sd.Store_Id=ddp.Store_Id
and sd.Sales_Date between ddp.From_Date and ddp.To_Date
Hope that helps.

MS Access: Rank SUM() Values

I am working on an old web app that is still using MS Access as its data source, and I have run into an issue while trying to rank SUM() values.
Let's say I have 2 different account numbers, and each of those account numbers has an unknown number of invoices. I need to sum up the total of all the invoices, group it by account number, then add a rank (1-2).
RAW TABLE EXAMPLE...
Account | Sales | Invoice Number
001 | 400 | 123
002 | 150 | 456
001 | 300 | 789
DESIRED RESULTS...
Account | Sales | Rank
001 | 700 | 1
002 | 150 | 2
I tried...
SELECT Account, SUM(Sales) AS Sales,
(SELECT COUNT(*) FROM Invoices) AS RANK
FROM Invoices
ORDER BY Account
But that query keeps returning the number of records assigned to that account and not a rank.
This would be easier in a report, with a running count: Report - Running Count within a Group
This is not standard in a query, but you can do something with custom functions (it's elaborate, but possible):
http://support.microsoft.com/kb/94397/en-us
Easiest way is to break it up into 2 queries. The first one is this, and I've saved it as qryInvoices:
SELECT Invoices.Account, Sum(Invoices.Sales) AS Sales
FROM Invoices
GROUP BY Invoices.Account;
And then the second query uses the first as follows:
SELECT qryInvoices.Account, qryInvoices.Sales, (SELECT Count(*) FROM qryInvoices AS I WHERE I.Sales > qryInvoices.Sales)+1 AS Rank
FROM qryInvoices
ORDER BY qryInvoices.Sales DESC;
I've tested this and got the desired results as outlined in the question.
Note: It may be possible to achieve in one query using a Defined table, but in this instance it was looking a little ugly.
If you need the answer in one query, it should be:
SELECT inv.*, (
SELECT 1+COUNT(*) FROM (
SELECT Account, Sum(Sales) AS Sum_sales FROM Invoices GROUP BY Account
) WHERE Sum_sales > inv.Sum_sales
) AS Rank
FROM (
SELECT Account, Sum(Sales) AS Sum_sales FROM Invoices GROUP BY Account
) inv
I have tried it on Access and it works. You may also use different names for the two instances of "Sum_sales" above to avoid confusion (in which case you can drop the "inv." prefix).

Is there an established pattern for SQL queries which group by a range?

I've seen a lot of questions on SO concerning how to group data by a range in a SQL query.
The exact scenarios vary, but the general underlying problem in each is to group by a range of values rather than each discrete value in the GROUP BY column. In other words, to group by a less precise granularity than you're storing in the database table.
This crops up often in the real world when producing things like histograms, calendar representations, pivot tables and other bespoke reporting outputs.
Some example data (tables unrelated):
| OrderHistory | | Staff |
--------------------------- ------------------------
| Date | Quantity | | Age | Name |
--------------------------- ------------------------
|01-Jul-2012 | 2 | | 19 | Barry |
|02-Jul-2012 | 5 | | 53 | Nigel |
|08-Jul-2012 | 1 | | 29 | Donna |
|10-Jul-2012 | 3 | | 26 | James |
|14-Jul-2012 | 4 | | 44 | Helen |
|17-Jul-2012 | 2 | | 49 | Wendy |
|28-Jul-2012 | 6 | | 62 | Terry |
--------------------------- ------------------------
Now let's say we want to use the Date column of the OrderHistory table to group by weeks, i.e. 7-day ranges. Or perhaps group the Staff into 10-year age ranges:
| Week | QtyCount | | AgeGroup | NameCount |
-------------------------------- -------------------------
|01-Jul to 07-Jul | 7 | | 10-19 | 1 |
|08-Jul to 14-Jul | 8 | | 20-29 | 2 |
|15-Jul to 21-Jul | 2 | | 30-39 | 0 |
|22-Jul to 28-Jul | 6 | | 40-49 | 2 |
-------------------------------- | 50-59 | 1 |
| 60-69 | 1 |
-------------------------
GROUP BY Date and GROUP BY Age on their own won't do it.
The most common answers I see (none of which are consistently voted "correct") are to use one or more of:
a bunch of CASE statements, one per grouping
a bunch of UNION queries, with a different WHERE clause per grouping
as I'm working with SQL Server, PIVOT() and UNPIVOT()
a two-stage query using a sub-select, temp table or View construct
Is there an established generic pattern for dealing with such queries?
You can use some of the dimensional modeling techniques, such as fact tables and dimension tables. Order History can act as a fact table with DateKey foreign key relation to a Date dimension.
Date dimension can have a schema such as below:
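A minimal sketch of such a table, with assumed columns chosen to match the sample query below:
create table DimDate (
    DateKey       int primary key,  -- e.g. 20120708
    FullDate      date not null,
    CalendarWeek  int not null,     -- week number used for grouping
    CalendarMonth int not null,
    CalendarYear  int not null
    -- plus any other attributes you report on (quarter, weekday, holiday flag, ...)
);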
Note that the Date table is pre-filled with data up to N years ahead.
Using an example above, here is a sample query to get the result:
select CalendarWeek, sum(Quantity)
from OrderHistory a
join DimDate b
on a.DateKey = b.DateKey
group by CalendarWeek
For the Staff table, you can store a birth date key instead of the age and let the query calculate the age and the ranges.
As is often the case this SQL problem requires using more than one pattern in composition.
In this case the two you can use are
NTILE
Numbers Table
You can use NTILE to create a set number of groups. However, since you don't have each member of the groups represented, you also need to use a numbers table. Since you're using SQL Server you have it easy, as you don't have to simulate either.
Here's an example for the Staff problem
WITH g as (
SELECT
NTILE(6) OVER (ORDER BY number) grp,
NUMBER
FROM
master..spt_values
WHERE
TYPE = 'P'
and number >=10 and number <=69
)
SELECT
CAST(min(g.number) as varchar) + ' - ' +
CAST(max(g.number) as varchar) AgeGroup ,
COUNT(s.age) NameCount
FROM
g
LEFT JOIN Staff s
ON g.NUMBER = s.Age
GROUP BY
grp
You can apply this to dates as well; it just requires some date-to-day manipulation.
Take a look at the OVER clause and its associated clauses: PARTITION BY, ORDER BY, ROWS, RANGE...
Determines the partitioning and ordering of a rowset before the
associated window function is applied. That is, the OVER clause
defines a window or user-specified set of rows within a query result
set. A window function then computes a value for each row in the
window. You can use the OVER clause with functions to compute
aggregated values such as moving averages, cumulative aggregates,
running totals, or a top N per group results.
My favorite case in this genre is where transactions must be grouped by fiscal quarter or fiscal year. The fiscal quarter or fiscal year boundaries of various enterprises can border on the bizarre.
My favorite way to implement this is to create a separate table for the attributes of a date. Let's call the table "Almanac". One of the columns in this table is the fiscal quarter, and another one is the fiscal year. The key to this table is of course the date. Ten years' worth of data fills up 3,650 rows, plus a few for leap years. You then need a program that can populate this table from scratch. All the enterprise calendar rules are built into this one program.
When you need to group transaction data by fiscal quarter, you just join with this table over date, and then group by fiscal quarter.
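For example, a sketch with assumed table and column names:
select a.FiscalYear,
       a.FiscalQuarter,
       sum(t.Amount) as Total
from Transactions t
join Almanac a on a.CalendarDate = t.TransactionDate
group by a.FiscalYear, a.FiscalQuarter;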
I figure this pattern could be extended to groupings by other kinds of ranges, but I've never done it myself.
In your first example your intervals are regular so you can achieve the desired result simply by using functions. Below is an example that gets the data as you require it. The first query keeps the first column in date format (how I would preferably deal with it doing any formatting outside of SQL), the second does the string conversion for you.
DECLARE @OrderHistory TABLE (Date DATE, Quantity INT)
INSERT @OrderHistory VALUES
('20120701', 2), ('20120702', 5), ('20120708', 1), ('20120710', 3),
('20120714', 4), ('20120717', 2), ('20120728', 6)
SET DATEFIRST 7
SELECT DATEADD(DAY, 1 - DATEPART(WEEKDAY, Date), Date) AS WeekStart,
SUM(Quantity) AS Quantity
FROM @OrderHistory
GROUP BY DATEADD(DAY, 1 - DATEPART(WEEKDAY, Date), Date)
SELECT WeekStart,
SUM(Quantity) AS Quantity
FROM @OrderHistory
CROSS APPLY
( SELECT CONVERT(VARCHAR(6), DATEADD(DAY, 1 - DATEPART(WEEKDAY, Date), Date), 6) + ' to ' +
CONVERT(VARCHAR(6), DATEADD(DAY, 7 - DATEPART(WEEKDAY, Date), Date), 6) AS WeekStart
) ws
GROUP BY WeekStart
Something similar can be done for your age grouping using:
SELECT CAST(FLOOR(Age / 10.0) * 10 AS INT)
However this fails for 30-39 because there is no data for this group.
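One way to keep the empty 30-39 group is to spell the ranges out in a derived table and left join the data to it; a sketch using SQL Server's VALUES constructor:
SELECT r.AgeGroup,
       COUNT(s.Age) AS NameCount
FROM (VALUES ('10-19', 10, 19), ('20-29', 20, 29), ('30-39', 30, 39),
             ('40-49', 40, 49), ('50-59', 50, 59), ('60-69', 60, 69)
     ) r (AgeGroup, AgeFrom, AgeTo)
LEFT JOIN Staff s
       ON s.Age BETWEEN r.AgeFrom AND r.AgeTo
GROUP BY r.AgeGroup
ORDER BY r.AgeGroup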
My stance on the matter would be: if you are doing the query as a one-off, a temp table, CTE or CASE statement should work just fine, and this should also extend to reusing the same query on small sets of data.
If you are likely to reuse the grouping, however, or you are referring to significant amounts of data, then create a permanent table with the ranges defined and indices applied to any columns required. This is the basis of creating dimensions in OLAP.
Couldn't you treat the age (or date) as a foreign key in a new, tiny table that is just ages (or dates) and their corresponding ranges? A join statement could provide a new table with a column that contains AgeGroups. With the new table you could use the standard group-by method.
It does seem reckless to make a new table just for grouping, but it would be easy to create programmatically, and I think it would be easier to maintain (or drop and recreate) than a CASE statement or a WHERE clause. If the result of this query is a one-off, a throwaway SQL statement would probably work best, but I think my method makes the most sense for long-term use.
Well, some years ago with Oracle DB we did it the following way:
We had two tables: Sessions and Ranges. Ranges had a foreign key that referenced Sessions.
When we needed to perform SQL, we created a new record in Sessions and several new records in Ranges that referred to that session.
Our SQL joined Ranges with a filter by Session:
select sum(t.Value), r.Name
from DataTable t
join Ranges r on (r.Session = ? and r.Start <= t.MyDate and r.End > t.MyDate)
group by r.Name
After we got the results, we deleted that record from Sessions, and the records in Ranges were deleted by cascade.
We had a daemon job that purged Sessions of junk records left behind in extraordinary situations (killed processes, etc.).
This worked perfectly. Since that time Oracle has added new SQL clauses that could perhaps be used instead, but on other RDBMSes this is still a valid way.
Another approach is to create a number of functions such as GET_YEAR_BY_DATE or GET_QUARTER_BY_DATE or GET_WEEK_BY_DATE (each would return the start date of the corresponding period; for example, for any date, return the start date of its year). And then group by them:
select sum(Value), GET_YEAR_BY_DATE(MyDate) from DataTable
group by GET_YEAR_BY_DATE(MyDate)
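On PostgreSQL, for instance, the built-in date_trunc already plays this role, so no custom function is needed:
select date_trunc('year', MyDate) as period_start, sum(Value)
from DataTable
group by date_trunc('year', MyDate)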