How to pull the required rows from a single table by applying conditional statements on columns in SQL Server?

I have tried a number of ways to solve this, but unfortunately I got stuck every time.
Source table:
id year jan feb mar apr may jun jul aug sep oct nov dec
1234 2014 05 06 12 15 16 17 18 19 20 21 22 23
1234 2013 05 06 12 15 16 17 18 19 20 21 22 23
Task: Assume that we are currently at March 2014, and we need the previous 12 months (i.e., Mar 2013 to Feb 2014); the remaining month values need to be zero, except for year and id.
Expected result:
id year jan feb mar apr may jun jul aug sep oct nov dec
1234 2014 05 06 0 0 0 0 0 0 0 0 0 0
1234 2013 0 0 12 15 16 17 18 19 20 21 22 23
This needs a code solution for SQL Server 2008. I would be very happy if anybody can solve this.
Note:
I got stuck trying to pull the column names dynamically.

You can try this:
select id, [year],
    case when datediff(month, convert(datetime, cast([year] as varchar(4)) + '0101'), getdate()) between 1 and 12 then jan else 0 end as jan,
    case when datediff(month, convert(datetime, cast([year] as varchar(4)) + '0201'), getdate()) between 1 and 12 then feb else 0 end as feb,
    ... -- and so on for mar through dec
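For completeness, here is a sketch of the full statement along the same lines. The table name dbo.SourceTable and the assumption that the year column is an integer are mine, not stated in the question. Months that fall within the 12 months before the current month keep their value; everything else becomes 0.

-- Sketch only: dbo.SourceTable and the integer type of [year] are assumptions.
SELECT id,
       [year],
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0101'), GETDATE()) BETWEEN 1 AND 12 THEN jan ELSE 0 END AS jan,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0201'), GETDATE()) BETWEEN 1 AND 12 THEN feb ELSE 0 END AS feb,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0301'), GETDATE()) BETWEEN 1 AND 12 THEN mar ELSE 0 END AS mar,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0401'), GETDATE()) BETWEEN 1 AND 12 THEN apr ELSE 0 END AS apr,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0501'), GETDATE()) BETWEEN 1 AND 12 THEN may ELSE 0 END AS may,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0601'), GETDATE()) BETWEEN 1 AND 12 THEN jun ELSE 0 END AS jun,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0701'), GETDATE()) BETWEEN 1 AND 12 THEN jul ELSE 0 END AS jul,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0801'), GETDATE()) BETWEEN 1 AND 12 THEN aug ELSE 0 END AS aug,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '0901'), GETDATE()) BETWEEN 1 AND 12 THEN sep ELSE 0 END AS sep,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '1001'), GETDATE()) BETWEEN 1 AND 12 THEN oct ELSE 0 END AS oct,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '1101'), GETDATE()) BETWEEN 1 AND 12 THEN nov ELSE 0 END AS nov,
       CASE WHEN DATEDIFF(month, CONVERT(datetime, CAST([year] AS varchar(4)) + '1201'), GETDATE()) BETWEEN 1 AND 12 THEN [dec] ELSE 0 END AS [dec]
FROM dbo.SourceTable;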

Related

Identify if date is the last date for any given group?

I have a table that is structured like the below - this contains details about all customer subscriptions and when they start/end.
SubKey  CustomerID  Status  StartDate    EndDate
29333   102         7       01 Jan 2013  1 Jan 2014
29334   102         6       7 Jun 2013   15 Jun 2022
29335   144         6       10 Jun 2021  17 Jun 2022
29336   144         2       8 Oct 2023   10 Oct 2025
I am trying to add an indicator flag to this table (either "Yes" or "No") which shows, for each row, whether the [EndDate] of the SubKey is the last one for that CustomerID. So for the above example:
SubKey  CustomerID  Status  StartDate    EndDate      IsLast
29333   102         7       01 Jan 2013  1 Jan 2014   No
29334   102         6       7 Jun 2013   15 Jun 2022  Yes
29335   144         6       10 Jun 2021  17 Jun 2022  Yes
29336   144         2       8 Oct 2023   10 Oct 2025  Yes
The flag is set to "No" for the first row because, on 1 Jan 2014, CustomerID 102 had another SubKey (29334) still active at the time (it didn't end until 15 Jun 2022).
The rest of the rows are set to "Yes" because those were the last active subscriptions per CustomerID.
I have been reading about the LAG function which may be able to help. I am just not sure how to make it fit in this scenario.
Probably the easiest method would be to use EXISTS with a correlated subquery. Try the following for your desired results; it excludes rows that have a later, overlapping subscription:
select *,
       case when exists (
                select *
                from t t2
                where t2.customerId = t.customerId
                  and t2.endDate   > t.endDate
                  and t2.startDate < t.endDate
            ) then 'No' else 'Yes' end as IsLast
from t;
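If the flag should actually be stored in the table rather than returned by a query, the same correlated EXISTS can drive an UPDATE. A sketch, assuming the table is named t as above and an IsLast column has already been added (both are assumptions):

-- Sketch: persist the flag, assuming table t and an existing IsLast column.
UPDATE t
SET IsLast = CASE WHEN EXISTS (
                      SELECT 1
                      FROM t t2
                      WHERE t2.CustomerID = t.CustomerID
                        AND t2.EndDate    > t.EndDate
                        AND t2.StartDate  < t.EndDate
                  )
                  THEN 'No' ELSE 'Yes' END;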

Select row with most recent date per location and increment recent date by 1 for each row by location using MariaDB

I have a table of locations which has a Date column. I have to find the most recent date for each group of locationID; e.g. locationID 1 has the most recent date '31 May 2022'. After finding the most recent date for the group of locationID, I have to add 14 days to that recent date and store it in the NewDate column, then add one more day for each further row in that group of locationID.
My table is:
id locationID Date NewDate
1 1 31 May 2022
2 1 16 May 2022
3 1 28 Apr 2021
4 2 29 Mar 2022
5 2 22 Feb 2022
6 3 14 Jun 2022
7 3 27 Oct 2021
8 4 01 Feb 2022
9 4 04 May 2022
10 4 14 Jun 2021
11 5 01 Jun 2022
12 5 29 May 2022
13 5 20 Sep 2022
14 5 11 Aug 2022
15 5 03 Aug 2022
The answer should be as below; for example, for locationID = 1:
id locationID Date NewDate
1 1 31 May 2022 14 Jun 2022 // Recent Date + 14 Days - 31 May + 14 Days
2 1 16 May 2022 15 Jun 2022 // Recent Date + 15 Days - 31 May + 15 Days
3 1 28 Apr 2021 16 Jun 2022 // Recent Date + 16 Days - 31 May + 16 Days
I have come across a few similar posts and found the recent date like this:
SELECT L.*
FROM Locations L
INNER JOIN
(SELECT locationID, MAX(Date) AS MAXdate
FROM Locations
GROUP BY locationID) groupedL
ON L.locationID = groupedL.locationID
AND L.Date = groupedL.MAXdate
Using the above code I am able to find the recent date per location, but how do I add and increment the required days and store the result in the NewDate column? I am new to MariaDB; please suggest similar posts, reference documents or blogs. Should I write a function to perform this logic and call it to store the required dates in the NewDate column? I am not sure, please suggest. Thank you.
RESULT SHOULD LOOK LIKE BELOW:
id locationID Date NewDate
1 1 31 May 2022 14 Jun 2022 // Recent Date for locationid 1 + 14 Days - 31 May + 14 Days
2 1 16 May 2022 15 Jun 2022 // Recent Date for locationid 1 + 15 Days - 31 May + 15 Days
3 1 28 Apr 2021 16 Jun 2022 // Recent Date for locationid 1 + 16 Days - 31 May + 16 Days
4 2 29 Mar 2022 12 APR 2022 // Recent Date for locationid 2 + 14 Days
5 2 22 Feb 2022 13 APR 2022 // Recent Date for locationid 2 + 15 Days
6 3 14 Jun 2022 28 JUN 2022 // Recent Date for locationid 3 + 14 Days
7 3 27 Oct 2021 29 JUN 2022 // Recent Date for locationid 3 + 15 Days
8 4 01 Feb 2022 18 MAY 2022 // Recent Date for locationid 4 + 14 Days
9 4 04 May 2022 19 MAY 2022 // Recent Date for locationid 4 + 15 Days
10 4 14 Jun 2021 20 MAY 2022 // Recent Date for locationid 4 + 16 Days
11 5 01 Jun 2022 04 OCT 2022 // Recent Date for locationid 5 + 14 Days
12 5 29 May 2022 05 OCT 2022 // Recent Date for locationid 5 + 15 Days
13 5 20 Sep 2022 06 OCT 2022 // Recent Date for locationid 5 + 16 Days
14 5 11 Aug 2022 07 OCT 2022 // Recent Date for locationid 5 + 17 Days
15 5 03 Aug 2022 08 OCT 2022 // Recent Date for locationid 5 + 18 Days
You can use a cte:
with cte as (
    select l1.*,
           l2.m,
           (select sum(l4.id < l1.id and l4.locationid = l1.locationid)
              from locations l4) as inc
    from locations l1
    join (select l3.locationid, max(l3.dt) as m
            from locations l3
           group by l3.locationid) l2
      on l1.locationid = l2.locationid
)
select c.id, c.locationid, c.dt, c.m + interval 14 + c.inc day as NewDate
from cte c
You could use analytic window functions and update the original table by joining to a sub-query (works for MariaDB):
update t
join (
select Id,
Date_Add(First_Value(date) over(partition by locationId order by date desc),
interval (13 + row_number() over(partition by locationId order by date desc)) day
) NewDate
from t
)nd on t.id = nd.id
set t.Newdate = nd.NewDate;
See DB<>Fiddle example
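If the NewDate values only need to be returned rather than stored, a single SELECT with window functions gives the same numbers. A minimal read-only sketch, assuming MariaDB 10.2+ for window functions and incrementing in id order as in the expected result (table and column names are taken from the question):

-- Read-only sketch: per-location maximum date plus 14, 15, 16, ... days in id order.
SELECT id,
       locationID,
       `Date`,
       DATE_ADD(MAX(`Date`) OVER (PARTITION BY locationID),
                INTERVAL (13 + ROW_NUMBER() OVER (PARTITION BY locationID ORDER BY id)) DAY
       ) AS NewDate
FROM Locations
ORDER BY locationID, id;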

How to calculate a monthly median from a table of daily order counts?

My dataset:
Date Num_orders
Mar 21 2019 69
Mar 22 2019 82
Mar 24 2019 312
Mar 25 2019 199
Mar 26 2019 2,629
Mar 27 2019 2,819
Mar 28 2019 3,123
Mar 29 2019 3,332
Mar 30 2019 1,863
Mar 31 2019 1,097
Apr 01 2019 1,578
Apr 02 2019 2,353
Apr 03 2019 2,768
Apr 04 2019 2,648
Apr 05 2019 3,192
Apr 06 2019 2,363
Apr 07 2019 1,578
Apr 08 2019 3,090
Apr 09 2019 3,814
Apr 10 2019 3,836
...
I need to calculate the monthly median number of orders from days of the same month:
The desired results:
Month Median_monthly
Mar 2019 1,863
Apr 2019 2,768
May 2019 2,876
Jun 2019 ...
...
I tried to use the DATE_TRUNC function to extract the month from the dataset and then group by month, but it didn't work out. I use the Google BigQuery (#standard) environment. Thanks for your help!
Probably you tried to use PERCENTILE_CONT, which cannot be used with GROUP BY.
Try APPROX_QUANTILES(x, 100)[OFFSET(50)] instead; it works with GROUP BY.
SELECT DATE_TRUNC(Date, MONTH) AS Month,
       APPROX_QUANTILES(Num_orders, 100)[OFFSET(50)] AS median
FROM myTable
GROUP BY Month
Alternatively, you can use PERCENTILE_CONT within a subquery:
SELECT DISTINCT Month, median
FROM (
  SELECT
    DATE_TRUNC(Date, MONTH) AS Month,
    PERCENTILE_CONT(Num_orders, 0.5) OVER (PARTITION BY DATE_TRUNC(Date, MONTH)) AS median
  FROM myTable
)
This would often be done using DISTINCT:
SELECT DISTINCT DATE_TRUNC(Date, MONTH) AS Month,
       PERCENTILE_CONT(Num_orders, 0.5) OVER (PARTITION BY DATE_TRUNC(Date, MONTH)) AS median
FROM myTable;
Note: There are two percentile functions, PERCENTILE_CONT() and PERCENTILE_DISC(). They have different results when there is a "tie" in the middle of the data.
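For instance, a quick illustration in BigQuery standard SQL over the values 1, 2, 3, 4 (a made-up mini example, not the question's data):

SELECT DISTINCT
       PERCENTILE_CONT(x, 0.5) OVER () AS median_cont,  -- 2.5 (interpolated between 2 and 3)
       PERCENTILE_DISC(x, 0.5) OVER () AS median_disc   -- 2 (an actual value from the set)
FROM UNNEST([1, 2, 3, 4]) AS x;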

Adding set lists of future dates to rows in a SQL query

So I am doing a cohort analysis for customers, where a cohort is a group of people who started using the product in the same month. I then keep track of each cohort's total use for every subsequent month up till present time.
For example, the first "cohort month" is January 2012, then I have "use months" January 12, Feb 12, March 12, ..., March 17(current month). One column is "cohort month", and another is "use month". This process repeats for every subsequent cohort month. The table looks like:
Jan 12 | Jan 12
Jan 12 | Feb 12
...
Jan 12 | Mar 17
Feb 12 | Feb 12
Feb 12 | Mar 12
...
Feb 12 | Mar 17
...
Feb 17 | Feb 17
Feb 17 | Mar 17
Mar 17 | Mar 17
The problem arises because I want to do forecasting for one year out for both existing and future cohorts.
That means for the Jan 12 cohort, I want to do prediction for April 17 to Mar 18.
I also want to do predictions for the April 17 cohort (which doesn't exist yet) from April 17 to Mar 18. And so on till predictions for the Mar 18 cohort in Mar 18.
I can handle the predictions, don't worry about that.
My issue is that I cannot figure out how to add this list of months (Apr 17 ... Mar 18) to the "use month" column before every cohort switches.
I also need to add cohorts Apr 17 to Mar 18, and include the applicable parts of that list (Apr 17 ... Mar 18) for each of these future cohorts.
So I want the table to look like:
Jan 12 | Jan 12
Jan 12 | Feb 12
...
Jan 12 | Mar 17
Jan 12 | Apr 17
..
Jan 12 | Mar 18
Feb 12 | Feb 12
Feb 12 | Mar 12
...
Feb 12 | Mar 17
Feb 12 | Apr 17
...
Feb 12 | Mar 18
...
...
Feb 17 | Feb 17
Feb 17 | Mar 17
...
Feb 17 | Mar 18
Mar 17 | Mar 17
...
Mar 17 | Mar 18
I know the first solution to come to mind is to create a list of all dates Jan 12 to Mar 18, cross join it to itself, and then left outer join to the current table I have (where cohort / use months range from Jan 12 to Mar 17). However, this is not scalable.
Is there a way I can just iteratively add in this list of the months of the next year?
I am using HP Vertica, but could use Presto or Hive if absolutely necessary.
I think you should use the query below to create a temporary table out of nothing and join it with the rest of your query. You can't do anything in a procedural manner in SQL, I'm afraid, and you won't be able to get away without a CROSS JOIN. But here, you limit the CROSS JOIN to the generation of the first-of-month pairs that you need.
Here goes:
WITH
-- create a list of integers from 0 to 100 using the TIMESERIES clause
i(i) AS (
    SELECT dt::DATE - '2000-01-01'::DATE
    FROM (
                  SELECT '2000-01-01'::DATE + 0
        UNION ALL SELECT '2000-01-01'::DATE + 100
    ) d(d)
    TIMESERIES dt AS '1 day' OVER(ORDER BY d::TIMESTAMP)
),
-- limits are Jan-2012 to the first of the current month plus one year
month_limits(month_limit) AS (
              SELECT '2012-01-01'::DATE
    UNION ALL SELECT ADD_MONTHS(TRUNC(CURRENT_DATE, 'MONTH'), 12)
),
-- create the list of possible months as a CROSS JOIN of the i table
-- containing the integers and the month_limits table, using ADD_MONTHS()
-- and the smallest and greatest month of the month limits
month_list AS (
    SELECT ADD_MONTHS(MIN(month_limit), i) AS month_first
    FROM month_limits CROSS JOIN i
    GROUP BY i
    HAVING ADD_MONTHS(MIN(month_limit), i) <= (
        SELECT MAX(month_limit) FROM month_limits
    )
)
-- finally, CROSS JOIN the obtained month list with itself with the
-- filters needed.
SELECT
    cohort.month_first AS cohort_month
  , use.month_first    AS use_month
FROM month_list AS cohort
CROSS JOIN month_list AS use
WHERE use.month_first >= cohort.month_first
ORDER BY 1, 2
;

Best way to store aggregated values

We need to store aggregated values for different accounts which summarise various numbers on Month/Year basis. These numbers would be updated each time the data is updated (usually once or twice every 24 hours).
I'm expecting the data to be the results of PIVOT functions e.g.:
Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
2011 0 0 0 0 0 0 95 33 34 24 36 52
Each account will need different aggregates, e.g. "Count Of Customers", "Count Of Orders" and "Value Of Sales", and I'm not sure whether it would be best to add a key to the data or use separate tables, e.g.:
Year Key Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
2011 CntOrders 0 0 0 0 0 0 95 33 34 24 36 52
2011 CntCust 0 0 0 0 0 0 95 33 34 24 36 52
2011 ValOrders 0 0 0 0 0 0 95 33 34 24 36 52
Or
dbo.CountOfOrders
Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
2011 0 0 0 0 0 0 95 33 34 24 36 52
dbo.ValueOfOrders
Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
2011 0 0 0 0 0 0 95 33 34 24 36 52
I've read a number of posts suggesting both NoSQL and SQL Server so I'm not sure which way we should go or how to decide.
We can't justify a dedicated cube at the moment but I'm wondering if it would be better to store the values in a NoSQL database or whether we should stick with SQL Server?
I'll stick with SQL. However, if you are worried about the time to rebuild such a PIVOT table, don't, because you don't necessarily have to build a table with a unique "key".
Build it with key + process datetime and just append it to the main pivot. During creation of the incrementals it will be bounded by your transaction timestamp (begin and end). There shouldn't be much bloat; if there is, you can collapse the process dates in a weekend job.
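As a rough illustration of that append-only layout (all object names here are illustrative assumptions, not from the question): stamp each refresh with its process datetime and read only the latest batch per Year/Key.

-- Sketch: append-only pivot with a process timestamp (names are assumptions).
CREATE TABLE dbo.MonthlyPivot (
    [Year]      SMALLINT    NOT NULL,
    [Key]       VARCHAR(20) NOT NULL,   -- e.g. 'CntOrders', 'CntCust', 'ValOrders'
    ProcessDate DATETIME    NOT NULL DEFAULT GETDATE(),
    Jan INT, Feb INT, Mar INT, Apr INT, May INT, Jun INT,
    Jul INT, Aug INT, Sep INT, Oct INT, Nov INT, [Dec] INT
);

-- Latest snapshot per Year/Key:
SELECT p.*
FROM dbo.MonthlyPivot AS p
JOIN (
    SELECT [Year], [Key], MAX(ProcessDate) AS MaxProcessDate
    FROM dbo.MonthlyPivot
    GROUP BY [Year], [Key]
) AS latest
  ON  latest.[Year] = p.[Year]
  AND latest.[Key]  = p.[Key]
  AND latest.MaxProcessDate = p.ProcessDate;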
Set up a job to run stored procedures that insert data into tables.
Store the data as Account, Year, Month, Value.
Use views of these tables for reporting multiple aggregates.
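A minimal sketch of that normalized layout and a pivoted reporting view, assuming SQL Server and illustrative object names (none of these names come from the question):

-- Sketch only: one row per account/year/month/metric.
CREATE TABLE dbo.AccountMonthlyAggregates (
    AccountId INT           NOT NULL,
    [Year]    SMALLINT      NOT NULL,
    [Month]   TINYINT       NOT NULL,
    Metric    VARCHAR(20)   NOT NULL,   -- e.g. 'CntOrders', 'CntCust', 'ValOrders'
    Value     DECIMAL(18,2) NOT NULL,
    CONSTRAINT PK_AccountMonthlyAggregates PRIMARY KEY (AccountId, [Year], [Month], Metric)
);
GO

-- One reporting view per aggregate, pivoting the rows back into the Jan..Dec shape:
CREATE VIEW dbo.vCountOfOrders
AS
SELECT AccountId, [Year],
       [1] AS Jan, [2] AS Feb, [3] AS Mar, [4]  AS Apr, [5]  AS May, [6]  AS Jun,
       [7] AS Jul, [8] AS Aug, [9] AS Sep, [10] AS Oct, [11] AS Nov, [12] AS [Dec]
FROM (
    SELECT AccountId, [Year], [Month], Value
    FROM dbo.AccountMonthlyAggregates
    WHERE Metric = 'CntOrders'
) AS src
PIVOT (SUM(Value) FOR [Month] IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])) AS p;
GO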
Definitely stick with SQL. There is no reason to add technical overhead for such a simple task.