Select data for only weekdays and calculate the difference - SQL

I am working on an SQL query to calculate MTBF (mean time between failures).
I have the following set of data:
+-----+-------------------------+------+
| ID | DateTime | Sec |
+-----+-------------------------+------+
| 101 | 2019-07-22 09:10:10.000 | 900 |
| 100 | 2019-07-22 08:45:00.000 | 900 |
| 99 | 2019-07-22 08:30:00.000 | 800 |
| 98 | 2019-07-22 08:15:00.000 | 800 |
| 97 | 2019-07-22 07:10:10.000 | 600 |
| 96 | 2019-07-22 06:50:00.000 | 600 |
| 95 | 2019-07-22 06:40:00.000 | 400 |
| 94 | 2019-07-21 15:40:00.000 | 720 |
| 93 | 2019-07-21 13:25:00.000 | 400 |
| 92 | 2019-07-21 10:43:10.000 | 900 |
| 91 | 2019-07-20 07:30:00.000 | 800 |
| 90 | 2019-07-19 20:40:10.000 | 900 |
| 89 | 2019-07-19 18:30:30.000 | 700 |
| 88 | 2019-07-19 17:50:00.000 | 400 |
| 87 | 2019-07-19 16:40:00.000 | 400 |
| 86 | 2019-07-19 15:20:25.000 | 1000 |
| 85 | 2019-07-19 14:50:20.000 | 900 |
| 84 | 2019-07-19 14:30:00.000 | 8000 |
| 83 | 2019-07-19 14:10:10.000 | 600 |
| 82 | 2019-07-19 13:59:00.000 | 200 |
| 81 | 2019-07-19 13:50:40.000 | 300 |
| 80 | 2019-07-19 13:40:00.000 | 400 |
+-----+-------------------------+------+
I want to calculate the difference between ID 101 and 100, then between 100 and 99, and so on.
But here is the difficult part:
I don't want to calculate the difference for weekend dates, in this case 2019-07-20 and 2019-07-21.
I only want to calculate the difference for weekdays.
So for the given sample data the output has to be the following:
+-----+-------------------------+------+---------+
| ID | DateTime | Sec | Diff |
+-----+-------------------------+------+---------+
| 101 | 2019-07-22 09:10:10.000 | 900 | Null |
| 100 | 2019-07-22 08:45:00.000 | 900 | 0:25:10 |
| 99 | 2019-07-22 08:30:00.000 | 800 | 0:15:00 |
| 98 | 2019-07-22 08:15:00.000 | 800 | 0:15:00 |
| 97 | 2019-07-22 07:10:10.000 | 600 | 1:04:50 |
| 96 | 2019-07-22 06:50:00.000 | 600 | 0:20:10 |
| 95 | 2019-07-22 06:40:00.000 | 400 | 0:10:00 |
| 94 | 2019-07-21 15:40:00.000 | 720 | Null |
| 93 | 2019-07-21 13:25:00.000 | 400 | Null |
| 92 | 2019-07-21 10:43:10.000 | 900 | Null |
| 91 | 2019-07-20 07:30:00.000 | 800 | Null |
| 90 | 2019-07-19 20:40:10.000 | 900 | Null |
| 89 | 2019-07-19 18:30:30.000 | 700 | 2:09:40 |
| 88 | 2019-07-19 17:50:00.000 | 400 | 0:40:30 |
| 87 | 2019-07-19 16:40:00.000 | 400 | 1:10:00 |
| 86 | 2019-07-19 15:20:25.000 | 1000 | 1:19:35 |
| 85 | 2019-07-19 14:50:20.000 | 900 | 0:30:05 |
| 84 | 2019-07-19 14:30:00.000 | 8000 | 0:20:20 |
| 83 | 2019-07-19 14:10:10.000 | 600 | 0:19:50 |
| 82 | 2019-07-19 13:59:00.000 | 200 | 0:11:10 |
| 81 | 2019-07-19 13:50:40.000 | 300 | 0:08:20 |
| 80 | 2019-07-19 13:40:00.000 | 400 | 0:10:40 |
+-----+-------------------------+------+---------+
After that I want to sum all the differences and divide by the number (count) of IDs on weekdays.
Below is the query I have tried so far:
SELECT *, DATEDIFF(SECOND, DateTime, LEAD(DateTime) OVER (ORDER BY [ID])) AS [diff]
FROM [Stoerungen]
WHERE [DateTime] BETWEEN '20190719 00:00:00.000' AND '20190722 23:59:00.000'
  AND ((DATEPART(dw, DateTime) + @@DATEFIRST) % 7) NOT IN (0, 1)
ORDER BY [ID]
OFFSET 0 ROWS
I am able to exclude the weekend data, but this query then computes a difference from the last Friday to the next Monday, so I get wrong data.

As you don't want to exclude the weekend rows but only set Diff to NULL, move this condition into a CASE expression:
SELECT *
, (case When (((DATEPART(dw, DateTime) + @@DATEFIRST) % 7) NOT IN (0, 1))
then DATEDIFF( SECOND, DateTime, LEAD(DateTime) OVER (ORDER BY [ID]))
else null
end) AS [diff]
FROM [Stoerungen]
WHERE [DateTime] between '20190719 00:00:00.000' and '20190722 23:59:00.000'
ORDER BY [ID]
OFFSET 0 ROWS
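To then get the MTBF itself (the sum of the differences divided by their count, as asked above), a hedged sketch is to wrap the same CASE expression in a CTE and average the non-NULL values. The extra weekday check on the LEAD row is an assumption added here so that the Friday row (ID 90) also stays NULL, as in the desired output; table and column names are taken from the question:
WITH diffs AS (
    SELECT *,
           CASE WHEN ((DATEPART(dw, [DateTime]) + @@DATEFIRST) % 7) NOT IN (0, 1)
                 AND ((DATEPART(dw, LEAD([DateTime]) OVER (ORDER BY [ID])) + @@DATEFIRST) % 7) NOT IN (0, 1)
                THEN DATEDIFF(SECOND, [DateTime], LEAD([DateTime]) OVER (ORDER BY [ID]))
           END AS diff_sec
    FROM [Stoerungen]
    WHERE [DateTime] BETWEEN '20190719 00:00:00.000' AND '20190722 23:59:00.000'
)
SELECT SUM(diff_sec)                AS total_sec,    -- sum of weekday-to-weekday gaps
       COUNT(diff_sec)              AS gap_count,    -- COUNT ignores the NULL (weekend) rows
       AVG(CAST(diff_sec AS float)) AS mtbf_seconds  -- same as SUM / COUNT
FROM diffs;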

Related

How to calculate various sum columns based on value of another in SQL?

Question: Write a query which will output the user count today, as well as from 7 (uc7), 14 (uc14), and 30 (uc30) days ago.
Table: num_users
+------------+------------+
| dateid | user_count |
+------------+------------+
| 2014-12-31 | 1010 |
| 2014-12-30 | 1000 |
| 2014-12-29 | 990 |
| 2014-12-28 | 980 |
| 2014-12-27 | 970 |
| 2014-12-26 | 960 |
| 2014-12-25 | 950 |
| 2014-12-24 | 940 |
| 2014-12-23 | 930 |
| 2014-12-22 | 920 |
| 2014-12-21 | 910 |
| 2014-12-20 | 900 |
| 2014-12-19 | 890 |
| 2014-12-18 | 880 |
| 2014-12-17 | 870 |
| 2014-12-16 | 860 |
| 2014-12-15 | 850 |
| 2014-12-14 | 840 |
| 2014-12-13 | 830 |
| 2014-12-12 | 820 |
| 2014-12-11 | 810 |
| 2014-12-10 | 800 |
| 2014-12-09 | 790 |
| 2014-12-08 | 780 |
| 2014-12-07 | 770 |
| 2014-12-06 | 760 |
| 2014-12-05 | 750 |
| 2014-12-04 | 740 |
| 2014-12-03 | 730 |
| 2014-12-02 | 720 |
| 2014-12-01 | 710 |
+------------+------------+
Desired Output:
+------------+------+------+------+------+
| dateid | uc | uc7 | uc14 | uc30 |
+------------+------+------+------+------+
| 2014-12-31 | 1010 | 940 | 870 | 710 |
| 2014-12-30 | 1000 | 930 | 860 | 0 |
| 2014-12-29 | 990 | 920 | 850 | 0 |
| 2014-12-28 | 980 | 910 | 840 | 0 |
| 2014-12-27 | 970 | 900 | 830 | 0 |
| 2014-12-26 | 960 | 890 | 820 | 0 |
| 2014-12-25 | 950 | 880 | 810 | 0 |
| 2014-12-24 | 940 | 870 | 800 | 0 |
| 2014-12-23 | 930 | 860 | 790 | 0 |
| 2014-12-22 | 920 | 850 | 780 | 0 |
| 2014-12-21 | 910 | 840 | 770 | 0 |
| 2014-12-20 | 900 | 830 | 760 | 0 |
| 2014-12-19 | 890 | 820 | 750 | 0 |
| 2014-12-18 | 880 | 810 | 740 | 0 |
| 2014-12-17 | 870 | 800 | 730 | 0 |
| 2014-12-16 | 860 | 790 | 720 | 0 |
| 2014-12-15 | 850 | 780 | 710 | 0 |
| 2014-12-14 | 840 | 770 | 0 | 0 |
| 2014-12-13 | 830 | 760 | 0 | 0 |
| 2014-12-12 | 820 | 750 | 0 | 0 |
| 2014-12-11 | 810 | 740 | 0 | 0 |
| 2014-12-10 | 800 | 730 | 0 | 0 |
| 2014-12-09 | 790 | 720 | 0 | 0 |
| 2014-12-08 | 780 | 710 | 0 | 0 |
| 2014-12-07 | 770 | 0 | 0 | 0 |
| 2014-12-06 | 760 | 0 | 0 | 0 |
| 2014-12-05 | 750 | 0 | 0 | 0 |
| 2014-12-04 | 740 | 0 | 0 | 0 |
| 2014-12-03 | 730 | 0 | 0 | 0 |
| 2014-12-02 | 720 | 0 | 0 | 0 |
| 2014-12-01 | 710 | 0 | 0 | 0 |
+------------+------+------+------+------+
How do I do this properly?
I tried the solution below, but it does not produce the right result:
SELECT dateid AS today,
(SELECT SUM(user_count) FROM num_users WHERE dateid = dateid) AS uc,
(SELECT SUM(user_count) FROM num_users WHERE dateid - 7) AS uc7,
(SELECT SUM(user_count) FROM num_users WHERE dateid - 14) AS uc14,
(SELECT SUM(user_count) FROM num_users WHERE dateid - 14) AS uc30
FROM num_users
This produces the presented output:
SELECT num_users.dateid, num_users.user_count AS uc,
(SELECT user_count FROM num_users AS A WHERE A.dateid=num_users.dateid-7) AS uc7,
(SELECT user_count FROM num_users AS A WHERE A.dateid=num_users.dateid-14) AS uc14,
(SELECT user_count FROM num_users AS A WHERE A.dateid=num_users.dateid-30) AS uc30
FROM num_users
ORDER BY num_users.dateid DESC;
But maybe you really want:
SELECT Sum(num_users.user_count) AS uc,
Sum(IIf([dateid]<=#12/31/2014#-7,[user_count],0)) AS uc7,
Sum(IIf([dateid]<=#12/31/2014#-14,[user_count],0)) AS uc14,
Sum(IIf([dateid]<=#12/31/2014#-30,[user_count],0)) AS uc30
FROM num_users;
The above was tested with Access. If the data actually continues through the current date, replace #12/31/2014# with Date(). The date-literal formatting and the function name will most likely differ on other database platforms.
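On platforms that support window functions (e.g. PostgreSQL, SQL Server, MySQL 8+), a hedged alternative sketch is LAG, assuming num_users has exactly one row per calendar day and using 0 for missing look-backs as in the desired output:
SELECT dateid,
       user_count AS uc,
       COALESCE(LAG(user_count, 7)  OVER (ORDER BY dateid), 0) AS uc7,
       COALESCE(LAG(user_count, 14) OVER (ORDER BY dateid), 0) AS uc14,
       COALESCE(LAG(user_count, 30) OVER (ORDER BY dateid), 0) AS uc30
FROM num_users
ORDER BY dateid DESC;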

Average by day using timestamp

I have the following MariaDB table. The data is added 3 times per day. I am looking to write an SQL query that would give me the average amount for each day. That way I can say that on May 13 'serender' averaged x amt, 'shilta' averaged x amt, and 'snowq' averaged x amt; on May 14 the averages were..., and so on for each date.
| key | timestamp  | card     | amt  |
-------------------------------------------
| 126 | 1620837006 | serender | 8040 |
| 127 | 1620837006 | shilta | 752 |
| 128 | 1620837006 | snowq | 308 |
| 132 | 1620862207 | serender | 846 |
| 133 | 1620862207 | shilta | 803 |
| 134 | 1620862207 | snowq | 759 |
| 139 | 1620894616 | serender | 845 |
| 140 | 1620894616 | shilta | 805 |
| 141 | 1620894616 | snowq | 759 |
| 146 | 1620923404 | serender | 869 |
| 147 | 1620923404 | shilta | 804 |
| 148 | 1620923404 | snowq | 759 |
| 153 | 1620948607 | serender | 755 |
| 154 | 1620948607 | shilta | 650 |
| 155 | 1620948607 | snowq | 530 |
If you want to see the date then convert it from a Unix timestamp to a date:
select date(from_unixtime(timestamp)) as dte, card, avg(amt)
from t
group by dte, card;
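To look at a single calendar day, e.g. 2021-05-13 (an example date here; which day a Unix timestamp falls on depends on the session time zone used by from_unixtime), the same query can simply be filtered:
select date(from_unixtime(timestamp)) as dte, card, avg(amt) as avg_amt
from t   -- 't' stands in for the real table name, as in the query above
where date(from_unixtime(timestamp)) = '2021-05-13'
group by dte, card;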

How do you create a moving window calculation in SQL for a time series? (not averages)

I am working with data where I need to use a calculation that is supposed to reference a previously calculated value in the previous row.
For example, take this dataset:
SELECT
generate_series('2015-01-01', '2019-12-01', '1 month'::interval)::date AS dates,
generate_series(1,60) AS nums;
There are NULL values starting at 2019-03-01.
I'd like to write a calculation in another column that fills those in based on the previous row, where the filled-in value is itself derived from that same calculation. So I tried to use some lag() functions, but after a while the result turns to NULL, probably because the previous calculation is also NULL.
with
mynumbers AS (
SELECT
generate_series('2015-01-01', '2025-12-01', '1 month'::interval)::date AS dates,
generate_series(1,50) AS nums),
mynumbers_lag AS (
SELECT *, lag(nums) OVER (ORDER BY dates ASC) AS previous1
FROM mynumbers)
SELECT dates, nums, previous1, (coalesce(nums,previous1)+lag(coalesce(nums,previous1), 5) OVER (ORDER BY dates ASC))*4 AS moving_calculation FROM mynumbers_lag;
The result starts to deviate from what I'd like at 2019-03-01. I'd like my calculation to continue all the way through the table. Does anyone know how I can accomplish this?
Edit: borrowing unutbu's table, I want to yield this:
| dates | nums | previous1 | moving_calculation |
|------------+------+-----------+--------------------|
| 2015-01-01 | 1 | | |
| 2015-02-01 | 2 | 1 | |
| 2015-03-01 | 3 | 2 | |
| 2015-04-01 | 4 | 3 | |
| 2015-05-01 | 5 | 4 | |
| 2015-06-01 | 6 | 5 | 28 |
| 2015-07-01 | 7 | 6 | 36 |
| 2015-08-01 | 8 | 7 | 44 |
| 2015-09-01 | 9 | 8 | 52 |
| 2015-10-01 | 10 | 9 | 60 |
...
| 2018-12-01 | 50 | 49 | 364 |
| 2019-01-01 | 50 | 49 | 372 |
| 2019-02-01 | 50 | 49 | 380 |
| 2019-03-01 | 50 | 49 | 388 |
| 2019-04-01 | 50 | 49 | 1744 |
| 2019-05-01 | 50 | 49 | 7172 |
| 2019-06-01 | | | 28888 |
| 2019-07-01 | | | 117104 |
| 2019-08-01 | | | 475392 |
| 2019-09-01 | | | 1930256 |
On 2019-04-01, the 1744 is calculated from (388+48)*4. The 388 is taken one cell up, from the previously calculated value, because nums is NULL. Eventually, starting on 2018-07-01, both nums values are NULL, so the calculation uses only previously computed moving_calculation values (values 380 and 7172).
The values in the moving_calculation column (denoted m0 below) depend on prior values in the same column; they are defined by a recurrence relation. There might even be a closed-form formula for m0. You might want to ask a question on Mathematics Stack Exchange if you wish to find one; if we knew the closed-form formula, computing the values in PostgreSQL would clearly be a breeze.
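(For reference, and as an observation rather than part of the original answer: reading off the recursive step used below, once nums has run out each new value appears to satisfy m_n = 4 * (m_(n-1) + m_(n-4)); for example, 117104 = 4 * (28888 + 388).)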
However, if we regard this problem as a programming problem, then I believe the calculation -- if it is to be done in PostgreSQL -- is most easily expressed using WITH RECURSIVE. The calculation feels somewhat like the calculation of Fibonacci numbers:
WITH RECURSIVE r(a, b) AS (
SELECT 0::int, 1::int
UNION ALL
SELECT b, a + b FROM r WHERE b < 50
)
SELECT a, b FROM r;
yields
| a | b |
|----+----|
| 0 | 1 |
| 1 | 1 |
| 1 | 2 |
| 2 | 3 |
| 3 | 5 |
| 5 | 8 |
| 8 | 13 |
| 13 | 21 |
| 21 | 34 |
| 34 | 55 |
If you understand the use of WITH RECURSIVE in that Fibonacci example, then I believe you'll see the solution below
is merely an extension of the same idea.
WITH RECURSIVE r(dates, nums, prev, m0, m1, m2, m3) AS (
SELECT * FROM (VALUES ('2019-02-01'::date, 50::numeric, 47::numeric, 388::numeric, NULL::numeric, NULL::numeric, NULL::numeric)) AS t1
UNION ALL
SELECT (dates + '1 month'::interval)::date
, m0
, coalesce(m3, prev+1)
, (m0+coalesce(m3, prev+1))*4
, m0
, m1
, m2
FROM r
WHERE dates <= '2020-01-01'
)
SELECT * FROM r
yields
| dates | nums | prev | m0 | m1 | m2 | m3 |
|------------+------------+----------+------------+------------+-----------+-----------|
| 2019-02-01 | 50 | 47 | 388 | | | |
| 2019-03-01 | 388 | 48 | 1744 | 388 | | |
| 2019-04-01 | 1744 | 49 | 7172 | 1744 | 388 | |
| 2019-05-01 | 7172 | 50 | 28888 | 7172 | 1744 | 388 |
| 2019-06-01 | 28888 | 388 | 117104 | 28888 | 7172 | 1744 |
| 2019-07-01 | 117104 | 1744 | 475392 | 117104 | 28888 | 7172 |
| 2019-08-01 | 475392 | 7172 | 1930256 | 475392 | 117104 | 28888 |
| 2019-09-01 | 1930256 | 28888 | 7836576 | 1930256 | 475392 | 117104 |
| 2019-10-01 | 7836576 | 117104 | 31814720 | 7836576 | 1930256 | 475392 |
| 2019-11-01 | 31814720 | 475392 | 129160448 | 31814720 | 7836576 | 1930256 |
| 2019-12-01 | 129160448 | 1930256 | 524362816 | 129160448 | 31814720 | 7836576 |
| 2020-01-01 | 524362816 | 7836576 | 2128797568 | 524362816 | 129160448 | 31814720 |
| 2020-02-01 | 2128797568 | 31814720 | 8642449152 | 2128797568 | 524362816 | 129160448 |
To combine this table with the original table, use UNION:
WITH mytable AS (
SELECT *, (nums+prev)*4 AS m0, NULL::numeric AS m1, NULL::numeric AS m2, NULL::numeric AS m3
FROM (
SELECT *
, lag(nums, 3) OVER (ORDER BY dates ASC) AS prev
FROM (
SELECT
generate_series('2015-01-01', '2025-12-01', '1 month'::interval)::date AS dates,
generate_series(1,50)::numeric AS nums) t
) t2
WHERE nums IS NOT NULL
), last_row AS (
SELECT * FROM mytable
WHERE nums IS NOT NULL
ORDER BY dates DESC
LIMIT 1
)
SELECT * FROM mytable
UNION (
WITH RECURSIVE r(dates, nums, prev, m0, m1, m2, m3) AS (
SELECT * FROM last_row
UNION ALL
SELECT (dates + '1 month'::interval)::date
, m0
, coalesce(m3, prev+1)
, (m0+coalesce(m3, prev+1))*4
, m0
, m1
, m2
FROM r
WHERE dates <= '2020-01-01')
SELECT * FROM r)
ORDER BY dates
which yields
| dates | nums | prev | m0 | m1 | m2 | m3 |
|------------+------------+----------+------------+------------+-----------+-----------|
| 2015-01-01 | 1 | | | | | |
| 2015-02-01 | 2 | | | | | |
| 2015-03-01 | 3 | | | | | |
| 2015-04-01 | 4 | 1 | 20 | | | |
| 2015-05-01 | 5 | 2 | 28 | | | |
| 2015-06-01 | 6 | 3 | 36 | | | |
| 2015-07-01 | 7 | 4 | 44 | | | |
| 2015-08-01 | 8 | 5 | 52 | | | |
| 2015-09-01 | 9 | 6 | 60 | | | |
| 2015-10-01 | 10 | 7 | 68 | | | |
| 2015-11-01 | 11 | 8 | 76 | | | |
| 2015-12-01 | 12 | 9 | 84 | | | |
| 2016-01-01 | 13 | 10 | 92 | | | |
| 2016-02-01 | 14 | 11 | 100 | | | |
| 2016-03-01 | 15 | 12 | 108 | | | |
| 2016-04-01 | 16 | 13 | 116 | | | |
| 2016-05-01 | 17 | 14 | 124 | | | |
| 2016-06-01 | 18 | 15 | 132 | | | |
| 2016-07-01 | 19 | 16 | 140 | | | |
| 2016-08-01 | 20 | 17 | 148 | | | |
| 2016-09-01 | 21 | 18 | 156 | | | |
| 2016-10-01 | 22 | 19 | 164 | | | |
| 2016-11-01 | 23 | 20 | 172 | | | |
| 2016-12-01 | 24 | 21 | 180 | | | |
| 2017-01-01 | 25 | 22 | 188 | | | |
| 2017-02-01 | 26 | 23 | 196 | | | |
| 2017-03-01 | 27 | 24 | 204 | | | |
| 2017-04-01 | 28 | 25 | 212 | | | |
| 2017-05-01 | 29 | 26 | 220 | | | |
| 2017-06-01 | 30 | 27 | 228 | | | |
| 2017-07-01 | 31 | 28 | 236 | | | |
| 2017-08-01 | 32 | 29 | 244 | | | |
| 2017-09-01 | 33 | 30 | 252 | | | |
| 2017-10-01 | 34 | 31 | 260 | | | |
| 2017-11-01 | 35 | 32 | 268 | | | |
| 2017-12-01 | 36 | 33 | 276 | | | |
| 2018-01-01 | 37 | 34 | 284 | | | |
| 2018-02-01 | 38 | 35 | 292 | | | |
| 2018-03-01 | 39 | 36 | 300 | | | |
| 2018-04-01 | 40 | 37 | 308 | | | |
| 2018-05-01 | 41 | 38 | 316 | | | |
| 2018-06-01 | 42 | 39 | 324 | | | |
| 2018-07-01 | 43 | 40 | 332 | | | |
| 2018-08-01 | 44 | 41 | 340 | | | |
| 2018-09-01 | 45 | 42 | 348 | | | |
| 2018-10-01 | 46 | 43 | 356 | | | |
| 2018-11-01 | 47 | 44 | 364 | | | |
| 2018-12-01 | 48 | 45 | 372 | | | |
| 2019-01-01 | 49 | 46 | 380 | | | |
| 2019-02-01 | 50 | 47 | 388 | | | |
| 2019-03-01 | 388 | 48 | 1744 | 388 | | |
| 2019-04-01 | 1744 | 49 | 7172 | 1744 | 388 | |
| 2019-05-01 | 7172 | 50 | 28888 | 7172 | 1744 | 388 |
| 2019-06-01 | 28888 | 388 | 117104 | 28888 | 7172 | 1744 |
| 2019-07-01 | 117104 | 1744 | 475392 | 117104 | 28888 | 7172 |
| 2019-08-01 | 475392 | 7172 | 1930256 | 475392 | 117104 | 28888 |
| 2019-09-01 | 1930256 | 28888 | 7836576 | 1930256 | 475392 | 117104 |
| 2019-10-01 | 7836576 | 117104 | 31814720 | 7836576 | 1930256 | 475392 |
| 2019-11-01 | 31814720 | 475392 | 129160448 | 31814720 | 7836576 | 1930256 |
| 2019-12-01 | 129160448 | 1930256 | 524362816 | 129160448 | 31814720 | 7836576 |
| 2020-01-01 | 524362816 | 7836576 | 2128797568 | 524362816 | 129160448 | 31814720 |
| 2020-02-01 | 2128797568 | 31814720 | 8642449152 | 2128797568 | 524362816 | 129160448 |

How to update a column with the average weekly value for each day in SQL

I have the following table. I have added a column named WeekValue, and I want to fill that column with the weekly average of impressioncnt for the same category, for each row.
Like:
+-------------------------+----------+---------------+--------------+
| Date | category | impressioncnt | weekAverage |
+-------------------------+----------+---------------+--------------+
| 2014-02-06 00:00:00.000 | a | 123 | 100 |
| 2014-02-06 00:00:00.000 | b | 121 | 200 |
| 2014-02-06 00:00:00.000 | c | 99 | 300 |
| 2014-02-07 00:00:00.000 | a | 33 | 100 |
| 2014-02-07 00:00:00.000 | b | 456 | 200 |
| 2014-02-07 00:00:00.000 | c | 54 | 300 |
| 2014-02-08 00:00:00.000 | a | 765 | 100 |
| 2014-02-08 00:00:00.000 | b | 78 | 200 |
| 2014-02-08 00:00:00.000 | c | 12 | 300 |
| ..... | | | |
| 2014-03-01 00:00:00.000 | a | 123 | 111 |
| 2014-03-01 00:00:00.000 | b | 121 | 222 |
| 2014-03-01 00:00:00.000 | c | 99 | 333 |
| 2014-03-02 00:00:00.000 | a | 33 | 111 |
| 2014-03-02 00:00:00.000 | b | 456 | 222 |
| 2014-03-02 00:00:00.000 | c | 54 | 333 |
| 2014-03-03 00:00:00.000 | a | 765 | 111 |
| 2014-03-03 00:00:00.000 | b | 78 | 222 |
| 2014-03-03 00:00:00.000 | c | 12 | 333 |
+-------------------------+----------+---------------+--------------+
I tried
update [dbo].[RetailTS]
set Week = datepart(day, dateDiff(day, 0, [Date])/7 *7)/7 +1
to get the week numbers and then group by the week number, date, and category, but this doesn't seem correct. How do I write the SQL query? Thanks!
Given that you may be adding more data in the future, thus requiring another update, you might want to just select out the weekly averages:
SELECT
Date,
category,
impressioncnt,
AVG(impressioncnt) OVER
(PARTITION BY category, DATEDIFF(d, 0, Date) / 7) AS weekAverage
FROM RetailTS
ORDER BY
Date, category;
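If you do want to persist the value into the WeekValue column mentioned in the question, one hedged sketch (SQL Server syntax, assuming WeekValue already exists as a numeric column on [dbo].[RetailTS]) is to update through a CTE that carries the same window expression:
WITH weekly AS (
    SELECT [WeekValue],
           AVG(impressioncnt) OVER
               (PARTITION BY category, DATEDIFF(d, 0, [Date]) / 7) AS weekAvg
    FROM [dbo].[RetailTS]
)
UPDATE weekly
SET [WeekValue] = weekAvg;  -- AVG over an int column truncates; cast to decimal first if fractions matter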

How to SUM every n records in MySQL

I have the following result from a query:
+---------------+------+------+------+------+------+------+------+-------+
| order_main_id | S36 | S37 | S38 | S39 | S40 | S41 | S42 | total |
+---------------+------+------+------+------+------+------+------+-------+
| 26 | 127 | 247 | 335 | 333 | 223 | 111 | 18 | 1394 |
| 26 | 323 | 606 | 772 | 765 | 573 | 312 | 154 | 3505 |
| 38 | 25 | 35 | 35 | 35 | 20 | NULL | NULL | 150 |
| 38 | 25 | 35 | 35 | 35 | 20 | NULL | NULL | 150 |
| 39 | 65 | 86 | 86 | 42 | 21 | NULL | NULL | 300 |
| 39 | 42 | 58 | 58 | 28 | 14 | NULL | NULL | 200 |
| 35 | 11 | 20 | 21 | 18 | 9 | 2 | NULL | 81 |
| 35 | 10 | 25 | 30 | 23 | 12 | 1 | NULL | 101 |
+---------------+------+------+------+------+------+------+------+-------+
I would like to insert a SUM row before each change of order_main_id, so it would look like this result:
+---------------+------+------+------+------+------+------+------+-------+
| order_main_id | S36 | S37 | S38 | S39 | S40 | S41 | S42 | total |
+---------------+------+------+------+------+------+------+------+-------+
| 26 | 127 | 247 | 335 | 333 | 223 | 111 | 18 | 1394 |
| 26 | 323 | 606 | 772 | 765 | 573 | 312 | 154 | 3505 |
| | 450 | 853 | 1107 | 1098 | 796 | 423 | 172 | 4899 |
| 38 | 25 | 35 | 35 | 35 | 20 | NULL | NULL | 150 |
| 38 | 25 | 35 | 35 | 35 | 20 | NULL | NULL | 150 |
| | 50 | 70 | 70 | 70 | 40 | NULL | NULL | 300 |
| 39 | 65 | 86 | 86 | 42 | 21 | NULL | NULL | 300 |
| 39 | 42 | 58 | 58 | 28 | 14 | NULL | NULL | 200 |
| | 107 | 144 | 144 | 70 | 35 | NULL | NULL | 500 |
| 35 | 11 | 20 | 21 | 18 | 9 | 2 | NULL | 81 |
| 35 | 10 | 25 | 30 | 23 | 12 | 1 | NULL | 101 |
| | 21 | 45 | 51 | 41 | 21 | 3 | NULL | 182 |
+---------------+------+------+------+------+------+------+------+-------+
How can I make this possible?
You'll need to write a second query which makes use of GROUP BY order_main_id.
Something like:
SELECT sum(S41+...) FROM yourTable GROUP BY order_main_id
You can actually do this in one query, but with a union all (really two queries, but the result sets are combined to make one awesome result set):
select
order_main_id,
S36,
S37,
S38,
S39,
S40,
S41,
S42,
S36 + S37 + S38 + S39 + S40 + S41 + S42 as total,
'Detail' as rowtype
from
tblA
union all
select
order_main_id,
sum(S36),
sum(S37),
sum(S38),
sum(S39),
sum(S40),
sum(S41),
sum(S42),
sum(S36 + S37 + S38 + S39 + S40 + S41 + S42),
'Summary' as rowtype
from
tblA
group by
order_main_id
order by
order_main_id, RowType
Remember that the order by affects the entirety of the union all, not just the last query. So, your resultset would look like this:
+---------------+------+------+------+------+------+------+------+-------+---------+
| order_main_id | S36 | S37 | S38 | S39 | S40 | S41 | S42 | total | rowtype |
+---------------+------+------+------+------+------+------+------+-------+---------+
| 26 | 127 | 247 | 335 | 333 | 223 | 111 | 18 | 1394 | Detail |
| 26 | 323 | 606 | 772 | 765 | 573 | 312 | 154 | 3505 | Detail |
| 26 | 450 | 853 | 1107 | 1098 | 796 | 423 | 172 | 4899 | Summary |
| 35 | 11 | 20 | 21 | 18 | 9 | 2 | NULL | 81 | Detail |
| 35 | 10 | 25 | 30 | 23 | 12 | 1 | NULL | 101 | Detail |
| 35 | 21 | 45 | 51 | 41 | 21 | 3 | NULL | 182 | Summary |
| 38 | 25 | 35 | 35 | 35 | 20 | NULL | NULL | 150 | Detail |
| 38 | 25 | 35 | 35 | 35 | 20 | NULL | NULL | 150 | Detail |
| 38 | 50 | 70 | 70 | 70 | 40 | NULL | NULL | 300 | Summary |
| 39 | 65 | 86 | 86 | 42 | 21 | NULL | NULL | 300 | Detail |
| 39 | 42 | 58 | 58 | 28 | 14 | NULL | NULL | 200 | Detail |
| 39 | 107 | 144 | 144 | 70 | 35 | NULL | NULL | 500 | Summary |
+---------------+------+------+------+------+------+------+------+-------+---------+
This way, you know what is and what isn't a detail or summary row, and the order_main_id that it's for. You could always (and probably should) hide this column in your presentation layer.
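If the data comes straight from a base table that has a per-row primary key (say id -- a hypothetical column here, since the rows above come from another query), a shorter alternative sketch in MySQL is GROUP BY ... WITH ROLLUP, which appends a subtotal row per order_main_id (and a final grand-total row) automatically:
SELECT
    order_main_id,
    SUM(S36) AS S36, SUM(S37) AS S37, SUM(S38) AS S38, SUM(S39) AS S39,
    SUM(S40) AS S40, SUM(S41) AS S41, SUM(S42) AS S42,
    SUM(COALESCE(S36,0) + COALESCE(S37,0) + COALESCE(S38,0) + COALESCE(S39,0)
      + COALESCE(S40,0) + COALESCE(S41,0) + COALESCE(S42,0)) AS total
FROM tblA
GROUP BY order_main_id, id WITH ROLLUP;
-- each (order_main_id, id) group is a single detail row; the rows where id rolls up
-- to NULL are the per-order subtotals, and the final row is the grand total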
For things like these I think you should use a reporting library (such as Crystal Reports); it'll save you a lot of trouble. Check JasperReports and similar projects on osalt.