Sampling for a SQL Database - sql

I have a column "steak" representing the amount of steak in pounds my firm has bought since day 1 of 2010.
I have another column "c_steak" representing the cumulative sum of pounds of steak.
╔═══╦═══════╦═════════╗
║   ║ steak ║ c_steak ║
╠═══╬═══════╬═════════╣
║ 1 ║ 0.2   ║ 0.2     ║
║ 2 ║ 0.2   ║ 0.4     ║
║ 3 ║ 0.3   ║ 0.7     ║
╚═══╩═══════╩═════════╝
How do I sample the table such that a row is taken once we buy another 100 pounds of steak? (sample ONE row immediately after c_steak reaches 100, 200, 300, 400 etc).
Note (EDIT):
c_steak is a float. It may not exactly hit 100, 200, 300, ...
If c_steak goes like ..., 99.5, 105.3, 107.1, ... then the row corresponding to 105.3 will be sampled.
If c_steak goes like ..., 99, 100.1, 100.2, 100.3, 105, ... then the row corresponding to 100.1 will be sampled.

It's almost certain you need the LAG method. You can try something like:
SELECT *
FROM (
    SELECT c_steak
          ,lag(c_steak, 1, 0) OVER (ORDER BY id) lg
    FROM myTable
) sub
WHERE cast(sub.c_steak as int) % 100 - cast(sub.lg as int) % 100 < 0
The logic is that when you reach a sum of 100, 200, etc., the modulo-100 value drops compared with the previous row, so the difference is negative.
e.g.:
80 % 100 = 80, whereas 101 % 100 = 1
195 % 100 = 95, whereas 205 % 100 = 5
293 % 100 = 93, whereas 320 % 100 = 20
etc.

This works:
SELECT m2.id, m2.steak, m2.c_steak
FROM t1 AS m1
INNER JOIN t1 AS m2 ON m2.id = m1.id + 1
WHERE cast(m2.c_steak as int) % 100 < cast(m1.c_steak as int) % 100;
===========
EDIT (in case the id column skips values):
SELECT DISTINCT m2.id, m2.steak, m2.c_steak
FROM t1 AS m1
INNER JOIN t1 AS m2 ON m2.id > m1.id
WHERE cast(m2.c_steak as int) % 100 < cast(m1.c_steak as int) % 100;
===========

You can use the MOD() function. Docs: https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_mod
SELECT * FROM Table WHERE MOD(c_steak, 100) = 0;
EDIT:
In response to the OP's edit, you can use FLOOR() on c_steak to get an int. Docs: https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_floor
SELECT * FROM Table WHERE MOD(FLOOR(c_steak), 100) = 0;
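Note that FLOOR can make several consecutive rows qualify (100.1, 100.2 and 100.3 all floor into the 100 bucket). A minimal sketch that keeps only the first row after each crossing, assuming MySQL 8+ (for LAG()) and an id column to order by; myTable is a placeholder name:
SELECT id, steak, c_steak
FROM (
    SELECT id, steak, c_steak,
           FLOOR(c_steak / 100) AS bucket,
           FLOOR(LAG(c_steak, 1, 0) OVER (ORDER BY id) / 100) AS prev_bucket
    FROM myTable
) t
-- the hundreds bucket increased, i.e. this is the first row at or past 100, 200, 300, ...
WHERE bucket > prev_bucket;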

You could have supplied some sample data. Doing it now:
WITH
-- sample data , this will be in the table
input(id,steak_sold) AS (
SELECT 1,30.07
UNION ALL SELECT 2,30.01
UNION ALL SELECT 3,30.02
UNION ALL SELECT 4,30.03
UNION ALL SELECT 5,30.04
UNION ALL SELECT 6,30.05
UNION ALL SELECT 7,30.06
UNION ALL SELECT 8,30.07
UNION ALL SELECT 9,30.08
UNION ALL SELECT 10,30.09
UNION ALL SELECT 11,30.10
UNION ALL SELECT 12,30.11
UNION ALL SELECT 13,30.12
UNION ALL SELECT 14,30.13
UNION ALL SELECT 15,30.14
UNION ALL SELECT 16,30.15
UNION ALL SELECT 17,30.16
)
-- real WITH clause would begin here: creating the running sum myself ...
, runsum AS (
    SELECT
        *
      , SUM(steak_sold) OVER (ORDER BY id) AS c_steak
    FROM input
)
SELECT
    *
FROM runsum
-- this running sum is at or above a certain multiple of 100
-- the previous one (running sum - steak_sold) is below that multiple
-- so integer division by 100 of the two differs
WHERE c_steak // 100 <> (c_steak - steak_sold) // 100;
-- out id | steak_sold | c_steak
-- out ----+------------+---------
-- out 4 | 30.03 | 120.13
-- out 7 | 30.06 | 210.28
-- out 10 | 30.09 | 300.52
-- out 14 | 30.13 | 420.98
-- out 17 | 30.16 | 511.43
-- out (5 rows)
-- out
-- out Time: First fetch (5 rows): 53.018 ms. All rows formatted: 53.066 ms

Related

sql pivot multiple and partially similar row values into multiple unknown number of cols

To SELECT multiple CASE WHEN expressions into a single row per ID I have used MAX aggregation and GROUP BY.
SELECT table1.IDvar,
       MAX(CASE WHEN table2.var1 = 'foo' THEN table2.var2 END) AS condition1,
       MAX(CASE WHEN table2.var1 = 'bar' THEN table2.var2 END) AS condition2
FROM table1
FULL JOIN table2 ON table1.IDvar = table2.table1_IDvar
GROUP BY table1.IDvar
However, I have observed that a criterion such as foo used in the CASE WHEN ... THEN ... END expression may occur multiple times, that is, in multiple rows, each of which has different values in the columns of interest (THEN column-of-interest END) in the db schema. This implies that taking the MAX or MIN drops data that may be of interest. It is not known in advance how many rows there are for each value in criteria_col and thus in the cols_of_interest.
Sample data e.g.:
IDvar_foreign_key | criteria_col | col_of_interest1 | col_of_interest2
------------------+--------------+------------------+-----------------
x1                | foo          | 01-01-2021       | 100
x1                | foo          | 01-06-2021       | 2000
x1                | foo          | 01-08-2021       | 0
x1                | bar          | 01-08-2021       | 300
Note: the actual table does contain a unique identifier or primary key.
Q: Are there ways to pivot certain columns/tables in a db schema without possibly dropping some values?
An output something like this:
IDvar_foreign_key | foo_1_col_of_interest1 | foo_1_col_of_interest2 | foo_2_col_of_interest1 | foo_2_col_of_interest2 | foo_3_col_of_interest1 | foo_3_col_of_interest2
------------------+------------------------+------------------------+------------------------+------------------------+------------------------+-----------------------
x1                | 01-01-2021             | 100                    | 01-06-2021             | 2000                   | 01-08-2021             | 0
Edit
@lemon and @MTO suggest dynamic queries are necessary; otherwise I was considering whether not using aggregation would do:
Dynamic Pivot in Oracle's SQL
Pivot rows to columns without aggregate
TSQL Pivot without aggregate function
You can use the MIN and MAX aggregation functions, and to get the correlated minimums and maximums for col_of_interest2 you can use KEEP (DENSE_RANK ...):
SELECT t1.IDvar,
       MIN(CASE WHEN t2.criteria_col = 'foo' THEN t2.col_of_interest1 END)
         AS foo_1_col_of_interest1,
       MIN(col_of_interest2) KEEP (
         DENSE_RANK FIRST ORDER BY
           CASE WHEN t2.criteria_col = 'foo' THEN t2.col_of_interest1 END
           ASC NULLS LAST
       ) AS foo_1_col_of_interest2,
       MAX(CASE WHEN t2.criteria_col = 'foo' THEN t2.col_of_interest1 END)
         AS foo_2_col_of_interest1,
       MAX(col_of_interest2) KEEP (
         DENSE_RANK FIRST ORDER BY
           CASE WHEN t2.criteria_col = 'foo' THEN t2.col_of_interest1 END
           DESC NULLS LAST
       ) AS foo_2_col_of_interest2
FROM table1 t1
FULL JOIN table2 t2
  ON t1.IDvar = t2.table1_IDvar
GROUP BY t1.IDvar
Which, for the sample data:
CREATE TABLE table1 ( idvar ) AS
SELECT 1 FROM DUAL UNION ALL
SELECT 2 FROM DUAL UNION ALL
SELECT 3 FROM DUAL;
CREATE TABLE table2 ( table1_idvar, criteria_col, col_of_interest1, col_of_interest2 ) AS
SELECT 1, 'foo', DATE '2021-01-01', 100 FROM DUAL UNION ALL
SELECT 1, 'foo', DATE '2021-03-01', 500 FROM DUAL UNION ALL
SELECT 1, 'foo', DATE '2021-06-01', 2000 FROM DUAL UNION ALL
SELECT 1, 'bar', DATE '2021-06-01', 2000 FROM DUAL UNION ALL
SELECT 2, 'foo', DATE '2021-01-02', 200 FROM DUAL UNION ALL
SELECT 2, 'foo', DATE '2021-03-02', 300 FROM DUAL UNION ALL
SELECT 2, 'bar', DATE '2021-06-02', 400 FROM DUAL UNION ALL
SELECT 3, 'foo', DATE '2021-01-03', 700 FROM DUAL;
Outputs:
IDVAR | FOO_1_COL_OF_INTEREST1 | FOO_1_COL_OF_INTEREST2 | FOO_2_COL_OF_INTEREST1 | FOO_2_COL_OF_INTEREST2
------+------------------------+------------------------+------------------------+-----------------------
1     | 2021-01-01 00:00:00    | 100                    | 2021-06-01 00:00:00    | 2000
2     | 2021-01-02 00:00:00    | 200                    | 2021-03-02 00:00:00    | 300
3     | 2021-01-03 00:00:00    | 700                    | 2021-01-03 00:00:00    | 700
SQL (not just Oracle) requires each query to have a known, fixed number of columns; if you want a dynamic number of columns then you should perform the pivot in whatever third-party application (Java, C#, PHP, etc.) that you are using to talk to the database.
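For illustration only (this is not from the linked answers), a rough PL/SQL sketch of what that dynamic approach could look like against the table2 sample above: count how many column pairs are needed, build the column list with LISTAGG, assemble the statement as a string, and hand it to the client (here it is only printed with DBMS_OUTPUT):
DECLARE
  l_max  PLS_INTEGER;
  l_cols VARCHAR2(4000);
  l_sql  VARCHAR2(32767);
BEGIN
  -- number of 'foo' rows for the busiest id = number of column pairs needed
  SELECT MAX(COUNT(*))
    INTO l_max
    FROM table2
   WHERE criteria_col = 'foo'
   GROUP BY table1_idvar;

  -- one MAX(CASE ...) pair per slot 1 .. l_max
  SELECT LISTAGG(
           'MAX(CASE WHEN rn = ' || lvl || ' THEN col_of_interest1 END) AS foo_' || lvl || '_col_of_interest1, '
        || 'MAX(CASE WHEN rn = ' || lvl || ' THEN col_of_interest2 END) AS foo_' || lvl || '_col_of_interest2',
           ', ') WITHIN GROUP (ORDER BY lvl)
    INTO l_cols
    FROM (SELECT LEVEL AS lvl FROM dual CONNECT BY LEVEL <= l_max);

  l_sql := 'SELECT table1_idvar, ' || l_cols
        || ' FROM (SELECT t2.*, ROW_NUMBER() OVER (PARTITION BY table1_idvar ORDER BY col_of_interest1) AS rn'
        || ' FROM table2 t2 WHERE criteria_col = ''foo'')'
        || ' GROUP BY table1_idvar';

  DBMS_OUTPUT.PUT_LINE(l_sql);  -- a real caller would OPEN a ref cursor FOR l_sql instead
END;
/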
If you want to pivot a fixed maximum number of columns then you can use the ROW_NUMBER analytic function. For example, if you want the 3 minimum values for col_of_interest1 then you can use:
SELECT idvar,
       MAX(CASE WHEN criteria_col = 'foo' AND rn = 1 THEN col_of_interest1 END)
         AS foo_1_col_of_interest1,
       MAX(CASE WHEN criteria_col = 'foo' AND rn = 1 THEN col_of_interest2 END)
         AS foo_1_col_of_interest2,
       MAX(CASE WHEN criteria_col = 'foo' AND rn = 2 THEN col_of_interest1 END)
         AS foo_2_col_of_interest1,
       MAX(CASE WHEN criteria_col = 'foo' AND rn = 2 THEN col_of_interest2 END)
         AS foo_2_col_of_interest2,
       MAX(CASE WHEN criteria_col = 'foo' AND rn = 3 THEN col_of_interest1 END)
         AS foo_3_col_of_interest1,
       MAX(CASE WHEN criteria_col = 'foo' AND rn = 3 THEN col_of_interest2 END)
         AS foo_3_col_of_interest2
FROM (
  SELECT t1.IDvar,
         criteria_col,
         col_of_interest1,
         col_of_interest2,
         ROW_NUMBER() OVER (
           PARTITION BY t1.IDvar, criteria_col
           ORDER BY col_of_interest1, col_of_interest2
         ) AS rn
  FROM table1 t1
  FULL JOIN table2 t2
    ON t1.IDvar = t2.table1_IDvar
  WHERE criteria_col IN ('foo' /*, 'bar', 'etc'*/)
)
GROUP BY idvar
Which outputs:
IDVAR | FOO_1_COL_OF_INTEREST1 | FOO_1_COL_OF_INTEREST2 | FOO_2_COL_OF_INTEREST1 | FOO_2_COL_OF_INTEREST2 | FOO_3_COL_OF_INTEREST1 | FOO_3_COL_OF_INTEREST2
------+------------------------+------------------------+------------------------+------------------------+------------------------+-----------------------
1     | 2021-01-01 00:00:00    | 100                    | 2021-03-01 00:00:00    | 500                    | 2021-06-01 00:00:00    | 2000
2     | 2021-01-02 00:00:00    | 200                    | 2021-03-02 00:00:00    | 300                    | null                   | null
3     | 2021-01-03 00:00:00    | 700                    | null                   | null                   | null                   | null
An unknown number of columns is an issue that you can't ignore. The question is whether there is any reasonable expected limit. I doubt anyone would get meaningful insight from a result set with hundreds of columns; it doesn't make sense. If you could set the limit to 10 or 20 or so, then you could build a datagrid structure using PIVOT where the number of columns is always the same and the data within could be placed as in your question.
Just as an example of how, here is code that does it with up to 6 pairs of your data of interest (COL_DATE and COL_VALUE); it could be 20 or 30 or more.
First your sample data and some preparation for pivoting (CTE named grid):
WITH -- S a m p l e d a t a
tbl AS
(
SELECT 1 "ID", 'foo' "CRITERIA", DATE '2021-01-01' "INTEREST_1", 100 "INTEREST_2" FROM DUAL UNION ALL
SELECT 1, 'foo', DATE '2021-03-01', 500 FROM DUAL UNION ALL
SELECT 1, 'foo', DATE '2021-06-01', 2000 FROM DUAL UNION ALL
SELECT 1, 'bar', DATE '2021-06-01', 2000 FROM DUAL UNION ALL
SELECT 2, 'foo', DATE '2021-01-02', 200 FROM DUAL UNION ALL
SELECT 2, 'foo', DATE '2021-03-02', 300 FROM DUAL UNION ALL
SELECT 2, 'bar', DATE '2021-06-02', 400 FROM DUAL UNION ALL
SELECT 3, 'foo', DATE '2021-01-03', 700 FROM DUAL
),
grid AS
(SELECT * FROM
( Select ID "ID", CRITERIA "GRP", INTEREST_1 "COL_DATE", INTEREST_2 "COL_VALUE",
Count(*) OVER(Partition By ID, CRITERIA) "ROWS_TOT",
ROW_NUMBER() OVER(Partition By ID, CRITERIA Order By ID, CRITERIA) "RN_GRP_ID",
ROW_NUMBER() OVER(Partition By ID, CRITERIA Order By ID, CRITERIA) "RN_GRP_ID_2"
From tbl t )
ORDER BY ID ASC, GRP DESC, ROWS_TOT DESC
),
Result (grid)
ID GRP COL_DATE COL_VALUE ROWS_TOT RN_GRP_ID RN_GRP_ID_2
---------- --- --------- ---------- ---------- ---------- -----------
1 foo 01-JAN-21 100 3 3 3
1 foo 01-JUN-21 2000 3 2 2
1 foo 01-MAR-21 500 3 1 1
1 bar 01-JUN-21 2000 1 1 1
2 foo 02-MAR-21 300 2 2 2
2 foo 02-JAN-21 200 2 1 1
2 bar 02-JUN-21 400 1 1 1
3 foo 03-JAN-21 700 1 1 1
... next is pivoting (another CTE named grid_pivot) and designing another grid that will be populated with your data of interest...
grid_pivot AS
( SELECT
ID, GRP, ROWS_TOT,
MAX(GRP_1_LINK) "GRP_1_LINK", CAST(Null as DATE) "GRP_1_DATE", CAST(Null as NUMBER) "GRP_1_VALUE",
MAX(GRP_2_LINK) "GRP_2_LINK", CAST(Null as DATE) "GRP_2_DATE", CAST(Null as NUMBER) "GRP_2_VALUE",
MAX(GRP_3_LINK) "GRP_3_LINK", CAST(Null as DATE) "GRP_3_DATE", CAST(Null as NUMBER) "GRP_3_VALUE",
MAX(GRP_4_LINK) "GRP_4_LINK", CAST(Null as DATE) "GRP_4_DATE", CAST(Null as NUMBER) "GRP_4_VALUE",
MAX(GRP_5_LINK) "GRP_5_LINK", CAST(Null as DATE) "GRP_5_DATE", CAST(Null as NUMBER) "GRP_5_VALUE",
MAX(GRP_6_LINK) "GRP_6_LINK", CAST(Null as DATE) "GRP_6_DATE", CAST(Null as NUMBER) "GRP_6_VALUE"
-- ... ... ... ...
FROM
( Select *
From ( Select * From grid )
PIVOT ( Max(RN_GRP_ID) "LINK" --Min(RN_GRP_ID) "GRP_FROM",
FOR RN_GRP_ID_2 IN(1 "GRP_1", 2 "GRP_2", 3 "GRP_3", 4 "GRP_4", 5 "GRP_5", 6 "GRP_6" ) ) -- ... ...
Order By ROWS_TOT DESC, GRP DESC, ID ASC
)
GROUP BY GRP, ROWS_TOT, ID
ORDER BY ROWS_TOT DESC, GRP DESC, ID ASC
)
Result (grid_pivot)
ID GRP ROWS_TOT GRP_1_LINK GRP_1_DATE GRP_1_VALUE GRP_2_LINK GRP_2_DATE GRP_2_VALUE GRP_3_LINK GRP_3_DATE GRP_3_VALUE GRP_4_LINK GRP_4_DATE GRP_4_VALUE GRP_5_LINK GRP_5_DATE GRP_5_VALUE GRP_6_LINK GRP_6_DATE GRP_6_VALUE
---------- --- ---------- ---------- ---------- ----------- ---------- ---------- ----------- ---------- ---------- ----------- ---------- ---------- ----------- ---------- ---------- ----------- ---------- ---------- -----------
1 foo 3 1 2 3
2 foo 2 1 2
3 foo 1 1
1 bar 1 1
2 bar 1 1
... and, finally, mixing grid_pivot data with grid data using 6 left joins to fit the 6 pairs of your data of interest into the grid.
SELECT gp.ID, gp.GRP,
g1.COL_DATE "GRP_1_DATE", g1.COL_VALUE "GRP_1_VALUE",
g2.COL_DATE "GRP_2_DATE", g2.COL_VALUE "GRP_2_VALUE",
g3.COL_DATE "GRP_3_DATE", g3.COL_VALUE "GRP_3_VALUE",
g4.COL_DATE "GRP_4_DATE", g4.COL_VALUE "GRP_4_VALUE",
g5.COL_DATE "GRP_5_DATE", g5.COL_VALUE "GRP_5_VALUE",
g6.COL_DATE "GRP_6_DATE", g6.COL_VALUE "GRP_6_VALUE"
-- ... ... ... ...
FROM grid_pivot gp
LEFT JOIN grid g1 ON(g1.ID = gp.ID And g1.GRP = gp.GRP And g1.RN_GRP_ID = gp.GRP_1_LINK)
LEFT JOIN grid g2 ON(g2.ID = gp.ID And g2.GRP = gp.GRP And g2.RN_GRP_ID = gp.GRP_2_LINK)
LEFT JOIN grid g3 ON(g3.ID = gp.ID And g3.GRP = gp.GRP And g3.RN_GRP_ID = gp.GRP_3_LINK)
LEFT JOIN grid g4 ON(g4.ID = gp.ID And g4.GRP = gp.GRP And g4.RN_GRP_ID = gp.GRP_4_LINK)
LEFT JOIN grid g5 ON(g5.ID = gp.ID And g5.GRP = gp.GRP And g5.RN_GRP_ID = gp.GRP_5_LINK)
LEFT JOIN grid g6 ON(g6.ID = gp.ID And g6.GRP = gp.GRP And g6.RN_GRP_ID = gp.GRP_6_LINK)
-- ... ... ... ...
ORDER BY gp.ROWS_TOT DESC, gp.GRP DESC, gp.ID ASC
R e s u l t :
ID GRP GRP_1_DATE GRP_1_VALUE GRP_2_DATE GRP_2_VALUE GRP_3_DATE GRP_3_VALUE GRP_4_DATE GRP_4_VALUE GRP_5_DATE GRP_5_VALUE GRP_6_DATE GRP_6_VALUE
---------- --- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- -----------
1 foo 01-MAR-21 500 01-JUN-21 2000 01-JAN-21 100
2 foo 02-JAN-21 200 02-MAR-21 300
3 foo 03-JAN-21 700
1 bar 01-JUN-21 2000
2 bar 02-JUN-21 400
Anyway, you will probably need a dynamic solution, so this could be interesting for something else; who knows what, when and where...

Oracle recursively calculate total based on tax

I have a temp table like this:
id  d         tax_rate  money
1   20210101  5         100
1   20210201  15        0
1   20210301  20        0
1   20210401  5         0
This is the output I want to select:
id  d         tax_rate  money   total
1   20210101  5         100     105
1   20210201  15        105     120.75
1   20210301  20        120.75  144.9
1   20210401  5         144.9   152.145
This means that I need to recursively calculate the total based on tax_rate and previous total (in first day previous total = money).
total = previous total (by date) * (1 + tax_rate/100) (tax_rate is a percentage)
I tried using LAG() OVER(), but LAG only looks at the previous row, not recursively, so from the 3rd day the calculation returns the wrong total.
In my case, if I could use LAG or any function to multiply all the previous tax_rates (e.g. 1.05 * 1.15 * 1.2 = 1.449) then I could calculate the right previous total, but I had no luck finding a function to do that.
WITH tmp AS
(
SELECT 1 AS id, 20210101 AS d, 5 AS tax_rate, 100 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210201 AS d, 15 AS tax_rate, 0 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210301 AS d, 20 AS tax_rate, 0 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210401 AS d, 5 AS tax_rate, 0 AS money FROM dual
)
SELECT *
FROM tmp;
You can use a mathematical identity to accumulate a multiplication (a running product).
Then calculate the total from that running product.
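The identity in question is that exp(ln(a) + ln(b) + ln(c)) = a * b * c, so a windowed SUM of LN(1 + tax_rate/100) yields a running product of the rate factors; for the sample data, EXP(LN(1.05) + LN(1.15) + LN(1.20)) = 1.05 * 1.15 * 1.20 = 1.449.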
Query 1:
SELECT ID, D, tax_rate,
       SUM(money) OVER (PARTITION BY ID ORDER BY ID)
         * EXP(SUM(LN(CAST(tax_rate AS DECIMAL(5,2)) / 100 + 1)) OVER (PARTITION BY ID ORDER BY d)) AS total
FROM tmp
Results:
| ID | D | TAX_RATE | TOTAL |
|----|----------|----------|---------|
| 1 | 20210101 | 5 | 105 |
| 1 | 20210201 | 15 | 120.75 |
| 1 | 20210301 | 20 | 144.9 |
| 1 | 20210401 | 5 | 152.145 |
One option would be something like this
WITH tmp AS
(
SELECT 1 AS id, 20210101 AS d, 5 AS tax_rate, 100 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210201 AS d, 15 AS tax_rate, 0 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210301 AS d, 20 AS tax_rate, 0 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210401 AS d, 5 AS tax_rate, 0 AS money FROM dual
),
running_total( id, d, tax_rate, money, total )
as (
select id, d, tax_rate, money, money * (1 + tax_rate/100) total
from tmp
where money != 0
union all
select t.id, t.d, t.tax_rate, t.money, rt.total * (1 + t.tax_rate/100)
from tmp t
join running_total rt
on t.id = rt.id
and to_date( rt.d, 'yyyyddmm' ) = to_date( t.d, 'yyyyddmm' ) - 1
)
select *
from running_total;
I am assuming that the first row, which forms the base of the recursive CTE, is the row where money != 0 (so there would be only one such row per id). You could change that to pick the row with the earliest date per id or whatever other "first row" logic your actual data supports.
Note that life will be easier for you if you use actual dates for dates rather than using numbers that represent dates. For a 4 row virtual table, it won't matter much that you have to do a to_date on both sides of the join in the running_total recursive CTE. But for a real table with a decent number of rows, you'd want to be able to have an index on (id, d) to get decent performance. You could, of course, create a function-based index but then you'd either need to explicitly specify things like the NLS environment in your to_date call or deal with the potential for sessions not to use your index if their NLS environment doesn't match the NLS settings used to create the index.
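To illustrate that point, here is a minimal sketch (not the original answer) of the same recursive CTE, assuming d were stored as a real DATE column and, as in the sample, each id's rows are exactly one month apart:
with running_total( id, d, tax_rate, money, total ) as (
    select id, d, tax_rate, money, money * (1 + tax_rate/100)
    from   tmp
    where  money != 0
    union all
    select t.id, t.d, t.tax_rate, t.money, rt.total * (1 + t.tax_rate/100)
    from   tmp t
    join   running_total rt
      on   t.id = rt.id
     and   t.d  = add_months(rt.d, 1)   -- plain date arithmetic; an index on (id, d) can be used
)
select * from running_total;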

How to select unoccupied ranges between two numbers

Consider I am having below table:
Id | Title | Start | End
-----+--------------+---------+-----
1 | Group A | 100 | 200
-----+--------------+---------+-----
2 | Group B | 350 | 500
-----+--------------+---------+-----
3 | Group C | 600 | 800
I want to get unoccupied ranges between 100 and 999.
my required final result would be:
Id | Start | End
-----+----------+-----
1 | 201 | 349
-----+----------+-----
2 | 501 | 599
-----+----------+-----
3 | 801 | 999
You can use the lead() window function to do so.
Select Id, [End] + 1 as Start, coalesce(lead([Start]) over (order by id) - 1, 999) as [End]
from mytable
Since the result of lead() will be null for the last row, I have used coalesce() to make it 999.
Schema:
create table mytable( Id int, Title varchar(50),[Start] int , [End] int);
insert into mytable values(1, 'Group A', 100, 200);
insert into mytable values(2, 'Group B', 350, 500);
insert into mytable values(3, 'Group C', 600, 800);
Query:
Select Id, [End]+1 as [Start], coalesce((lead([start])over(order by id) -1),999) [End]
from mytable
Output:
Id | Start | End
---+-------+-----
1  | 201   | 349
2  | 501   | 599
3  | 801   | 999
This is a tricky problem. If I make the following assumptions:
All the values are between 100 and 999.
The values have no overlaps.
Then you can handle this with lead() and union all:
select null, 100, min(starti) - 1
from t
having min(starti) > 100
union all
select title, endi + 1, next_starti - 1
from (select lead(starti, 1, 1000) over (order by starti) as next_starti, t.*
from t
) t
where next_starti >= endi + 1;
Note that the first subquery is for a condition not in your sample data, but where the first value starts after 100.
For the more general solution where you could have overlaps, the simplest method might be to generate all possible values, remove the ones that exist, and then combine the adjacent values:
with n as (
select 100 as n
union all
select n + 1
from n
where n < 999
)
select min(n), max(n)
from (select n.*, row_number() over (order by n) as seqnum
from n
where not exists (select 1 from t where n.n between t.starti and t.endi)
) tn
group by (n - seqnum)
order by min(n)
option (maxrecursion 0);
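The group by (n - seqnum) is the usual gaps-and-islands trick: within a run of consecutive missing values, n - seqnum is constant (for this sample the missing values 201..349 get seqnum 1..149, so n - seqnum is 200 for every one of them), and it changes whenever a run is broken, so each gap collapses into a single row with its MIN(n) and MAX(n).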

Arithmetic operation on row value

I have a table with the below data
Tid  Did  value
---------------
1    123  100
1    234  200
2    123  323
2    234  233
All tids have dids 123 and 234. So for every tid having dids 123 and 234 I want to calculate (value of did 123 / value of did 234) * 100, i.e. 100/200 * 100.
For tid 2 it will be (value of did 123 / value of did 234) * 100, i.e. 323/233 * 100.
The output table will be
Tid  result
-----------------
1    100/200 * 100
2    323/233 * 100
Any help?
JOIN the "123" rows with the "234" rows:
select t123.tid, t123.value * 100 / t234.value
from
(select tid, value from tablename where did = 123) t123
join
(select tid, value from tablename where did = 234) t234
on t123.tid = t234.tid
JOIN, all in ON
select t123.tid, t123.value * 100 / t234.value
from tablename t123
join tablename t234 on t123.tid = t234.tid and t123.did = 123 and t234.did = 234
Here is the query. We can use inner join to achieve it.
SELECT T1.Tid,(T1.value/T2.value)*100 AS Result
FROM Table_1 AS T1
INNER JOIN
Table_1 AS T2
ON (T1.Tid = T2.Tid)
AND (T1.Did <> T2.Did)
AND T1.Did = 123
select tid,
100 * sum(case when did = 123 then value end) /
sum(case when did = 234 then value end)
from your_table
group by tid
having sum(case when did = 234 then value end) > 0

Oracle - theoretical sql query for create intervals

Is it possible to solve this situation with a SQL query in Oracle?
I have a table like this:
TYPE  UNIT
A     230
B     225
C     60
D     45
E     5
F     2
I need to separate the units into three (the number is variable) equally sized intervals and, for each, figure out the count. It means something like this:
0 - 77 -> 4
78 - 154 -> 0
155 - 230 -> 2
You can use the maximum value and a connect-by query to generate the upper and lower values for each range:
select ceil((level - 1) * int) as int_from,
floor(level * int) - 1 as int_to
from (select round(max(unit) / 3) as int from t42)
connect by level <= 3;
INT_FROM INT_TO
---------- ----------
0 76
77 153
154 230
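Here round(max(unit) / 3) = round(230 / 3) = 77, so level 1 gives ceil(0 * 77) = 0 through floor(1 * 77) - 1 = 76, level 2 gives 77 through 153, and level 3 gives 154 through floor(3 * 77) - 1 = 230, matching the output above.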
And then do a left outer join to your original table to do the count for each range, so you get the zero value for the middle range:
with intervals as (
select ceil((level - 1) * int) as int_from,
floor(level * int) - 1 as int_to
from (select round(max(unit) / 3) as int from t42)
connect by level <= 3
)
select i.int_from || '-' || i.int_to as range,
count(t.unit)
from intervals i
left join t42 t
on t.unit between i.int_from and i.int_to
group by i.int_from, i.int_to
order by i.int_from;
RANGE COUNT(T.UNIT)
---------- -------------
0-76 4
77-153 0
154-230 2
Yes, this can be done in Oracle. The hard part is the definition of the bounds. You can use the maximum value and some arithmetic on a sequence with values of 1, 2, and 3.
After that, the rest is just a cross join and aggregation:
with bounds as (
select (case when n = 1 then 0
when n = 2 then trunc(maxu / 3)
else trunc(2 * maxu / 3)
end) as lowerbound,
(case when n = 1 then trunc(maxu / 3)
when n = 2 then trunc(2*maxu / 3)
else maxu
end) as upperbound
from (select 1 as n from dual union all select 2 from dual union all select 3 from dual
) n cross join
(select max(unit) as maxu from atable t)
)
select b.lowerbound || '-' || b.upperbound,
sum(case when unit between b.lowerbound and b.upperbound then 1 else 0 end)
from atable t cross join
bounds b
group by b.lowerbound || '-' || b.upperbound;