how to get max of values by row and column in sql - sql

Table has values
10 20 30
40 50 60
70 80 90
We need to show the row-wise and column-wise maximums of the data.
The result should be:
RowWiseMax, ColumnWiseMax
30, 70
60, 80
90, 90

Here's a solution to the puzzle as presented. I've had to assume the number of columns is not dynamic. This solution uses T-SQL; it's not clear what your database platform is, but it should hopefully port if different. Using a table named "T" with columns c1, c2, c3:
select RowWiseMax,
       case Row_Number() over (order by RowWiseMax)
           when 1 then Max(c1) over()
           when 2 then Max(c2) over()
           when 3 then Max(c3) over()
       end ColumnWiseMax
from T
cross apply (
    select Max(v) RowWiseMax
    from (values (c1), (c2), (c3)) v(v)
) x
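As a rough cross-check, here is the same pairing logic sketched in SQLite (driven from Python), keeping the table name T and columns c1..c3 from the answer. SQLite has no CROSS APPLY, so its multi-argument scalar MAX() plus a small UNION ALL stand in; this is an illustration, not a direct port.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (c1 INT, c2 INT, c3 INT)")
con.executemany("INSERT INTO T VALUES (?, ?, ?)",
                [(10, 20, 30), (40, 50, 60), (70, 80, 90)])

rows = con.execute("""
    WITH rw AS (
        -- multi-argument MAX() is SQLite's scalar max: per-row maximum
        SELECT MAX(c1, c2, c3) AS RowWiseMax,
               ROW_NUMBER() OVER (ORDER BY MAX(c1, c2, c3)) AS rn
        FROM T
    ),
    cw AS (
        -- per-column maximums, ranked 1..3 to pair with the row maxes
        SELECT MAX(c1) AS ColumnWiseMax, 1 AS rn FROM T
        UNION ALL SELECT MAX(c2), 2 FROM T
        UNION ALL SELECT MAX(c3), 3 FROM T
    )
    SELECT rw.RowWiseMax, cw.ColumnWiseMax
    FROM rw JOIN cw USING (rn)
    ORDER BY rw.RowWiseMax
""").fetchall()
print(rows)  # [(30, 70), (60, 80), (90, 90)]
```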

Related

SQL : How to aggregate values of one column depending on conditions applied on 2 other columns?

ID COL1 COL2 DATE1 DATE2 RES
P4579841254 10 20 01/02/1900 01/04/1914 10
P4579841254 20 25 01/03/1907 02/08/1918 57
P4579841254 30 31 01/04/1914 03/12/1922 459
P4579841254 70 71 01/05/1921 05/04/1927 7895
P4579841254 70 71 01/06/1921 05/06/1927 2497
P4579841254 71 20 01/06/1928 06/08/1931 1256
P4579841254 20 75 01/07/1935 07/12/1935 325987
Hello community, I want to calculate the sum of the column RES using the following conditions:
COL1 >= 70 and DATE1 >= min(DATE1)
and
COL2 <= 75 and DATE2 <= max(DATE2)
with,
min(DATE1) taken when COL1=70
and
max(DATE2) taken when COL2=75
In other words I want :
"if we have multiple COL1=70, we take only the one that gets the MIN(DATE) and for the COL2=75 we take the MAX(DATE) of all the rows showing COL2=75, once we have DTmin and DTmax we take all the values of RES column within this interval [DTmin;DTmax] and we sum"
For this ID, the result should be 335138 (the sum of rows 4, 6 and 7).
I tried something along the following lines, but it gets too complex for me and my SQL level (for now) when I have to nest those SELECTs inside other upstream SELECTs so I can finally group by ID :(
(SELECT "DT_MIN" FROM
(SELECT "ID",MIN("DATE1") as DT_MIN
FROM "MY_TABLE"
GROUP BY "ID","DATE1","COL1","COL2"
HAVING ("COL1"='70')
)) as "DT_MIN_vf",
(SELECT "DT_MAX" FROM
(SELECT "ID",MAX("DATE2") as DT_MAX
FROM "MY_TABLE"
GROUP BY "ID","DATE2","COL1","COL2"
HAVING ("COL2"='75')
))as "DT_MAX_vf"
Need your help, specialists!
This gives you your expected result, but I'm not sure I'm fully understanding the requirements. Please take a look at it and let me know if/what I'm misunderstanding.
WITH
min_d AS
(SELECT min(m2.date1) d FROM mytable m2 WHERE m2.col1 = 70),
max_d AS
(SELECT max(m2.date2) d FROM mytable m2 WHERE m2.col2 = 75)
SELECT id, SUM(res)
FROM mytable
WHERE
date1 >= (SELECT d FROM min_d)
AND (col1 <> 70 OR date1 = (SELECT d FROM min_d)) -- Handles the special case to only include the col1=70 row where date1=min_d, but unclear if it needs to be more general
AND date2 <= (SELECT d FROM max_d)
GROUP BY id
The two CTEs at the top get the min and max dates you want. This is just to avoid duplicating them in the query, but there's no reason they have to be CTEs.
I have the special case for (col1 <> 70 OR ...) to ensure it only includes the col1=70 case where date1=min_d. I'm not at all confident I am understanding this rule in general, but this does give you the result you want.
You'll note that COL1 >= 70 and COL2 <= 75 do not appear anywhere here, despite you mentioning them in your question. You've said that the min(date1) should be calculated for col1=70 (not >=) and you've said that row 7 should be included despite COL1 being < 70, so I'm not sure where COL1 >= 70 and COL2 <= 75 is relevant.
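For what it's worth, here is that logic run end-to-end in SQLite from Python. One assumption worth flagging: the dates are stored as ISO strings so that plain string comparison matches date order (the question's dd/mm/yyyy strings would not compare correctly as text).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mytable
               (id TEXT, col1 INT, col2 INT, date1 TEXT, date2 TEXT, res INT)""")
# The question's rows, with dates rewritten as ISO yyyy-mm-dd strings
con.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?, ?, ?)", [
    ("P4579841254", 10, 20, "1900-02-01", "1914-04-01", 10),
    ("P4579841254", 20, 25, "1907-03-01", "1918-08-02", 57),
    ("P4579841254", 30, 31, "1914-04-01", "1922-12-03", 459),
    ("P4579841254", 70, 71, "1921-05-01", "1927-04-05", 7895),
    ("P4579841254", 70, 71, "1921-06-01", "1927-06-05", 2497),
    ("P4579841254", 71, 20, "1928-06-01", "1931-08-06", 1256),
    ("P4579841254", 20, 75, "1935-07-01", "1935-12-07", 325987),
])

result = con.execute("""
    WITH min_d AS (SELECT MIN(date1) d FROM mytable WHERE col1 = 70),
         max_d AS (SELECT MAX(date2) d FROM mytable WHERE col2 = 75)
    SELECT id, SUM(res)
    FROM mytable
    WHERE date1 >= (SELECT d FROM min_d)
      AND (col1 <> 70 OR date1 = (SELECT d FROM min_d))
      AND date2 <= (SELECT d FROM max_d)
    GROUP BY id
""").fetchone()
print(result)  # ('P4579841254', 335138)
```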

SQL Get closest value to a number

I need to find the closest value to each number in column Divide from the column Quantity, and put the value found in the Value column for both quantities.
Example:
In the column Divide the value of 5166 would be closest to Quantity column value 5000. To keep from using those two values more than once I need to place the value of 5000 in the value column for both numbers, like the example below. Also, is it possible to do this without a loop?
Quantity Divide Rank Value
15500 5166 5 5000
1250 416 5 0
5000 1666 5 5000
12500 4166 4 0
164250 54750 3 0
5250 1750 3 0
6250 2083 3 0
12250 4083 3 0
1750 583 2 0
17000 5666 2 0
2500 833 2 0
11500 3833 2 0
1250 416 1 0
There are a couple of answers here, but they both use CTEs/complex subqueries. There is a much simpler/faster way: just do a couple of self-joins and a GROUP BY.
https://www.db-fiddle.com/f/rM268EYMWuK7yQT3gwSbGE/0
select
    min(min.quantity) as minQuantityOverDivide,
    t1.divide,
    max(max.quantity) as maxQuantityUnderDivide,
    case
        when abs(t1.divide - coalesce(min(min.quantity), 0))
           < abs(t1.divide - coalesce(max(max.quantity), 0))
        then max(max.quantity)
        else min(min.quantity)
    end as closestQuantity
from t1
left join (select quantity from t1) min on min.quantity >= t1.divide
left join (select quantity from t1) max on max.quantity < t1.divide
group by t1.divide
If I understood the requirements, 5166 is not closest to 5000 - it's closest to 5250 (a delta of 84 vs 166).
The corresponding query, without loops, would be (fiddle here: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=be434e67ba73addba119894a98657f17).
(I added a Value_Rank since it's not clear whether you want Rank kept or recomputed.)
select
Quantity, Divide, Rank, Value,
dense_rank() over(order by Value) as Value_Rank
from
(
select
Quantity, Divide, Rank,
--
case
when abs(Quantity_let_delta) < abs(Quantity_get_delta) then Divide + Quantity_let_delta
else Divide + Quantity_get_delta
end as Value
from
(
select
so.Quantity, so.Divide, so.Rank,
-- There is no LessEqualThan, assume GreaterEqualThan
max(isnull(so_let.Quantity, so_get.Quantity)) - so.Divide as Quantity_let_delta,
-- There is no GreaterEqualThan, assume LessEqualThan
min(isnull(so_get.Quantity, so_let.Quantity)) - so.Divide as Quantity_get_delta
from
SO so
left outer join SO so_let
on so_let.Quantity <= so.Divide
--
left outer join SO so_get
on so_get.Quantity >= so.Divide
group by so.Quantity, so.Divide, so.Rank
) so
) result
Or, if by closest you mean the previous closest (fiddle here: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=b41fb1a3fc11039c7f82926f8816e270).
select
Quantity, Divide, Rank, Value,
dense_rank() over(order by Value) as Value_Rank
from
(
select
so.Quantity, so.Divide, so.Rank,
-- There is no LessEqualThan, assume 0
max(isnull(so_let.Quantity, 0)) as Value
from
SO so
left outer join SO so_let
on so_let.Quantity <= so.Divide
group by so.Quantity, so.Divide, so.Rank
) result
You don't need a loop. Basically, you need to find the lowest difference between the divide and all the quantities (first CTE), then use this distance to find the corresponding record (second CTE), and finally join with your initial table to get the converted values (final select).
;with cte as (
select t.Divide, min(abs(t2.Quantity-t.Divide)) as ClosestQuantity
from #t1 as t
cross apply #t1 as t2
group by t.Divide
)
,cte2 as (
select distinct
t.Divide, t2.Quantity
from #t1 as t
cross apply #t1 as t2
where abs(t2.Quantity-t.Divide) = (select ClosestQuantity from cte as c where c.Divide = t.Divide)
)
select t.Quantity, cte2.Quantity as Divide, t.Rank, t.Value
from #t1 as t
left outer join cte2 on t.Divide = cte2.Divide
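The two-CTE approach above ports to SQLite if CROSS APPLY is replaced with CROSS JOIN (SQLite has no APPLY). A minimal sketch over a subset of the question's rows; note it returns the true nearest quantity, so 5166 maps to 5250, in line with the earlier remark about deltas.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (quantity INT, divide INT)")
# Subset of the question's data, for illustration
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [(15500, 5166), (1250, 416), (5000, 1666), (5250, 1750)])

rows = con.execute("""
    WITH cte AS (
        -- smallest absolute distance from each divide to any quantity
        SELECT t.divide, MIN(ABS(t2.quantity - t.divide)) AS closest_delta
        FROM t1 t CROSS JOIN t1 t2
        GROUP BY t.divide
    )
    -- pick the quantity that realises that smallest distance
    SELECT DISTINCT t.divide, t2.quantity
    FROM t1 t
    CROSS JOIN t1 t2
    JOIN cte ON cte.divide = t.divide
    WHERE ABS(t2.quantity - t.divide) = cte.closest_delta
    ORDER BY t.divide
""").fetchall()
print(rows)  # [(416, 1250), (1666, 1250), (1750, 1250), (5166, 5250)]
```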

Finding average of data within a certain range

How do I find the average of a data set within a certain range? Specifically, I am looking to find the average of all data points that are within one standard deviation of the original average. Here is an example:
Student_ID Test_Scores
1 3
1 20
1 30
1 40
1 50
1 60
1 95
Average = 42.571
Standard Deviation = 29.854
I want to find all data points that are within one standard deviation of this original average, so within the range (42.571-29.854)<=Data<=(42.571+29.854). And from here I want to recalculate a new average.
So my desired data set is:
Student_ID Test_Scores
1 20
1 30
1 40
1 50
1 60
My desired new average is: 40
Here is my SQL code, which didn't yield my desired result:
SELECT
Student_ID,
AVG(Test_Scores)
FROM
Student_Data
WHERE
Test_Scores BETWEEN (AVG(Test_Scores)-STDEV(Test_Scores)) AND (AVG(Test_Scores)+STDEV(Test_Scores))
ORDER BY
Student_ID
Anyone know how I could fix this?
Use either window functions or do the calculation in a subquery:
SELECT sd.Student_ID, sd.Test_Scores
FROM Student_Data sd CROSS JOIN
(SELECT AVG(Test_Scores) as avgts, STDEV(Test_Scores) as stdts
FROM Student_Data
) x
WHERE sd.Test_Scores BETWEEN avgts - stdts AND avgts + stdts
ORDER BY sd.Student_ID;
select avg(test_scores)
from table
where test_scores between
    (select avg(test_scores) - stddev(test_scores) from table)
    and
    (select avg(test_scores) + stddev(test_scores) from table);
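The arithmetic in the question can be double-checked outside SQL. A small Python sketch of the same two-pass logic (statistics.stdev is the sample standard deviation, which matches the 29.854 quoted above):

```python
import statistics

scores = [3, 20, 30, 40, 50, 60, 95]
mean = statistics.fmean(scores)   # ~42.571, the original average
sd = statistics.stdev(scores)     # sample standard deviation, ~29.854

# keep only the scores within one standard deviation of the mean
within = [s for s in scores if mean - sd <= s <= mean + sd]
print(within)                     # [20, 30, 40, 50, 60]
print(statistics.fmean(within))   # 40.0, the recomputed average
```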

SQL group table by "leading rows" without pl/sql

I have this table (short example) with two columns
1 a
2 a
3 a3
4 a
5 a
6 a6
7 a
8 a8
9 a
and I would like to group/partition them into groups separated by those leading rows ("a3", "a6", "a8"), ideally adding another column like this so I can address those groups easily:
1 a 0
2 a 0
3 a3 3
4 a 3
5 a 3
6 a6 6
7 a 6
8 a8 8
9 a 8
The problem is that the setup of the table is dynamic, so I can't use lag or lead functions statically. Any ideas how to do this without PL/pgSQL in Postgres 9.5?
Assuming the leading part is a single character, the expression right(data, -1) (which strips the first character) extracts the group name. Adapt to your actual prefix.
The solution uses two window functions, which can't be nested. So we need a subquery or a CTE.
SELECT id, data
, COALESCE(first_value(grp) OVER (PARTITION BY grp_nr ORDER BY id), '0') AS grp
FROM (
SELECT *, NULLIF(right(data, -1), '') AS grp
, count(NULLIF(right(data, -1), '')) OVER (ORDER BY id) AS grp_nr
FROM tbl
) sub;
Produces your desired result exactly.
NULLIF(right(data, -1), '') to get the effective group name or NULL if none.
count() only counts non-null values, so we get a higher count for every new group in the subquery.
In the outer query, we take the first grp value per grp_nr as group name and default to '0' with COALESCE for the first group without name (which has a NULL as group name so far).
We could use min() or max() as outer window function as well, since there is only one non-null value per partition anyway. first_value() is probably cheapest since the rows are sorted already.
Note the group name grp is data type text. You may want to cast to integer, if those are clean (and reliably) integer numbers.
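The same query can be sanity-checked without a Postgres instance: in SQLite, substr(data, 2) substitutes for right(data, -1) and everything else carries over. A sketch, with the illustrative table/column names from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INT, data TEXT)")
con.executemany("INSERT INTO tbl VALUES (?, ?)",
                [(1, "a"), (2, "a"), (3, "a3"), (4, "a"), (5, "a"),
                 (6, "a6"), (7, "a"), (8, "a8"), (9, "a")])

rows = con.execute("""
    SELECT id, data,
           COALESCE(first_value(grp) OVER (PARTITION BY grp_nr ORDER BY id),
                    '0') AS grp
    FROM (
        -- substr(data, 2) stands in for Postgres right(data, -1)
        SELECT *, NULLIF(SUBSTR(data, 2), '') AS grp,
               COUNT(NULLIF(SUBSTR(data, 2), '')) OVER (ORDER BY id) AS grp_nr
        FROM tbl
    ) sub
    ORDER BY id
""").fetchall()
print([r[2] for r in rows])  # ['0', '0', '3', '3', '3', '6', '6', '8', '8']
```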
This can be achieved by assigning rows containing a one value and all other rows a different value, then using a cumulative sum to get the desired number for each row. The group number is bumped to the next number whenever a new value in the val column is encountered, and all succeeding a rows get the same group number as the row before.
I assume that you would need a distinct number for each group and the number doesn't matter.
select id, val, sum(ex) over(order by id) cm_sum
from (select t.*
,case when val = 'a' then 0 else 1 end ex
from t) x
The result for the query above with the data in question, would be
id val cm_sum
--------------
1 a 0
2 a 0
3 a3 1
4 a 1
5 a 1
6 a6 2
7 a 2
8 a8 3
9 a 3
With the given data, you can use a cumulative max:
select . . .,
coalesce(max(substr(col2, 2)) over (order by col1), 0)
If you don't strictly want the maximum, then it gets a bit more difficult. The ANSI solution is to use the IGNORE NULLs option on LAG(). However, Postgres does not (yet) support that. An alternative is:
select . . ., coalesce(substr(reft.col2, 2), 0)
from (select . . .,
max(case when col2 like 'a_%' then col1 end) over (order by col1) as ref_col1
from t
) tt join
t reft
on tt.ref_col1 = reft.col1
You can also try this :
with mytable as (select split_part(t,' ',1)::integer id,split_part(t,' ',2) myvalue
from (select unnest(string_to_array($$1 a;2 a;3 a3;4 a;5 a;6 a6;7 a;8 a8;9 a$$,
';'))t) a)
select id,myvalue,myresult from mytable join (
select COALESCE(NULLIF(substr(myvalue,2),''),'0') myresult,idmin id_down
,COALESCE(lead(idmin) over (order by myvalue),999999999999) id_up
from (
select myvalue,min(id) idmin from mytable group by 1
) a) b
on id between id_down and id_up-1

Create a Range From n to 1 in SQL

I need to create a range number from 1 to n.
For example, the parameter is @StartingValue:
@StartingValue int = 96
Then the result should be:
Number
-------------
96
95
94
93
92
...
1
Does anyone have an idea how to do this?
Thank you.
Use a Tally Table to generate the numbers:
DECLARE @N INT = 96
;WITH E1(N) AS( -- 10 ^ 1 = 10 rows
SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N)
),
E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b), -- 10 ^ 2 = 100 rows
E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b), -- 10 ^ 4 = 10,000 rows
E8(N) AS(SELECT 1 FROM E4 a CROSS JOIN E4 b), -- 10 ^ 8 = 10,000,000 rows
CteTally(N) AS(
SELECT TOP(@N) ROW_NUMBER() OVER(ORDER BY(SELECT NULL))
FROM E8
)
SELECT * FROM CteTally ORDER BY N DESC
Explanation taken from Jeff's article (linked above):
The CTE called E1 (as in 10E1 for scientific notation) is nothing more
than ten SELECT 1's returned as a single result set.
E2 does a CROSS JOIN of E1 with itself. That returns a single result
set of 10*10 or up to 100 rows. I say "up to" because if the TOP
function is 100 or less, the CTE's are "smart" enough to know that it
doesn't actually need to go any further and E4 and E8 won't even come
into play. If the TOP has a value of less than 100, not all 100 rows
that E2 is capable of making will be made. It'll always make just
enough according to the TOP function.
You can follow from there. E4 is a CROSS JOIN of E2 and will make up
to 100*100 or 10,000 rows and E8 is a CROSS JOIN of E4 which will make
more rows than most people will ever need. If you do need more, then
just add an E16 as a CROSS JOIN of E8 and change the final FROM clause
to FROM E16.
What's really amazing about this bad-boy is that it produces ZERO
READS. Absolutely none, nada, nil.
One simple method is a numbers table. For a reasonable number (up to the low thousands), you can use spt_values:
with numbers as (
select top 96 row_number() over (order by (select null)) as n
from master..spt_values
)
. . .
Another method is a recursive CTE:
with numbers as (
select 96 as n
union all
select n - 1
from numbers
where n > 1
)
For larger values, you'll need to use the MAXRECURSION option.
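The same recursive pattern runs on other engines too; as an illustration, here it is against SQLite (which spells it WITH RECURSIVE and has no MAXRECURSION setting to worry about), driven from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
rows = con.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 96             -- anchor: start at the parameter value
        UNION ALL
        SELECT n - 1          -- recursive step: count down
        FROM numbers
        WHERE n > 1
    )
    SELECT n FROM numbers ORDER BY n DESC
""").fetchall()
print(rows[:3], rows[-1])  # [(96,), (95,), (94,)] (1,)
```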
And another way.
SELECT N.number FROM
master..spt_values N
WHERE
N.type = 'P' AND
N.number BETWEEN 1 AND 96
ORDER BY N.number DESC
More details on spt_values What is the purpose of system table master..spt_values and what are the meanings of its values?
A sequence of numbers can be generated in the following ways:
1. Using row_number by querying a large table to get the sequence.
2. Using system tables, as you can see in other people's answers.
3. Using a recursive CTE.
declare @maxValue int = 96
; WITH rangetest AS
(
SELECT MinValue = @maxValue
UNION ALL
SELECT MinValue = MinValue - 1
FROM rangetest
WHERE MinValue > 1
)
SELECT *
from rangetest
OPTION (MAXRECURSION 0)