Is there a way to subtract the value of the first selected row from all rows? So if I have
t = 1, v = 500
t = 2, v = 800
t = 3, v = 1200
I would get
t = 1, v = 0
t = 2, v = 300
t = 3, v = 700
I'm always looking for a portable solution, but a Postgres solution works just as well :-)
Thank you.
SELECT v - FIRST_VALUE(v) OVER (ORDER BY t)
FROM mytable
ORDER BY t
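For reference, a minimal runnable sketch of that approach (Postgres syntax, with the table name from the query and the sample values from the question), returning t alongside the adjusted value:

CREATE TABLE mytable (t INT, v INT);
INSERT INTO mytable (t, v) VALUES (1, 500), (2, 800), (3, 1200);

SELECT t,
       v - FIRST_VALUE(v) OVER (ORDER BY t) AS v  -- 0, 300, 700
FROM mytable
ORDER BY t;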
Something like this may work
SELECT mt2.t, mt2.v - mt1.v AS v
FROM MyTable mt1
CROSS JOIN MyTable mt2
WHERE mt1.t = 1
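If the first row is not guaranteed to have t = 1, the same idea works with a scalar subquery for the minimum t (a sketch, not part of the original answer):

SELECT mt2.t, mt2.v - mt1.v AS v
FROM MyTable mt1
CROSS JOIN MyTable mt2
WHERE mt1.t = (SELECT MIN(t) FROM MyTable)
ORDER BY mt2.t;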
Most portable way without using window functions:
select v - first
from
    mytable,
    (select v as first from mytable order by t limit 1) as first_row
order by t
So I'm trying to work through a problem that's a bit hard to explain, and I can't expose any of the data I'm working with. What I'm trying to get my head around is the error below when running the query below. I've renamed some of the tables/columns for sensitivity reasons, but the structure should be the same:
"Error from Query Engine - Out of range for integer: Infinity"
WITH accounts AS (
    SELECT t.user_id
    FROM table_a t
    WHERE t.type LIKE '%Something%'
),
CTE AS (
    SELECT
        st.x_user_id,
        ad.name AS client_name,
        SUM(CASE WHEN st.score_type = 'Agility' THEN st.score_value ELSE 0 END) AS score,
        st.obs_date,
        ROW_NUMBER() OVER (PARTITION BY st.x_user_id, ad.name ORDER BY st.obs_date) AS rn
    FROM client_scores st
    LEFT JOIN account_details ad ON ad.client_id = st.x_user_id
    INNER JOIN accounts ON st.x_user_id = accounts.user_id
    --WHERE st.x_user_id IN (101011115,101012219)
    WHERE st.obs_date >= '2020-05-18'
    GROUP BY 1, 2, 4
)
SELECT
    c1.x_user_id,
    c1.client_name,
    c1.score,
    c1.obs_date,
    CAST(COALESCE(((c1.score - c2.score) * 1.0 / c2.score) * 100, 0) AS INT) AS score_diff
FROM CTE c1
LEFT JOIN CTE c2
    ON c1.x_user_id = c2.x_user_id
    AND c1.client_name = c2.client_name
    AND c1.rn = c2.rn + 2
I know the query works for sure, because when I get rid of the first CTE and hard-code two IDs into the WHERE clause I commented out, it returns the data I want. But I also need it to run based on the first CTE, which has ~5k unique IDs.
Here is a sample output if I try with two IDs:
Based on the number of rows returned per ID above, I would expect it to return 5,000 * 3 rows = 15,000 rows.
What could be causing the out of range for integer error?
This line is likely your problem:
CAST(COALESCE (((c1.score - c2.score) * 1.0 / c2.score) * 100, 0) AS INT) AS score_diff
When c2.score is 0, the expression (c1.score - c2.score) * 1.0 / c2.score evaluates to infinity, which does not fit into the integer type you're trying to cast it to.
The reason it works for the two users in your example is that neither of them has a 0 value for c2.score.
You might be able to fix this by changing to:
CAST(COALESCE (((c1.score - c2.score) * 1.0 / NULLIF(c2.score, 0)) * 100, 0) AS INT) AS score_diff
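A quick illustration of why that helps (Postgres syntax, literal values made up for the example): NULLIF turns the zero divisor into NULL, the division then yields NULL instead of blowing up, and COALESCE maps the NULL back to 0:

SELECT CAST(COALESCE(((150 - 0) * 1.0 / NULLIF(0, 0)) * 100, 0) AS INT) AS score_diff;  -- returns 0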
I would like to apply the following group of SQL statements at once and union the results to get the most recent record before mt=52355979 for various stocks (identified by 'symbol') from different trade places and market types (identified by 'c1', 'c2', 'c3', 'c4'):
select * from t where symbol=`A,c1=25,c2=814,c3=11,c4=2, date=2020.02.05, mt<52355979 order by mt desc limit 1
select * from t where symbol=`B,c1=25,c2=814,c3=12,c4=2, date=2020.02.05, mt<52355979 order by mt desc limit 1
select * from t where symbol=`C,c1=25,c2=814,c3=12,c4=2, date=2020.02.05, mt<52354979 order by mt desc limit 1
select * from t where symbol=`A,c1=1180,c2=333,c3=3,c4=116, date=2020.02.05, mt<52355979 order by mt desc limit 1
The filter columns in the WHERE condition will not change, but the filter values may change each time. Does DolphinDB offer a query method that allows running such a list of queries with varying input parameters?
You can define a function as follows:
def bundleQuery(tbl, dt, dtColName, mt, mtColName, filterColValues, filterColNames){
    cnt = filterColValues[0].size()
    filterColCnt = filterColValues.size()
    orderByCol = sqlCol(mtColName)
    selCol = sqlCol("*")
    // the date and mt filters are shared by every generated query
    filters = array(ANY, filterColCnt + 2)
    filters[filterColCnt] = expr(sqlCol(dtColName), ==, dt)
    filters[filterColCnt+1] = expr(sqlCol(mtColName), <, mt)
    // start with an empty query list (capacity cnt) and append one query per set of filter values
    queries = array(ANY, 0, cnt)
    for(i in 0:cnt) {
        for(j in 0:filterColCnt){
            filters[j] = expr(sqlCol(filterColNames[j]), ==, filterColValues[j][i])
        }
        queries.append!(sql(select=selCol, from=tbl, where=filters, orderBy=orderByCol, ascOrder=false, limit=1))
    }
    // execute every generated query and union the results
    return loop(eval, queries).unionAll(false)
}
and then use the following script
dt = 2020.02.05
dtColName = "dsl"
mt = 52355979
mtColName = "mt"
colNames = `symbol`c1`c2`c3`c4
colValues = [50982208 50982208 51180116 41774759, 25 25 25 1180, 814 814 814 333, 11 12 12 3, 2 2 2 116]
bundleQuery(t, dt, dtColName, mt, mtColName, colValues, colNames)
I have the following table and, as you can see, the IDs are not the same, so I can't do a GROUP BY. I need to count all the ones that are in sequence, like from ID 9 to 13 and from ID 20 to 23. How do I do it?
Here's a solution with LAG and LEAD.
;WITH StackValues AS
(
    SELECT
        T.*,
        PreviousStatus = LAG(T.Status, 1, 0) OVER (ORDER BY T.ID ASC),
        NextStatus = LEAD(T.Status, 1, 0) OVER (ORDER BY T.ID ASC)
    FROM
        #YourTable AS T
),
ValuesToSum AS
(
    SELECT
        L.*,
        ValueToSum = CASE
            WHEN L.Status = 1 AND L.PreviousStatus = 1 AND L.NextStatus = 0 THEN 1
            ELSE 0 END
    FROM
        StackValues AS L
)
SELECT
    Total = SUM(V.ValueToSum)
FROM
    ValuesToSum AS V
LAG gives you the value from the row N positions before the current one (N = 1 in this example), while LEAD gives you the value from the row N positions after it. The query generates another column (ValueToSum) based on the previous and next values and uses its result in the SUM.
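To make the mechanics concrete, here is a small self-contained sketch (SQL Server syntax). The ID/Status schema follows the answer's code; the actual table and its Status values were not shown in the question, so the sample data is an assumption:

CREATE TABLE #YourTable (ID INT, Status INT);
INSERT INTO #YourTable (ID, Status)
VALUES (9, 1), (10, 1), (11, 1), (12, 1), (13, 1), (14, 0),
       (20, 1), (21, 1), (22, 1), (23, 1), (24, 0);

SELECT ID,
       Status,
       LAG(Status, 1, 0)  OVER (ORDER BY ID) AS PreviousStatus,
       LEAD(Status, 1, 0) OVER (ORDER BY ID) AS NextStatus
FROM #YourTable;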
say I have this:
select money from somewhere
Now I want another column called accumulatedMoney, which is equal to the accumulatedMoney of the previous row plus the money of the current row.
Ex:
m = 2, am = 2
m = 3, am = 5
m = 3, am = 8
...
What can I do to achieve this?
Thanks
In any database, you can do this with a correlated subquery:
select t.am, t.m,
(select sum(tprev.m) from t tprev where tprev.am <= t.am) as cumsum
from t
In any database, you can also do this as a join and group by:
select t.am, t.m, sum(tprev.m) as cumsum
from t join
t tprev
on tprev.am <= t.am
group by t.am, t.m
In databases that support cumulative sums, you can do it as:
select t.am, t.m,
sum(t.m) over (order by t.am) as cumsum
from t
(SQL Server 2012 and Oracle support this.)
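For the sample values in the question, a minimal runnable sketch of the window-function version (Postgres / SQL Server 2012+ syntax); an explicit ordering column id is assumed, since the question only mentions the money column:

CREATE TABLE t (id INT, money INT);
INSERT INTO t (id, money) VALUES (1, 2), (2, 3), (3, 3);

SELECT money,
       SUM(money) OVER (ORDER BY id) AS accumulatedMoney  -- 2, 5, 8
FROM t
ORDER BY id;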
I have a table with the following values:
week_no  amt   amt_diff
1        500   100
2        600   300
3        900   100
4        1000  null
When I subtract week2.amt - week1.amt, the difference gets saved in the amt_diff column of the week_no=1 row, but I want the result to be stored with the week_no=2 record instead.
Can anyone help me with the SQL?
I think this should work. You can run it as a SELECT first to make sure you get the desired results. The syntax is valid in SQL Server; I'm not sure about other RDBMSs.
UPDATE m2
SET amt_diff = (m2.amt-m1.amt)
FROM MyTable m2
INNER JOIN MyTable m1
ON m1.week_no = (M2.week_no - 1)
It will update every record that has a week before it to calculate from.
To just select the values:
SELECT amt_diff = (m2.amt-m1.amt)
FROM MyTable m2
INNER JOIN MyTable m1
ON m1.week_no = (M2.week_no - 1)
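Against the sample data from the question, a quick sketch of what the UPDATE leaves behind (SQL Server syntax; the table name MyTable comes from the answer, and amt_diff starts out empty here for clarity):

CREATE TABLE MyTable (week_no INT, amt INT, amt_diff INT);
INSERT INTO MyTable (week_no, amt) VALUES (1, 500), (2, 600), (3, 900), (4, 1000);

UPDATE m2
SET amt_diff = m2.amt - m1.amt
FROM MyTable m2
INNER JOIN MyTable m1 ON m1.week_no = m2.week_no - 1;

-- week 2 now holds 100, week 3 holds 300, week 4 holds 100; week 1 stays NULL
SELECT * FROM MyTable ORDER BY week_no;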
UPDATE YOURTABLE T
SET T.AMT_DIFF = (T.AMT - NVL((SELECT TT.AMT
                               FROM YOURTABLE TT
                               WHERE TT.WEEK_NO = (T.WEEK_NO - 1)), 0))
WHERE T.WEEK_NO = 2;
Might work for you.
update k
set k.amt_diff = k.amt - (select k2.amt from week k2 where k2.week_no = k.week_no - 1)
from week k