It might be a silly question, but I am struggling with a Postgres UPDATE. I have the following table:
 id | tableX_id | position
----+-----------+----------
  1 |        10 |
  2 |        10 |
  3 |        10 |
  4 |        10 |
  5 |        10 |
  6 |        11 |
  7 |        11 |
  8 |        12 |
I need to update position like this:
 id | tableX_id | position
----+-----------+----------
  1 |        10 |        1
  2 |        10 |        2
  3 |        10 |        3
  4 |        10 |        4
  5 |        10 |        5
  6 |        11 |        1
  7 |        11 |        2
  8 |        12 |        1
I have the following update, which doesn't work (it sets every position to 1):
UPDATE tableY y
SET position = subquery.pos
FROM (
SELECT ROW_NUMBER() OVER() as pos
FROM tableY y2
JOIN tableX x on x.id = y2.tableX_id
) as subquery
Add where subquery.id = y.id (which means the subquery also has to select id), and partition the ROW_NUMBER() by tableX_id so the numbering restarts for each group, as below:
t=# update x set position = pos
from (select *,ROW_NUMBER() OVER(partition by x order by id) as pos FROM x) sub
where x.id = sub.id;
UPDATE 8
Time: 10.015 ms
t=# select * from x;
id | x | position
----+----+----------
1 | 10 | 1
2 | 10 | 2
3 | 10 | 3
4 | 10 | 4
5 | 10 | 5
6 | 11 | 1
7 | 11 | 2
8 | 12 | 1
(8 rows)
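Translated back to the original table names, the fixed statement looks like this (a sketch, assuming tableY has the id, tableX_id and position columns shown in the question):

-- The join to tableX is not needed for the numbering itself:
-- partitioning by tableX_id already restarts the count per group.
UPDATE tableY y
SET position = subquery.pos
FROM (
    SELECT y2.id,
           ROW_NUMBER() OVER (PARTITION BY y2.tableX_id ORDER BY y2.id) AS pos
    FROM tableY y2
) AS subquery
WHERE subquery.id = y.id;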
I am working on a problem where I have to get the length of the longest streak of a flag value, but to get the exact result I also have to count the row where the streak breaks. My table looks like this:
+-----------------+--------+-------+
| customer_number | Months | Flags |
+-----------------+--------+-------+
| 1 | 12 | 1 |
| 1 | 1 | 1 |
| 1 | 2 | 1 |
| 1 | 3 | 1 |
| 1 | 4 | 1 |
| 1 | 5 | 1 |
| 1 | 8 | 1 |
| 1 | 9 | 1 |
| 1 | 10 | 1 |
| 1 | 11 | 1 |
| 6 | 12 | 1 |
| 6 | 1 | 1 |
| 6 | 2 | 1 |
| 6 | 3 | 1 |
| 6 | 4 | 1 |
| 6 | 5 | 4 |
| 6 | 9 | 1 |
| 6 | 10 | 1 |
| 6 | 11 | 1 |
| 7 | 5 | 1 |
| 8 | 9 | 1 |
| 8 | 10 | 1 |
| 8 | 11 | 1 |
| 9 | 9 | 1 |
| 9 | 10 | 1 |
| 9 | 11 | 1 |
| 10 | 11 | 1 |
+-----------------+--------+-------+
and my desired output is
+----------+--------------------+
| Customer | Consecutive streak |
+----------+--------------------+
| 1 | 10 |
| 6 | 6 |
| 7 | 1 |
| 8 | 3 |
| 9 | 3 |
| 10 | 1 |
+----------+--------------------+
The code I have:
SELECT customer_number, max(streak) max_consecutive_streak FROM (
    SELECT customer_number, COUNT(*) as streak
    FROM
      (select *,
              (row_number() over (order by customer_number) -
               row_number() over (partition by customer_number, flags
                                  order by customer_number)
              ) as counts
       from table1
      ) cc
    group by customer_number, counts
) t
GROUP BY 1;
It works well, but for customer_number 6 it returns 5 and I want it to be 6; it should also count the flag-4 row in the longest streak, since that is the point where the streak breaks. Any idea how I can achieve that?
You can use a CTE with row_number():
with cte(r, id, flag) as (
  select row_number() over (order by c.customer_number), c.customer_number, c.flags
  from customers c
),
freq(id, changes, streak) as (
  select c2.id, c2.f, count(*)
  from (select c.id,
               (select count(*) from cte c1
                 where c1.id = c.id and c1.r <= c.r and c1.flag <> c.flag) as f
        from cte c) c2
  group by c2.id, c2.f
)
select id, max(streak) from freq group by id;
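An alternative that counts the breaking row directly (a sketch, assuming Postgres-style window functions, that the streak value is flags = 1 as in the sample data, and a hypothetical ordering column rn, since months wraps from 12 to 1 and cannot define row order by itself): start a new group only on the row after a break, so the breaking row stays in the streak it terminates.

with flagged as (
  select customer_number, flags, rn,
         lag(flags) over (partition by customer_number order by rn) as prev_flag
  from table1
),
grps as (
  select customer_number, rn,
         -- a new group starts on the row AFTER a break, so the breaking
         -- row is still counted in the streak it ends
         sum(case when prev_flag is not null and prev_flag <> 1 then 1 else 0 end)
           over (partition by customer_number order by rn) as grp
  from flagged
)
select customer_number, max(cnt) as consecutive_streak
from (select customer_number, grp, count(*) as cnt
      from grps
      group by customer_number, grp) s
group by customer_number
order by customer_number;

For customer_number 6 this puts the flag-4 row into the first streak, giving the expected 6.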
In the query below, I don't get the results I would expect. Any insights as to why? How could I reformulate the query to get the desired results?
Schema (SQLite v3.30)
WITH RECURSIVE
cnt(x, y) AS (
  VALUES (0, ABS(Random() % 3))
  UNION ALL SELECT x + 1, ABS(Random() % 3) FROM cnt WHERE x < 10
),
i_rnd AS (
  SELECT r1.x, r1.y, (SELECT COUNT(*) FROM cnt AS r2 WHERE r2.y <= r1.y) AS idx
  FROM cnt AS r1
)
SELECT * FROM i_rnd ORDER BY y;
result:
| x | y | idx |
| --- | --- | --- |
| 1 | 0 | 3 |
| 5 | 0 | 6 |
| 8 | 0 | 5 |
| 9 | 0 | 4 |
| 10 | 0 | 2 |
| 3 | 1 | 4 |
| 0 | 2 | 11 |
| 2 | 2 | 11 |
| 4 | 2 | 11 |
| 6 | 2 | 11 |
| 7 | 2 | 11 |
expected result:
| x | y | idx |
| --- | --- | --- |
| 1 | 0 | 5 |
| 5 | 0 | 5 |
| 8 | 0 | 5 |
| 9 | 0 | 5 |
| 10 | 0 | 5 |
| 3 | 1 | 6 |
| 0 | 2 | 11 |
| 2 | 2 | 11 |
| 4 | 2 | 11 |
| 6 | 2 | 11 |
| 7 | 2 | 11 |
In other words, idx should indicate how many rows have y less than or equal to the y of the row considered.
I would just use:
select cnt.*,
count(*) over (order by y)
from cnt;
Here is a db<>fiddle.
The issue with your code is probably that the CTE is re-evaluated each time it is called, so the values are not consistent -- a problem with volatile functions in CTEs.
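Putting it together with the recursive CTE (a sketch): COUNT(*) OVER (ORDER BY y) counts all peers with the same y together, which is why every y = 0 row gets the same idx, and the CTE is scanned only once, so Random() is not re-evaluated.

WITH RECURSIVE
cnt(x, y) AS (
  VALUES (0, ABS(Random() % 3))
  UNION ALL SELECT x + 1, ABS(Random() % 3) FROM cnt WHERE x < 10
)
SELECT x, y,
       COUNT(*) OVER (ORDER BY y) AS idx
FROM cnt
ORDER BY y;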
I have a table representing trade exchanges between cities, and I'd like to add an id that identifies groups with the same origin/destination, where destination/origin counts as the same group.
For example:
| origin | destination
|--------|------------
| 8 | 2
| 2 | 8
| 8 | 2
| 8 | 5
| 8 | 5
| 9 | 1
| 1 | 9
would become:
| id | origin | destination
|----|--------|------------
| 0 | 8 | 2
| 0 | 2 | 8
| 0 | 8 | 2
| 1 | 8 | 5
| 1 | 8 | 5
| 2 | 9 | 1
| 2 | 1 | 9
I can have rows with the same origin/destination, but I can also have origin/destination equal to another row's destination/origin, and I want all of those identified as one group.
One way: with the window function dense_rank() and GREATEST / LEAST:
SELECT dense_rank() OVER (ORDER BY GREATEST(origin, destination)
, LEAST (origin, destination)) - 1 AS id
, origin, destination
FROM trade;
db<>fiddle here
The - 1 is there to start the ids at 0, like in your example.
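The trick is that GREATEST/LEAST normalizes the direction before ranking, so (8, 2) and (2, 8) sort as the same pair. A sketch to visualize the intermediate values:

SELECT origin, destination,
       GREATEST(origin, destination) AS hi,
       LEAST(origin, destination)    AS lo
FROM trade;
-- (8,2) and (2,8) both yield hi = 8, lo = 2, so dense_rank() assigns
-- them the same rank; (8,5) and (9,1) each form their own pair.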
I have a Postgres table like this:
id | value
----+-------
1 | 100
2 | 100
3 | 100
4 | 100
5 | 200
6 | 200
7 | 200
8 | 100
9 | 100
10 | 300
I'd like to have a table like this:
 id | value | new_id
----+-------+--------
  1 |   100 |      1
  2 |   100 |      1
  3 |   100 |      1
  4 |   100 |      1
  5 |   200 |      2
  6 |   200 |      2
  7 |   200 |      2
  8 |   100 |      3
  9 |   100 |      3
 10 |   300 |      4
I'd like a new field, new_id, that changes when value changes and remains the same until value changes again.
My question is similar to this one, but I could not find a solution.
You can identify sequences where the value is the same by using a difference of row_number(). After getting the difference, you have a group identifier and can calculate the minimum id for each group. Then, dense_rank() will renumber the values based on this ordering.
It looks like this:
select t.id, t.value, dense_rank() over (order by minid) as new_id
from (select t.*, min(id) over (partition by value, grp) as minid
      from (select t.*,
                   (row_number() over (order by id) -
                    row_number() over (partition by value order by id)
                   ) as grp
            from mytable t   -- "table" is a reserved word; use your actual table name
           ) t
     ) t;
You can see what happens to your sample data:
id | value | grp | minid | new_id |
----+-------+-----+-------+--------+
1 | 100 | 0 | 1 | 1 |
2 | 100 | 0 | 1 | 1 |
3 | 100 | 0 | 1 | 1 |
4 | 100 | 0 | 1 | 1 |
5 | 200 | 4 | 5 | 2 |
6 | 200 | 4 | 5 | 2 |
7 | 200 | 4 | 5 | 2 |
8 | 100 | 3 | 8 | 3 |
9 | 100 | 3 | 8 | 3 |
10 | 300 | 9 | 10 | 4 |
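If you want to store new_id rather than just select it, the same query can feed an UPDATE (a sketch, assuming the table is named mytable as above and the column is added first):

ALTER TABLE mytable ADD COLUMN new_id int;

UPDATE mytable m
SET new_id = s.new_id
FROM (
  select t.id, dense_rank() over (order by minid) as new_id
  from (select t.*, min(id) over (partition by value, grp) as minid
        from (select t.*,
                     (row_number() over (order by id) -
                      row_number() over (partition by value order by id)
                     ) as grp
              from mytable t
             ) t
       ) t
) s
WHERE m.id = s.id;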
I have this table...
+-------+------+------+-------+----+
| categ | nAME | quan | IDUNQ | ID |
+-------+------+------+-------+----+
|     1 | Z    |    3 |     1 | 15 |
|     1 | A    |    3 |     2 | 16 |
|     1 | B    |    3 |     3 | 17 |
|     2 | Z    |    2 |     4 | 15 |
|     2 | A    |    2 |     5 | 16 |
|     3 | Z    |    1 |     6 | 15 |
|     3 | B    |    1 |     7 | 17 |
|     2 | Z    |    1 |     8 | 15 |
|     2 | C    |    4 |     8 | 15 |
|     1 | D    |    1 |     8 | 15 |
+-------+------+------+-------+----+
I need to get the Z of category 1 + Z of category 2 - Z of category 3
For example, (3+3-1) = 5 ==> 3 of cat 1, 3 of cat 2, 1 of cat 3
The final result should be...
Z ==> 5
A ==> 5
B ==> 2
C ==> 4
Note: I'm assuming the data for "C" from your example was mistakenly omitted.
SELECT nAME, SUM(CASE categ WHEN 3 THEN 0-quan ELSE quan END) AS quan
FROM theTable
GROUP BY nAME
SQL Fiddle
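To check the intermediate signed values per category before they collapse into one row per name, the same CASE can be grouped by categ as well (a sketch against the same table):

SELECT nAME, categ,
       SUM(CASE categ WHEN 3 THEN 0-quan ELSE quan END) AS signed_quan
FROM theTable
GROUP BY nAME, categ
ORDER BY nAME, categ;
-- For Z this yields 3 (categ 1), 3 (categ 2) and -1 (categ 3),
-- which sum to the expected 5.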
SELECT nAME, SUM(CASE WHEN categ = 3 THEN -quan ELSE quan END) AS total
FROM tableName
GROUP BY nAME;
This should work.