Updating column according to index within group - SQL

In our databases we have a table called conditions which references a table called attributes.
So it looks like this (ignoring some other columns that aren't relevant to the question)
id | attribute_id | execution_index
---|--------------|----------------
1  | 1000         | 1
2  | 1000         | 2
3  | 1000         | 1
4  | 2000         | 1
5  | 2000         | 2
6  | 2000         | 2
In theory the combination of attribute_id and execution_index should always be unique, but in practice it isn't, and the software ends up essentially using the id to decide which of two conditions with the same execution index comes first. We want to add a uniqueness constraint to the table, but before we do that we need to update the execution indexes. So essentially we want to group the rows by attribute_id, order them by execution_index then id, and give them new execution indexes so that the table becomes:
id | attribute_id | execution_index
---|--------------|----------------
1  | 1000         | 1
2  | 1000         | 3
3  | 1000         | 2
4  | 2000         | 1
5  | 2000         | 2
6  | 2000         | 3
I'm not sure how to do this without just ordering by attribute_id, execution_index, id and then iterating through incrementing the execution_index by 1 each time and resetting it to be 1 whenever the attribute_id changes. (That would work but it'd be slow and someone is going to have to run this script on several dozen databases so I'd rather it didn't take more than a couple of seconds per database.)
Really I'd like to do something along the lines of
UPDATE c
SET c.execution_index = [this needs to be the index within the group somehow]
FROM conditions c
GROUP BY c.attribute_id
ORDER BY c.execution_index asc, c.id asc
But I don't know how to make that actually work.

It looks like you can use an updatable CTE:
with cte as (
    select *,
           Row_Number() over (partition by attribute_id order by execution_index, id) new
    from conditions
)
update cte set execution_index = new
I would suggest adding a new column, updating that first, and checking that the results are as expected.
Example Fiddle
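For instance, a minimal sketch of that dry run (assuming SQL Server, with an illustrative scratch column named new_execution_index; GO separates the batches):

ALTER TABLE conditions ADD new_execution_index INT NULL;
GO

WITH cte AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY attribute_id
                              ORDER BY execution_index, id) AS new
    FROM conditions
)
UPDATE cte SET new_execution_index = new;

-- eyeball the rows whose index would change before touching execution_index
SELECT * FROM conditions WHERE new_execution_index <> execution_index;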

WITH cte AS
(
SELECT
*,
ROW_NUMBER() OVER
(
PARTITION BY attribute_id
ORDER BY execution_index, id
) AS RowNum
FROM conditions
)
UPDATE cte
SET execution_index = RowNum
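Once the renumbering has been verified, the constraint the question set out to add can go on; a sketch, with an illustrative constraint name:

ALTER TABLE conditions
    ADD CONSTRAINT uq_conditions_attribute_execution
    UNIQUE (attribute_id, execution_index);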

Related

How to merge new records without recalculating all the records again

I have a table of car positions with millions of rows and thousands of Car_IDs.
SQL DEMO
ID | Car_ID (other fields omitted)
---|-------
1  | A
2  | B
3  | B
4  | A
5  | A
I need to create routes for each car. So with this query:
WITH cte as (
    SELECT ID, "Car_ID",
           ROW_NUMBER() OVER (PARTITION BY "Car_ID" ORDER BY ID) as rn
    FROM myTable
)
SELECT o."Car_ID", o.ID, d.ID
FROM cte as o       -- origin
LEFT JOIN cte as d  -- destination
       ON o.rn = d.rn - 1
      AND o."Car_ID" = d."Car_ID"
WHERE d.ID IS NOT NULL
I insert the routes into the route_sources table:
ROUTE_SOURCE_id | CAR_ID | ORIGIN_ID | DESTINATION_ID
----------------|--------|-----------|---------------
1               | A      | 1         | 4
2               | B      | 2         | 3
3               | A      | 4         | 5
The problem is that when new car positions come in, I need to check which routes haven't been created yet and add them to the route_sources table.
For example, with these new rows:
ID | Car_ID
---|-------
6  | A
7  | B
8  | B
Then I only need to add the following routes:
ROUTE_SOURCE_id | CAR_ID | ORIGIN_ID | DESTINATION_ID
----------------|--------|-----------|---------------
4               | A      | 5         | 6
5               | B      | 3         | 7
6               | B      | 7         | 8
I know how to do a merge, but note the version is 9.4, so INSERT ... ON CONFLICT UPDATE (and ON CONFLICT DO NOTHING), i.e. upsert, isn't available.
My problem is that I don't want to recalculate the millions of routes every time just to add the new ones.
Consider that the car_positions table gets around 6000 new records per minute.
I have thought of two options:
create an insert trigger on the car_positions table, so that each insert looks up the car's previous position, creates the route, and inserts it into route_sources.
create a car_log table where I save the last ID used to create a route for each car, and have the route-creation process check for IDs newer than those.
But I'm not a fan of doing a select for each insert, and the car_log idea looks too complicated. Any ideas?
http://rextester.com/NFTKN29525
Not pretty, but the general idea is to create a CTE in which you look at the last destination of each car and pair it with the new positions. In the case of 'B', where we had to enter two records, the last destination from the query was not updated, which is why I had to let it choose from the new data when it can. This approach created a record where destination = origin, which is why I needed the CTE, from which I could later filter out what was necessary.
In order to abide by Stackoverflow rules, here is the query itself:
WITH new_routes AS (
    SELECT DISTINCT
           n."Car_ID",
           greatest(first_value(r.destination_id) OVER (PARTITION BY r."Car_ID" ORDER BY r.destination_id DESC),
                    lag(n.destination_id, 1) OVER (PARTITION BY n."Car_ID" ORDER BY n.destination_id)) AS origin_id,
           n.destination_id
    FROM newData n
    JOIN result r ON r."Car_ID" = n."Car_ID" AND r.destination_id < n.destination_id
)
INSERT INTO result ("Car_ID", origin_id, destination_id)
SELECT * FROM new_routes WHERE origin_id <> destination_id
ORDER BY destination_id;
This assumes that result is your previous working table, and newData is the new data that just came in.
In case you have a new car C, you can use your previous method of creating the routes for it. Use plpgsql to control this decision.
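As a rough illustration of that last point, a plpgsql sketch (table names follow this answer; the RAISE is a placeholder for the original full route-building query):

DO $$
BEGIN
    IF EXISTS (
        SELECT 1
        FROM newData n
        WHERE NOT EXISTS (SELECT 1 FROM result r WHERE r."Car_ID" = n."Car_ID")
    ) THEN
        -- placeholder: run the original ROW_NUMBER route build for the new car(s)
        RAISE NOTICE 'new car detected, building its routes from scratch';
    END IF;
END $$;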

Using GROUP BY, select ID of record in each group that has lowest ID

I am creating a file organization system where you can add content items to multiple folders.
I am storing the data in a table that has a structure similar to the following:
ID | TypeID | ContentID | FolderID
---|--------|-----------|---------
1  | 101    | 1001      | 1
2  | 101    | 1001      | 2
3  | 102    | 1002      | 3
4  | 103    | 1002      | 2
5  | 103    | 1002      | 1
6  | 104    | 1001      | 1
7  | 105    | 1005      | 2
I am trying to select the first record for each unique TypeID and ContentID pair. For the above table, I would want the results to be:
ID
1
3
4
6
7
As you can see, the pairs (101, 1001) and (103, 1002) were each added to two folders, yet I only want the record with the first folder they were added to.
When I try the following query, however, I only get results for pairs that have at least two entries with the same TypeID and ContentID:
select MIN(ID)
from table
group by TypeID, ContentID
results in
ID
1
4
If I change MIN(ID) to MAX(ID) I get the correct number of results, yet I get the record with the last folder they were added to, not the first:
ID
2
3
5
6
7
Am I using GROUP BY or MIN wrong? Is there another way I can accomplish this task of selecting the first record of each TypeID/ContentID pair?
MIN() and MAX() should return the same number of rows; changing the function should not change the number of rows the query returns.
Is this query part of a larger query? From the sample data provided, I would assume this code is only a snippet of a larger action you are trying to perform. Do you later try to join TypeID, ContentID or FolderID to the tables those IDs reference?
If so, the error is likely caused by another part of your query and not this select statement. If you are using joins or multi-level select statements, you can get a different number of results when the referenced tables do not contain a record for every foreign ID.
Another suggestion: check whether any of the values in your records are NULL. Although this should not affect the GROUP BY, I have sometimes encountered strange behavior when dealing with NULL values.
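As a quick sanity check of that claim (table name t assumed, as in the next answer), running both aggregates side by side must return exactly one row per pair:

SELECT TypeID, ContentID, MIN(ID) AS first_id, MAX(ID) AS last_id
FROM t
GROUP BY TypeID, ContentID;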
Use ROW_NUMBER
WITH CTE AS
(
    SELECT ID, TypeID, ContentID, FolderID,
           ROW_NUMBER() OVER (PARTITION BY TypeID, ContentID ORDER BY ID) as rn
    FROM t
)
SELECT ID FROM CTE WHERE rn = 1
Use it with ORDER BY:
select *
from table
group by TypeID, ContentID
order by id
SQLFiddle: http://sqlfiddle.com/#!9/024016/12
Try first(id) instead of min(id):
select first(id)
from table
group by TypeID, ContentID
Does it work?

Trouble performing Postgres group by non-ID column to get ID containing max value

I'm attempting to perform a GROUP BY on a join table. The join table essentially looks like:
CREATE TABLE user_foos (
    id SERIAL PRIMARY KEY,
    user_id INT NOT NULL,
    foo_id INT NOT NULL,
    effective_at TIMESTAMP NOT NULL
);
ALTER TABLE user_foos
ADD CONSTRAINT user_foos_uniqueness
UNIQUE (user_id, foo_id, effective_at);
I'd like to query this table to find all records where effective_at is the max value for any given (user_id, foo_id) pair. I've tried the following:
SELECT "user_foos"."id",
"user_foos"."user_id",
"user_foos"."foo_id",
max("user_foos"."effective_at")
FROM "user_foos"
GROUP BY "user_foos"."user_id", "user_foos"."foo_id";
Unfortunately, this results in the error:
column "user_foos.id" must appear in the GROUP BY clause or be used in an aggregate function
I understand that the problem relates to id not being used in an aggregate function, and that the DB doesn't know what to do if it finds multiple records with differing ids, but I know this could never happen due to my three-column unique key across (user_id, foo_id, effective_at).
To work around this, I also tried a number of other variants such as using the first_value window function on the id:
SELECT first_value("user_foos"."id"),
"user_foos"."user_id",
"user_foos"."foo_id",
max("user_foos"."effective_at")
FROM "user_foos"
GROUP BY "user_foos"."user_id", "user_foos"."foo_id";
and:
SELECT first_value("user_foos"."id")
FROM "user_foos"
GROUP BY "user_foos"."user_id", "user_foos"."foo_id"
HAVING "user_foos"."effective_at" = max("user_foos"."effective_at")
Unfortunately, these both result in a different error:
window function call requires an OVER clause
Ideally, my goal is to fetch ALL matching id's so that I can use it in a subquery to fetch the legitimate full row data from this table for matching records. Can anyone provide insight on how I can get this working?
Postgres has a very nice feature called distinct on, which can be used in this case:
SELECT DISTINCT ON (uf."user_id", uf."foo_id") uf.*
FROM "user_foos" uf
ORDER BY uf."user_id", uf."foo_id", uf."effective_at" DESC;
It returns the first row of each group, based on the expressions in parentheses. The ORDER BY clause needs to start with those same expressions, followed by one or more columns that determine which row counts as the first in the group.
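And since the question mentions wanting the matching ids for use in a subquery, a hedged variation along those lines:

SELECT *
FROM user_foos
WHERE id IN (
    SELECT DISTINCT ON (user_id, foo_id) id
    FROM user_foos
    ORDER BY user_id, foo_id, effective_at DESC
);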
Try:
SELECT *
FROM (
    SELECT t.*,
           row_number() OVER (PARTITION BY user_id, foo_id ORDER BY effective_at DESC) AS x
    FROM user_foos t
) ranked
WHERE x = 1
If you don't want to use a subquery based on a composite of all three keys, then you need to create a "dense rank" window-function field that ranks the rows within each user_id and foo_id subset by effective date. Then subquery that and take the records where rank_order = 1. Since the rank ordering was by effective date, you get all fields of the record with the highest effective date for each foo and user.
DATASET (id, user_id, foo_id, effective_at)
1  1  1  01/01/2001
2  1  1  01/01/2002
3  1  1  01/01/2003
4  1  2  01/01/2001
5  2  1  01/01/2001
DATASET WITH RANK ORDER, PARTITIONED BY foo_id, user_id, ORDERED BY DATE DESC (id, rank_order, user_id, foo_id, effective_at)
1  3  1  1  01/01/2001
2  2  1  1  01/01/2002
3  1  1  1  01/01/2003
4  1  1  2  01/01/2001
5  1  2  1  01/01/2001
SELECT * FROM QUERY ABOVE WHERE rank_order = 1
3  1  1  1  01/01/2003
4  1  1  2  01/01/2001
5  1  2  1  01/01/2001
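In SQL, the steps above amount to roughly this sketch (rank_order and the subquery alias are assumed names):

SELECT *
FROM (
    SELECT uf.*,
           RANK() OVER (PARTITION BY foo_id, user_id
                        ORDER BY effective_at DESC) AS rank_order
    FROM user_foos uf
) ranked
WHERE rank_order = 1;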

How to increment a value in SQL based on a unique key

Apologies in advance if some of the trigger solutions already cover this but I can't get them to work for my scenario.
I have a table of over 50,000 rows, all of which have an ID, with roughly 5000 distinct ID values. There could be 100 rows with instrumentID = 1 and 50 with instrumentID = 2 within the table, etc., but they will have slightly different column entries. So I could write
SELECT * from tbl WHERE instrumentID = 1
and have it return 100 rows (I know this is easy stuff but just to be clear)
What I need to do is form an incrementing value for each time a instrument ID is found, so I've tried stuff like this:
CREATE TABLE tbl (
    IntIndex INT IDENTITY(1,1),
    dDateStart DATE,
    IntInstrumentID INT,
    IntIndex1 AS IntInstrumentID + IntIndex
);
at the table create step.
However, I need IntIndex1 to increment each time an instrumentID is found, irrespective of where the record sits in the table, so that the last IntIndex1 value alone effectively provides a count of that instrument's records. What the above does instead is increment across all rows of the table irrespective of the instrumentID, so you would get 5001, 4002, 4003, etc.
An example would be: for intInstruments 5000 and 4000
intInstrumentID | IntIndex1
----------------|----------
5000            | 5001
5000            | 5002
4000            | 4001
5000            | 5003
4000            | 4002
The reason I need to do this is that I need to join two tables based on these values (a start and an end date for each instrumentID). I have tried GROUP BY etc. but that can't work in both tables, and the JOIN then doesn't work.
Many thanks
I'm not entirely sure I understand your problem, but if you just need IntIndex1 to join to, could you just join to the following query, rather than trying to actually keep the calculated value in the database:
SELECT *,
intInstrumentID + RANK() OVER(PARTITION BY intInstrumentID ORDER BY dDateStart ASC) AS IntIndex1
FROM tbl
Edit: If I understand your comment correctly (which is not certain!), then presumably you know that your end-date and start-date tables have exactly the same number of rows, which leads to a one-to-one mapping between them based on their respective dates within each instrument id?
If that's the case then maybe this join is what you are looking for:
SELECT SD.intInstrumentID, SD.dDateStart, ED.dEndDate
FROM
(
    SELECT intInstrumentID,
           dDateStart,
           RANK() OVER (PARTITION BY intInstrumentID ORDER BY dDateStart ASC) AS IntIndex1
    FROM tblStartDate
) SD
JOIN
(
    SELECT intInstrumentID,
           dEndDate,
           RANK() OVER (PARTITION BY intInstrumentID ORDER BY dEndDate ASC) AS IntIndex1
    FROM tblEndDate
) ED
    ON SD.intInstrumentID = ED.intInstrumentID
   AND SD.IntIndex1 = ED.IntIndex1
If not, please will you post some example data for both tables and the expected results?

Update rows in table

I have a table (Fruits) with the following columns:
Fruit_Name varchar2(10) | IsDuplicate Number(1)
------------------------|----------------------
Mango                   | 0
Orange                  | 0
Mango                   | 0
What I have to do is update the IsDuplicate column to 1 on the first row of each distinct Fruit_Name, i.e.
Fruit_Name | IsDuplicate
-----------|------------
Mango      | 1
Orange     | 1
Mango      | 0
How should I do this?
This should do it as far as I can tell
update fruits
set is_duplicate =
(
    select case
               when dupe_count > 1 and row_num = 1 then 1
               else 0
           end as is_dupe
    from (
        select f2.fruit_name,
               count(*) over (partition by f2.fruit_name) as dupe_count,
               row_number() over (partition by f2.fruit_name order by f2.fruit_name) as row_num,
               rowid as row_id
        from fruits f2
    ) ft
    where ft.row_id = fruits.rowid
      and ft.fruit_name = fruits.fruit_name
)
Edit
But instead of actually updating the table, why don't you create a view that returns the information? Depending on the size of the table it might be more efficient.
create view fruit_dupe_view
as
select fruit_name,
       case
           when dupe_count > 1 and row_num = 1 then 1
           else 0
       end as is_duplicate
from (
    select fruit_name,
           count(*) over (partition by fruit_name) as dupe_count,
           row_number() over (partition by fruit_name order by fruit_name) as row_num
    from fruits
) ft
Straight and simple: you can't. Not with vanilla SQL. SQL is a set-based processing language, and you do things in sets. There is no way for SQL to know which one of your many Mango rows should be tagged 1. You can probably tag one of them with 1 using windowing functions or ROWNUM etc. in a SELECT, but I don't think it can be done with an UPDATE.
In other words, your table lacks a unique key in the first place, so it is not something that SQL is designed to process.
However, you may try adding a sequential primary key to each row. Then you can easily write an UPDATE query to set to 1 all the rows with COUNT > 1 and key = MIN(key).
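A sketch of that approach, assuming the new sequential key column is called fruit_id (other names as in the earlier answer):

UPDATE fruits
SET is_duplicate = 1
WHERE fruit_id IN (
    SELECT MIN(fruit_id)
    FROM fruits
    GROUP BY fruit_name
    HAVING COUNT(*) > 1
);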
In other words, you really have to look at your database design. Relational databases are not supposed to contain "duplicates". That fact that you need to mark something as a duplicate means that your tables are designed wrong in the first place. The database should not even allow duplications to enter into its data.