Why is Sequelize upsert not working with composite unique key? - sql

I use this table in a PostgreSQL database:
create table if not exists "Service" (
_id uuid not null primary key,
service text not null,
"count" integer not null,
"date" timestamp with time zone,
team uuid,
organisation uuid,
"createdAt" timestamp with time zone not null,
"updatedAt" timestamp with time zone not null,
unique (service, "date", organisation),
foreign key ("team") references "Team"("_id"),
foreign key ("organisation") references "Organisation"("_id")
);
When I try an upsert with Sequelize with the following code, it throws an error:
Service.upsert({ team, date, service, organisation, count }, { returning: true })
Error is:
error: duplicate key value violates unique constraint "Service_service_date_organisation_key"
Key (service, date, organisation)= (xxx, 2022-12-30 01:00:00+01, 12345678-5f63-1bc6-3924-517713f97cc3) already exists.
But according to the Sequelize documentation it should work: https://sequelize.org/docs/v6/other-topics/upgrade/#modelupsert
Note for Postgres users: If upsert payload contains PK field, then PK will be used as the conflict target. Otherwise first unique constraint will be selected as the conflict key.
How can I fix this duplicate key error and get upsert working with the composite unique key: unique (service, "date", organisation)?

It looks like your problem is related to issue #13240.
If you're on Sequelize 6.12 or above, you should be able to use an explicit list of conflictFields:
Service.upsert(
  { team, date, service, organisation, count },
  { conflictFields: ["service", "date", "organisation"], returning: true }
)

References
Similar questions were asked on GitHub, see:
https://github.com/sequelize/sequelize/issues/13240
https://github.com/sequelize/sequelize/issues/13412
and neither has been resolved so far. At the time of this writing the issue therefore seems to be open, so you will need to work around it. Below I provide a few ideas to solve this, but since I have never worked with Sequelize, it is possible that I have a syntax error or some misunderstanding. If so, please point it out and I'll fix it.
Approach 1: Querying by your unique key and inserting/updating by it
Service.findAll({
  where: {
    service: yourservice,
    date: yourdate,
    organisation: yourorganisation
  }
});
And then insert if the result is empty, update otherwise.
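A minimal sketch of that find-then-write flow, using the payload fields from the question (because it runs two statements, it is not atomic under concurrent writers, unlike a true upsert):
const [existing] = await Service.findAll({
  where: { service, date, organisation },
});
if (existing) {
  // A matching row exists: update the remaining fields.
  await existing.update({ team, count });
} else {
  // No matching row: insert a new one.
  await Service.create({ team, date, service, organisation, count });
}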
Approach 2: Modifying your schema
Since your composite unique key is a candidate key, an option would be to remove your _id field and make (service, "date", organisation) the primary key, so that Sequelize picks it as the conflict target.
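A sketch of that schema change; the primary-key constraint name "Service_pkey" is the PostgreSQL default and is assumed here, and "date"/organisation must not contain NULLs before they can join the primary key:
ALTER TABLE "Service" DROP CONSTRAINT "Service_pkey";
ALTER TABLE "Service" DROP COLUMN _id;
-- The existing unique constraint becomes redundant once these columns form the primary key.
ALTER TABLE "Service" DROP CONSTRAINT "Service_service_date_organisation_key";
ALTER TABLE "Service" ADD PRIMARY KEY (service, "date", organisation);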
Approach 3: Implement an insert trigger on your table
You could simply call insert from Sequelize and let a PostgreSQL trigger handle the upserting, see: How to write an upsert trigger in PostgreSQL?
Example trigger:
CREATE OR REPLACE FUNCTION on_before_insert_versions() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
IF pg_trigger_depth() = 1 THEN
INSERT INTO versions (key, version) VALUES (NEW.key, NEW.version)
ON CONFLICT (key)
DO UPDATE SET version = NEW.version;
RETURN NULL;
ELSE
RETURN NEW;
END IF;
END;$$;
You will of course need to adapt the table and field names to your own schema and command.
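For example, a sketch adapted to the "Service" table from the question (function and trigger names are illustrative):
CREATE OR REPLACE FUNCTION on_before_insert_service() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
IF pg_trigger_depth() = 1 THEN
    -- Redirect the top-level insert into an upsert keyed on the composite unique constraint.
    INSERT INTO "Service" (_id, service, "count", "date", team, organisation, "createdAt", "updatedAt")
    VALUES (NEW._id, NEW.service, NEW."count", NEW."date", NEW.team, NEW.organisation, NEW."createdAt", NEW."updatedAt")
    ON CONFLICT (service, "date", organisation)
    DO UPDATE SET "count" = EXCLUDED."count", team = EXCLUDED.team, "updatedAt" = EXCLUDED."updatedAt";
    RETURN NULL;
ELSE
    -- Insert issued from inside the trigger itself: let it through unchanged.
    RETURN NEW;
END IF;
END;$$;

CREATE TRIGGER before_insert_service
BEFORE INSERT ON "Service"
FOR EACH ROW EXECUTE PROCEDURE on_before_insert_service();
Note that because the trigger returns NULL for the top-level insert, a RETURNING clause on that insert will yield no rows, which may matter if you rely on returning: true.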

Related

SQL - How do you use a user defined function to constrain a value between 2 tables

First here's the relevant code:
create table customer(
customer_mail_address varchar(255) not null,
subscription_start date not null,
subscription_end date, check (subscription_end !< subscription_start),
constraint pk_customer primary key (customer_mail_address)
)
create table watchhistory(
movie_id int not null,
customer_mail_address varchar(255) not null,
watch_date date not null,
constraint pk_watchhistory primary key (movie_id, customer_mail_address, watch_date)
)
alter table watchhistory
add constraint fk_watchhistory_ref_customer foreign key (customer_mail_address)
references customer (customer_mail_address)
on update cascade
on delete no action
go
So I want to use a UDF to constrain the watch_date in watchhistory between the subscription_start and subscription_end in customer. I can't seem to figure it out.
Check constraints can't validate data against other tables; the docs say (emphasis mine):
[ CONSTRAINT constraint_name ]
{
...
CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}
logical_expression
Is a logical expression used in a CHECK constraint and returns TRUE or FALSE. logical_expression used with CHECK constraints cannot reference another table but can reference other columns in the same table for the same row. The expression cannot reference an alias data type.
That being said, you can create a scalar function that validates your date, and use the scalar function on the check condition instead:
CREATE FUNCTION dbo.ufnValidateWatchDate (
    @WatchDate DATE,
    @CustomerMailAddress VARCHAR(255))
RETURNS BIT
AS
BEGIN
    IF EXISTS (
        SELECT
            'supplied watch date is between subscription start and end'
        FROM
            customer AS C
        WHERE
            C.customer_mail_address = @CustomerMailAddress AND
            @WatchDate BETWEEN C.subscription_start AND C.subscription_end)
    BEGIN
        RETURN 1
    END
    RETURN 0
END
Now add your check constraint so it validates that the result of the function is 1:
ALTER TABLE watchhistory
ADD CONSTRAINT CHK_watchhistory_ValidWatchDate
CHECK (dbo.ufnValidateWatchDate(watch_date, customer_mail_address) = 1)
This is not a direct link to the other table, but a workaround you can use to validate the date. Keep in mind that if you update the customer dates after the watch date has been inserted, the data can become inconsistent. The only way to ensure full consistency in this case would be with a few triggers.
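For instance, one such trigger could reject customer updates that would strand existing watch dates outside the new subscription window; a sketch (trigger name and error message are illustrative):
CREATE TRIGGER trg_customer_keep_watchhistory_valid
ON customer
AFTER UPDATE
AS
BEGIN
    IF EXISTS (
        SELECT 1
        FROM inserted AS i
        JOIN watchhistory AS w
            ON w.customer_mail_address = i.customer_mail_address
        -- A NULL subscription_end behaves as open-ended: only dates before subscription_start are flagged.
        WHERE w.watch_date NOT BETWEEN i.subscription_start AND i.subscription_end)
    BEGIN
        RAISERROR('Update would leave watch dates outside the subscription window', 16, 1);
        ROLLBACK TRANSACTION;
    END
END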

Validating json string using CHECK constraint in Postgres (sql)

I have a table with below schema :
CREATE TABLE tbl_name (
id bigserial primary key,
phone_info json
);
Sample JSON data for the phone_info column is given below.
{
"STATUS":{"1010101010":"1","2020202020":"1"},
"1010101010":"OK",
"2020202020":"OK"
}
Now I need to add a check constraint on the phone_info column so that every key of "STATUS" (i.e. 1010101010, 2020202020) also exists as a top-level key of phone_info whose value is "OK".
So the sample data above satisfies the check constraint, because the following key/value pairs exist in the phone_info column:
"1010101010":"OK"
"2020202020":"OK"
I have tried the solution below, but it does not work because aggregate functions such as array_agg are not allowed in check constraints.
ALTER TABLE tbl_name
ADD CONSTRAINT validate_info CHECK ('OK' = ALL(array_agg(phone_info->json_object_keys(phone_info->'STATUS'))) );
Can someone please help me out? Can I write a SQL function and use it in the check constraint?
For something like this, I think you'll want an SQL function.
CREATE TABLE tjson AS SELECT '{
"STATUS":{"1010101010":"1","2020202020":"1"},
"1010101010":"OK",
"2020202020":"OK"
}'::json AS col;
perhaps something like:
CREATE OR REPLACE FUNCTION my_json_valid(json) RETURNS boolean AS $$
SELECT bool_and(coalesce($1->>k = 'OK','f'))
FROM json_object_keys($1->'STATUS') k;
$$ LANGUAGE sql IMMUTABLE;
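The function can then be attached to your table as the check constraint, something like this (reusing the constraint name you proposed):
ALTER TABLE tbl_name
ADD CONSTRAINT validate_info CHECK (my_json_valid(phone_info));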
... but remember that while PostgreSQL will let you modify that function, doing so can cause previously valid rows to become invalid in the table. Never modify this function without dropping the constraint then adding it back again.

Adding an one-out-of-two not null constraint in postgresql

If I have a table in Postgresql:
create table Education (
id integer references Profiles(id),
finished YearValue not null,
started YearValue,
qualification text,
schoolName text,
studiedAt integer references Organizations(id),
primary key (id)
);
I need to add a constraint so that either schoolName or studiedAt is not null (at least one of them has to have information in it).
How do I do this?
You can use a check constraint, e.g.
constraint chk_education check (schoolName is not null or studiedAt is not null)
From the manual:
A check constraint is the most generic constraint type. It allows you to specify that the value in a certain column must satisfy a Boolean (truth-value) expression.
Edit: Alternative to comply with Pithyless' interpretation:
constraint chk_education check ((schoolName is not null and studiedAt is null) or (schoolName is null and studiedAt is not null))
You can also use a trigger on update and insert to check that a rule is followed before allowing the data into the table. You would normally use this type of approach when the check constraint needs more complicated logic.
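A minimal trigger sketch for the same rule, in case you prefer that route (function and trigger names are illustrative):
CREATE OR REPLACE FUNCTION education_require_school_info() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
    -- Reject rows where neither column carries information.
    IF NEW.schoolName IS NULL AND NEW.studiedAt IS NULL THEN
        RAISE EXCEPTION 'Either schoolName or studiedAt must be set';
    END IF;
    RETURN NEW;
END;$$;

CREATE TRIGGER trg_education_require_school_info
BEFORE INSERT OR UPDATE ON Education
FOR EACH ROW EXECUTE PROCEDURE education_require_school_info();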
This is my solution for a Sequelize migration file, in the "up" function:
queryInterface.addConstraint('Education', {
  fields: ['schoolName', 'studiedAt'],
  type: 'check',
  name: 'schoolName_or_studiedAt_not_null',
  where: {
    [Sequelize.Op.or]: [
      { schoolName: { [Sequelize.Op.ne]: null } },
      { studiedAt: { [Sequelize.Op.ne]: null } },
    ],
  },
}),
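The matching "down" function can then remove the constraint by the same name:
queryInterface.removeConstraint('Education', 'schoolName_or_studiedAt_not_null'),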

SQL unique index without leading zeros

I have set up a table using the following SQL script:
CREATE TABLE MY_TABLE (
ID NUMBER NOT NULL,
CODE VARCHAR2(40) NOT NULL,
CONSTRAINT MY_TABLE PRIMARY KEY (ID)
);
CREATE UNIQUE INDEX XUNIQUE_MY_TABLE_CODE ON MY_TABLE (CODE);
The problem is that I need to ensure that CODE does not have a leading zero for its value.
How do I accomplish this in SQL so that a 40-char value without a leading zero is stored?
CODE VARCHAR2(40) NOT NULL CHECK (CODE NOT LIKE '0%')
sorry - slight misread on the original spec
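If MY_TABLE already exists, the same rule can be added as a named check constraint (the constraint name is illustrative):
ALTER TABLE MY_TABLE
ADD CONSTRAINT CHK_CODE_NO_LEADING_ZERO CHECK (CODE NOT LIKE '0%');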
If you can guarantee that all INSERTs and UPDATEs to this table are done through a stored procedure, you could put some code there to check that the data is valid and return an error if not.
P.S. A CHECK CONSTRAINT would be better, except that MySQL doesn't support them.
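A sketch of the stored-procedure route in PL/SQL, assuming the Oracle schema from the question (procedure name and error code are illustrative):
CREATE OR REPLACE PROCEDURE insert_my_table (
    p_id   IN MY_TABLE.ID%TYPE,
    p_code IN MY_TABLE.CODE%TYPE
) AS
BEGIN
    -- Reject values with a leading zero before touching the table.
    IF p_code LIKE '0%' THEN
        RAISE_APPLICATION_ERROR(-20001, 'CODE must not start with a leading zero');
    END IF;
    INSERT INTO MY_TABLE (ID, CODE) VALUES (p_id, p_code);
END insert_my_table;
/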

PostgreSQL - Error: SQL state: XX000

I have a table in Postgres that looks like this:
CREATE TABLE "Population"
(
"Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass),
"Name" character varying(255) NOT NULL,
"Description" character varying(1024),
"IsVisible" boolean NOT NULL
CONSTRAINT "pk_Population" PRIMARY KEY ("Id")
)
WITH (
OIDS=FALSE
);
And a select function that looks like this:
CREATE OR REPLACE FUNCTION "Population_SelectAll"()
RETURNS SETOF "Population" AS
$BODY$select
"Id",
"Name",
"Description",
"IsVisible"
from "Population";
$BODY$
LANGUAGE 'sql' STABLE
COST 100;
Calling the select function returns all the rows in the table as expected.
I have a need to add a couple of columns to the table (both of which are foreign keys to other tables in the database). This gives me a new table def as follows:
CREATE TABLE "Population"
(
"Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass),
"Name" character varying(255) NOT NULL,
"Description" character varying(1024),
"IsVisible" boolean NOT NULL,
"DefaultSpeciesId" bigint NOT NULL,
"DefaultEcotypeId" bigint NOT NULL,
CONSTRAINT "pk_Population" PRIMARY KEY ("Id"),
CONSTRAINT "fk_Population_DefaultEcotypeId" FOREIGN KEY ("DefaultEcotypeId")
REFERENCES "Ecotype" ("Id") MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT "fk_Population_DefaultSpeciesId" FOREIGN KEY ("DefaultSpeciesId")
REFERENCES "Species" ("Id") MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
OIDS=FALSE
);
and function:
CREATE OR REPLACE FUNCTION "Population_SelectAll"()
RETURNS SETOF "Population" AS
$BODY$select
"Id",
"Name",
"Description",
"IsVisible",
"DefaultSpeciesId",
"DefaultEcotypeId"
from "Population";
$BODY$
LANGUAGE 'sql' STABLE
COST 100
ROWS 1000;
Calling the function after these changes results in the following error message:
ERROR: could not find attribute 11 in subquery targetlist
SQL state: XX000
What is causing this error and how do I fix it? I have tried to drop and recreate the columns and function - but the same error occurs.
Platform is PostgreSQL 8.4 running on Windows Server. Thanks.
Did you try dropping and recreating the function?
By the way, you gotta love how user-friendly Postgres is. What other database would give you hugs and kisses (XXOOO) as an error state?
When I've seen something similar in the past, it was because the database connection cached certain function attributes. So if I was using pgAdmin, I had to close the SQL editor window and establish a new connection in order to get the function to work correctly. If you haven't already, be sure you are testing the function on new db connections.
I thought the issue was fixed a few versions ago in PostgreSQL, but it's worth a try.
A solution that was a bit easier for me: create a backup of the database and restore it from that backup.