constraint c for table t does not exist on PostgreSQL even though it's there

I'm trying to run an INSERT query on TablePlus.
INSERT INTO minutes_clone (date, ticker, "lastTime", "openTime", date_time, group_type, "totalVolume", "totalPrice", "totalTrades")
VALUES('2021-07-02', 'YELP', '00:15:00', '00:00:00', '2021-07-02 00:00:00', 15, 0, 0, 0)
ON CONFLICT ON CONSTRAINT minutes_clone_stick_tickers_unique
DO UPDATE SET "lastTime" = '00:15:00', "openTime" = '00:00:00', date_time = '2021-07-02 00:00:00', "totalVolume" = 0, "totalPrice" = 0, "totalTrades" = 0
RETURNING id;
Instead of the query sending a success message, I'm getting an
ERROR: constraint "minutes_clone_stick_tickers_unique" for table "minutes_clone" does not exist.
Here is my table structure; to replicate:
CREATE TABLE "public"."minutes_clone" (
"ticker" varchar NOT NULL,
"totalTrades" int4 NOT NULL,
"totalPrice" numeric NOT NULL,
"totalVolume" int4,
"lastTime" time NOT NULL,
"openTime" time NOT NULL,
"date" date NOT NULL,
"group_type" int4 NOT NULL DEFAULT 1,
"date_time" timestamp,
"parent_id" int4,
"id" int4 NOT NULL DEFAULT nextval('id_seq'::regclass),
PRIMARY KEY ("id")
);
CREATE INDEX "minutes_clone_ticker_group_date_index" ON "public"."minutes_clone" USING BTREE ("ticker","group_type","date_time");
CREATE UNIQUE INDEX "minutes_clone_stick_tickers_unique" ON "public"."minutes_clone" USING BTREE ("date","ticker","openTime","group_type");
CREATE INDEX "minutes_clone_date_time_index" ON "public"."minutes_clone" USING BTREE ("date_time");
I've tried many things, like removing ON CONSTRAINT and dropping and re-adding the constraint, but I haven't been able to solve this issue. Any solutions?

You are mixing up indexes and constraints. That is understandable, because a unique constraint is always implemented by a unique index, but they are still not the same.
To make your statement work, you need a unique constraint on top of the index you currently have. You can create that with:
ALTER TABLE public.minutes_clone
ADD UNIQUE USING INDEX minutes_clone_stick_tickers_unique;
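Alternatively, ON CONFLICT can infer the conflict target from a unique index when you list the indexed columns, so no named constraint is required at all. A sketch of your statement rewritten that way (using EXCLUDED to avoid repeating the literals):
INSERT INTO minutes_clone (date, ticker, "lastTime", "openTime", date_time, group_type, "totalVolume", "totalPrice", "totalTrades")
VALUES ('2021-07-02', 'YELP', '00:15:00', '00:00:00', '2021-07-02 00:00:00', 15, 0, 0, 0)
ON CONFLICT (date, ticker, "openTime", group_type)
DO UPDATE SET "lastTime" = EXCLUDED."lastTime",
              date_time = EXCLUDED.date_time,
              "totalVolume" = EXCLUDED."totalVolume",
              "totalPrice" = EXCLUDED."totalPrice",
              "totalTrades" = EXCLUDED."totalTrades"
RETURNING id;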

Related

Large SQL Request optimization for Faces Euclidean Distances calculations

I am calculating Euclidean distance between faces and want to store results in a table.
Current setup:
Each face is stored in the objects table, and the distance between two faces is stored in the face_distances table.
The objects table has the following relevant columns: objects_id, face_encodings, description.
The face_distances table has the following columns: face_from, face_to, distance.
In my data set I have around 22,231 face objects, which results in 494,217,361 pairs of faces, although I understand it could be divided by 2 because
distance(face_from, face_to) = distance(face_to, face_from)
The database is Postgres 12.
The request below inserts the pairs of faces (without performing the distance calculation) that have not been calculated yet, but the execution time is extremely long (it started 4 days ago and is still not done). Is there a way to optimize it?
-- public.objects definition
-- Drop table
-- DROP TABLE public.objects;
CREATE TABLE public.objects
(
objects_id int4 NOT NULL DEFAULT nextval('objects_in_image_objects_id_seq'::regclass),
filefullname varchar(2303) NULL,
bbox varchar(255) NULL,
description varchar(255) NULL,
confidence numeric NULL,
analyzer varchar(255) NOT NULL DEFAULT 'object_detector'::character varying,
analyzer_version int4 NOT NULL DEFAULT 100,
x int4 NULL,
y int4 NULL,
w int4 NULL,
h int4 NULL,
image_id int4 NULL,
derived_from_object int4 NULL,
object_image_filename varchar(2023) NULL,
face_encodings _float8 NULL,
face_id int4 NULL,
face_id_iteration int4 NULL,
text_found varchar NULL COLLATE "C.UTF-8",
CONSTRAINT objects_in_image_pkey PRIMARY KEY (objects_id),
CONSTRAINT objects_in_images FOREIGN KEY (objects_id) REFERENCES public.objects(objects_id)
);
CREATE TABLE public.face_distances
(
face_from int8 NOT NULL,
face_to int8 NOT NULL,
distance float8 NULL,
CONSTRAINT face_distances_pk PRIMARY KEY (face_from, face_to)
);
-- public.face_distances foreign keys
ALTER TABLE public.face_distances ADD CONSTRAINT face_distances_fk
FOREIGN KEY (face_from) REFERENCES public.objects(objects_id);
ALTER TABLE public.face_distances ADD CONSTRAINT face_distances_fk_1
FOREIGN KEY (face_to) REFERENCES public.objects(objects_id);
Indexes
CREATE UNIQUE INDEX objects_in_image_pkey ON public.objects USING btree (objects_id);
CREATE INDEX objects_description_column ON public.objects USING btree (description);
CREATE UNIQUE INDEX face_distances_pk ON public.face_distances USING btree (face_from, face_to);
Query to add all pairs of faces that are not already in the table:
insert into face_distances (face_from,face_to)
select t1.face_from , t1.face_to
from (
select f_from.objects_id face_from,
f_from.face_encodings face_from_encodings,
f_to.objects_id face_to,
f_to.face_encodings face_to_encodings
from objects f_from,
objects f_to
where f_from.description = 'face'
and f_to.description = 'face' ) as t1
left join face_distances on (
t1.face_from= face_distances.face_from
and t1.face_to = face_distances.face_to )
where face_distances.face_from is null;
Try this simplified query.
It took only 5 minutes on my Apple M1 (running SQL Server) with 22,231 'face' objects and generated 247,097,565 pairs, which is exactly C(22231, 2). The syntax is compatible with PostgreSQL.
Optimizations: explicit JOIN syntax instead of the old comma-separated join style, and ranking functions to remove duplicate permutations, since (A,B) = (B,A).
I also removed the last LEFT JOIN on face_distances: recomputing into an empty table is a lot faster than checking for existence, because an index lookup would be initiated for each key pair.
insert into face_distances (face_from,face_to)
select f1,f2
from(
select --only needed fields here as this will fill temporary tables
f1.objects_id f1
,f2.objects_id f2
,dense_rank()over(order by f1.objects_id) rank1
,rank()over(partition by f2.objects_id order by f1.objects_id) rank2
from objects f1
-- generates all permutations
join objects f2 on f2.objects_id <> f1.objects_id and f2.description = 'face'
where f1.description = 'face'
)a
where rank2 >= rank1 --removes duplicate permutations
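If the goal is simply one row per unordered pair of faces, a plain inequality on the ids achieves the same dedupe without the window functions; a minimal sketch:
insert into face_distances (face_from, face_to)
select f1.objects_id, f2.objects_id
from objects f1
join objects f2
  on f2.objects_id > f1.objects_id -- each unordered pair generated exactly once
 and f2.description = 'face'
where f1.description = 'face';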

Upsert (merge) for updating record if it exists and inserting otherwise

I am trying to write a DB2 query that allows me to either update a record if it already exists but if it does not exist it should be inserted. I wrote the following query that should accomplish this:
MERGE INTO OA1P.TLZ712A1 AS PC
USING (
SELECT * FROM OA1P.TLZ712A1
WHERE CALENDAR_ID=13 AND
"PACKAGE"='M2108'
) PC2
ON (PC.ID_PACKAGE_CALENDAR=PC2.ID_PACKAGE_CALENDAR)
WHEN MATCHED THEN
UPDATE SET ACT_DATE = '31.12.2021'
WHEN NOT MATCHED THEN
INSERT ("PACKAGE", ACT_DATE, CALENDAR_ID, PREPTA, MIXED) VALUES ('M2108', '31.12.2021', 13, 0, 0)
This query should check whether a record already exists for the selection criteria. Updating a record works fine, but I am not able to get the WHEN NOT MATCHED part to insert a new record. Can anyone provide some assistance?
The table is used to save the activation date of a certain software package. PACKAGE is the reference to the package table containing the name of the package (eg. "M2108"). CALENDAR_ID refers to a system where the software package will be activated. The actual date is stored in ACT_DATE.
Did not manage to get the DDL into SQLFiddle so I have to provide it here:
CREATE TABLE OA1P.TLZ712A1 (
ID_PACKAGE_CALENDAR INTEGER GENERATED BY DEFAULT AS IDENTITY NOT NULL,
CALENDAR_ID INTEGER,
"PACKAGE" VARCHAR(10) NOT NULL,
ACT_DATE DATE NOT NULL,
PREPTA SMALLINT DEFAULT 0 NOT NULL,
MIXED SMALLINT DEFAULT 0 NOT NULL,
"COMMENT" VARCHAR(60) NOT NULL,
LAST_MODIFIED_PID CHAR(7) NOT NULL,
ST_STARTID TIMESTAMP NOT NULL,
ST_FROM TIMESTAMP NOT NULL,
ST_TO TIMESTAMP NOT NULL,
CONSTRAINT TLZ712A1_PK PRIMARY KEY (ID_PACKAGE_CALENDAR),
CONSTRAINT CALENDAR FOREIGN KEY (CALENDAR_ID) REFERENCES OA1P.TLZ711A1(ID_CALENDAR) ON DELETE RESTRICT,
CONSTRAINT "PACKAGE" FOREIGN KEY ("PACKAGE") REFERENCES OA1P.TLZ716A1(NAME) ON DELETE RESTRICT
);
CREATE UNIQUE INDEX ILZ712A0 ON OA1P.TLZ712A1 (ID_PACKAGE_CALENDAR);
If your goal is to set ACT_DATE to 31.12.2021 when a row with PACKAGE = 'M2108' and CALENDAR_ID = 13 exists, and to insert such a row when it does not, then this could be the answer. Note that your original statement uses the target table itself as the MERGE source, so when no matching row exists the source is empty, nothing flows through the MERGE, and the WHEN NOT MATCHED branch can never fire. Supplying the candidate row with a VALUES clause fixes that:
MERGE INTO OA1P.TLZ712A1 AS PC
USING (
VALUES ('M2108', 13, date '31.12.2021')
) PC2 ("PACKAGE", CALENDAR_ID, ACT_DATE)
ON (PC."PACKAGE", PC.CALENDAR_ID) = (PC2."PACKAGE", PC2.CALENDAR_ID)
WHEN MATCHED THEN
UPDATE SET ACT_DATE = PC2.ACT_DATE
WHEN NOT MATCHED THEN
INSERT ("PACKAGE", ACT_DATE, CALENDAR_ID, PREPTA, MIXED) VALUES (PC2."PACKAGE", PC2.ACT_DATE, PC2.CALENDAR_ID, 0, 0)

Can't serialize transient record type postgres

I am trying to make my calculation dynamic based on certain criteria as below, but when I try to send the fields dynamically into my calculation logic, it fails with the error "Can't serialize transient record type":
Create table statement:
create table calculation_t(
Id serial,
product_id integer not null,
metric_id integer not null,
start_date date,
end_date date,
calculation_logic varchar(50),
insert_timestamp timestamp default current_timestamp,
CONSTRAINT calculation_pk PRIMARY KEY(Id),
CONSTRAINT calculation_pid_fk FOREIGN KEY(product_id) REFERENCES Product_T(Product_id),
CONSTRAINT calc_mid_fk FOREIGN KEY(metric_id) REFERENCES metric_T(metric_id)
);
Insert statement:
insert into calculation_t(product_id,metric_id,calculation_logic)
select a.product_id,b.metric_id,
(case when b.metric_id=2 then
('$1-$2') else
'$1/$2' end) calc
from product_t a,metric_t b
Select statement which throws the mentioned error:
select *,(1,2,calculation_logic) from calculation_t
Note : I am using Greenplum database.
A parenthesized list like (1,2,calculation_logic) constructs a single anonymous ROW value, and Greenplum cannot serialize that transient record type when shipping rows between segments. Try removing the parentheses from your query:
select *,1,2,calculation_logic from calculation_t
It worked for me.
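For illustration, the difference between the two select lists (a minimal sketch):
-- builds one anonymous ROW value per row; its transient record type
-- is what cannot be serialized:
select (1, 2, calculation_logic) from calculation_t;
-- returns three ordinary columns instead:
select 1, 2, calculation_logic from calculation_t;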

Ensuring uniqueness of multiple large URL fields in MS SQL

I have a table with the following definition:
CREATE TABLE url_tracker (
id int not null identity(1, 1),
active bit not null,
install_date int not null,
partner_url nvarchar(512) not null,
local_url nvarchar(512) not null,
public_url nvarchar(512) not null,
primary key(id)
);
And I have a requirement that these three URLs always be unique - any individual URL can appear many times, but the combination of the three must be unique (for a given day).
Initially I thought I could do this:
CREATE UNIQUE INDEX uniques ON url_tracker
(install_date, partner_url, local_url, public_url);
However this gives me back the warning:
Warning! The maximum key length is 900 bytes. The index 'uniques' has maximum
length of 3076 bytes. For some combination of large values, the insert/update
operation will fail.
Digging around I learned about the INCLUDE argument to CREATE INDEX, but according to this question converting the command to use INCLUDE will not enforce uniqueness on the URLs.
CREATE UNIQUE INDEX uniques ON url_tracker (install_date)
INCLUDE (partner_url, local_url, public_url);
How can I enforce uniqueness on several relatively large nvarchar fields?
Resolution
So from the comments and answers and more research I'm concluding I can do this:
CREATE TABLE url_tracker (
id int not null identity(1, 1),
active bit not null,
install_date int not null,
partner_url nvarchar(512) not null,
local_url nvarchar(512) not null,
public_url nvarchar(512) not null,
uniquehash AS HashBytes('SHA1',partner_url+local_url+public_url) PERSISTED,
primary key(id)
);
CREATE UNIQUE INDEX uniques ON url_tracker (install_date,uniquehash);
Thoughts?
I would make a computed column with the hash of the URLs, then make a unique index/constraint on that. Consider making the hash a persisted computed column. It shouldn't have to be recalculated after insertion.
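A sketch of that idea against the original table; the N'|' separators are an addition I'd suggest so that different column splits (e.g. ('ab','c') vs ('a','bc')) cannot produce the same concatenation:
ALTER TABLE url_tracker ADD uniquehash AS
    CONVERT(VARBINARY(20),
        HASHBYTES('SHA1', partner_url + N'|' + local_url + N'|' + public_url))
    PERSISTED;
CREATE UNIQUE INDEX uniques ON url_tracker (install_date, uniquehash);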
Following the ideas from the conversation in the comments. Assuming that you can change the datatype of the URL to be VARCHAR(900) (or NVARCHAR(450) if you really think you need Unicode URLs) and be happy with the limitation on the length of the URL, this solution could work. This also assumes SQL Server 2008 or better. Please, always specify what version you're working with; sql-server is not specific enough, since solutions can vary greatly depending on the version.
Setup:
USE tempdb;
GO
CREATE TABLE dbo.urls
(
id INT IDENTITY(1,1) PRIMARY KEY,
url VARCHAR(900) NOT NULL UNIQUE
);
CREATE TABLE dbo.url_tracker
(
id INT IDENTITY(1,1) PRIMARY KEY,
active BIT NOT NULL DEFAULT 1,
install_date DATE NOT NULL DEFAULT CURRENT_TIMESTAMP,
partner_url_id INT NOT NULL REFERENCES dbo.urls(id),
local_url_id INT NOT NULL REFERENCES dbo.urls(id),
public_url_id INT NOT NULL REFERENCES dbo.urls(id),
CONSTRAINT unique_urls UNIQUE
(
install_date,partner_url_id, local_url_id, public_url_id
)
);
Insert some URLs:
INSERT dbo.urls(url) VALUES
('http://msn.com/'),
('http://aol.com/'),
('http://yahoo.com/'),
('http://google.com/'),
('http://gmail.com/'),
('http://stackoverflow.com/');
Now let's insert some data:
-- succeeds:
INSERT dbo.url_tracker(partner_url_id, local_url_id, public_url_id)
VALUES (1,2,3), (2,3,4), (3,4,5), (4,5,6);
-- fails:
INSERT dbo.url_tracker(partner_url_id, local_url_id, public_url_id)
VALUES(1,2,3);
GO
/*
Msg 2627, Level 14, State 1, Line 3
Violation of UNIQUE KEY constraint 'unique_urls'. Cannot insert duplicate key
in object 'dbo.url_tracker'. The duplicate key value is (2011-09-15, 1, 2, 3).
The statement has been terminated.
*/
-- succeeds, since it's for a different day:
INSERT dbo.url_tracker(install_date, partner_url_id, local_url_id, public_url_id)
VALUES('2011-09-01',1,2,3);
Cleanup:
DROP TABLE dbo.url_tracker, dbo.urls;
Now, if 900 bytes is not enough, you could change the URL table slightly:
CREATE TABLE dbo.urls
(
id INT IDENTITY(1,1) PRIMARY KEY,
url VARCHAR(2048) NOT NULL,
url_hash AS CONVERT(VARBINARY(32), HASHBYTES('SHA1', url)) PERSISTED,
CONSTRAINT unique_url UNIQUE(url_hash)
);
The rest doesn't have to change. And if you try to insert the same URL twice, you get a similar violation, e.g.
INSERT dbo.urls(url) SELECT 'http://www.google.com/';
GO
INSERT dbo.urls(url) SELECT 'http://www.google.com/';
GO
/*
Msg 2627, Level 14, State 1, Line 1
Violation of UNIQUE KEY constraint 'unique_url'. Cannot insert duplicate key
in object 'dbo.urls'. The duplicate key value is
(0xd111175e022c19f447895ad6b72ff259552d1b38).
The statement has been terminated.
*/

MySQL unique clustered constraint not constraining as expected

I'm creating a table with:
CREATE TABLE movies
(
id INT AUTO_INCREMENT PRIMARY KEY,
name CHAR(255) NOT NULL,
year INT NOT NULL,
inyear CHAR(10),
CONSTRAINT UNIQUE CLUSTERED (name, year, inyear)
);
(this is jdbc SQL)
This creates a MySQL table where the "index kind" is "unique" and the index spans the three columns.
However, once I dump my data (without exceptions thrown), I see that the uniqueness constraint has failed:
SELECT * FROM movies
WHERE name = 'Flawless' AND year = 2007 AND inyear IS NULL;
gives:
id, name, year, inyear
162169, 'Flawless', 2007, NULL
162170, 'Flawless', 2007, NULL
Does anyone know what I'm doing wrong here?
MySQL does not consider NULL values equal to each other, so a unique index permits any number of rows that differ only in a NULL column; that is why the constraint appears not to be working. To get around this, you can add a computed column to the table defined as:
nullCatch AS (CASE WHEN inyear IS NULL THEN '-1' ELSE inyear END)
Substitute this column for inyear in the constraint:
CONSTRAINT UNIQUE CLUSTERED (name, year, nullCatch)
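A minimal end-to-end sketch, assuming MySQL 5.7+ (generated columns are not available before that; the '-1' sentinel also assumes no real inyear value is ever '-1'):
CREATE TABLE movies
(
    id INT AUTO_INCREMENT PRIMARY KEY,
    name CHAR(255) NOT NULL,
    year INT NOT NULL,
    inyear CHAR(10),
    -- maps NULL to a sentinel so the unique index can compare it
    nullCatch CHAR(10) AS (CASE WHEN inyear IS NULL THEN '-1' ELSE inyear END) STORED,
    UNIQUE KEY movies_unique (name, year, nullCatch)
);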