Validating JSON string using CHECK constraint in Postgres (SQL)

I have a table with the below schema:
CREATE TABLE tbl_name (
id bigserial primary key,
phone_info json
);
Sample JSON data for the phone_info column is given below.
{
"STATUS":{"1010101010":"1","2020202020":"1"},
"1010101010":"OK",
"2020202020":"OK"
}
Now I need to add a check constraint on the phone_info column so that every key of "STATUS" (i.e. 1010101010, 2020202020) also exists as a top-level (key, value) pair of phone_info whose value is "OK".
So the sample data above satisfies the check constraint, because the following key/value pairs exist in the phone_info column:
"1010101010":"OK"
"2020202020":"OK"
I tried the solution below, but it does not work because the array_agg aggregate is not allowed in check constraints.
ALTER TABLE tbl_name
ADD CONSTRAINT validate_info CHECK ('OK' = ALL(array_agg(phone_info->json_object_keys(phone_info->'STATUS'))) );
Can someone please help me out? Can I write a SQL function and use that function in the check constraint?

With something like this I think you'll want an SQL function.
CREATE TABLE tjson AS SELECT '{
"STATUS":{"1010101010":"1","2020202020":"1"},
"1010101010":"OK",
"2020202020":"OK"
}'::json AS col;
Perhaps something like:
CREATE OR REPLACE FUNCTION my_json_valid(json) RETURNS boolean AS $$
SELECT bool_and(coalesce($1->>k = 'OK','f'))
FROM json_object_keys($1->'STATUS') k;
$$ LANGUAGE sql IMMUTABLE;
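To use it as the check constraint the question asks about, it can then be attached to the table (a sketch against tbl_name from the question, reusing the asker's constraint name):
ALTER TABLE tbl_name
ADD CONSTRAINT validate_info CHECK (my_json_valid(phone_info));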
... but remember that while PostgreSQL will let you modify that function, doing so can cause previously valid rows in the table to become invalid. Never modify this function without dropping the constraint and then adding it back again.

Related

Why is the column not altering when I try to convert it to UUID?

I have a primary key column in my SQL table in PostgreSQL named "id". It is a "bigserial" column. I want to convert the column to a "UUID" column. I entered the below commands in the terminal:
alter table people alter column id uuid;
and
alter table people alter column id uuid using (uuid_generate_v4());
but neither of them worked.
In both tries I got the error message
ERROR: syntax error at or near "uuid"
LINE 1: alter table people alter column id uuid using (uuid_generate...
What is the correct syntax?
First of all, uuid_generate_v4() is a function provided by an extension called uuid-ossp. You first need to install that extension (the name has to be double-quoted because of the hyphen):
CREATE EXTENSION "uuid-ossp";
PostgreSQL 13 introduced a new function which does basically the same thing without installing an extension: gen_random_uuid().
Suppose that we have a table like the one below;
CREATE TABLE people (
id bigserial primary key,
data text
);
bigserial is not a real type. It's a macro which basically creates a bigint column with a default value and a sequence; the default value is the next value of that sequence.
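Roughly, the people table above expands to something like this (a sketch; serial columns use the tablename_colname_seq naming convention):
CREATE SEQUENCE people_id_seq;
CREATE TABLE people (
id bigint NOT NULL DEFAULT nextval('people_id_seq') PRIMARY KEY,
data text
);
ALTER SEQUENCE people_id_seq OWNED BY people.id;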
For your use case, to change the data type you should first drop the old default value, then alter the type, and finally add the new default expression. Here is a sample:
ALTER TABLE people
ALTER id DROP DEFAULT,
ALTER id TYPE uuid using (gen_random_uuid() /* or uuid_generate_v4() */ ),
ALTER id SET DEFAULT gen_random_uuid() /* or uuid_generate_v4() */ ;
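After this, the sequence that bigserial created is no longer used; if you like, it can be dropped as well (a sketch, assuming the auto-generated sequence name):
DROP SEQUENCE people_id_seq;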
CREATE TABLE IF NOT EXISTS people (
id uuid NOT NULL CONSTRAINT people_pkey PRIMARY KEY,
address varchar,
city varchar(255),
country varchar(255),
email varchar(255),
phone varchar(255)
);
This is the correct syntax to create the table in PostgreSQL; it's better to define these constraints at the beginning to avoid errors later.
If you want to use an ALTER command instead, you would do the following:
ALTER TABLE customer ADD COLUMN cid uuid PRIMARY KEY;
Most of the errors you get while writing such commands come from lower/upper-case issues or a misspelled (undefined) table or column name.

CREATE with DEFAULT based on first id of another table

A simple question, I hope, that I sadly can't find the answer to through googling or RTFMing.
I want to create a column with a default value that is based on the first id of another table.
Something like this, which sadly gives me "ERROR: cannot use subquery in default expression":
ALTER TABLE foobar
ADD COLUMN foo INTEGER DEFAULT (SELECT id FROM blubb LIMIT 1);
The problem is, I cannot simply assume that 'blubb' starts at 0 or 1, and I want to put a CONSTRAINT on it later on.
Simple answer: Use a function.
Example:
CREATE TABLE foo (id serial primary key);
CREATE OR REPLACE FUNCTION max_foo() RETURNS int LANGUAGE SQL AS
$$ SELECT max(id) FROM foo; $$;
CREATE TABLE bar(id int not null default max_foo());
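Adapted to the tables from the question, it could look like this (a sketch; first_blubb_id is just an illustrative name, blubb.id is assumed to be an integer, and the ORDER BY makes "first" explicit, whereas the example above takes the maximum):
CREATE OR REPLACE FUNCTION first_blubb_id() RETURNS int LANGUAGE SQL AS
$$ SELECT id FROM blubb ORDER BY id LIMIT 1; $$;
ALTER TABLE foobar
ADD COLUMN foo INTEGER DEFAULT first_blubb_id();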

SQL unique index without leading zeros

I have set-up a table using the following SQL script:
CREATE TABLE MY_TABLE (
ID NUMBER NOT NULL,
CODE VARCHAR2(40) NOT NULL,
CONSTRAINT MY_TABLE PRIMARY KEY (ID)
);
CREATE UNIQUE INDEX XUNIQUE_MY_TABLE_CODE ON MY_TABLE (CODE);
The problem is that I need to ensure that CODE does not have a leading zero for its value.
How do I accomplish this in SQL so that a 40-char value without a leading zero is stored?
CODE VARCHAR2(40) NOT NULL CHECK (CODE NOT LIKE '0%')
sorry - slight misread on the original spec
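Since MY_TABLE already exists, the same rule can also be added as a named constraint afterwards (a sketch; the constraint name is just illustrative):
ALTER TABLE MY_TABLE
ADD CONSTRAINT MY_TABLE_CODE_NO_LEAD_ZERO CHECK (CODE NOT LIKE '0%');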
If you can guarantee that all INSERTs and UPDATEs to this table are done through a stored procedure, you could put some code there to check that the data is valid and return an error if not.
P.S. A CHECK CONSTRAINT would be better, except that MySQL doesn't support them.

Replace into equivalent for postgresql and then autoincrementing an int

Okay, no seriously: if a PostgreSQL guru can help out, I'm just getting started.
Basically what I want is a simple table like such:
CREATE TABLE schema.searches
(
search_id serial NOT NULL,
search_query character varying(255),
search_count integer DEFAULT 1,
CONSTRAINT pkey_search_id PRIMARY KEY (search_id)
)
WITH (
OIDS=FALSE
);
I need something like MySQL's REPLACE INTO. I don't know if I have to write my own procedure or something?
Basically:
check if the query already exists
if so, just add 1 to the count
if not, add it to the db
I can do this in my PHP code, but I'd rather all that be done in the Postgres engine.
You have to add a unique constraint first.
ALTER TABLE schema.searches ADD UNIQUE (search_query);
The insert/replace command (ON CONFLICT is available since PostgreSQL 9.5) looks like this:
INSERT INTO schema.searches(search_query) VALUES ('a search query')
ON CONFLICT (search_query)
DO UPDATE SET search_count = schema.searches.search_count + 1;
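If the application also needs the id or the updated count of the affected row, a RETURNING clause can be appended (a sketch):
INSERT INTO schema.searches(search_query) VALUES ('a search query')
ON CONFLICT (search_query)
DO UPDATE SET search_count = schema.searches.search_count + 1
RETURNING search_id, search_count;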

How to generate primary key values while inserting data into a table through a PL/SQL stored procedure

I need to insert data into a particular table through a PL/SQL stored procedure. My requirements are:
while inserting it should generate PRIMARY KEY values for a particular column;
it should return that PRIMARY KEY value to an output variable; and
for another column it should validate my string such that it should contain only characters, not integers.
You can generate primary key values as a surrogate key using an Oracle SEQUENCE. You can create a constraint on a column that uses TRANSLATE to check that no numeric digits exist in newly inserted/updated data.
Some example code, suitable for SQL*Plus:
CREATE SEQUENCE mysequence;
CREATE TABLE mytable (
pkidcol NUMBER PRIMARY KEY,
stringcol VARCHAR2(100)
);
ALTER TABLE mytable ADD (
CONSTRAINT stringnonnumeric
CHECK (stringcol = TRANSLATE(stringcol,'A0123456789','A'))
);
DECLARE
mystring mytable.stringcol%TYPE := 'Hello World';
myid mytable.pkidcol%TYPE;
BEGIN
INSERT INTO mytable (pkidcol, stringcol)
VALUES (mysequence.NEXTVAL, mystring)
RETURNING pkidcol INTO myid;
END;
/
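Wrapped in a stored procedure with an OUT parameter, as the question asks, it could look like this (a sketch; the procedure and parameter names are illustrative):
CREATE OR REPLACE PROCEDURE insert_mytable (
p_string IN mytable.stringcol%TYPE,
p_id OUT mytable.pkidcol%TYPE
) AS
BEGIN
INSERT INTO mytable (pkidcol, stringcol)
VALUES (mysequence.NEXTVAL, p_string)
RETURNING pkidcol INTO p_id;
END;
/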
In Oracle I believe the "identity" column is best achieved with a sequence and an insert trigger that checks whether the primary key column is null and, if so, gets the next sequence value and inserts it.
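A sketch of such a trigger, reusing mysequence and mytable from the example above (the trigger name is illustrative; 11g and later also allow a direct assignment instead of SELECT ... INTO):
CREATE OR REPLACE TRIGGER mytable_pk_trg
BEFORE INSERT ON mytable
FOR EACH ROW
WHEN (NEW.pkidcol IS NULL)
BEGIN
SELECT mysequence.NEXTVAL INTO :NEW.pkidcol FROM dual;
END;
/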
You can then use the RETURNING clause to get the newly created primary key:
insert into <table> (<columns>) values (<values>) returning <prim_key> into <variable>;
The filtering of the string field I would personally handle in code before going to the database (if that is a possibility). Databases are notoriously inefficient at handling string operations.